CN117697769B - Robot control system and method based on deep learning - Google Patents

Robot control system and method based on deep learning

Info

Publication number
CN117697769B
Authority
CN
China
Prior art keywords: robot, data, module, neural network, error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410168990.3A
Other languages
Chinese (zh)
Other versions
CN117697769A (en)
Inventor
曾威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Weishitong Intelligent Technology Co ltd
Original Assignee
Chengdu Weishitong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Weishitong Intelligent Technology Co., Ltd.
Priority to CN202410168990.3A
Publication of CN117697769A
Application granted
Publication of CN117697769B
Legal status: Active
Anticipated expiration


Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a deep learning-based robot control system and method in the technical field of data processing. The system comprises a central processing unit arranged on the robot, on which the following modules are established: a data collection module for acquiring data on the robot's interaction with its environment; a data preprocessing module for preprocessing the collected data; a deep neural network design module for designing a neural network structure suited to the robot's operating environment; a training module for training the robot on the preprocessed data with the designed network structure, adjusting the network parameters to minimize the prediction error; and a decision system building module for constructing the robot's decision system from the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions. Through autonomous deep learning, the robot can acquire information from its environment, understand environmental changes and make adaptive decisions.

Description

Robot control system and method based on deep learning
Technical Field
The invention relates to the technical field of data processing, in particular to a robot control system and method based on deep learning.
Background
A robot is a machine that performs work automatically: it can accept human commands, run pre-programmed routines, or act according to principles formulated with artificial intelligence technology, its task being to assist or replace human work. As a product of the advanced integration of control theory, mechatronics, computer science, materials science and bionics, robots have important applications in industry, medicine, agriculture, the service industry, construction and even the military. As the understanding of intelligent robotics deepens, robotics continues to permeate every field of human activity, and, combined with the application characteristics of those fields, various special-purpose robots and intelligent robots with sensing, decision-making, action and interaction capabilities have been developed.
A search of the prior art finds the Chinese application with application number CN201910076580.5, which discloses a deep learning-based robot control system. The system comprises a central processor arranged in the robot control center that allocates and controls the whole system. The central processor is electrically connected to a main control module and to an instruction matching module; the instruction matching module is electrically connected to an instruction control module, which is electrically connected to a motion control module; and the motion control module is electrically connected to a speed regulation module, an angle regulation module and a force regulation module. The steps of that control system are clear and its operation interface is simple and easy to understand; speed, angle and force regulation allow all-round regulation and control of the robot, while the main control module ensures stable and safe operation of the system.
However, conventional robot control systems, including the above technical solution, are often limited to preset rules and a finite set of reaction modes, and can only learn for a fixed scene. When the scene changes, the robot's learning rules must be set again; applicability is therefore poor, and the control cannot adapt its learning to scene changes.
Disclosure of Invention
The invention aims to provide a deep learning-based robot control system and method that solve the technical problems that existing robots cannot adaptively adjust to scene changes and are limited to fixed use scenes. Data on the robot's interaction with its environment are acquired from the environment; the collected data are normalized, denoised and subjected to feature extraction; a neural network structure suited to the robot's operating environment is designed from the processed data; the robot is trained to adjust the network parameters so as to minimize the prediction error; and a decision system is constructed on the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions. By deep learning the robot can learn and recognize complex patterns from large amounts of data, acquire information from the environment, understand environmental changes and make adaptive decisions.
The invention is realized by the following technical scheme:
the first aspect of the present invention provides a robot control system based on deep learning, comprising a central processing unit arranged on a robot, wherein the control system based on the central processing unit is provided with:
the data collection module is used for acquiring the data of the interaction between the robot and the environment;
the data preprocessing module is used for preprocessing the collected data;
The deep neural network design module is used for constructing a deep neural network structure according to the use environment of the robot;
The training module is used for training the robot based on the deep neural network structure by combining the preprocessed data, and adjusting network parameters to obtain the minimum prediction error;
The decision system building module is used for building a decision system of the robot based on the trained deep neural network and making the robot to make corresponding action decisions according to the prediction result of the network.
According to the invention, data on the robot's interaction with its environment are obtained from the environment and are normalized, denoised and subjected to feature extraction; a neural network structure suited to the robot's operating environment is designed from the processed data; the robot is trained to adjust the network parameters so as to minimize the prediction error; and a decision system is constructed on the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions. By deep learning the robot can learn and recognize complex patterns from large amounts of data, acquire information from the environment, understand environmental changes and make adaptive decisions.
Further, the following modules are also established on the central processing unit:
the network module, used for the robot's network connection;
the alarm module, used for issuing an alarm prompt when the robot fails;
and the man-machine interaction module, used for realizing human-machine interaction so that an adaptive action decision can be made by manually controlling the robot.
Further, the data collection module comprises a data acquisition unit for data collection and a data storage unit for data storage;
the data acquisition unit at least comprises:
electrical-signal data acquisition: collecting the robot's environmental data over the network and establishing the robot network control system;
mechanical data acquisition: collecting mechanical characteristic data of the robot's environment with mechanical sensors and establishing the robot operation system;
sound data acquisition: collecting speech characteristics of the robot's environment with sound sensors and establishing the robot voice interaction system;
visual data acquisition: collecting visual characteristics of the robot's environment with visual sensors and establishing the robot vision system (an illustrative record layout for these four channels is sketched below).
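As an illustrative aid rather than part of the claimed system, the four acquisition channels above can be gathered into a single sample record that the data storage unit persists; every name in the following Python sketch is a hypothetical stand-in.

```python
from dataclasses import dataclass, field
from typing import List
import time

@dataclass
class EnvironmentSample:
    """One snapshot of robot-environment interaction data (hypothetical layout)."""
    timestamp: float = field(default_factory=time.time)
    signal_strength: float = 0.0                                  # electrical-signal channel
    received_command: str = ""                                    # electrical-signal channel
    force_readings: List[float] = field(default_factory=list)     # mechanical channel (gravity, pressure)
    audio_frame: List[float] = field(default_factory=list)        # sound channel (raw samples)
    image_frame: List[List[float]] = field(default_factory=list)  # visual channel (grayscale rows)

class DataAcquisitionUnit:
    """Collects samples from the four channels; a storage unit would persist the buffer."""
    def __init__(self) -> None:
        self.buffer: List[EnvironmentSample] = []

    def collect(self, sample: EnvironmentSample) -> None:
        self.buffer.append(sample)
```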
Further, the preprocessing performed by the data preprocessing module specifically comprises normalizing, denoising and extracting features from the collected data;
specific preprocessing procedures include, but are not limited to, image data processing, signal processing and feature extraction (a minimal sketch follows).
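A minimal Python sketch of such a preprocessing chain, assuming min-max normalization, moving-average denoising and simple statistical/spectral features; the concrete algorithms and function names are assumptions, not the patented procedure.

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Min-max normalize a 1-D sensor signal to [0, 1]."""
    span = float(x.max() - x.min())
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def denoise(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average denoising."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def extract_features(x: np.ndarray) -> np.ndarray:
    """Simple features: mean, standard deviation, dominant spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.array([x.mean(), x.std(), spectrum[1:].max()])

# Example: preprocess one mechanical-sensor trace
raw = np.random.default_rng(0).normal(size=256)
features = extract_features(denoise(normalize(raw)))
```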
Furthermore, the data preprocessing module adopts parallel computation and distributed computation.
Further, the data collection module exchanges and shares data between the robot and a cloud platform via the network module.
Further, the deep neural network design module comprises convolutional neural network and recurrent neural network deep learning models;
the convolutional neural network is used for processing visual data, extracting image features through convolution and pooling operations and strengthening the robot's target detection and image classification performance;
the recurrent neural network is used for processing time-series data and enhancing speech recognition and natural language processing performance (a minimal model sketch follows).
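The following PyTorch sketch illustrates the two model families named above: a small convolutional network for image features (convolution plus pooling) and a recurrent LSTM network for time-series input. The layer sizes and the 64x64 input resolution are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class VisionBranch(nn.Module):
    """Small CNN: convolution and pooling extract image features for detection/classification."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input images

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

class SequenceBranch(nn.Module):
    """Recurrent (LSTM) network for time-series inputs such as audio features."""
    def __init__(self, in_dim: int = 13, hidden: int = 64, num_classes: int = 10):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])  # classify from the last time step

# Shape check
print(VisionBranch()(torch.randn(2, 3, 64, 64)).shape)   # torch.Size([2, 10])
print(SequenceBranch()(torch.randn(2, 20, 13)).shape)    # torch.Size([2, 10])
```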
Further, the training module at least comprises:
instruction matching training: the central processing unit receives a single instruction; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes a corresponding decision and matches it against the instruction received by the central processing unit; if the error between the instruction and the decision is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum instruction matching error;
motion control training: a motion instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes the corresponding action and matches it against the sent motion target; if the error between the motion target and the robot's motion is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum motion error, the motion error comprising the motion speed error, the motion angle error and the motion force error;
navigation and obstacle avoidance training: an obstacle avoidance instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot performs obstacle avoidance and matches it against the sent obstacle avoidance target; if the error between the obstacle avoidance target and the robot's obstacle avoidance action is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum obstacle avoidance error;
feedback training: a feedback instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data, and the vision, hearing and network systems provide feedback; if the error between the feedback data and the actual data is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum feedback error. The pattern shared by these four training processes is sketched below.
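A minimal sketch of that shared pattern: make a decision, match it against the issued target, and, while the error stays within the set threshold, keep acquiring data and optimizing the network parameters. All callables and values below are illustrative stand-ins.

```python
def training_round(decide, acquire_data, optimize, target: float, threshold: float) -> float:
    """One round of the shared pattern (names are illustrative, not the patented method):
    the decision system produces a decision, it is matched against the issued target,
    and while the error stays within the set threshold more data is acquired and the
    network parameters are optimized toward the minimum error."""
    decision = decide()
    error = abs(decision - target)
    if error <= threshold:
        optimize(acquire_data())   # keep acquiring data and optimizing network parameters
    return error

# Toy usage with stand-in callables
err = training_round(decide=lambda: 0.95,
                     acquire_data=lambda: [0.9, 1.1],
                     optimize=lambda batch: None,
                     target=1.0, threshold=0.2)
```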
Further, the man-machine interaction module is a controller set on the central processing unit, the controller comprising:
a user login interface for an administrator to log in and acquire control authority over the robot;
a firewall for establishing the robot's network security line and guaranteeing the security of the robot control system;
a human-machine interaction interface providing a robot control operation interface for the logged-in user;
and a background supervision system establishing background supervision of the robot to prevent the robot's autonomous learning from contradicting control.
The second aspect of the invention provides a deep learning-based robot control method comprising the following specific steps:
acquiring data on the robot's interaction with its environment;
preprocessing the collected data;
constructing a deep neural network structure according to the robot's use environment;
training the robot on the preprocessed data with the deep neural network structure and adjusting the network parameters to obtain the minimum prediction error;
constructing the robot's decision system on the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions (an end-to-end sketch of these steps follows).
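An illustrative end-to-end wiring of the five steps, with PyTorch standing in for the deep neural network; every callable and the action list are hypothetical.

```python
import torch

def run_pipeline(collect, preprocess, build_network, train, actions):
    """Hypothetical wiring of the five method steps; every callable is a stand-in."""
    data = preprocess(collect())              # steps 1-2: acquire and preprocess interaction data
    net = build_network()                     # step 3: network structure for the use environment
    train(net, data)                          # step 4: adjust parameters to minimize prediction error

    def decide(observation: torch.Tensor) -> str:
        with torch.no_grad():                 # step 5: decision system on top of the trained network
            scores = net(observation)
        return actions[int(scores.argmax())]  # map the network prediction to an action decision

    return decide

# Toy usage
decide = run_pipeline(
    collect=lambda: torch.randn(32, 8),
    preprocess=lambda x: (x - x.mean()) / (x.std() + 1e-6),
    build_network=lambda: torch.nn.Linear(8, 3),
    train=lambda net, data: None,             # training loop omitted in this sketch
    actions=["move", "turn", "stop"],
)
print(decide(torch.randn(1, 8)))
```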
Compared with the prior art, the invention has the following advantages and beneficial effects:
through the data collection module, data preprocessing module, deep neural network design module, training module and decision system building module established on the central processing unit, the invention can acquire data on the robot's interaction with its environment;
during data acquisition, electrical-signal, mechanical, sound and visual data acquisition respectively obtain the robot's environmental data, including the network signal strength and received instructions, mechanical characteristic data of the environment such as perceived gravity and pressure, speech characteristics of the environment, and visual characteristics of the environment; on this basis the robot network control system, robot operation system, robot voice interaction system and robot vision system are established, the collected data are normalized, denoised and subjected to feature extraction, and a neural network structure suited to the robot's operating environment is designed from the processed data, enhancing the robot's deep learning;
data are exchanged and shared between a cloud platform and the robot via the network module, strengthening the robot's network learning; convolutional neural network and recurrent neural network deep learning models are adopted, the network parameters are optimized with back-propagation and gradient descent training methods, and the deep neural network design module also achieves continuous learning and optimization of the network through methods such as incremental learning and transfer learning; image features are extracted through convolution and pooling operations to strengthen the robot's target detection and image classification performance, while the recurrent neural network strengthens speech recognition and natural language processing performance; combining instruction matching training, motion control training, navigation and obstacle avoidance training and feedback training, the robot is trained to adjust the network parameters so as to minimize the prediction error, and a decision system is constructed on the trained deep neural network so that the robot makes corresponding action decisions according to the network's predictions; by deep learning the robot can learn and recognize complex patterns from large amounts of data, acquire information from the environment, understand environmental changes and make adaptive decisions;
by setting up the user login interface, firewall, human-machine interaction interface and background supervision system, the robot's network security line can be established and the security of the robot control system guaranteed; background supervision of the robot is also established, and after logging in the administrator has absolute operation authority over the robot, preventing the robot's autonomous learning from contradicting control and keeping the robot's deep learning controllable.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, the drawings that are needed in the examples will be briefly described below, it being understood that the following drawings only illustrate some examples of the present invention and therefore should not be considered as limiting the scope, and that other related drawings may be obtained from these drawings without inventive effort for a person skilled in the art. In the drawings:
Fig. 1 is a structural block diagram of this embodiment;
Fig. 2 is a structural block diagram of the data collection module of this embodiment;
Fig. 3 is a structural block diagram of the data preprocessing module of this embodiment;
Fig. 4 is a structural block diagram of the deep neural network design module of this embodiment;
Fig. 5 is a structural block diagram of the training module of this embodiment;
Fig. 6 is a structural block diagram of the man-machine interaction module of this embodiment.
Detailed Description
For the purpose of making apparent the objects, technical solutions and advantages of the present invention, the present invention will be further described in detail with reference to the following examples and the accompanying drawings, wherein the exemplary embodiments of the present invention and the descriptions thereof are for illustrating the present invention only and are not to be construed as limiting the present invention.
As a possible embodiment, as shown in Fig. 1, the first aspect of this embodiment provides a deep learning-based robot control system applied to a robot and comprising a central processor provided on the robot, on which the following modules are established:
the data collection module, used for acquiring data on the interaction between the robot and its environment;
the data preprocessing module, used for normalizing, denoising and extracting features from the collected data;
the deep neural network design module, used for designing a neural network structure suited to the robot's use environment;
the training module, used for training the robot on the preprocessed data with the designed neural network structure and adjusting the network parameters so as to minimize the prediction error;
the decision system building module, which builds the robot's decision system on the trained deep neural network so that the robot makes corresponding action decisions according to the network's predictions;
the network module, used for the robot's network connection, including 5G wireless and Bluetooth connections;
the alarm module, used for issuing an alarm prompt when the robot fails;
and the man-machine interaction module, used for realizing human-machine interaction so that an adaptive action decision can be made by manually controlling the robot.
According to the invention, data on the robot's interaction with its environment are obtained from the environment and are normalized, denoised and subjected to feature extraction; a neural network structure suited to the robot's operating environment is designed from the processed data; the robot is trained to adjust the network parameters so as to minimize the prediction error; and a decision system is constructed on the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions. By deep learning the robot can learn and recognize complex patterns from large amounts of data, acquire information from the environment, understand environmental changes and make adaptive decisions.
In some possible embodiments, as shown in Fig. 2, the data collection module comprises a data acquisition unit for data collection and a data storage unit for data storage;
the data acquisition unit at least comprises:
electrical-signal data acquisition: collecting the robot's environmental data over the network, including the network signal strength and received instructions, which are used to establish the robot network control system;
mechanical data acquisition: collecting mechanical characteristic data of the robot's environment with mechanical sensors, including perceived gravity and pressure, which are used to establish the robot operation system;
sound data acquisition: collecting speech characteristics of the robot's environment with sound sensors and establishing the robot voice interaction system;
visual data acquisition: collecting visual characteristics of the robot's environment with visual sensors and establishing the robot vision system.
In some possible embodiments, as shown in Fig. 3, the data preprocessing module performs normalization, denoising and feature extraction using appropriate algorithms and techniques, including but not limited to image data processing, signal processing and feature extraction techniques; the data preprocessing module adopts parallel and distributed computing to accelerate data processing (a parallel-processing sketch follows).
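A sketch of batch-level parallel preprocessing with Python's multiprocessing pool; the batch layout and worker count are assumptions rather than the described implementation.

```python
from multiprocessing import Pool
import numpy as np

def preprocess_batch(batch: np.ndarray) -> np.ndarray:
    """Normalize and denoise one batch (same steps as the sequential sketch above)."""
    span = float(batch.max() - batch.min()) or 1.0
    batch = (batch - batch.min()) / span
    kernel = np.ones(5) / 5
    return np.convolve(batch, kernel, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batches = [rng.normal(size=1024) for _ in range(8)]
    with Pool(processes=4) as pool:           # parallel computation across worker processes
        processed = pool.map(preprocess_batch, batches)
```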
In some possible embodiments, the data collection module exchanges and shares data between the robot and a cloud platform via the network module.
In some possible embodiments, as shown in Fig. 4, the deep neural network design module adopts convolutional neural network and recurrent neural network deep learning models and optimizes the network parameters with back-propagation and gradient descent training methods; the module also achieves continuous learning and optimization of the network through methods such as incremental learning and transfer learning. The convolutional neural network is mainly used for processing visual data, extracting image features through convolution and pooling operations to strengthen the robot's target detection and image classification performance; the recurrent neural network is used for processing time-series data and enhancing speech recognition and natural language processing performance (a transfer learning sketch is given below).
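A minimal PyTorch sketch of the transfer learning idea mentioned above: reuse a learned feature extractor, freeze it, and update only a new head with gradient descent and back-propagation. The toy model and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Toy model: a "pretrained" feature extractor plus a new task head.
features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 5)
model = nn.Sequential(features, head)

for p in features.parameters():               # transfer learning: reuse learned features,
    p.requires_grad = False                   # update only the new head incrementally

optimizer = torch.optim.SGD(head.parameters(), lr=0.01)   # gradient descent
loss_fn = nn.CrossEntropyLoss()

images, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 5, (8,))
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                           # back-propagation computes the gradients
    optimizer.step()
```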
In some possible embodiments, as shown in Fig. 5, the training module uses the neural network structure that the deep neural network design module establishes from the convolutional and recurrent neural networks, and trains the robot on the preprocessed data to adjust the network parameters so as to minimize the prediction error; it at least comprises:
instruction matching training: the central processing unit receives a single instruction; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes a corresponding decision and matches it against the instruction received by the central processing unit; if the error between the instruction and the decision is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum instruction matching error;
motion control training: a motion instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes the corresponding action and matches it against the sent motion target; if the error between the motion target and the robot's motion is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum motion error, the motion error comprising the motion speed error, the motion angle error and the motion force error;
navigation and obstacle avoidance training: an obstacle avoidance instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot performs obstacle avoidance and matches it against the sent obstacle avoidance target; if the error between the obstacle avoidance target and the robot's obstacle avoidance action is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum obstacle avoidance error;
feedback training: a feedback instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data, and the vision, hearing and network systems provide feedback; if the error between the feedback data and the actual data is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum feedback error. A minimal motion-control training sketch follows.
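A minimal PyTorch sketch of one motion-control training round under the stated threshold rule; the network shape, the (speed, angle, force) output layout and the threshold value are assumptions.

```python
import torch
import torch.nn as nn

def motion_training_step(net, optimizer, env_batch, motion_target, threshold=0.1):
    """One hypothetical motion-control training round: the predicted motion
    (speed, angle, force) is matched against the issued motion target; while the
    error stays within the set threshold, the network parameters are optimized."""
    prediction = net(env_batch)                    # predicted (speed, angle, force)
    error = torch.abs(prediction - motion_target)  # speed, angle and force errors
    if torch.all(error <= threshold):
        loss = nn.functional.mse_loss(prediction, motion_target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                           # move toward the minimum motion error
    return error.detach()

net = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.SGD(net.parameters(), lr=0.01)
err = motion_training_step(net, opt, torch.randn(4, 12), torch.zeros(4, 3))
```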
In some possible embodiments, as shown in Fig. 6, the human-machine interaction module is specifically a controller set on the central processing unit, the controller being provided with at least:
a user login interface for an administrator to log in and acquire control authority over the robot;
a firewall for establishing the robot's network security line and guaranteeing the security of the robot control system;
a human-machine interaction interface providing a robot control operation interface for the logged-in user;
and a background supervision system establishing background supervision of the robot; after logging in, the administrator has absolute operation authority over the robot, preventing the robot's autonomous learning from contradicting control (an illustrative supervision sketch follows).
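An illustrative sketch of such background supervision: a logged-in administrator's manual command always overrides the network's decision. Class and method names are hypothetical.

```python
class BackgroundSupervisor:
    """Hypothetical supervision layer: a logged-in administrator's manual command
    always overrides the autonomously learned decision, keeping learning controllable."""
    def __init__(self, authorized_users):
        self.authorized_users = set(authorized_users)
        self.operator = None
        self.manual_override = None

    def login(self, user: str) -> bool:
        if user in self.authorized_users:     # user login grants control authority
            self.operator = user
            return True
        return False

    def set_override(self, action: str) -> None:
        if self.operator is not None:
            self.manual_override = action     # manual control of the robot

    def arbitrate(self, network_decision: str) -> str:
        # Administrator command has absolute authority over the network's decision.
        return self.manual_override if self.manual_override is not None else network_decision
```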
As a possible implementation, this embodiment further provides a deep learning-based robot control method comprising the following specific steps:
acquiring data on the robot's interaction with its environment;
preprocessing the collected data;
constructing a deep neural network structure according to the robot's use environment;
training the robot on the preprocessed data with the deep neural network structure and adjusting the network parameters to obtain the minimum prediction error;
constructing the robot's decision system on the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and example only, and is not intended to limit the scope of the invention.

Claims (8)

1. A deep learning-based robot control system, characterized by comprising a central processing unit arranged on a robot, on which the following modules are established:
the data collection module, used for acquiring data on the interaction between the robot and its environment;
the data preprocessing module, used for preprocessing the collected data;
the deep neural network design module, used for constructing a deep neural network structure according to the robot's use environment;
the training module, used for training the robot on the preprocessed data with the deep neural network structure and adjusting the network parameters to obtain the minimum prediction error;
the decision system building module, used for building the robot's decision system on the trained deep neural network so that the robot makes corresponding action decisions according to the network's predictions;
the network module, used for the robot's network connection;
the alarm module, used for issuing an alarm prompt when the robot fails;
and the man-machine interaction module, used for realizing human-machine interaction so that an adaptive action decision can be made by manually controlling the robot;
wherein the training module at least comprises:
instruction matching training: the central processing unit receives a single instruction; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes a corresponding decision and matches it against the instruction received by the central processing unit; if the error between the instruction and the decision is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum instruction matching error;
motion control training: a motion instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes the corresponding action and matches it against the sent motion target; if the error between the motion target and the robot's motion is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum motion error, the motion error comprising the motion speed error, the motion angle error and the motion force error;
navigation and obstacle avoidance training: an obstacle avoidance instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot performs obstacle avoidance and matches it against the sent obstacle avoidance target; if the error between the obstacle avoidance target and the robot's obstacle avoidance action is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum obstacle avoidance error;
feedback training: a feedback instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data, and the vision, hearing and network systems provide feedback; if the error between the feedback data and the actual data is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum feedback error.
2. The deep learning-based robot control system of claim 1, wherein the data collection module comprises a data acquisition unit for data collection and a data storage unit for data storage;
the data acquisition unit at least comprises:
electrical-signal data acquisition: collecting the robot's environmental data over the network and establishing the robot network control system;
mechanical data acquisition: collecting mechanical characteristic data of the robot's environment with mechanical sensors and establishing the robot operation system;
sound data acquisition: collecting speech characteristics of the robot's environment with sound sensors and establishing the robot voice interaction system;
visual data acquisition: collecting visual characteristics of the robot's environment with visual sensors and establishing the robot vision system.
3. The deep learning-based robot control system of claim 1, wherein the preprocessing performed by the data preprocessing module specifically comprises normalizing, denoising and extracting features from the collected data;
specific preprocessing procedures include, but are not limited to, image data processing, signal processing and feature extraction.
4. The deep learning-based robot control system of claim 3, wherein the data preprocessing module adopts parallel computing and distributed computing.
5. The deep learning-based robot control system of claim 1, wherein the data collection module exchanges and shares data between the robot and a cloud platform via the network module.
6. The deep learning-based robot control system of claim 1, wherein the deep neural network design module comprises convolutional neural network and recurrent neural network deep learning models;
the convolutional neural network is used for processing visual data, extracting image features through convolution and pooling operations and strengthening the robot's target detection and image classification performance;
the recurrent neural network is used for processing time-series data and enhancing speech recognition and natural language processing performance.
7. The deep learning-based robot control system of claim 1, wherein the man-machine interaction module is a controller set on the central processing unit, the controller comprising:
a user login interface for an administrator to log in and acquire control authority over the robot;
a firewall for establishing the robot's network security line and guaranteeing the security of the robot control system;
a human-machine interaction interface providing a robot control operation interface for the logged-in user;
and a background supervision system establishing background supervision of the robot to prevent the robot's autonomous learning from contradicting control.
8. A deep learning-based robot control method, characterized by comprising the following specific steps:
acquiring data on the robot's interaction with its environment;
preprocessing the collected data;
constructing a deep neural network structure according to the robot's use environment;
training the robot on the preprocessed data with the deep neural network structure and adjusting the network parameters to obtain the minimum prediction error;
constructing the robot's decision system on the trained deep neural network, so that the robot makes corresponding action decisions according to the network's predictions;
the method further comprises connecting the robot to the network, issuing an alarm prompt when the robot fails, and realizing human-machine interaction so that an adaptive action decision can be made by manually controlling the robot;
constructing the robot's decision system on the trained deep neural network so that the robot makes corresponding action decisions according to the network's predictions at least comprises:
instruction matching training: the central processing unit receives a single instruction; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes a corresponding decision and matches it against the instruction received by the central processing unit; if the error between the instruction and the decision is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum instruction matching error;
motion control training: a motion instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot makes the corresponding action and matches it against the sent motion target; if the error between the motion target and the robot's motion is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum motion error, the motion error comprising the motion speed error, the motion angle error and the motion force error;
navigation and obstacle avoidance training: an obstacle avoidance instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data; through the decision system building module the robot performs obstacle avoidance and matches it against the sent obstacle avoidance target; if the error between the obstacle avoidance target and the robot's obstacle avoidance action is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum obstacle avoidance error;
feedback training: a feedback instruction is sent; the data collection module and the data preprocessing module collect and process the robot's surrounding environment data, and the vision, hearing and network systems provide feedback; if the error between the feedback data and the actual data is within a set threshold, data acquisition continues and the network parameters are optimized through the deep neural network design module to obtain the minimum feedback error.
CN202410168990.3A 2024-02-06 2024-02-06 Robot control system and method based on deep learning Active CN117697769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410168990.3A CN117697769B (en) 2024-02-06 2024-02-06 Robot control system and method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410168990.3A CN117697769B (en) 2024-02-06 2024-02-06 Robot control system and method based on deep learning

Publications (2)

Publication Number Publication Date
CN117697769A (en) 2024-03-15
CN117697769B (en) 2024-04-30

Family

ID=90144736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410168990.3A Active CN117697769B (en) 2024-02-06 2024-02-06 Robot control system and method based on deep learning

Country Status (1)

Country Link
CN (1) CN117697769B (en)

Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017030135A (en) * 2015-07-31 2017-02-09 ファナック株式会社 Machine learning apparatus, robot system, and machine learning method for learning workpiece take-out motion
CN106951923A (en) * 2017-03-21 2017-07-14 西北工业大学 A kind of robot three-dimensional shape recognition process based on multi-camera Vision Fusion
CN108227691A (en) * 2016-12-22 2018-06-29 深圳光启合众科技有限公司 Control method, system and the device and robot of robot
CN109407518A (en) * 2018-12-20 2019-03-01 山东大学 The autonomous cognitive approach of home-services robot operating status and system
CN109760030A (en) * 2019-01-26 2019-05-17 温州大学 A kind of robot control system based on deep learning
CN109998421A (en) * 2018-01-05 2019-07-12 艾罗伯特公司 Mobile clean robot combination and persistence drawing
CN110495819A (en) * 2019-07-24 2019-11-26 华为技术有限公司 Control method, robot, terminal, server and the control system of robot
CN110838353A (en) * 2019-10-11 2020-02-25 科大讯飞(苏州)科技有限公司 Action matching method and related product
CN111753982A (en) * 2020-05-29 2020-10-09 中国科学技术大学 Man-machine integration autonomy boundary switching method and system based on reinforcement learning
CN111844034A (en) * 2020-07-17 2020-10-30 北京控制工程研究所 End-to-end on-orbit autonomous filling control system and method based on deep reinforcement learning
CN112605983A (en) * 2020-12-01 2021-04-06 浙江工业大学 Mechanical arm pushing and grabbing system suitable for intensive environment
CN113840697A (en) * 2019-05-28 2021-12-24 川崎重工业株式会社 Control device, control system, mechanical device system, and control method
CN114397680A (en) * 2022-01-17 2022-04-26 腾讯科技(深圳)有限公司 Error model determination method, device, equipment and computer readable storage medium
CN114571473A (en) * 2020-12-01 2022-06-03 北京小米移动软件有限公司 Control method and device for foot type robot and foot type robot
CN114728396A (en) * 2019-11-15 2022-07-08 川崎重工业株式会社 Control device, control system, robot system, and control method
WO2022160430A1 (en) * 2021-01-27 2022-08-04 Dalian University Of Technology Method for obstacle avoidance of robot in the complex indoor scene based on monocular camera
CN115213884A (en) * 2021-06-29 2022-10-21 达闼科技(北京)有限公司 Interaction control method and device for robot, storage medium and robot
CN115243840A (en) * 2020-10-28 2022-10-25 辉达公司 Machine learning model for mission and motion planning
CN115237113A (en) * 2021-08-02 2022-10-25 达闼机器人股份有限公司 Method for robot navigation, robot system and storage medium
US11556724B1 (en) * 2017-09-01 2023-01-17 Joseph William Barter Nervous system emulator engine and methods using same
CN115700414A (en) * 2022-11-07 2023-02-07 中建三局第一建设安装有限公司 Robot motion error compensation method
CN116007616A (en) * 2023-01-18 2023-04-25 天津大学 Self-adaptive map construction system and method based on network state decision
CN116265202A (en) * 2021-12-16 2023-06-20 腾讯科技(深圳)有限公司 Control method and device of robot, medium and robot
CN116278880A (en) * 2021-12-20 2023-06-23 华为技术有限公司 Charging equipment and method for controlling mechanical arm to charge
CN116300909A (en) * 2023-03-01 2023-06-23 东南大学 Robot obstacle avoidance navigation method based on information preprocessing and reinforcement learning
CN116533249A (en) * 2023-06-05 2023-08-04 贵州大学 Mechanical arm control method based on deep reinforcement learning
US11717969B1 (en) * 2022-07-28 2023-08-08 Altec Industries, Inc. Cooperative high-capacity and high-dexterity manipulators
CN116594289A (en) * 2023-05-22 2023-08-15 广东电网有限责任公司 Robot gesture pre-adaptation control method and device, electronic equipment and storage medium
CN116679710A (en) * 2023-06-16 2023-09-01 浙江润琛科技有限公司 Robot obstacle avoidance strategy training and deployment method based on multitask learning
CN117369349A (en) * 2023-12-08 2024-01-09 如特数字科技(苏州)有限公司 Management system of remote monitoring intelligent robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6744679B2 (en) * 2016-12-07 2020-08-19 深▲セン▼前▲海▼▲達▼▲闥▼▲雲▼端智能科技有限公司Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Human-machine hybrid decision making method and apparatus
KR20190104483A (en) * 2019-08-21 2019-09-10 엘지전자 주식회사 Robot system and Control method of the same
KR20210129519A (en) * 2020-04-20 2021-10-28 삼성전자주식회사 Robot device and control method thereof

Patent Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017030135A (en) * 2015-07-31 2017-02-09 ファナック株式会社 Machine learning apparatus, robot system, and machine learning method for learning workpiece take-out motion
CN108227691A (en) * 2016-12-22 2018-06-29 深圳光启合众科技有限公司 Control method, system and the device and robot of robot
CN106951923A (en) * 2017-03-21 2017-07-14 西北工业大学 A kind of robot three-dimensional shape recognition process based on multi-camera Vision Fusion
US11556724B1 (en) * 2017-09-01 2023-01-17 Joseph William Barter Nervous system emulator engine and methods using same
CN109998421A (en) * 2018-01-05 2019-07-12 艾罗伯特公司 Mobile clean robot combination and persistence drawing
CN109407518A (en) * 2018-12-20 2019-03-01 山东大学 The autonomous cognitive approach of home-services robot operating status and system
CN109760030A (en) * 2019-01-26 2019-05-17 温州大学 A kind of robot control system based on deep learning
CN113840697A (en) * 2019-05-28 2021-12-24 川崎重工业株式会社 Control device, control system, mechanical device system, and control method
CN110495819A (en) * 2019-07-24 2019-11-26 华为技术有限公司 Control method, robot, terminal, server and the control system of robot
CN110838353A (en) * 2019-10-11 2020-02-25 科大讯飞(苏州)科技有限公司 Action matching method and related product
CN114728396A (en) * 2019-11-15 2022-07-08 川崎重工业株式会社 Control device, control system, robot system, and control method
CN111753982A (en) * 2020-05-29 2020-10-09 中国科学技术大学 Man-machine integration autonomy boundary switching method and system based on reinforcement learning
CN111844034A (en) * 2020-07-17 2020-10-30 北京控制工程研究所 End-to-end on-orbit autonomous filling control system and method based on deep reinforcement learning
CN115243840A (en) * 2020-10-28 2022-10-25 辉达公司 Machine learning model for mission and motion planning
CN114571473A (en) * 2020-12-01 2022-06-03 北京小米移动软件有限公司 Control method and device for foot type robot and foot type robot
CN112605983A (en) * 2020-12-01 2021-04-06 浙江工业大学 Mechanical arm pushing and grabbing system suitable for intensive environment
WO2022160430A1 (en) * 2021-01-27 2022-08-04 Dalian University Of Technology Method for obstacle avoidance of robot in the complex indoor scene based on monocular camera
CN115213884A (en) * 2021-06-29 2022-10-21 达闼科技(北京)有限公司 Interaction control method and device for robot, storage medium and robot
CN115237113A (en) * 2021-08-02 2022-10-25 达闼机器人股份有限公司 Method for robot navigation, robot system and storage medium
CN116265202A (en) * 2021-12-16 2023-06-20 腾讯科技(深圳)有限公司 Control method and device of robot, medium and robot
CN116278880A (en) * 2021-12-20 2023-06-23 华为技术有限公司 Charging equipment and method for controlling mechanical arm to charge
CN114397680A (en) * 2022-01-17 2022-04-26 腾讯科技(深圳)有限公司 Error model determination method, device, equipment and computer readable storage medium
US11717969B1 (en) * 2022-07-28 2023-08-08 Altec Industries, Inc. Cooperative high-capacity and high-dexterity manipulators
CN115700414A (en) * 2022-11-07 2023-02-07 中建三局第一建设安装有限公司 Robot motion error compensation method
CN116007616A (en) * 2023-01-18 2023-04-25 天津大学 Self-adaptive map construction system and method based on network state decision
CN116300909A (en) * 2023-03-01 2023-06-23 东南大学 Robot obstacle avoidance navigation method based on information preprocessing and reinforcement learning
CN116594289A (en) * 2023-05-22 2023-08-15 广东电网有限责任公司 Robot gesture pre-adaptation control method and device, electronic equipment and storage medium
CN116533249A (en) * 2023-06-05 2023-08-04 贵州大学 Mechanical arm control method based on deep reinforcement learning
CN116679710A (en) * 2023-06-16 2023-09-01 浙江润琛科技有限公司 Robot obstacle avoidance strategy training and deployment method based on multitask learning
CN117369349A (en) * 2023-12-08 2024-01-09 如特数字科技(苏州)有限公司 Management system of remote monitoring intelligent robot

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
何宛余. An Introductory Guide to Artificial Intelligence for Architects. Tongji University Press, Shanghai, 2021, pp. 155-157. *
涂远泯. Research on pose estimation methods for mobile robots based on multi-sensor fusion technology. Manufacturing Automation, vol. 45, no. 11, 2023-11-30, full text. *
朱大昌. Fundamentals of Robot Mechanisms. China Machine Press, 2020, pp. 14-30. *
郭广颂. Intelligent Control Technology. 2014, pp. 116-117. *
黄石生. Novel Arc Welding Power Sources and Their Energy-Storage Control. China Machine Press, 2000, pp. 231-232. *

Also Published As

Publication number Publication date
CN117697769A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US11161241B2 (en) Apparatus and methods for online training of robots
CN107139179B (en) Intelligent service robot and working method
US9630318B2 (en) Feature detection apparatus and methods for training of robotic navigation
CN101825903B (en) Water surface control method for remotely controlling underwater robot
CN109241912B (en) Target identification method based on brain-like cross-media intelligence and oriented to unmanned autonomous system
Sim et al. Internet-based teleoperation of an intelligent robot with optimal two-layer fuzzy controller
Pramila et al. Design and Development of Robots for Medical Assistance: An Architectural Approach
WO2023178737A1 (en) Spiking neural network-based data enhancement method and apparatus
Fu et al. Vision-based obstacle avoidance for flapping-wing aerial vehicles
WO2019147357A1 (en) Controlling and commanding an unmanned robot using natural interfaces
CN113848984A (en) Unmanned aerial vehicle cluster control method and system
CN110806758B (en) Unmanned aerial vehicle cluster autonomous level self-adaptive adjustment method based on scene fuzzy cognitive map
CN112123338A (en) Transformer substation intelligent inspection robot system supporting deep learning acceleration
CN117697769B (en) Robot control system and method based on deep learning
CN109760030A (en) A kind of robot control system based on deep learning
CN110673642B (en) Unmanned aerial vehicle landing control method and device, computer equipment and storage medium
US10812904B2 (en) Acoustic equalization method, robot and AI server implementing the same
Demidova et al. Autonomous navigation algorithms based on cognitive technologies and machine learning
Li et al. Guest editorial for special issue on human-centered intelligent robots: issues and challenges
KR20210141262A (en) Variable pre-swirl stator and method for regulating angle thereof
Sun et al. Tracking control for a biomimetic robotic fish guided by active vision
Mahmud et al. Intelligent autonomous vehicle navigated by using artificial neural network
Rios-Cabrera et al. Dynamic categorization of 3D objects for mobile service robots
CN113554700B (en) Invisible light aiming method
Angelopoulou et al. Brain-inspired intelligent systems for daily assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant