CN112668687B - Cloud robot system, cloud server, robot control module and robot - Google Patents


Info

Publication number
CN112668687B
Authority
CN
China
Prior art keywords
robot
digital twin
data
module
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011386136.2A
Other languages
Chinese (zh)
Other versions
CN112668687A (en)
Inventor
黄晓庆
张站朝
马世奎
王斌
董文锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority to CN202011386136.2A priority Critical patent/CN112668687B/en
Publication of CN112668687A publication Critical patent/CN112668687A/en
Priority to PCT/CN2021/124506 priority patent/WO2022116716A1/en
Application granted granted Critical
Publication of CN112668687B publication Critical patent/CN112668687B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models

Abstract

The embodiment of the invention relates to the technical field of robots, and discloses a cloud robot system, a cloud server, a robot control module and a robot. The cloud robot system comprises a cloud server and a robot control module. The cloud server comprises a robot access and data exchange module, a knowledge and data intelligent module, an artificial enhancement machine intelligent module, a digital twin operation core module and a robot big data module. The robot control module is located in an entity robot and communicates with the cloud server through a private network. In this manner, the embodiment of the invention realizes a dynamically closed-loop, continuously evolving intelligent cloud robot system.

Description

Cloud robot system, cloud server, robot control module and robot
Technical Field
The embodiment of the invention relates to the technical field of robots, in particular to a cloud robot system, a cloud server, a robot control module and a robot.
Background
At present, cloud robots are increasingly widely applied as an implementation of robots. Application scenarios that are dangerous, dirty, repetitive or difficult to carry out place higher requirements on cloud robots, creating market demand for intelligent robots that can functionally replace humans.
The cloud robot in the prior art is not intelligent enough, and cannot meet market demands. How to construct a more intelligent cloud robot system architecture is a problem to be solved urgently at present.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a cloud robot system, a cloud server, a robot control module, and a robot, so as to solve the problem that in the prior art, a cloud robot implementation scheme is not intelligent enough.
According to an aspect of an embodiment of the present invention, a cloud robot system is provided, including a cloud server and a robot control module, where the cloud server includes a robot access and data exchange module, a knowledge and data intelligent module, an artificial enhancement machine intelligent module, a digital twin operation core module, and a robot big data module, the robot control module is located in an entity robot, and the robot control module and the cloud server communicate with each other through a dedicated network; wherein:
the robot access and data exchange module is used for performing robot service process registration and robot access authentication, receiving multi-source data sent by the robot control module, and performing data exchange, fusion and distribution;
the knowledge and data intelligent module is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map;
the digital twin operation core module comprises a digital twin world and a digital twin body, and the robot control module comprises a digital twin copy, wherein the digital twin world is constructed based on the three-dimensional environment semantic map, the digital twin body is a physical model with the same physical attributes as the entity robot, and the digital twin copy is a copy of the digital twin body operated on the cloud server; the digital twin is used for training and online running of robot skills and applications in the digital twin world based on a multi-domain knowledge map of the robot service, the robot behavior action library and the multi-source data, and the digital twin copy synchronously controls the entity robot to execute the robot skills and applications according to the robot skills and applications executed by the digital twin;
the artificial enhancement machine intelligent module supports the digital twin operation core module to train and operate the skills and applications of the robot on line through a language AI, a vision AI, a motion AI, a multi-mode AI and an artificial enhancement AI;
the robot big data module is used for storing and analyzing the multi-source data, and feeding the analyzed multi-source data back to the digital twin operation core module for training and online operation of the robot skills and applications;
and the robot control module is also used for sending multi-source data to the robot access and data exchange module.
In an optional manner, the cloud server further includes a robot service application service platform, configured to configure the entity robot, and provide downloading of robot services.
In an alternative, the configuring the physical robot includes:
configuring one or more of a digital twin model of the physical robot, robot name, role, personality, application scenario and dialog, language parameters, network parameters, a list of user faces to be recognized, and corresponding robot services, wherein the application scenario is configured according to the three-dimensional environment semantic map.
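As a hedged illustration, the configuration described above might be represented as a simple record; all field names and example values below are hypothetical and not part of the claimed implementation:

```python
# Hypothetical robot configuration record; field names are illustrative only.
def make_robot_config(name, role, scenario, services, language="zh-CN"):
    """Build a configuration entry for an entity robot."""
    return {
        "digital_twin_model": f"{name}-twin-v1",  # id of the digital twin model
        "robot_name": name,
        "role": role,
        "personality": "friendly",
        "application_scenario": scenario,   # chosen from the 3D semantic map
        "language_params": {"locale": language},
        "network_params": {"network": "private"},
        "user_face_list": [],               # user faces to be recognized
        "services": services,               # corresponding robot services
    }

cfg = make_robot_config("greeter-01", "receptionist", "hotel-lobby",
                        ["welcome", "guide"])
```

Such a record could plausibly be produced by the business application service platform when an entity robot is provisioned.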
In an optional manner, the cloud server further includes a robot open platform for providing a robot service development interface for a developer to perform the robot service development.
In an alternative approach, the robot service is an application based on the digital twin development and training, the robot service development including digital twin development, robot behavior and action editing, and robot business behavior blueprint editing.
In an optional manner, the digital twin operation core module is further configured to: in the training process of the robot skills and applications executed by the digital twin body in the digital twin world, if the numerical evaluation of the completion condition of the robot skills and applications executed by the digital twin body exceeds a first preset threshold value, the completion of the training of the robot skills and applications is determined, and if the completion of the training of the robot skills and applications is determined, the trained robot skills and applications are loaded to the robot control module for synchronous commissioning;
the robot control module is further configured to: loading and synchronously commissioning the trained robot skills and applications;
the digital twinning core module is further configured to: and if the numerical evaluation of the robot skill and the application completion condition after the trial operation training of the robot control module exceeds a second preset threshold value, releasing the service corresponding to the robot skill and the application to the robot business application service platform.
In an alternative, the digital twin operation core module further comprises a first game engine for loading the digital twin and the digital twin world, operating and updating the digital twin world, and operating the behavior and actions of the digital twin;
the robot control module further comprises a second game engine for running the digital twin replica;
the first game engine and the second game engine are used for jointly driving the behaviors and actions of the digital twin and the digital twin copy to be synchronously executed.
In an optional manner, the digital twin operation core module is further configured to: synchronizing behaviors and actions of the digital twin to a digital twin copy on the robot control module over the private network;
and the digital twin copy synchronously controls the entity robot to execute the behaviors and actions according to the behaviors and actions of the digital twin body.
In an optional manner, the robot control module is further configured to: and sending the current environment change information acquired by the sensor of the entity robot and the self behavior and action change information of the entity robot to the digital twin operation core module so as to enable the digital twin to keep the behavior and action synchronous with the entity robot.
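A minimal sketch of the bidirectional synchronization described above; the classes and the frame layout are assumptions for illustration. The cloud twin streams action frames to the on-robot replica, and sensed changes are reported back so the twin stays consistent with the entity robot:

```python
# Downlink: digital twin -> digital twin copy -> physical joints.
# Uplink: sensed environment/behavior changes -> digital twin.
class DigitalTwin:
    def __init__(self):
        self.pose = {}
    def act(self, frame):
        self.pose.update(frame)      # twin executes the action in the cloud
        return dict(frame)           # frame forwarded over the private network

class TwinReplica:
    def __init__(self, robot):
        self.robot = robot
    def apply(self, frame):
        self.robot.update(frame)     # replica drives the entity robot

robot_state = {}
twin, replica = DigitalTwin(), TwinReplica(robot_state)
replica.apply(twin.act({"joint_arm": 30}))   # downlink: twin -> robot
twin.pose.update({"joint_arm": 28})          # uplink: sensed correction
```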
In an alternative approach, the multi-domain knowledge graph comprises a semantic network of relationships between entities related to a robot service, the semantic network comprising information describing external objective facts and knowledge that is a generalization and summary of external objective laws;
the robot behavior action library comprises human behaviors and actions learned by the robot through simulation;
the three-dimensional environment semantic map is semantic data of a three-dimensional environment where the entity robot is located, and is obtained in the following mode: and fusing the multi-source data to obtain three-dimensional environment data, and performing map modeling through semantic segmentation based on the three-dimensional environment data to construct the three-dimensional environment semantic map.
In an optional manner, the building the three-dimensional environment semantic map includes:
and constructing a multi-semantic-fusion three-dimensional environment semantic map by combining application scene recognition, object detection recognition, geometric model representation, spatial semantic relation and semantic annotation based on deep learning.
In an alternative approach, the language AI includes automatic speech recognition, natural language understanding, and speech synthesis; the vision AI comprises face recognition, human body recognition, portrait recognition, object recognition and environmental scene recognition; the motion AI comprises external force sensing perception, autonomous movement and navigation and limb actions; the multi-mode AI is the ability of the language AI, the vision AI and the sports AI and the ability of the multi-factor combination output, wherein the multi-factor combination output comprises the input of the language AI, the vision AI and the sports AI and the voice output and the sports output;
the artificially enhanced AI is to: and providing forward excitation input for system reinforcement learning through manual intervention operation, wherein the language AI, the vision AI, the movement AI and the multi-mode AI are in online running states during the manual intervention operation.
In an optional manner, the artificially enhanced AI is further configured to: if a robot service abnormality occurs, receive operations performed by the service trainer on the digital twin within the trainer's control authority.
In an alternative approach, the robot big data module is further used for storing and analyzing one or more of system operation and service log data, user data, manually enhanced operation data and system performance data.
In an optional mode, the multi-source data comprises one or more of audio and video data, three-dimensional environment point cloud data, robot behavior and action data and multi-modal interaction data acquired by a sensor of the entity robot.
In an optional manner, the robot big data module is further configured to:
performing data extraction, data conversion, data loading, data classification, data labeling, anomaly detection and data cleaning on the stored data to obtain processed data;
and performing real-time analysis and off-line analysis on the processed data, performing numerical evaluation on the operation of each robot skill and application in the cloud robot system, wherein the numerical evaluation is used for determining whether the training of the robot skill and application is completed or not, and triggering the digital twin operation core module to retrain and update the robot skill and application.
In an optional mode, the numerical evaluation comprises the actual recognition rate of an AI algorithm and a model, the satisfaction degree of a man-machine conversation reply, the service response duration and the efficiency and the stability of a robot business behavior blueprint;
The robot big data module is further configured to: classify the objective conclusions of the numerical evaluation to form prior knowledge, related services and related data.
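The numerical evaluation dimensions listed above could, for example, be aggregated into a single score by a weighted average; the weights and the normalization to [0, 1] below are illustrative assumptions:

```python
# Aggregate evaluation dimensions (recognition rate, dialogue satisfaction,
# response time, blueprint efficiency/stability) into one score.
def evaluate(metrics, weights=None):
    """Weighted average of normalized metric values in [0, 1]."""
    weights = weights or {k: 1.0 for k in metrics}
    total = sum(weights[k] for k in metrics)
    return sum(metrics[k] * weights[k] for k in metrics) / total

score = evaluate({
    "recognition_rate": 0.95,
    "dialogue_satisfaction": 0.85,
    "response_time": 0.90,       # assumed already normalized: higher is better
    "blueprint_stability": 0.80,
})
```

A score like this could feed the threshold comparisons that decide whether a skill's training is complete or must be retrained.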
According to another aspect of the embodiments of the present invention, there is provided a cloud server for controlling an entity robot, including a robot access and data exchange module, a knowledge and data intelligent module, an artificial enhancement machine intelligent module, a digital twin operation core module, and a robot big data module, wherein the cloud server and the entity robot communicate with each other through a private network; wherein:
the robot access and data exchange module is used for performing robot service process registration and robot access authentication, receiving multi-source data sent by the entity robot, and performing data exchange, fusion and distribution;
the knowledge and data intelligent module is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map;
the digital twin operation core module comprises a digital twin world and a digital twin body, wherein the digital twin world is constructed based on the three-dimensional environment semantic map, and the digital twin body is a physical model with the same physical attributes as the entity robot; the digital twin body is used for performing training and online running of robot skills and applications based on a multi-domain knowledge map of the robot service, the robot behavior action library and the multi-source data in the digital twin world so as to synchronously control the entity robot to perform the robot skills and applications;
the artificial enhancement machine intelligent module supports the digital twin operation core module to train and operate the skills and applications of the robot on line through a language AI, a vision AI, a motion AI, a multi-mode AI and an artificial enhancement AI; the robot big data module is used for storing and analyzing the multi-source data, and feeding the analyzed multi-source data back to the digital twin operation core module for training and online operation of the robot skills and applications.
According to another aspect of the embodiments of the present invention, there is provided a robot control module, wherein the robot control module and a cloud server communicate with each other through a dedicated network;
the robot control module comprises a digital twin copy, wherein the digital twin copy is a copy of a digital twin running on the cloud server; the digital twin copy synchronously controls an entity robot to execute the robot skills and applications according to the robot skills and applications executed by the digital twin copy;
the robot control module is further used for sending multi-source data to the cloud server, so that the digital twin body can execute training and online running of robot skills and applications in the digital twin world based on the multi-domain knowledge map of the robot service, the robot behavior action library and the multi-source data, and the entity robot can be synchronously controlled to execute the robot skills and applications through the digital twin copy.
According to another aspect of embodiments of the present invention, there is provided a robot including the robot control module as described above.
According to the embodiment of the invention, a digital twin world is constructed on a cloud server, and a digital twin body with the same physical attributes as the entity robot is adopted in the digital twin world for robot training and online operation. Synchronous control of the entity robot is realized by controlling the virtual digital twin body, which reduces the control difficulty and cost of completing business applications on the entity robot. Manual operation is introduced, with artificially enhanced AI serving as the forward excitation input of system reinforcement learning, to support the digital twin's training and online operation of robot skills and applications. Meanwhile, multi-source data acquired by the entity robot is fed back to the cloud server for the training and online operation of robot skills and applications, so that a dynamically closed-loop, continuously evolving intelligent robot system is realized.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical solutions of the embodiments may be more clearly understood and implemented according to the content of the description, and that the above and other objects, features and advantages of the embodiments may become more apparent, detailed embodiments of the invention are set forth below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic application diagram of a cloud robot system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a cloud robot system according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a framework of a cloud robot system according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of the operation of the robot service provided by the embodiment of the invention;
FIG. 5 is a schematic flow chart of robot service development provided by an embodiment of the present invention;
FIG. 6 is a schematic flow chart of an artificially enhanced AI operation provided by an embodiment of the invention;
fig. 7 is a schematic structural diagram of a cloud server provided in an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a robot control module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Considered from the perspective of developing robot intelligence toward human intelligence, an electronic brain as smart as the human brain would be enormous and cannot be realized on a single robot. In addition, because the data a single robot can access is limited, the machine learning and deep learning that require big-data training cannot be completed locally. Deep learning for artificial intelligence must therefore draw on data provided by a large number of robots, converged to the cloud and completed by a huge cloud "machine brain". This further explains why part of a robot's perception and cognition systems must be placed in the cloud, which is the inevitable direction of intelligent robot development.
Accordingly, an embodiment of the present invention provides a cloud robot system, and fig. 1 is an application schematic diagram of the cloud robot system provided in the embodiment of the present invention. As shown in fig. 1, the cloud server 10 and the entity robot 20 communicate with each other through a private network 30. Robot services are trained by the cloud server 10, and the cloud server 10 controls the entity robot 20 to execute the trained robot services. A robot service means executing preset actions in different application scenes to complete preset functions, such as welcome reception, mobile grabbing, security patrol, delivery and the like. A service is composed of applications, and an application combines several skills into its logic. For example, in table tennis, individual strokes such as chopping and looping are skills, while the application is the entity robot actually playing table tennis, and the service means that the entity robot can provide a table tennis training service. For another example, grabbing an article is a skill; using this skill, the entity robot can complete the application of delivering coffee to a person, and with that application the entity robot can complete a reception service of serving tea and pouring water.
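The skill, application and service hierarchy from the examples above can be sketched as nested records; the names follow the coffee-delivery example, and the representation itself is an assumption:

```python
# Skills are atomic capabilities; an application combines skills into logic;
# a service is composed of applications.
skills = {"grasp_object", "autonomous_navigation"}

applications = {
    "deliver_coffee": {"grasp_object", "autonomous_navigation"},
}

services = {
    "reception": ["deliver_coffee"],   # tea-serving reception service
}

def service_skills(service):
    """All skills a service ultimately depends on."""
    return set().union(*(applications[a] for a in services[service]))
```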
Fig. 2 is a schematic structural diagram of a cloud robot system according to an embodiment of the present invention. As shown in fig. 2, the cloud robot system 100 includes a cloud server 10 and a robot control module 21. The cloud server 10 comprises a robot access and data exchange module 11, a knowledge and data intelligent module 12, an artificial enhancement machine intelligent module 13, a digital twin operation core module 14 and a robot big data module 15. The robot control module 21 is located at the physical robot 20. The robot control module 21 and the cloud server 10 communicate with each other through a private network 30. The communication between the robot control module 21 and the cloud server 10 may be secured through the private network 30.
The robot access and data exchange module 11 is used for performing robot service process registration and robot access authentication, receiving multi-source data sent by the robot control module 21, and performing data exchange, fusion and distribution. Here, the service process refers to a service process of a program, that is, a microservice.
The knowledge and data intelligent module 12 is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map.
The digital twin operation core module 14 includes a digital twin world and a digital twin, and the robot control module 21 includes a digital twin copy. The digital twin world is constructed based on a three-dimensional environment semantic map, the digital twin is a physical model with the same physical attributes as the entity robot, and the digital twin copy is a copy of the digital twin running on the cloud server. The digital twin is used for performing training and online operation of robot skills and applications in the digital twin world based on a multi-domain knowledge graph, a robot behavior action library and multi-source data of robot services. And synchronously controlling the entity robot to execute the robot skills and applications according to the robot skills and applications executed by the digital twin.
The artificial enhanced machine intelligent module 13 supports the digital twin operation core module 14 to train and operate the skills and applications of the robot on line through multi-modal AI and artificial enhanced AI.
The robot big data module 15 is used for storing and analyzing multi-source data, and feeding back the analyzed multi-source data to the digital twin operation core module 14 for training and online operation of robot skills and applications.
The robot control module 21 is also used for sending multi-source data to the robot accessing and data exchanging module 11.
According to the embodiment of the invention, a digital twin world is constructed on the cloud server, and a digital twin with the same physical attributes as the entity robot is adopted in the digital twin world for robot training and online operation. Synchronous control of the entity robot is realized by controlling the virtual digital twin, which reduces the control difficulty and cost of completing business applications on the entity robot. Manual operation is introduced, with artificially enhanced AI serving as the forward excitation input of system reinforcement learning, to support the digital twin's training and online operation of robot skills and applications. Meanwhile, multi-source data acquired by the entity robot is fed back to the cloud server for the training and online operation of robot skills and applications, so that a dynamically closed-loop, continuously evolving cloud-end system is realized.
The cloud robot system is further described below. Fig. 3 is a schematic diagram of a framework of a cloud robot system according to an embodiment of the present invention. As shown in fig. 3, the cloud robot system adopts a distributed computing architecture of "cloud (brain) -net (nerve) -end (body)". The cloud is located in a cloud server, the network refers to a special network, and the end is located in the entity robot.
The artificial enhancement machine intelligent module of the cloud brain organically fuses multimodal AI (Artificial Intelligence) capabilities, such as the robot's language AI, vision AI, motion AI and environment cognition capabilities, with artificially enhanced AI to form the perception and cognition capability of the cloud brain, and realizes high-level human intelligence such as logical inference and intelligent decision-making by combining human prior knowledge with data intelligence. Through the digital twin operation core module, the digital twin of the entity robot runs in the virtual digital twin world and executes robot skills and applications. All behaviors and actions of the digital twin synchronously control, through the private network, the digital twin copy running in the robot control module of the entity robot, and the instruction and data sequence executed by the digital twin copy drives the entity robot to synchronously execute all behaviors and actions of the digital twin, so that the target task of the entity robot in the application scene is completed, the whole cloud robot system is more intelligent, and users can use the entity robot to provide intelligent services for various industries in a simple, safe and reliable manner. One or more robot actions constitute a meaningful behavior.
In this embodiment, the cloud server includes a robot service application service platform and a robot open platform, in addition to the robot access and data exchange module, the knowledge and data intelligent module, the artificial enhancement machine intelligent module, the digital twin operation core module and the robot big data module of the above embodiments.
The functions of each module and platform of the physical robot and the cloud server are further described in detail below.
1. Robot access and data exchange module
The robot access and data exchange module is used for registering a robot service process and authenticating robot access, receiving multi-source data sent by the robot control module, and exchanging, fusing and distributing data. The multi-source data comprises one or more of audio and video data, three-dimensional environment point cloud data, robot behavior and action data and multi-modal interaction data which are acquired through a sensor of the entity robot. The robot behavior and motion data are mainly robot joint motion frame data. The data acquisition mode of the robot sensor includes various modes such as vision, ultrasonic wave, laser and the like. The multi-modal interaction generally refers to human-computer interaction in various modes such as characters, voice, vision, actions and environments, and fully simulates the interaction mode between people.
The data exchange refers to exchange between multi-source data of the entity robot ascending and data (such as control commands, voice data, update data and the like) of the cloud server descending, for example, the data of the entity robot ascending is sent to the robot big data module, and the data of the cloud server descending is sent to the entity robot.
The distribution of data refers to distributing the uplink data to one or more services on the cloud server for different processing or analysis. A service is a program that provides function calls to other programs, such as a program, routine or process running in the background of an operating system. For example, audio and video data can be distributed to vision processing services, and also to visual monitoring by users and operators.
The fusion of data refers to that data of different sources or different structures are processed to form a standard data interface or are represented by a standard data structure. For example, for data of different sources and structures, such as audio-video unstructured data, robot behavior and action data, three-dimensional environment semantic map data, and the like, a standard data interface is formed by adding data descriptions, and the added descriptions include but are not limited to interface identification, SessionID (session ID), interface type, interface sequence, version, initiator, receiver, initiator module, receiver module, data identification ID, and the like. For the same category of data from different sources, such as lidar point cloud data and visual camera depth point cloud data, a standard data structure representation may be employed.
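A sketch of such a standard data interface, wrapping multi-source payloads in a common envelope that carries the descriptive fields listed above; the exact field names and types are assumptions:

```python
# Common envelope for multi-source data of different origins and structures.
from dataclasses import dataclass, field
import itertools

_seq = itertools.count(1)   # monotonically increasing interface sequence

@dataclass
class DataEnvelope:
    session_id: str            # SessionID
    interface_type: str        # e.g. "audio_video", "point_cloud"
    sender: str                # initiator
    receiver: str              # receiver
    data_id: str               # data identification ID
    version: str = "1.0"
    sequence: int = field(default_factory=lambda: next(_seq))
    payload: bytes = b""       # the wrapped source data

env = DataEnvelope("sess-42", "point_cloud", "robot-01",
                   "big-data-module", "pc-0001", payload=b"\x00\x01")
```

With such an envelope, lidar point clouds and visual-camera depth point clouds could share one data structure while keeping their distinct sources identified.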
2. Intelligent knowledge and data module
The knowledge and data intelligent module is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map. A multi-domain knowledge graph of robot service and a robot behavior action library belong to a human priori knowledge base. The three-dimensional environment semantic map is sensed and recognized by the entity robot through various sensors.
The multi-domain knowledge graph comprises a semantic network of relations between entities related to the robot service, the semantic network comprises information and knowledge, the information is used for describing external objective facts, and the knowledge is the induction and summary of external objective rules. In particular, the multi-domain knowledge graph includes, but is not limited to, knowledge graphs oriented to various vertical domains and industries and corpora for general natural language understanding, such as a character relationship knowledge graph, a hotel industry knowledge graph, a real estate industry knowledge graph, a chinese historical knowledge graph, and the like.
The robot behavior and action library contains the human behaviors and actions that the physical robot has learned by imitation, including but not limited to grasping a target object, autonomous positioning and navigation, raising a hand, bending at the waist, shaking hands, and the like.
The three-dimensional environment semantic map is semantic data about the three-dimensional environment in which the physical robot is located. It is a semantic-level data service for that environment: it describes the objects and relationships of the objective physical world in natural human language, is a digital representation of three-dimensional environment semantics that the physical robot can cognitively understand across application scenarios, helps the physical robot perceive and recognize the physical world, and is used to train the virtual digital twin (i.e., the digital twin robot, elaborated in detail later).
In some embodiments, the three-dimensional environment semantic map may be obtained as follows: fuse the multi-source data to obtain three-dimensional environment data, then perform map modeling through semantic segmentation on those data and construct the three-dimensional environment semantic map. The semantic segmentation operates on the multi-feature-fused three-dimensional environment data.
In some embodiments, the three-dimensional environment semantic map may be constructed by combining deep-learning-based application scene recognition, object detection and recognition, geometric model representation, spatial semantic relations, and semantic annotation into a multi-semantic-fusion semantic map. The map is stored in, and accessed through, a database.
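A minimal sketch of one entry in such a database-backed semantic map (the class, field names, and the example door object are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass, field

@dataclass
class SemanticNode:
    """One semantically annotated object in the 3D environment map."""
    label: str                      # natural-language class, e.g. "door"
    pose: tuple                     # (x, y, z) in the world coordinate frame
    geometry: str                   # geometric model reference
    relations: dict = field(default_factory=dict)  # spatial semantic relations

door = SemanticNode("door", (3.2, 0.0, 0.0), "box(0.9,0.1,2.0)",
                    {"part_of": "lobby", "left_of": "reception_desk"})

def describe(node):
    """Render the node as a human-readable semantic description."""
    rels = ", ".join(f"{k} {v}" for k, v in node.relations.items())
    return f"{node.label} at {node.pose}: {rels}"
```

`describe` illustrates the "human natural language" view of the map; the same node also carries the geometric data the twin needs for simulation.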
3. Digital twin operation core module
The digital twin operation core module comprises a digital twin world and a digital twin; correspondingly, the robot control module on the physical robot contains a digital twin copy (described in detail later). The digital twin world is constructed from the three-dimensional environment semantic map; the digital twin is a physical model with the same physical attributes as the physical robot; and the digital twin copy is a replica, on the robot control module, of the digital twin running on the cloud server. In the digital twin world, the digital twin trains and runs robot skills and applications online, based on the multi-domain knowledge graph, the behavior and action library, and the multi-source data of the robot service; the digital twin copy then controls the physical robot to execute, in synchrony, the skills and applications executed by the digital twin.
By adopting a digital twin with the same physical attributes as the physical robot, the embodiment of the invention enables low-cost training and trial-and-error for robot services. A digital twin world is constructed by fusing the robot's various sensors, so that the robot's digital twin can be trained and run online in real time within it; controlling the virtual digital twin then achieves synchronized control of the physical robot and lowers the control requirements for the physical robot to complete robot services.
Specifically, the behaviors and actions of the digital twin are synchronized over a dedicated network to the digital twin copy on the robot control module, and the copy controls the physical robot to execute those behaviors and actions in synchrony.
For training robot skills and applications, a first preset threshold can be set as the numerical evaluation threshold for training completion, and a second preset threshold as the numerical evaluation threshold for commissioning completion. While the digital twin trains a robot skill and application in the digital twin world, if the numerical evaluation of its completion of the skill and application exceeds the first preset threshold, training is deemed complete, and the trained skill and application are loaded onto the robot control module, which runs them in synchronized commissioning. If the numerical evaluation of the skill and application after commissioning on the robot control module exceeds the second preset threshold, the digital twin operation core module releases the corresponding service.
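The two-threshold lifecycle above can be sketched as a simple gate (the threshold values, score format, and function name are assumptions for illustration, not specified by the patent):

```python
# Hypothetical gate logic for the two evaluation thresholds.
FIRST_THRESHOLD = 0.90    # training-completion score (assumed value)
SECOND_THRESHOLD = 0.95   # commissioning-completion score (assumed value)

def training_gate(scores):
    """Return the lifecycle stage reached for a stream of evaluation scores.

    scores: iterable of (stage, value) pairs, stage in {"train", "trial"}.
    """
    trained = released = False
    for stage, value in scores:
        if stage == "train" and value > FIRST_THRESHOLD:
            trained = True               # load skill to the robot control module
        if stage == "trial" and trained and value > SECOND_THRESHOLD:
            released = True              # twin core releases the service
    if released:
        return "released"
    return "trained" if trained else "training"
```

Note that a trial score counts only after training has completed, mirroring the order of the two thresholds in the text.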
The numerical evaluation covers the actual recognition rate of the AI algorithms and models, the satisfaction with human-machine dialogue replies, the service response time, and the efficiency and stability of the robot business behavior blueprint. The robot big data module further classifies the objective conclusions of the numerical evaluation into prior knowledge, related services, and related data; the numerical evaluation itself can be performed by the robot big data module, described in detail later. If the numerical evaluation of a trained skill and application after commissioning exceeds the second preset threshold, the corresponding service is released and put into online operation. If a subsequent evaluation of a robot service already trained and in online operation fails to exceed the second preset threshold, retraining and updating of the robot's skills and applications is triggered.
It can be understood that the digital twin operation core module is an environment service running continuously online in the cloud. The digital twin is a virtual robot built by 1:1 geometric modeling of the physical robot's shape, structure, and appearance and by simulating each of its movable intelligent joints (including but not limited to the motor, accelerator, damping parameters, and the like); the physical model can be realized through methods such as design-model updating and three-dimensional reconstruction. In addition, the sensors of the physical robot must also be physically simulated. The physical simulation covers gravity and collision simulation, and physical materials are applied to express properties such as friction and light reflection, which influence the robot's behavior in a given environment.
The digital twin world is a three-dimensional semantic map data service that virtually mirrors the physical world in which the physical robot is located. It is a digital representation of three-dimensional environment semantics that the robot can cognitively understand across application scenarios; it helps the robot perceive and recognize the physical world and provides an interactive digital semantic environment for the cloud server's real-time online robot operation services. Meanwhile, the physical robot's various sensors capture changes in the environment, which are synchronized into the digital twin world. The digital twin world is also used for background (offline) training of the various digital twins, guaranteeing optimal operation strategies, behaviors, and actions when the physical robot runs online.
4. Artificially enhanced machine intelligence module
The artificially enhanced machine intelligence module supports the digital twin operation core module in training and running robot skills and applications online, through language AI, vision AI, motion AI, multimodal AI, and artificially enhanced AI.
The language AI includes automatic speech recognition, natural language understanding, and speech synthesis. The vision AI includes, but is not limited to, face recognition, human body recognition, portrait recognition, recognition of various objects, environmental scene recognition, and other visual perception. The motion AI includes external-force sensing, autonomous movement and navigation, various limb actions, and the like. The multimodal AI refers to the ability, beyond the single AI capabilities above, to take the language, vision, and external-force inputs together and produce combined outputs across multiple factors such as speech output and motion output. The artificially enhanced AI provides positive-excitation input for the system's reinforcement learning through manual intervention; during such intervention the language, vision, motion, and multimodal AIs remain online, realizing human-machine cooperation, while human augmentation compensates for the uncertainty caused by the inexplicability of current AI and fundamentally guarantees safety and robustness. All of the single-modality and multimodal AI capabilities above are exposed as APIs (Application Programming Interfaces) or SDKs (Software Development Kits) to support the digital twin and digital twin world in the digital twin operation core module, and to support robot business application management. Through artificially enhanced AI, the cloud robot system gradually approaches human intelligence, and important decisions of the cloud server can be configured to be made manually, keeping the cloud brain under human control.
The language AI covers AI capabilities such as speech recognition, natural language understanding, dialogue knowledge bases, industry-domain knowledge graphs, and speech synthesis. The vision AI supports capabilities such as face recognition, human body recognition, object recognition, and visual positioning and navigation. The motion AI supports autonomous robot positioning, navigation movement, autonomous obstacle avoidance, robot self-balancing, vision-guided grasping, common actions such as hand-waving, dancing, and the training and generation of robot behaviors and actions. The three-dimensional environment semantic map capability refers to three-dimensional semantic cognition of the physical environment, formed from scene recognition and cognition, 2D/3D object recognition, three-dimensional pose in the world coordinate system, 3D reconstruction and semantic segmentation of the physical environment, semantic description of the scene, and so on.
In some embodiments, the AI capabilities described above can be achieved with various deep learning, machine learning, deep reinforcement learning, and kinematic planning algorithms. Deep learning algorithms may include Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Neural Networks (DNN), Fast Region-based Convolutional Networks (Fast R-CNN), YOLO (You Only Look Once), the Single Shot MultiBox Detector (SSD), Long Short-Term Memory networks (LSTM), deep bidirectional language models (Embeddings from Language Models, ELMo), Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-Training (GPT), and so on.
In some embodiments, the artificially enhanced AI is further configured to: if a robot service abnormality occurs, accept the service trainer's operations on the digital twin within the trainer's control authority. The specific operations of the service trainer are described in further detail below.
5. Robot big data module
The robot big data module stores and analyzes multi-source data, and the analyzed data are fed back to the digital twin operation core module for training and online operation of robot skills and applications. The module also stores and analyzes one or more of system operation and service log data, user data, artificially enhanced operation data, and system performance data. User data refers to user identity information, multi-dimensional portrait attributes, and the like. Artificially enhanced operation data refers to the records the system keeps of manual service operations, or the identification data generated during them.
Data analysis mainly means that, over the stored data, the robot big data module further performs data extraction, transformation, loading, classification, labeling, anomaly detection, and cleaning to obtain processed data; it then performs real-time and offline analysis on the processed data and numerically evaluates the operation of each robot service in the cloud robot system. This numerical evaluation is used to trigger the digital twin operation core module to retrain and update robot skills and applications. Data extraction, transformation, and loading are together called ETL (Extract-Transform-Load): the process of extracting data from a source, transforming it, and loading it to a destination.
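The ETL and cleaning steps can be sketched as follows (a toy in-memory pipeline; the record fields, the negative-duration anomaly rule, and the 1000 ms "slow" cutoff are illustrative assumptions):

```python
# Toy ETL pass over robot log records: extract -> transform -> load.
def extract(source):
    """Pull raw records from a source (here just an in-memory list)."""
    return list(source)

def transform(records):
    """Clean and label records; drop anomalies (negative durations)."""
    out = []
    for r in records:
        if r.get("response_ms", -1) < 0:
            continue                      # anomaly detection + data cleaning
        r["label"] = "slow" if r["response_ms"] > 1000 else "ok"
        out.append(r)                     # data labeling / classification
    return out

def load(records, store):
    """Load the processed records into the destination store."""
    store.extend(records)
    return store

warehouse = []
raw = [{"service": "welcome", "response_ms": 240},
       {"service": "patrol", "response_ms": -5},     # corrupted record
       {"service": "delivery", "response_ms": 1800}]
load(transform(extract(raw)), warehouse)
```

Downstream analysis would then compute the numerical evaluations (e.g. the share of "slow" services) from the loaded store.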
Specifically, the numerical evaluation covers the actual recognition rate of the AI algorithms and models, the satisfaction with human-machine dialogue replies, the service response time, and the efficiency and stability of the robot business behavior blueprint. For example, if feedback data gathered while the physical robot executes a service indicate that its behaviors and actions are not human-like, the actual recognition rate of the AI algorithm and model is evaluated as low (for example, a specific recognition-rate value is computed), and the numerical evaluation result is fed back to the digital twin operation core module for retraining and updating the robot skills and applications. The numerical evaluation is also used to assess the robot business behavior blueprint: it judges whether the behavior logic the blueprint represents is optimal and, if not, feeds this back to the digital twin operation core module so the blueprint can be optimized.
Data analysis also includes generating user portraits and associated knowledge from human-machine interaction.
Through this analysis, feedback information on each robot service of the cloud robot system is formed, triggering retraining and updating of robot skills and applications (in particular their algorithms and models), so that the whole cloud robot system becomes a complete, continuously optimizing closed-loop system.
6. Robot open platform
The robot open platform provides a robot service development interface for developers to develop robot services. Robot services are developed and trained on the basis of the digital twin; development comprises digital twin development, robot behavior and action editing, and robot business behavior blueprint editing. The robot business behavior blueprint represents the robot's behavior logic.
The robot open platform offers various robot development kits, such as an integrated development environment, digital twin model construction, a behavior and action editor, a blueprint editor, and behavior task orchestration, providing digital-twin-based robot service development and training.
Through the robot open platform, developers can develop and train robot services based on the physical robot's digital twin, so that robot intelligence evolves continuously. Robot service developers are given a visual orchestration interface for calling various robots, mobile smart devices, autonomous vehicles, and the like to fulfill the functions of the corresponding service scenario, making robot service development simple and fast.
7. Robot business application service platform
The robot business application service platform is used to configure the physical robot and to provide downloads of robot services. Configuring the physical robot mainly means configuring one or more of its digital twin model, robot name, role, personality, application scenario and dialogues, language parameters, network parameters, the list of user faces to recognize, and the corresponding robot services; the application scenario is configured according to the three-dimensional environment semantic map. After robot skills and applications are trained in the digital twin operation core module, the platform further releases the services corresponding to them for the physical robot to download and apply. Roles include receptionist, patrol officer, courier, and so on, and can be configured to actual application needs; personality options include fast and slow; and for the service venue, three-dimensional environment semantic map scenes such as business halls, residential communities, and hotels can be acquired in advance according to the robot's application scenario.
In some embodiments, the robot business application service platform comprises a robot business management module and a robot application market. The robot business management module implements the configuration function: facing the robot service application scenarios of each industry, it configures the relevant attributes of robot roles and the corresponding robot services, which mainly include, but are not limited to, welcome reception, mobile capture, security patrol, delivery skills, and the like. The robot application market mainly supports downloading robot services directly into the digital twin operation core module for commissioning or operation.
8. Physical robot
The physical robot comprises a robot body and a robot control module.
The robot body comprises one or more intelligent flexible actuators, various sensors, and a local computing unit.
The intelligent flexible actuator integrates a high-torque-density servo motor, a motor driver, a high-precision encoder, and a precision reducer into a compact, compliant unit serving as a robot joint.
The sensors include, but are not limited to: lidar, ultrasonic radar, millimeter-wave radar, a 3D depth vision camera, an RGB camera, a binocular Simultaneous Localization and Mapping (SLAM) camera, an Inertial Measurement Unit (IMU), an air-quality detector, a temperature and humidity detector, and the like.
The local computing unit mainly performs preprocessing, motion control, and execution: it preprocesses the environmental data (i.e., the multi-source data) collected by the sensors and performs perception detection and recognition on them, while also applying motion control to the robot joints, thereby executing behaviors and actions such as robot movement and limb motions.
The robot control module is located in the physical robot and communicates with the cloud server over a dedicated network.
In some embodiments, the robot control module is further configured to send the current environment-change information captured by the physical robot's sensors, together with the robot's behavior and action changes, to the digital twin operation core module, so that the digital twin and the physical robot stay synchronized in behavior and action.
In addition, the robot control module sends the multi-source data preprocessed by the local computing unit to the robot access and data exchange module, which exchanges, fuses, and distributes the data. The robot control module can also download released robot services from the robot application market so that the physical robot can execute them.
The robot control module comprises a communication unit and a computation processing unit coupled to each other. The communication unit supports network communication modes such as WiFi, 4G/5G, and Ethernet; it connects to the cloud server over a dedicated network, forming a secure connection channel and a network isolation domain with it. The robot control module is also connected to the robot body for control and data transfer. All behaviors and actions of the digital twin copy on the robot control module are executed in full synchrony with the robot body, and the module's screen can present the behaviors and actions of the digital twin copy.
In some embodiments, game engine technology is employed in the digital twin operation core module. The module further comprises a first game engine that loads the digital twin and the digital twin world, runs and updates the digital twin world, and runs the digital twin's behaviors and actions. The robot control module further comprises a second game engine that runs the digital twin copy. Together, the two engines drive the digital twin and its copy to execute behaviors and actions synchronously.
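The twin-to-copy synchronization can be sketched as a publish/apply loop (class names and state fields are hypothetical; a real system would push state over the dedicated network rather than an in-process call):

```python
# Minimal sketch: the cloud-side twin publishes state updates, the on-robot
# copy applies them, keeping both sides in lock-step.
class DigitalTwin:
    def __init__(self):
        self.state = {"joint_angles": [0.0, 0.0], "action": "idle"}
        self.subscribers = []

    def execute(self, action, joint_angles):
        """Run a behavior in the twin world and push it to every copy."""
        self.state = {"joint_angles": joint_angles, "action": action}
        for copy in self.subscribers:        # push over the dedicated network
            copy.sync(dict(self.state))

class TwinCopy:
    """Runs on the robot control module; drives the robot body."""
    def __init__(self):
        self.state = None

    def sync(self, state):
        self.state = state                   # then actuate the physical joints

twin, copy = DigitalTwin(), TwinCopy()
twin.subscribers.append(copy)
twin.execute("wave_hand", [0.3, 1.2])
```

In the reverse direction, sensor feedback from the robot would update the twin, closing the synchronization loop described later in step 405.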
The game engine is a complex system of many subsystems: modeling, animation (the movement of the physical robot as mapped by the digital twin), lighting and shadow, special effects, the physics system, collision detection, file management, networking features, editing tools, plug-ins, and so on.
Animation comes in two types: skeletal animation, in which a built-in character skeleton drives the object's motion, and model animation, which deforms the model directly.
Lighting and shadow refers to how light sources in an application scene affect the people and objects within it. Basic optical principles such as refraction and reflection, as well as advanced effects such as dynamic and colored light sources, are realized by the game engine.
The game engine provides a physics system so that object motion follows fixed physical laws. For example, when a character jumps, the gravity value set in the system determines how high it can jump and how fast it falls; the physics system likewise determines the flight path of a thrown object and the pitching of the robot as it moves.
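The gravity rule can be made concrete with elementary kinematics: for take-off speed v and gravity g, the jump height is h = v^2 / (2g), so raising the engine's gravity value lowers the jump (function names are illustrative, and air resistance is ignored):

```python
# How the engine's gravity value fixes jump height and fall speed.
def jump_height(v, g=9.81):
    """Peak height reached with vertical take-off speed v under gravity g."""
    return v * v / (2 * g)

def fall_speed(h, g=9.81):
    """Speed on landing after falling from height h (no air resistance)."""
    return (2 * g * h) ** 0.5
```

For instance, with v = 4 m/s the character peaks at about 0.82 m under Earth gravity, but only about half that if g is doubled.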
Collision detection is a core part of the physics system; it detects the physical edges of objects in the digital twin world. It prevents two 3D objects from passing through each other when they collide, ensuring that an object hitting a wall neither passes through the wall nor knocks it over, because collision detection determines the positions and interaction of the two from the characteristics of the object and the wall. Together, the physics system and collision detection enable simulation of the digital twin's physical characteristics and its dynamic control.
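A minimal sketch of such a physical-edge test uses axis-aligned bounding boxes (AABB); real engines refine this with narrow-phase tests, but the principle of detecting overlap before objects interpenetrate is the same (the box coordinates below are invented):

```python
# Broad-phase collision detection with axis-aligned bounding boxes.
def aabb_overlap(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)). True if the boxes intersect."""
    (a0, a1), (b0, b1) = a, b
    # Boxes overlap iff their intervals overlap on every axis.
    return all(a0[i] <= b1[i] and b0[i] <= a1[i] for i in range(3))

robot = ((0.0, 0.0, 0.0), (0.5, 0.5, 1.7))   # robot's bounding box
wall = ((0.4, -2.0, 0.0), (0.6, 2.0, 3.0))   # a wall segment
```

When `aabb_overlap(robot, wall)` becomes true, the physics system stops the robot's motion instead of letting it pass through the wall.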
Rendering means: after the target 3D model is built, material maps are applied to its surfaces, which is like covering the skeleton of the robot's physical model with skin; finally the rendering engine computes the model, animation, lighting, special effects, and all other results in real time and displays them on screen.
In the embodiment of the invention, the game engine supplies the tools and server runtime environment: it loads the digital twin and the digital twin world, runs all of the twin's behaviors and actions, and runs and updates the digital twin world, so that the simulation fidelity of the twin and its world is high and the physical robot and its application-scene environment can be simulated realistically. The twin's behaviors and actions are synchronized over the robot's secure dedicated network to the digital twin copy on the robot control module, which runs and outputs instructions to control the physical robot synchronously. Meanwhile, the physical robot captures the current actual environment changes and its own state through its sensors and reports them to the cloud digital twin, keeping the cloud twin and the physical robot synchronized in behavior and state.
The operation mechanism and operation flow of the cloud robot system are further described in detail below.
1. Operation mechanism and process of robot business management and robot service
Fig. 4 is a schematic flow chart of the operation of the robot service provided by the embodiment of the invention. As shown in fig. 4, the method comprises the following steps:
step 401: configuration management and monitoring;
in this step, a user of the robot service (i.e., an administrator) logs in to the robot business management module to configure the physical robot and monitor the designated robot services; the administrator may also select and configure robot services from the robot application market.
Step 402: commissioning and download of the robot business behavior blueprint and of behaviors and actions;
in this step, the robot business behavior blueprint and all behavior and action data it uses undergo digital twin training on the digital twin operation core module. If the numerical evaluation of the digital twin's completion of the robot skill and application exceeds the first preset threshold, training is deemed complete, and the trained skill and application are loaded onto the robot control module for synchronized commissioning.
When the numerical evaluation of the commissioning process and of target-task completion exceeds the second preset threshold, the robot service associated with the blueprint and all the behavior and action data it uses can be released, and the released blueprint and behavior and action data can be downloaded to the robot control module.
Step 403: the behaviors and actions of the digital twin operation core module and the entity robot are synchronous;
in the step, the behaviors of the digital twin and the digital twin copy are driven to be synchronously executed through a first game engine of the digital twin operation core module and a second game engine of the robot control module.
Step 404: man-machine interaction and interaction between the entity robot and the environment;
in this step, the instructions of the digital twin copy running on the robot control module drive the physical robot according to the logic of the robot business behavior blueprint, and the physical robot interacts multimodally with the user and with the physical environment of the current application scene.
Step 405: entity robot event and status feedback;
in this step, the physical robot's sensors receive the user's voice input, events caused by current environmental changes, and changes in the robot's own state. These events and states are fed back into the digital twin and digital twin world of the cloud server's digital twin operation core module.
Step 406: and synchronizing the behaviors and actions of the digital twin operation core module and the entity robot based on feedback.
In this step, based on intelligent decision responses to time-ordered events and state changes, the cloud server drives the digital twin through the digital twin operation core module and synchronizes its behaviors and actions to the digital twin copy on the robot control module, thereby controlling the physical robot to complete the response behaviors in synchrony.
2. Robot service development process
Fig. 5 is a schematic flowchart of robot service development provided by an embodiment of the present invention. As shown in fig. 5, the method comprises the following steps:
step 501: a developer registers and logs in on the robot open platform;
the developer refers to a developer of the robot service.
Step 502: creating a digital twin model of a designated entity robot through an integrated development environment of a robot development suite;
step 503: developing the basic behaviors and actions of the digital twins by importing or editing based on the behavior and action editor of the robot development suite;
multiple nested combinations are supported when developing the basic behaviors and actions of the digital twin.
Step 504: developing a robot business behavior blueprint based on a blueprint editor of a robot development kit;
when developing the robot business behavior blueprint, importing sub-blueprints, nesting multiple blueprints, and the like are supported.
Step 505: the digital twin body conducts simulation training of robot skills and application for a plurality of times in the digital twin world, and the simulation training is continuously tried and mistaken until the running process of the training and the numerical evaluation of the target task completion condition exceed a first preset threshold;
step 506: packing the robot service and the blueprint through an integrated development environment of the robot development kit;
step 507: pushing or loading the service behavior blueprints of the digital twin bodies and the robots to a robot control module to perform synchronous commissioning;
step 508: when the numerical evaluation of the trial operation process and the target task completion condition exceeds a second preset threshold value, submitting the robot service to management and verification;
the management and the verification can be performed by the auditors of the cloud server. In particular, it can be performed by auditors of the robot application market.
Step 509: and the robot service is issued to a robot application market of the robot service application service platform after passing the audit.
In the robot application market, robot services may be downloaded for use by physical robots.
3. Artificial enhanced AI operation flow
Fig. 6 is a schematic flow chart of an artificially enhanced AI operation according to an embodiment of the present invention. As shown in fig. 6, the method comprises the following steps:
step 601: intervention operation and control of a robot service client;
the robot's service trainer visually monitors the physical robot in its service state through the service client, including monitoring the robot's digital twin and digital twin world. When a service abnormality occurs (such as lost localization, a service timeout, or body over-temperature), the trainer can, within his or her current control authority, intervene manually through the multimodal fusion AI and the artificially enhanced AI, mainly including, but not limited to, operating the digital twin directly through devices such as voice input devices, keyboards, mice, and VR glasses.
Step 602: human intelligence and artificial intelligence manipulation;
The manual intervention operations of the service trainer automatically override the operations currently driven on the digital twin by the robot business behavior blueprint and the multimodal fusion AI. During manual intervention, the cloud robot system therefore judges whether the currently executing behavior or action can be replaced manually; if it cannot, prompt information is given, and the service trainer decides whether to intervene according to that prompt. When the manual operation is completed, the cloud robot system returns to the original blueprint processing logic or the multimodal fusion AI response flow, and likewise gives corresponding prompt information.
It should be understood that the operations performed by human intervention in the artificially enhanced AI do not make the physical robot execute the target task exactly as the intervention dictates; only the current operation is replaced by a human. When the manual operation is finished, control returns to the original blueprint so that the physical robot executes and completes the target task according to it. This strategy of manual intervention operation is called Human Intelligence (HI).
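As a rough illustration of this override semantics (all class, method, and operation names here are invented for the sketch, not taken from the patent), a controller can replace only the current blueprint-driven operation with a manual one, refuse the override with a prompt when the operation is not manually replaceable, and otherwise let the blueprint continue driving:

```python
class TwinController:
    """Illustrative sketch of HI override of blueprint-driven operations."""

    def __init__(self, blueprint_ops, non_replaceable=()):
        self.blueprint_ops = list(blueprint_ops)   # operations driven by the blueprint
        self.non_replaceable = set(non_replaceable)
        self.log = []

    def step(self, manual_op=None):
        current = self.blueprint_ops.pop(0)
        if manual_op is not None:
            if current in self.non_replaceable:
                # The system refuses the override and prompts the trainer,
                # then the blueprint keeps driving the current operation.
                self.log.append(("prompt", current))
            else:
                # Only the current operation is replaced by the human.
                self.log.append(("manual", manual_op))
                return
        # No intervention (or a refused one): the blueprint drives as before.
        self.log.append(("blueprint", current))
```

In this sketch a refused intervention produces a prompt record followed by the original blueprint operation, while an accepted one replaces exactly one step before control returns to the blueprint.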
Step 603: storage of manual intervention operations, behavior-synchronized operation, and event and state feedback;
Based on the manual intervention operation, the digital twin operation core module keeps the digital twin and the digital twin copy on the robot control module running in behavioral synchronization. The cloud robot system records each manual intervention operation with an identifier and stores it in the robot big data module. The human-computer interaction between the physical robot and the user, and the events and state changes caused by environmental changes, are synchronously fed back to the digital twin and visualized at the service trainer's client, thereby informing the current manual intervention operation.
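A minimal sketch of this step, under assumed names (none of the classes or fields below reflect the actual interfaces of the system): each manual intervention is stored with an identifying record in the big data module, while robot-side events and state changes are mirrored to the digital twin for visualization.

```python
import itertools

class BigData:
    """Stand-in for the robot big data module's intervention log."""
    def __init__(self):
        self.records = []
    def store(self, record):
        self.records.append(record)

class TwinMirror:
    """Stand-in for the digital twin's synchronized state."""
    def __init__(self):
        self.state = {}
    def sync(self, event):
        self.state.update(event)   # twin mirrors robot-side events and state

_ids = itertools.count(1)

def on_manual_intervention(op, twin, big_data):
    # Record the intervention with an identifier for later big data analysis.
    big_data.store({"id": next(_ids), "type": "manual", "op": op})
    # Keep the twin (and hence the twin copy) in behavioral synchronization.
    twin.sync({"last_op": op})
```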
Step 604: data analysis triggers retraining and optimization to form empirical knowledge and data accumulation;
Based on the big data analysis capability of the robot big data module, a numerical evaluation of the current manual intervention operation is provided. This evaluation mainly includes improvement evaluations of the language AI, the vision AI, the motion AI, and the multimodal fusion AI, which trigger retraining of the corresponding algorithms and models to improve their capabilities. Big data analysis can also yield proposals for optimizing and improving the blueprint logic and flow that complete the service function. From historical statistical data, big data analysis can further derive empirical knowledge, data, and routine habits of robot behaviors and actions.
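The retraining trigger described here reduces to comparing per-capability evaluation scores against targets; the capability names, scores, and thresholds below are purely illustrative assumptions:

```python
def retraining_triggers(evaluations, targets):
    """Return the capabilities whose numerical evaluation fell short of target."""
    return [name for name, score in evaluations.items()
            if score < targets.get(name, 1.0)]

# Illustrative scores and targets (not from the patent).
evals = {"language_ai": 0.92, "vision_ai": 0.81, "motion_ai": 0.95,
         "multimodal_fusion_ai": 0.78}
targets = {"language_ai": 0.90, "vision_ai": 0.90, "motion_ai": 0.90,
           "multimodal_fusion_ai": 0.85}
to_retrain = retraining_triggers(evals, targets)
# Here only the vision AI and the multimodal fusion AI fall below target.
```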
Step 605: updating a robot business behavior blueprint, behavior and action data;
Based on the optimization and improvement of the robot business behavior blueprint and the updating of the behavior and action data, the digital twin and the digital twin copy on the robot control module are synchronously updated, which in turn drives the physical robot's online service.
Step 606: real-time events and state changes are responded to based on the updates.
Real-time events and state changes of the current physical robot are responded to based on the updated algorithms and models of the language AI, the vision AI, the motion AI, and the multimodal fusion AI, together with the accumulated empirical knowledge, data, and routine habits.
Fig. 7 is a schematic structural diagram of a cloud server according to an embodiment of the present invention. As shown in fig. 7, the cloud server 10 is used for controlling the physical robot, and includes a robot access and data exchange module 11, a knowledge and data intelligence module 12, an artificial enhancement machine intelligence module 13, a digital twin operation core module 14, and a robot big data module 15, and the cloud server 10 and the physical robot communicate with each other through a private network.
The robot access and data exchange module 11 is used for performing robot service process registration and robot access authentication, receiving multi-source data sent by the entity robot, and performing data exchange, fusion and distribution;
the knowledge and data intelligent module 12 is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map;
the digital twin operation core module 14 comprises a digital twin world and a digital twin body, wherein the digital twin world is constructed based on a three-dimensional environment semantic map, and the digital twin body is a physical model with the same physical attributes as the entity robot; the digital twin body is used for performing training and online running of robot skills and applications based on a multi-domain knowledge map, a robot behavior action library and multi-source data of robot service in a digital twin world so as to synchronously control the entity robot to execute the robot skills and applications;
The artificially enhanced machine intelligence module 13 supports the digital twin operation core module 14 in training and online operation of robot skills and applications through the language AI, vision AI, motion AI, multimodal AI, and artificially enhanced AI. The robot big data module 15 is used for storing and analyzing multi-source data, and feeding the analyzed multi-source data back to the digital twin operation core module 14 for training and online operation of robot skills and applications.
The specific structure and function of the cloud server 10 are the same as those of the cloud server 10 in the cloud robot system 100, and reference is made to the foregoing description, which is not repeated herein.
Fig. 8 is a schematic structural diagram of a robot control module according to an embodiment of the present invention. The robot control module 21 and the cloud server communicate with each other through a private network. As shown in fig. 8, the robot control module 21 includes a digital twin copy 211, the digital twin copy 211 being a copy of a digital twin on the cloud server; the digital twin copy 211 synchronously controls the physical robot to perform robot skills and applications according to the robot skills and applications performed by the digital twin.
The robot control module 21 is further configured to send the multi-source data to the cloud server, so that the digital twin performs training and online operation of robot skills and applications in the digital twin world based on the multi-domain knowledge graph, the robot behavior action library, and the multi-source data of the robot service, so as to synchronously control the entity robot to perform the robot skills and applications through the digital twin copy 211 of the robot control module 21.
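The synchronization path (cloud-side digital twin to on-robot digital twin copy to physical robot) can be sketched roughly as follows; all class names and the subscription mechanism are assumptions made for illustration, not the actual network protocol:

```python
class PhysicalRobot:
    """Stand-in for the entity robot's actuation layer."""
    def __init__(self):
        self.executed = []
    def execute(self, action):
        self.executed.append(action)

class DigitalTwinCopy:
    """Copy on the robot control module; it drives the physical robot."""
    def __init__(self, robot):
        self.robot = robot
    def apply(self, action):
        self.robot.execute(action)

class DigitalTwin:
    """Cloud-side twin; its behaviors are propagated to subscribed copies."""
    def __init__(self):
        self.subscribers = []              # twin copies reachable over the network
    def perform(self, action):
        for copy in self.subscribers:      # synchronous control of each copy
            copy.apply(action)

robot = PhysicalRobot()
twin = DigitalTwin()
twin.subscribers.append(DigitalTwinCopy(robot))
for action in ["navigate", "greet", "grasp"]:
    twin.perform(action)                   # robot mirrors the twin, in order
```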
The specific structure and function of the robot control module 21 are the same as those of the robot control module 21 in the cloud robot system 100, and reference is made to the foregoing description, which is not repeated herein.
Fig. 9 is a schematic structural diagram of a robot according to an embodiment of the present invention. As shown in fig. 9, the robot 40 includes the robot control module 21 in the embodiment shown in fig. 8. The specific structure and function of the robot control module 21 are the same as those of the robot control module 21 in the cloud robot system 100, and reference is made to the foregoing description, which is not repeated herein.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless otherwise specified.

Claims (20)

1. A cloud robot system is characterized by comprising a cloud server and a robot control module, wherein the cloud server comprises a robot access and data exchange module, a knowledge and data intelligent module, an artificial enhancement machine intelligent module, a digital twin operation core module and a robot big data module, the robot control module is positioned in an entity robot, and the robot control module and the cloud server are communicated through a special network; wherein,
the robot access and data exchange module is used for performing robot service process registration and robot access authentication, receiving multi-source data sent by the robot control module, and exchanging, fusing and distributing data;
the knowledge and data intelligent module is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map;
the digital twin operation core module comprises a digital twin world and a digital twin body, and the robot control module comprises a digital twin copy, wherein the digital twin world is constructed based on the three-dimensional environment semantic map, the digital twin body is a physical model with the same physical attributes as the entity robot, and the digital twin copy is a copy of the digital twin body operated on the cloud server; the digital twin is used for training and online running of robot skills and applications in the digital twin world based on a multi-domain knowledge map of the robot service, the robot behavior action library and the multi-source data, and the digital twin copy synchronously controls the entity robot to execute the robot skills and applications according to the robot skills and applications executed by the digital twin;
the artificial enhancement machine intelligent module supports the digital twin operation core module to train and operate the skills and applications of the robot on line through a language AI, a vision AI, a motion AI, a multi-mode AI and an artificial enhancement AI;
the robot big data module is used for storing and analyzing the multi-source data, and feeding the analyzed multi-source data back to the digital twin operation core module for training and online operation of the robot skills and applications;
and the robot control module is also used for sending multi-source data to the robot access and data exchange module.
2. The system of claim 1, wherein the cloud server further comprises a robot business application service platform for configuring the physical robot and providing for downloading of robot services.
3. The system of claim 2, wherein the configuring the physical robot comprises:
configuring one or more of a digital twin model, a robot name, a role, a character, an application scene and a dialogue, a language parameter, a network parameter, a user face list to be recognized and a corresponding robot service of the entity robot, wherein the application scene is configured according to the three-dimensional environment semantic map.
4. The system of claim 1, wherein the cloud server further comprises a robotic open platform for providing a robotic service development interface for developers to perform the robotic service development.
5. The system of claim 4, wherein the robotic service is an application based on the digital twin development and training, the robotic service development including digital twin development, robot behavior and action editing, and robot business behavior blueprint editing.
6. The system of claim 2, wherein the digital twinning core module is further configured to: in the training process of the robot skills and applications executed by the digital twin body in the digital twin world, if the numerical evaluation of the completion condition of the robot services executed by the digital twin body exceeds a first preset threshold value, the completion of the training of the robot skills and applications is determined, and if the completion of the training of the robot skills and applications is determined, the trained robot skills and applications are loaded to the robot control module for synchronous commissioning;
the robot control module is further configured to: loading and synchronously commissioning the trained robot skills and applications;
the digital twin operation core module is further configured to: and if the numerical evaluation of the robot skill and the application completion condition after the trial operation training of the robot control module exceeds a second preset threshold value, releasing the service corresponding to the robot skill and the application to the robot business application service platform.
7. The system of claim 1, wherein the digital twin operation core module further comprises a first game engine for loading the digital twin and the digital twin world, operating and updating the digital twin world, and operating the behavior and actions of the digital twin;
the robot control module further comprises a second game engine for running the digital twin replica;
the first game engine and the second game engine are used for jointly driving the behaviors and actions of the digital twin and the digital twin copy to be synchronously executed.
8. The system of claim 1, wherein the digital twinning core module is further configured to: synchronizing behaviors and actions of the digital twin to a digital twin copy on the robot control module over the private network;
and the digital twin copy synchronously controls the entity robot to execute the behaviors and actions according to the behaviors and actions of the digital twin body.
9. The system of claim 1, wherein the robot control module is further configured to: and sending the current environment change information acquired by the sensor of the entity robot and the self behavior and action change information of the entity robot to the digital twin operation core module so that the digital twin keeps the behavior and action synchronization with the entity robot.
10. The system of claim 1, wherein the multi-domain knowledge graph comprises a semantic network of relationships between entities related to a robot service, the semantic network comprising information describing external objective facts and knowledge that is a generalization and summarization of external objective laws;
the robot behavior action library comprises human behaviors and actions learned by the robot through simulation;
the three-dimensional environment semantic map is semantic data of a three-dimensional environment where the entity robot is located, and is obtained in the following mode: and fusing the multi-source data to obtain three-dimensional environment data, and performing map modeling through semantic segmentation based on the three-dimensional environment data to construct the three-dimensional environment semantic map.
11. The system of claim 10, wherein the building the three-dimensional environment semantic map comprises:
and constructing a multi-semantic-fusion three-dimensional environment semantic map by combining application scene recognition, object detection recognition, geometric model representation, spatial semantic relation and semantic annotation based on deep learning.
12. The system of claim 1, wherein the language AI comprises automatic speech recognition, natural language understanding, and speech synthesis; the vision AI comprises face recognition, human body recognition, portrait recognition, object recognition and environmental scene recognition; the motion AI comprises external force sensing perception, autonomous movement and navigation and limb actions; the multi-mode AI is the ability of the language AI, the vision AI and the motion AI and the ability of the multi-factor combination output, wherein the multi-factor combination output comprises the input of the language AI, the vision AI and the motion AI and the voice output and the motion output;
the artificially enhanced AI is to: and providing forward excitation input for system reinforcement learning through manual intervention operation, wherein the language AI, the vision AI, the movement AI and the multi-mode AI are in online running states during the manual intervention operation.
13. The system of claim 12, wherein the artificially enhanced AI is further configured to: and if the robot service abnormal condition occurs, receiving the operation of the service trainer on the digital twin body within the control authority.
14. The system of claim 1, wherein the robot big data module is further to store and analyze one or more of system operational and service log data, user data, artificially enhanced operational data, and system performance data.
15. The system of claim 1, wherein the multi-source data comprises one or more of audio-visual data, three-dimensional environmental point cloud data, robot behavior and motion data, and multi-modal interaction data acquired by sensors of the physical robot.
16. The system of claim 1, 14 or 15, wherein the robot big data module is further to:
performing data extraction, data conversion, data loading, data classification, data annotation, anomaly detection and data cleaning on the stored data to obtain processed data;
and performing real-time analysis and off-line analysis on the processed data, performing numerical evaluation on the operation of each robot service in the cloud robot system, wherein the numerical evaluation is used for determining whether the training of the robot skills and applications is completed or not, and triggering the digital twin operation core module to retrain and update the robot skills and applications.
17. The system of claim 16,
the numerical evaluation comprises the actual recognition rate of an AI algorithm and a model, the satisfaction degree of man-machine conversation reply, the service response duration and the high efficiency and the stability of a robot business behavior blueprint;
the robot big data module is further to: and classifying the objective conclusion of the numerical evaluation to form prior knowledge, related services and related data.
18. A cloud server is used for controlling an entity robot and is characterized by comprising a robot access and data exchange module, a knowledge and data intelligent module, an artificial enhancement machine intelligent module, a digital twin operation core module and a robot big data module, wherein the cloud server and the entity robot are communicated through a special network; wherein,
the robot access and data exchange module is used for performing robot service process registration and robot access authentication, receiving multi-source data sent by the entity robot, and performing data exchange, fusion and distribution;
the knowledge and data intelligent module is used for providing a multi-domain knowledge map of robot service, a robot behavior action library and a three-dimensional environment semantic map;
the digital twin operation core module comprises a digital twin world and a digital twin body, wherein the digital twin world is constructed based on the three-dimensional environment semantic map, and the digital twin body is a physical model with the same physical attributes as the entity robot; the digital twin body is used for performing training and online running of robot skills and applications based on a multi-domain knowledge map of the robot service, the robot behavior action library and the multi-source data in the digital twin world so as to synchronously control the entity robot to perform the robot skills and applications;
the artificial enhancement machine intelligent module supports the digital twin operation core module to train and operate the skills and applications of the robot on line through a language AI, a vision AI, a motion AI, a multi-mode AI and an artificial enhancement AI;
the robot big data module is used for storing and analyzing the multi-source data, and feeding the analyzed multi-source data back to the digital twin operation core module for training and online operation of the robot skills and applications.
19. A robot control module is characterized in that the robot control module and a cloud server are communicated through a special network;
the robot control module comprises a digital twin copy, wherein the digital twin copy is a copy of a digital twin running on the cloud server; the digital twin copy synchronously controls an entity robot to execute the robot service according to the robot service executed by the digital twin;
the robot control module is further used for sending multi-source data to the cloud server, so that the digital twin body can execute training and online running of robot skills and applications based on a multi-domain knowledge map of robot services, a robot behavior action library and the multi-source data in the digital twin world, and the entity robot can be synchronously controlled to execute the services through the digital twin copy.
20. A robot characterized in that it comprises a robot control module according to claim 19.
CN202011386136.2A 2020-12-01 2020-12-01 Cloud robot system, cloud server, robot control module and robot Active CN112668687B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011386136.2A CN112668687B (en) 2020-12-01 2020-12-01 Cloud robot system, cloud server, robot control module and robot
PCT/CN2021/124506 WO2022116716A1 (en) 2020-12-01 2021-10-18 Cloud robot system, cloud server, robot control module, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011386136.2A CN112668687B (en) 2020-12-01 2020-12-01 Cloud robot system, cloud server, robot control module and robot

Publications (2)

Publication Number Publication Date
CN112668687A CN112668687A (en) 2021-04-16
CN112668687B true CN112668687B (en) 2022-08-26

Family

ID=75400739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011386136.2A Active CN112668687B (en) 2020-12-01 2020-12-01 Cloud robot system, cloud server, robot control module and robot

Country Status (2)

Country Link
CN (1) CN112668687B (en)
WO (1) WO2022116716A1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668687B (en) * 2020-12-01 2022-08-26 达闼机器人股份有限公司 Cloud robot system, cloud server, robot control module and robot
CN115237113B (en) * 2021-08-02 2023-05-12 达闼机器人股份有限公司 Robot navigation method, robot system and storage medium
CN113626003A (en) * 2021-08-16 2021-11-09 杭州群核信息技术有限公司 Cloud robot solution system
CN113687718A (en) * 2021-08-20 2021-11-23 广东工业大学 Man-machine integrated digital twin system and construction method thereof
CN113959444A (en) * 2021-09-30 2022-01-21 达闼机器人有限公司 Navigation method, device and medium for unmanned equipment and unmanned equipment
CN114310870A (en) * 2021-11-10 2022-04-12 达闼科技(北京)有限公司 Intelligent agent control method and device, electronic equipment and storage medium
CN114080905B (en) * 2021-11-25 2022-12-06 杭州乔戈里科技有限公司 Picking method based on digital twins and cloud picking robot system
CN114371174A (en) * 2021-12-17 2022-04-19 中国电子科技集团公司第四十一研究所 Visual twin detection device and method for industrial production line
CN114387643A (en) * 2021-12-28 2022-04-22 达闼机器人有限公司 Robot control method, system, computer device and storage medium
CN114290333B (en) * 2021-12-29 2024-02-27 中国电信股份有限公司 Ubiquitous robot system, construction method and device, equipment and medium
CN114372356B (en) * 2021-12-29 2023-02-28 达闼机器人股份有限公司 Artificial enhancement method, device and medium based on digital twins
CN114559433B (en) * 2022-03-17 2024-01-12 达闼机器人股份有限公司 Robot control method and device, storage medium, robot and cloud server
CN114722050B (en) * 2022-06-10 2022-09-30 辰星(天津)自动化设备有限公司 Data synchronization method of robot system and robot system
CN115080797B (en) * 2022-06-28 2023-03-07 电子科技大学 Knowledge graph-based digital twin workshop multi-scale association method
CN115268667A (en) * 2022-07-18 2022-11-01 北京数字冰雹信息技术有限公司 Method and system for controlling webpage end digital twin three-dimensional scene
CN117910188A (en) * 2022-10-10 2024-04-19 华为云计算技术有限公司 Simulation training method and device and computing device cluster
CN115659838B (en) * 2022-11-14 2023-02-28 中国电力科学研究院有限公司 Method and device for constructing digital twin hybrid model of main equipment of power system
CN115860366A (en) * 2022-11-17 2023-03-28 桂林电子科技大学 Community robot intelligent coordination control method and system and readable storage medium
CN115994458B (en) * 2023-03-23 2023-07-18 华南理工大学 Virtual-real integrated multi-agent cluster system simulation method
CN116704156A (en) * 2023-04-28 2023-09-05 北京优酷科技有限公司 Model generation method, electronic equipment and model generation system
CN116214527B (en) * 2023-05-09 2023-08-11 南京泛美利机器人科技有限公司 Three-body collaborative intelligent decision-making method and system for enhancing man-machine collaborative adaptability
CN116370954B (en) * 2023-05-16 2023-09-05 北京可以科技有限公司 Game method and game device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108235697A (en) * 2017-09-12 2018-06-29 深圳前海达闼云端智能科技有限公司 A kind of Robotic Dynamic learning method, system, robot and cloud server
CN110430260A (en) * 2019-08-02 2019-11-08 哈工大机器人(合肥)国际创新研究院 Robot cloud platform based on big data cloud computing support and working method
CN110866588A (en) * 2019-11-08 2020-03-06 中国科学院软件研究所 Training learning method and system for realizing individuation of learnable ability model of intelligent virtual digital animal
CN111273892A (en) * 2020-02-13 2020-06-12 济南浪潮高新科技投资发展有限公司 Method for realizing intelligent robot based on cloud technology and edge calculation
CN111897332A (en) * 2020-07-30 2020-11-06 国网智能科技股份有限公司 Semantic intelligent substation robot humanoid inspection operation method and system

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20200304375A1 (en) * 2019-03-19 2020-09-24 Microsoft Technology Licensing, Llc Generation of digital twins of physical environments
CN112668687B (en) * 2020-12-01 2022-08-26 达闼机器人股份有限公司 Cloud robot system, cloud server, robot control module and robot

Non-Patent Citations (2)

Title
Design of a ROS-based robot cloud platform architecture; Chen Bin et al.; Manufacturing Automation (《制造业自动化》); Aug. 31, 2019; Vol. 41, No. 8; pp. 115-117 *
A survey of robot cloud service platform architectures; Chen Bin et al.; Manufacturing Automation (《制造业自动化》); Apr. 30, 2019; Vol. 41, No. 4; pp. 169-172 *

Also Published As

Publication number Publication date
CN112668687A (en) 2021-04-16
WO2022116716A1 (en) 2022-06-09

Similar Documents

Publication Publication Date Title
CN112668687B (en) Cloud robot system, cloud server, robot control module and robot
CN111432989B (en) Artificial enhancement cloud-based robot intelligent framework and related methods
Xi et al. The rise and potential of large language model based agents: A survey
AU2020201118A1 (en) Methods and systems for managing dialogs of a robot
US11120365B2 (en) For hierarchical decomposition deep reinforcement learning for an artificial intelligence model
JP7085788B2 (en) Robot dynamic learning methods, systems, robots and cloud servers
WO2019113502A2 (en) Adding deep learning based ai control
CN106845624A (en) The multi-modal exchange method relevant with the application program of intelligent robot and system
AU2018202076A1 (en) Activity monitoring of a robot
CN111095170B (en) Virtual reality scene, interaction method thereof and terminal equipment
CN110465089A (en) Map heuristic approach, device, medium and electronic equipment based on image recognition
CN113199472A (en) Robot control method, device, storage medium, electronic device, and robot
US20220076099A1 (en) Controlling agents using latent plans
US20220207831A1 (en) Simulated control for 3- dimensional human poses in virtual reality environments
WO2022140540A1 (en) Simulated control for 3-dimensional human poses in virtual reality environments
Scheutz et al. Toward affective cognitive robots for human-robot interaction
WO2021138939A1 (en) Cloud brain robot system
Trivedi et al. Communicating, interpreting, and executing high-level instructions for human-robot interaction
Gonçalves et al. Defining behaviors for autonomous agents based on local perception and smart objects
Temsamani et al. A multimodal AI approach for intuitively instructable autonomous systems: a case study of an autonomous off-highway vehicle
CN110177661A (en) Automatically move layout
Kasbo Reducing the Sim-To-Real Gap in Reinforcement Learning for Robotic Grasping with Depth Observations
Thalmann et al. Virtual and real humans interacting in the virtual world
Waelti Strategy for an Autonomous Behavior that Guarantees a Qualitative Fine Adjustment at the Target Pose of a Collaborating Mobile Robot
Millán Romera Development of a ROS environment for researching machine learning techniques applied to drones

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

GR01 Patent grant