WO2024078003A1 - Simulation training method, apparatus and computing device cluster - Google Patents

Simulation training method, apparatus and computing device cluster

Info

Publication number
WO2024078003A1
Authority
WO
WIPO (PCT)
Prior art keywords
simulation
target
task
target simulation
environment
Prior art date
Application number
PCT/CN2023/101580
Other languages
English (en)
French (fr)
Inventor
周顺波
任航
徐乔博
王烽
Original Assignee
华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Application filed by 华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Publication of WO2024078003A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes

Definitions

  • the present application relates to the field of simulation technology, and in particular to a simulation training method, apparatus and computing device cluster.
  • the virtual robot execution schemes in the prior art are not intelligent enough to meet users' needs; how to construct an intelligent virtual robot execution scheme is therefore a problem that urgently needs to be solved.
  • the embodiments of the present application provide a simulation training method, apparatus and computing device cluster, which can realize the simulation of a movable device and its environment in the cloud.
  • the cloud executes the user-configured tasks in the user-configured simulation environment through the user-configured simulation device, thereby realizing intelligent task execution.
  • an embodiment of the present application provides a simulation training method, which is applied to a cloud management platform.
  • the method includes: providing a first configuration interface, where the first configuration interface is used to obtain an identifier of a target simulation environment and an identifier of a target simulation device; providing a second configuration interface, where the second configuration interface is used to obtain task instructions; and, according to the task instructions, executing the task in the target simulation environment using the target simulation device to obtain an execution result.
  • the simulation of the mobile device and its environment can be realized in the cloud.
  • the cloud executes the user-configured tasks in the user-configured simulation environment through the user-configured simulation device, thereby realizing intelligent task execution.
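The claimed flow (two configuration interfaces followed by task execution) can be sketched as a minimal Python example. All function names, field names, and identifier strings here are illustrative assumptions, not part of the disclosed implementation.

```python
from dataclasses import dataclass

# All names below are hypothetical; the patent does not disclose a concrete API.

@dataclass
class Configuration:
    environment_id: str   # identifier of the target simulation environment
    device_id: str        # identifier of the target simulation device

def first_configuration_interface(env_id: str, dev_id: str) -> Configuration:
    """Collects the two identifiers obtained via the first configuration interface."""
    return Configuration(environment_id=env_id, device_id=dev_id)

def second_configuration_interface(instruction: str) -> str:
    """Collects the task instruction obtained via the second configuration interface."""
    return instruction

def execute_task(config: Configuration, instruction: str) -> dict:
    """Executes the task with the target simulation device in the target
    simulation environment and returns an execution result."""
    return {
        "environment": config.environment_id,
        "device": config.device_id,
        "task": instruction,
        "status": "completed",
    }

config = first_configuration_interface("warehouse-env-01", "wheeled-robot-01")
task = second_configuration_interface("navigate to the loading dock")
result = execute_task(config, task)
```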
  • before obtaining the identifier of the target simulation environment, the method includes: obtaining collected data corresponding to the target simulation environment; providing a third configuration interface, the third configuration interface being used to obtain type parameters of the target simulation environment; and generating the target simulation environment based on the collected data corresponding to the target simulation environment and the type parameters of the target simulation environment.
  • simulation can be performed based on the type of environment to ensure that the simulated environment is close to the real environment.
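A rough sketch of the environment-generation step described above, combining collected field data with user-selected type parameters. The dictionary fields are hypothetical; the patent does not disclose a concrete data layout.

```python
def generate_environment(collected_data: dict, type_params: dict) -> dict:
    """Combine sensed data with user-selected type parameters
    to produce a (toy) simulation environment description."""
    return {
        "models": collected_data.get("point_clouds", []),   # from sensed data
        "scene": type_params.get("scene", "indoor"),        # indoor / outdoor
        "weather": type_params.get("weather", "clear"),     # weather type
    }

env = generate_environment(
    {"point_clouds": ["building.pcd", "ground.pcd"]},
    {"scene": "outdoor", "weather": "rain"},
)
```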
  • the type parameter includes one or more of the following: indoor scene, outdoor scene, weather type.
  • the collected data corresponding to the target simulation environment includes data sensed by a movable device and/or field equipment in a real environment corresponding to the target simulation environment.
  • the target simulation environment includes at least one three-dimensional model and its respective corresponding physical parameters.
  • the task is performed in the target simulation environment using the target simulation device, including: according to the task instructions, the task is performed in the target simulation environment using the target simulation device and the physical parameters of at least one three-dimensional model.
  • tasks can be executed based on the physical parameters of objects in the simulated environment, ensuring that the task execution results are close to the actual task execution conditions.
  • the physical parameter is determined based on collected data corresponding to the target environment.
  • the physical parameters include a coefficient of friction and/or a coefficient of air resistance.
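How a friction coefficient and an air resistance coefficient could influence simulated motion can be illustrated with a simple explicit-Euler step for a coasting body. The model and all constants are illustrative assumptions, not the patent's actual physics engine.

```python
def velocity_step(v: float, mass: float, mu: float, c_air: float,
                  dt: float, g: float = 9.81) -> float:
    """One explicit-Euler step for a body slowed by ground friction
    (coefficient mu) and quadratic air resistance (coefficient c_air)."""
    friction = mu * mass * g      # friction force, N
    drag = c_air * v * v          # air resistance force, N
    a = -(friction + drag) / mass # deceleration, m/s^2
    return max(0.0, v + a * dt)   # velocity never goes negative

v = 5.0  # initial speed, m/s
for _ in range(10):
    v = velocity_step(v, mass=20.0, mu=0.3, c_air=0.5, dt=0.1)
```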
  • the second configuration interface is further used to obtain the number of processes corresponding to the task.
  • the number of processes can be set to achieve parallel execution of tasks and ensure task execution efficiency.
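The parallel execution implied by the process-count parameter might be sketched as follows. `ThreadPoolExecutor` merely stands in for whatever scheduling mechanism the platform actually uses, and `run_simulation_instance` is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation_instance(instance_id: int) -> dict:
    """Placeholder for one independent simulation run."""
    return {"instance": instance_id, "status": "done"}

def run_task_parallel(num_processes: int) -> list:
    """Launch num_processes simulation instances in parallel, mirroring
    the user-configured process count from the second interface."""
    with ThreadPoolExecutor(max_workers=num_processes) as pool:
        return list(pool.map(run_simulation_instance, range(num_processes)))

results = run_task_parallel(4)
```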
  • the task includes a starting point and an end point
  • the second configuration interface is further used to obtain the starting point and the end point set by the user.
  • obtaining the identifier of the target simulation environment includes: a first configuration interface is used to obtain the identifier of the target simulation environment selected by the user from a plurality of candidate simulation environments.
  • each of the plurality of candidate environments is determined based on its corresponding collected data, where the collected data includes data sensed by a movable device and/or a field device in the corresponding real environment.
  • obtaining the identifier of the target simulation device includes: a first configuration interface is used to obtain the identifier of the target simulation device selected by the user from a plurality of candidate simulation devices.
  • the plurality of candidate devices include pre-set candidate devices or candidate devices generated by modeling based on appearance data of real devices.
  • the method further includes: sending the execution result to a target device corresponding to the target simulation device.
  • the target simulation device is used to perform the task in the target simulation environment, including: based on semantic recognition, the task instructions are converted into simulation instructions, and the simulation instructions are in a computer-readable format; based on the simulation instructions, the target simulation device is used to perform the task in the target simulation environment.
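A toy illustration of converting a natural-language task instruction into a computer-readable simulation instruction. Real semantic recognition would use far more sophisticated models; the keyword rules and field names here are purely illustrative.

```python
def instruction_to_simulation(text: str) -> dict:
    """Toy keyword-based 'semantic recognition' that maps a task
    instruction to a machine-readable simulation instruction."""
    text = text.lower()
    instr = {"skill": "unknown", "target": None}
    if "navigate" in text or "go to" in text:
        instr["skill"] = "navigation"
    elif "inspect" in text:
        instr["skill"] = "inspection"
    if " to " in text:
        instr["target"] = text.split(" to ", 1)[1].strip()
    return instr

cmd = instruction_to_simulation("Navigate to the east gate")
```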
  • the task includes at least one skill.
  • the skill includes navigation
  • the execution result includes a motion trajectory
  • the method further includes displaying the motion trajectory.
  • the target simulation device includes at least one joint, and the at least one joint corresponds to a dynamic parameter.
  • the target simulation device is used to perform the task in the target simulation environment, including: according to the task instruction, the dynamic parameters of at least one joint in the target simulation device are used to control the target simulation device to perform the task in the target simulation environment.
  • the simulation device can be controlled to perform tasks based on the dynamic parameters of the joints in the simulation device, ensuring that the task execution results are close to the actual task execution conditions.
  • the method further includes: displaying resource consumption of each of the at least one skill when executed.
  • the method further includes: determining a target skill among the at least one skill that is deployed on a target device corresponding to the target simulation device.
  • the task includes a prediction indicator
  • the execution result includes an indicator value of the prediction indicator
  • the prediction index includes a temperature threshold, an operation time threshold, and a power threshold of a component in a target device corresponding to the target simulation device.
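Checking an execution result against the prediction indicators (temperature, operation time, and power thresholds) could look like the following sketch. All field names and numbers are hypothetical.

```python
def evaluate_indicators(result: dict, thresholds: dict) -> dict:
    """Compare a simulated execution result against the configured
    prediction-indicator thresholds."""
    return {
        "temperature_ok": result["max_temp_c"] <= thresholds["temp_c"],
        "runtime_ok": result["runtime_s"] <= thresholds["runtime_s"],
        "power_ok": result["peak_power_w"] <= thresholds["power_w"],
    }

report = evaluate_indicators(
    {"max_temp_c": 71.0, "runtime_s": 540.0, "peak_power_w": 180.0},
    {"temp_c": 80.0, "runtime_s": 600.0, "power_w": 150.0},
)
```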
  • the target simulation environment has semantic information.
  • the task includes a prediction indicator
  • the execution result includes an indicator value of the prediction indicator
  • the first configuration interface is used to obtain an identifier of a static simulation object and a first position of the static simulation object in the target simulation environment, and/or an identifier of a dynamic simulation object behavior and a second position of the dynamic simulation object behavior in the target simulation environment.
  • the target simulation environment includes the static simulation object at the first position, and/or the dynamic simulation object behavior at the second position.
  • an embodiment of the present application provides a simulation training device, which is applied to a cloud management platform, and the device includes:
  • a first interface providing module used to provide a first configuration interface, the first configuration interface is used to obtain an identifier of a target simulation environment and an identifier of a target simulation device;
  • a second interface providing module used for providing a second configuration interface, the second configuration interface is used for obtaining task instructions
  • a task execution module used to execute, according to the task instructions, the task in the target simulation environment using the target simulation device to obtain an execution result.
  • the device further includes: an environment generation module, the environment generation module includes a data acquisition unit, an interface providing unit, and a generation unit; wherein,
  • a data acquisition unit used to acquire the collected data corresponding to the target simulation environment
  • An interface providing unit used for providing a third configuration interface, where the third configuration interface is used for obtaining type parameters of a target simulation environment;
  • the generating unit is used to generate the target simulation environment according to the collected data corresponding to the target simulation environment and the type parameters of the target simulation environment.
  • the type parameter includes one or more of the following: indoor scene, outdoor scene, weather type.
  • the collected data corresponding to the target simulation environment includes data sensed by a movable device and/or field equipment in a real environment corresponding to the target simulation environment.
  • the target simulation environment includes at least one three-dimensional model and its corresponding physical parameters
  • the task execution module is used to execute the task in the target simulation environment according to the task instructions using the target simulation device and the physical parameters of at least one three-dimensional model.
  • the physical parameter is determined based on collected data corresponding to the target environment.
  • the physical parameters include a coefficient of friction and/or a coefficient of air resistance.
  • the second configuration interface is further used to obtain the number of processes corresponding to the task.
  • the task includes a starting point and an end point
  • the second configuration interface is further used to obtain the starting point and the end point set by the user.
  • the first configuration interface is used to obtain an identifier of a target simulation environment selected by a user from a plurality of candidate simulation environments.
  • the first configuration interface is used to obtain an identifier of a target simulation device selected by a user from a plurality of candidate simulation devices.
  • the plurality of candidate simulation devices include preset candidate simulation devices or candidate simulation devices generated by modeling based on appearance data of real devices.
  • the apparatus further includes: a sending module; wherein the sending module is used to send the execution result to a target device corresponding to the target simulation device.
  • the task execution module is used to convert the task instructions into simulation instructions based on semantic recognition, and the simulation instructions are in a computer-readable format; based on the simulation instructions, the task is executed in a target simulation environment using a target simulation device.
  • the task includes at least one skill.
  • the skill includes navigation
  • the execution result includes a motion trajectory
  • the method further includes displaying the motion trajectory.
  • the target simulation device includes at least one joint and its corresponding dynamic parameters
  • the task execution module is used to control, according to the task instruction, the target simulation device to perform the task in the target simulation environment using the dynamic parameters of the at least one joint in the target simulation device.
  • the device further includes: a display module; wherein the display module is used to display the resource consumption of each of the at least one skill when executed.
  • the apparatus further includes: a deployment module; wherein the deployment module is used to determine a target skill among at least one skill to be deployed on a target device corresponding to the target simulation device.
  • the task includes a prediction indicator
  • the execution result includes an indicator value of the prediction indicator
  • the prediction index includes a temperature threshold, an operation time threshold, and a power threshold of a component in a target device corresponding to the target simulation device.
  • the target simulation environment has semantic information.
  • the task includes a prediction indicator
  • the execution result includes an indicator value of the prediction indicator
  • the first configuration interface is used to obtain an identifier of a static simulation object and a first position of the static simulation object in the target simulation environment, and/or an identifier of a dynamic simulation object behavior and a second position of the dynamic simulation object behavior in the target simulation environment.
  • the target simulation environment includes the static simulation object at the first position, and/or the dynamic simulation object behavior at the second position.
  • an embodiment of the present application provides a simulation training device, comprising: at least one memory for storing programs; and at least one processor for executing the programs stored in the memory.
  • the processor is used to execute the method provided in the first aspect.
  • an embodiment of the present application provides a simulation training device, characterized in that the device runs computer program instructions to execute the method provided in the first aspect.
  • the device may be a chip or a processor.
  • the device may include a processor, which may be coupled to a memory, read instructions in the memory, and execute the method provided in the first aspect according to the instructions.
  • the memory may be integrated in a chip or a processor, or may be independent of the chip or the processor.
  • an embodiment of the present invention provides a computing device cluster, comprising: at least one computing device, each computing device comprising a processor and a memory; the processor of at least one computing device is used to execute instructions stored in the memory of at least one computing device, so that the computing device cluster executes the method provided in the first aspect.
  • an embodiment of the present application provides a computer storage medium, in which instructions are stored. When the instructions are executed on a computer, the computer executes the method provided in the first aspect.
  • an embodiment of the present application provides a computer program product comprising instructions, which, when executed on a computer, enables the computer to execute the method provided in the first aspect.
  • FIG. 1 is a system architecture diagram of a cloud system provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a cloud scenario provided in an embodiment of the present application.
  • FIG. 3 is a first flow chart of a simulation training method provided in an embodiment of the present application.
  • FIG. 4 is a second flow chart of a simulation training method provided in an embodiment of the present application.
  • FIG. 5a is a schematic diagram of a model configuration page provided in an embodiment of the present application.
  • FIG. 5b is a schematic diagram of a task scheduling page provided in an embodiment of the present application.
  • FIG. 6a is a schematic diagram of edge-cloud collaboration in a campus inspection scenario provided by an embodiment of the present application.
  • FIG. 6b is a schematic diagram of edge-cloud collaboration in a modular hospital scenario provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the structure of a simulation training device provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the structure of a computing device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the structure of a computing device cluster provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an application scenario of the computing device cluster provided in FIG. 9.
  • words such as “exemplary”, “for example” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary”, “for example” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary”, “for example” or “for example” is intended to present related concepts in a concrete way.
  • the term "and/or" merely describes an association relationship between associated objects, indicating that three relationships may exist.
  • for example, A and/or B may represent: A exists alone, B exists alone, or A and B exist at the same time.
  • the term “multiple” means two or more.
  • multiple systems refers to two or more systems
  • multiple terminals refers to two or more terminals.
  • the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined as "first" and "second" may explicitly or implicitly include one or more of the features.
  • the terms “include”, “comprises”, “has” and their variations all mean “including but not limited to”, unless otherwise specifically emphasized.
  • Fig. 1 is a schematic diagram of the architecture of a cloud system provided by an embodiment of the present application. As shown in Fig. 1, the system includes a cloud server cluster 100, an edge device 200 and a terminal device 300.
  • the cloud server cluster 100 can be implemented with an independent electronic device or a device cluster composed of multiple electronic devices.
  • the electronic device in the cloud server cluster 100 can be a terminal or a computer, or a server.
  • the server involved in this solution can be used to provide cloud services, which can be a server or a super terminal that can establish a communication connection with other devices and can provide computing functions and/or storage functions for other devices.
  • the server involved in this solution can be a hardware server, or it can be implanted in a virtualized environment.
  • the server involved in this solution can be a virtual machine executed on a hardware server that includes one or more other virtual machines.
  • the edge device 200 can include a movable device 210 and site equipment 220.
  • the movable device 210 can be a movable device such as a robot, a vehicle, etc., and a sensor is installed on the movable device 210, such as a camera, a laser radar, etc.
  • the site equipment 220 can be a sensor installed on a lamp pole, such as a camera, a laser radar, a thermometer, a hygrometer, etc.
  • Figure 2 is a schematic diagram of a cloud scene provided by an embodiment of the present invention. As shown in Figure 2, the movable device 210 can be a wheeled robot, a drone, a quadruped robot dog, etc., and the site equipment 220 can be a park monitoring device.
  • the terminal device 300 can be, but is not limited to, various personal computers, laptops, smart phones, tablet computers, and portable wearable devices.
  • Exemplary embodiments of the terminal device 300 involved in this solution include, but are not limited to, electronic devices equipped with iOS, Android, Windows, Harmony OS, or other operating systems.
  • the embodiments of the present invention do not specifically limit the type of electronic device.
  • the edge device 200 is respectively connected to the cloud server cluster 100 through a network, so that the data collected by the sensor on the edge device 200 can be uploaded to the cloud server cluster 100.
  • the network can be a wired network or a wireless network.
  • the wired network can be a cable network, an optical fiber network, a digital data network (Digital Data Network, DDN), etc.
  • the wireless network can be a telecommunication network, an internal network, the Internet, a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), a wireless local area network (Wireless Local Area Network, WLAN), a metropolitan area network (Metropolitan Area Network, MAN), a public switched telephone network (Public Switched Telephone Network, PSTN), a Bluetooth network, a ZigBee network, a global system for mobile communications (Global System for Mobile Communications, GSM) network, a code division multiple access (Code Division Multiple Access, CDMA) network, a general packet radio service (General Packet Radio Service, GPRS) network, etc.
  • the network can use any known network communication protocol to achieve communication between different client layers and gateways.
  • the network communication protocol can be various wired or wireless communication protocols, such as Ethernet, universal serial bus (USB), Firewire, global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), new radio (NR), Bluetooth, and wireless fidelity (Wi-Fi).
  • the cloud server cluster 100 includes a data warehouse 111 , a simulation environment library 112 , a simulation device library 113 , a material library 114 , a simulation object library 115 , a behavior pattern library 116 , a simulation behavior library 117 , a semantic database 118 , and an algorithm library 119 .
  • the data warehouse 111 is used to store the data obtained by processing the various-format data received from the edge device 200.
  • the processing can include noise reduction, temporal-spatial alignment and other operations.
  • These data are multi-source data in various formats, such as laser point clouds (supporting mainstream formats such as ply (Polygon File Format), pcd (Point Cloud Data, a file format for storing point cloud data), and e57 (a 3D image data file format standard that combines point clouds and images)), videos (supporting mainstream formats such as mp4, rmvb (RealMedia Variable Bit Rate), mkv (Matroska, a multimedia container format), and avi (Audio Video Interleaved)), and images (such as jpeg/jpg (Joint Photographic Experts Group, a standard for continuous-tone still image compression), bmp (Bitmap), and png (Portable Network Graphics)).
  • the simulation environment library 112 is used to store the identification corresponding to the simulation scene (for example, it can be the storage address of the simulation scene), and the simulation scene includes the description information corresponding to each of several three-dimensional models.
  • the description information may include geometric information (information that can form a three-dimensional model), position information, physical parameters, and texture information; wherein the physical parameters are parameters that affect motion, such as friction coefficient and/or air resistance coefficient.
  • for example, a three-dimensional model with physical parameters in the simulation scene can represent the ground.
  • Texture information can be understood as the texture (usually referring to the material) and pattern on the surface of the object, which can better express the information on the surface of the object, and specifically may include material, reflectivity, color, etc.
  • the simulation environment can also include environmental description information, which can include weather conditions, light conditions, and physical parameters of the air. Especially for outdoor simulation environments, it is necessary to determine the light conditions and physical parameters of the air under different weather conditions, so as to be closer to the real environment. Further, the simulation environment can also include a variety of maps, such as two-dimensional maps, three-dimensional maps, semantic maps, point cloud maps, etc. It should be noted that the simulated objects in the simulation scene are usually fixed objects in the environment, such as trees, the ground, or buildings, or moving objects with fixed ranges of activity in the environment, such as tigers and lions in zoos.
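The description information above (per-model geometry, position, physical parameters, and texture, plus environment-level weather, light, air parameters, and maps) might be organized as data structures like these. All field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Model3D:
    geometry: str                                  # reference to mesh/geometric data
    position: tuple                                # (x, y, z) in the scene
    physical: dict = field(default_factory=dict)   # e.g. friction coefficient
    texture: dict = field(default_factory=dict)    # material, reflectivity, color

@dataclass
class SimulationEnvironment:
    models: list                                   # the 3D models in the scene
    weather: str = "clear"                         # environmental description
    light: str = "daylight"
    air_drag_coefficient: float = 0.0              # physical parameter of the air
    maps: dict = field(default_factory=dict)       # 2D/3D/semantic/point-cloud maps

ground = Model3D("ground.mesh", (0, 0, 0), physical={"friction": 0.6})
scene = SimulationEnvironment(models=[ground], weather="rain")
```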
  • although the simulation environment library 112 stores the identifiers corresponding to the simulation scenes, the simulation scenes themselves are still stored in the cloud server cluster 100, and the identifier can be used to read the stored data of a simulation scene from the location where it is stored in the cloud server cluster 100.
  • the simulation device library 113 is used to store the identification corresponding to the simulation device (for example, it can be the storage address of the simulation device).
  • the simulation device can be a robot model.
  • the simulation device is a virtual movable device 210 constructed through 1:1 geometric appearance modeling of the physical movable device 210's shape, structure and appearance, and through simulation of each movable intelligent joint of the movable device 210 (including but not limited to motors, accelerators, damping parameters, etc.); model construction can be achieved through design model updates, three-dimensional reconstruction and other methods.
  • the physical simulation includes physical gravity simulation, physical collision simulation, and the application of physical materials to express the friction, light reflection and other physical properties.
  • the above physical properties will affect the behavior of the mobile device 210 in a specific environment.
  • the simulation device includes the dynamic parameters of several joints, wherein the dynamic parameters may include inertia parameters and friction parameters, etc., which can be determined in combination with actual needs, and the embodiment of the present invention does not make specific restrictions on this.
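A minimal sketch of how a joint's dynamic parameters (inertia and friction) could enter the simulation. The single-joint viscous-friction model and all numbers are illustrative assumptions, not the patent's actual dynamics.

```python
def joint_acceleration(torque: float, inertia: float,
                       friction: float, velocity: float) -> float:
    """Angular acceleration of one joint with viscous friction:
    alpha = (tau - b * omega) / I."""
    return (torque - friction * velocity) / inertia

# Example: 2.0 N·m applied torque, inertia 0.5 kg·m², friction 0.1, omega 1.0 rad/s
alpha = joint_acceleration(torque=2.0, inertia=0.5, friction=0.1, velocity=1.0)
```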
  • the simulation device can be a model constructed in advance by three-dimensional software.
  • the dynamic parameters can be the dynamic parameters of each joint provided by the manufacturer when the mobile device 210 indicated by the simulation device leaves the factory.
  • although the simulation device library 113 stores the identifiers corresponding to the simulation devices, the simulation devices themselves are still stored in the cloud server cluster 100, and the identifier can be used to read the stored data of a simulation device from the location where it is stored in the cloud server cluster 100.
  • the material library 114 is used to store the identifiers of the materials for constructing simulation objects and simulation scenes (for example, the storage addresses of the materials). It should be noted that although the material library 114 stores the identifiers corresponding to the materials, the materials themselves are still stored in the cloud server cluster 100, and the identifier can be used to read the stored data of a material from the location where it is stored in the cloud server cluster 100.
  • the simulation object library 115 is used to store the identifier corresponding to a simulation object (for example, the storage address of the simulation object); in actual applications, a simulation object can represent various objects that may exist in the actual scene, such as static objects. In addition, a simulation object can include descriptive information of the object, such as geometric information, texture information, category, etc., which can simulate the information of the real object. It should be noted that although the simulation object library 115 stores the identifiers corresponding to the simulation objects, the simulation objects themselves are still stored in the cloud server cluster 100, and the identifier can be used to read the stored data of a simulation object from the location where it is stored in the cloud server cluster 100.
  • the behavior pattern library 116 is used to store the identification of the behavior material (for example, it can be the storage address of the behavior material).
  • the behavior material can be a motion clip of a vehicle; for example, the behavior material can be a motion clip of weather changes, such as light changes, wind speed changes, rainfall changes, etc.
  • the light changes can be represented by changes on object surfaces
  • the wind speed changes can be represented by the shaking of objects
  • the rainfall changes can likewise be represented by corresponding changes to objects in the scene.
  • the behavior pattern library 116 stores the identifiers corresponding to the behavior materials, the behavior materials are still stored in the cloud server cluster 100, and the identifier can read the storage data of the behavior materials from the location where the behavior materials are stored in the cloud service cluster 100.
  • the simulation behavior library 117 is used to store the identification corresponding to the simulation behavior (for example, it can be the storage address of the simulation behavior).
  • the simulation behavior can be associated with the simulation object.
  • different simulation objects can have different simulation behaviors, and the same simulation object can have multiple simulation behaviors.
  • the identification corresponding to the simulation behavior also needs to include the identification of the simulation object (for example, it can be the storage address of the simulation object).
  • For example, the simulation object can be a person, a vehicle, or an animal, and the simulation behavior can be moving in a straight line, along a curve, in a circle, turning, etc.
  • the simulation behavior can also be a weather change.
  • the weather change can include wind speed changes (for example, the change of wind speed over time can be recorded), light changes (for example, the change of light intensity over time), rainfall changes (for example, the change of rainfall over time can be recorded), etc.
  • the simulation behavior exists independently and does not need to be attached to the simulation object. It should be noted that although the simulation behavior library 117 stores the identification corresponding to the simulation behavior, the simulation behavior is still stored in the cloud server cluster 100, and the identification can read the storage data of the simulation behavior from the place where the simulation behavior is stored in the cloud service cluster 100.
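As an illustrative sketch (the class names and storage addresses below are hypothetical, not part of the embodiment), the pattern shared by the above libraries — storing only identifiers while the data itself remains in the storage of the cloud server cluster — can be expressed as:

```python
# Hypothetical sketch: a library keeps only identifiers (e.g. storage
# addresses); the data itself stays in the cluster's storage.
class ObjectStore:
    """Stands in for the storage of the cloud server cluster 100."""
    def __init__(self):
        self._blobs = {}

    def put(self, address, data):
        self._blobs[address] = data

    def get(self, address):
        return self._blobs[address]


class Library:
    """Stores identifiers only and reads the data through the store."""
    def __init__(self, store):
        self._store = store
        self._ids = {}

    def register(self, name, address):
        # Only the identifier (storage address) is kept in the library.
        self._ids[name] = address

    def load(self, name):
        # The identifier is used to read the stored data from the cluster.
        return self._store.get(self._ids[name])


store = ObjectStore()
store.put("cluster://behaviors/turn-left", {"kind": "turn", "speed": 2.5})
behavior_library = Library(store)
behavior_library.register("turn-left", "cluster://behaviors/turn-left")
```

Here `behavior_library.load("turn-left")` reads the stored data from the cluster storage via the registered address, matching the description that the library itself holds only identifiers.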
  • the semantic database 118 is used to store the semantic information of the simulated object.
  • the semantic information may be the object's outline, color, material, location information, whether it is moving, category, surrounding objects, etc.
  • one semantic database 118 may store the semantic information of several simulated environments.
  • the algorithm library 119 stores various algorithms, such as the robot's dynamic parameter identification algorithm, air resistance calculation algorithm, friction calculation algorithm, interaction force calculation algorithm, and artificial intelligence algorithm.
  • the artificial intelligence algorithm can be various deep learning algorithms, machine learning algorithms, deep reinforcement learning algorithms, kinematic planning algorithms, etc.
  • the deep learning algorithms may include Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Neural Networks (DNN), Fast R-CNN, You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), Long Short-Term Memory (LSTM), Embeddings from Language Models (ELMO), Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-Training (GPT), etc.
  • the cloud server cluster 100 realizes the construction and update of the simulation environment through the semantic extraction layer, the object/scene reconstruction layer, the behavior reconstruction layer and the physical parameter layer.
  • the cloud server cluster 100 can perform semantic extraction on the data in the data warehouse 111 to obtain semantic information of multiple simulated objects and store it in the semantic database 118.
  • the cloud server cluster 100 can perform time/space alignment on the data in the data warehouse 111 to obtain materials that can be modeled, and store the materials in the material library 114.
  • object modeling is performed based on the materials in the material library 114, and the identifications of the constructed several simulated objects are stored in the simulated object library 115.
  • scene modeling is performed based on the materials in the material library 114, and the identifications of the constructed simulation scenes are stored in the simulation scene library 112.
  • the cloud server cluster 100 can perform SLAM (Simultaneous Localization and Mapping) mapping based on the laser point cloud data in the data warehouse 111, generate a point cloud model, and perform mesh texture reconstruction in combination with the image data aligned in time and space with the laser point cloud data to obtain a simulated scene.
  • the semantic database 118 can be combined to obtain a semantic map of the simulated scene.
  • the cloud server cluster 100 can perform behavior statistics/extraction on the data (such as video data) in the data warehouse 111 to obtain behavior patterns, and store the behavior patterns in the behavior pattern library 116; then, the behavior patterns in the behavior pattern library 116 are modeled to obtain simulated behaviors, and the identifiers of the simulated behaviors are stored in the simulated behavior library 117. For example, various similar segments of a vehicle turning can be collected, and then these similar segments are used as behavior patterns. After that, these behavior patterns can be modeled to determine the turning speed and the driving trajectory to obtain simulated behaviors. For another example, various similar segments of a brick falling can be collected, and then these similar segments are used as behavior patterns. After that, these behavior patterns can be modeled to determine the acceleration, movement trajectory, etc. of the brick falling to obtain simulated behaviors.
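As a minimal sketch of the behavior modeling step described above (the function name and the segment format are assumptions for illustration, not the claimed method), similar motion segments grouped as a behavior pattern could be modeled by estimating a representative speed:

```python
# Hypothetical sketch: each segment is a list of (t, x, y) samples of one
# observed motion clip (e.g. a vehicle turning); the behavior pattern is
# modeled by averaging the speed over the similar segments.
def model_behavior(segments, kind="turn"):
    speeds = []
    for seg in segments:
        (t0, x0, y0), (t1, x1, y1) = seg[0], seg[-1]
        distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(distance / (t1 - t0))
    # The simulated behavior records a representative speed for the pattern.
    return {"kind": kind, "speed": sum(speeds) / len(speeds)}
```

For example, two similar turning clips with end-to-end speeds of 5.0 and 3.0 would yield a modeled speed of 4.0; a full implementation would also fit the driving trajectory.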
  • the cloud server cluster 100 can obtain the physical parameters of several three-dimensional models in the simulation scene based on the calculation algorithms such as air resistance and interaction force in the algorithm library 119, combined with the motion data of the movable device 210 and the environmental data of the environment in which the simulation scene is constructed.
  • the cloud server cluster 100 can also determine the dynamic parameters of each joint in the simulation device in the simulation device library 113 based on the motion data of the real device indicated by the simulation device and the calculation algorithm of dynamic identification in the algorithm library 119.
  • the cloud server cluster 100 can implement semantic updates, geometric updates, physical parameter updates, and dynamic parameter updates.
  • geometric updates can be understood as updates to the appearance, such as adding simulated objects to the simulation environment, or deleting simulated objects in the simulation environment.
  • Semantic update can be understood as deleting the description information of the simulated object, adding the description information of the simulated object and/or updating the description information of the simulated object.
  • For example, the description "there is a car next to the tree" can be updated to "there is no car next to the tree".
  • the physical parameter update can be understood as the update of the physical parameters of a specific three-dimensional model in the simulation scene.
  • the physical parameter update may include the update of the environmental description information of the simulation environment. For example, a simulation environment under sunny conditions was previously constructed; now it is raining in the park, so the environmental description information can be updated to reflect the rainy conditions. It is worth noting that the previous environmental description information needs to be retained, and the updated environmental description information is stored in the corresponding simulation scene.
  • the simulation environment in the embodiment of the present invention is constructed based on the data collected by the sensor (referred to as the target sensor for the convenience of description and distinction) on the edge device 200.
  • the cloud server cluster 100 can obtain data indicating changes in the simulation environment (referred to as the target data for the convenience of description and distinction), implement semantic updates through the semantic extraction layer, implement geometric updates through object modeling or scene modeling of the object/scene reconstruction layer, and implement physical parameter updates through the physical parameter layer.
  • In one implementation, the cloud server cluster 100 can determine whether the simulation environment has changed based on the data collected by the target sensor in the data warehouse 111, and when the environment changes, determine the target data indicating the changes in the simulation environment.
  • In another implementation, the movable device 210 can determine whether the simulation environment has changed based on the data collected by the target sensor; when the environment changes, the movable device 210 determines the target data indicating the change in the simulation environment and uploads it to the cloud server cluster 100, and the cloud server cluster 100 thereby obtains the target data.
  • the cloud management platform is divided into a client and a server, and the cloud server cluster 100 is installed with the server of the cloud management platform.
  • the user can install the client of the cloud management platform on the terminal device 300, or install a browser and enter the URL in the browser to access the client of the cloud management platform.
  • a user enters a URL in a browser, enters the login page of the client, operates the login page to register an account, manually sets or has an account and password assigned by the server of the cloud management platform, and obtains an account (for the sake of ease of description and distinction, referred to as a target account) and account password (for the sake of ease of description and distinction, referred to as a target account and password) that can access the client of the cloud management platform; then, the user enters the target account and target account password on the login page to enter the client of the cloud management platform; thereafter, the user can use the various services that the server of the cloud management platform can provide through the client of the cloud management platform.
  • a user establishes a model generation task through the terminal device 300 .
  • the user can select data uploaded by the movable device 210 in a certain period of time from the material library 114 for model reconstruction, or upload materials to the material library 114 by themselves, where the materials refer to data such as point cloud images collected by the movable device 210 or by the user in other ways; then, data for reconstruction is selected from the material library 114, and the cloud server cluster 100 can create a model based on the data selected by the user.
  • the created simulated object may be stored in the simulated object library 115 under the target account.
  • the created simulation scene may be stored in the simulation scene library 112 under the target account. Further, it may be determined whether physical parameters, semantic information, and/or texture information need to be determined.
  • the user may select a behavior pattern of a certain time period from the behavior pattern library 116 for behavior reconstruction, and the created simulated behavior may be stored in the simulated behavior library 117 under the target account.
  • the user may select a simulation environment from the simulation environment library 112 and configure whether to perform geometry update, semantic update, and physical parameter update for the simulation environment.
  • the simulation environment library 112, simulation device library 113, simulation object library 115, and simulation behavior library 117 can pre-store models shared by all users; the material library 114 can pre-store materials shared by all users, and the behavior pattern library 116 can pre-store behavior fragments shared by all users.
  • the data warehouse 111 can store monitoring data of the edge device 200 .
  • the embodiment of the present invention provides a simulation training method. It is understood that the method can be executed by any device, equipment, platform, or device cluster with computing and processing capabilities, for example, the cloud system shown in FIG1.
  • the simulation training method provided by the embodiment of the present invention is introduced below in conjunction with the cloud system shown in FIG1 .
  • FIG3 is a flow chart of a simulation training method provided by an embodiment of the present invention. As shown in FIG3 , the simulation training method includes:
  • Step 301 The cloud server cluster 100 provides a first configuration interface.
  • Step 302 The terminal device 300 displays the first configuration interface, obtains the user's operation on the first configuration interface, and obtains the identifier of the target simulation environment and the identifier of the target simulation device.
  • the first configuration interface may be a model configuration interface.
  • the cloud server cluster 100 may publish the model configuration interface; correspondingly, the terminal device 300 will display the model configuration interface to the user, so that the user can operate the model configuration interface through the terminal device 300, determine the identification of the target simulation environment and the identification of the target simulation device, and upload them to the cloud server cluster 100.
  • Fig. 5a is a schematic diagram of a model configuration page provided by an embodiment of the present application.
  • the model configuration interface includes a simulation environment configuration control and a simulation device configuration control.
  • the user logs in to the client of the cloud management platform by entering the target account and password on the terminal device 300. After that, the user operates the terminal device 300, and the terminal device 300 displays the model configuration interface. The user clicks the simulation environment configuration control in the model configuration interface to display a list of thumbnails of the simulation environments in the simulation environment library 112 under the target account.
  • the user selects a thumbnail in the list, and the first configuration interface can determine the identification of the target simulation environment from the multiple alternative simulation environments in the simulation environment library 112; similarly, the user clicks the simulation device configuration control in the model configuration interface to display a list of thumbnails of the simulation devices in the simulation device library 113 under the target account. Then the user selects a thumbnail in the list, and the first configuration interface can determine the identification of the target simulation device from the multiple alternative simulation devices in the simulation device library 113.
  • In one implementation, before the identification of the target simulation environment is obtained, step 302 further includes the following content:
  • the third configuration interface is a model generation interface.
  • the model generation interface includes a data configuration control and a type parameter configuration control.
  • the data configuration control is used to select the collected data corresponding to the target simulation environment, such as the data collected by the movable device 210 and the site equipment 220 between 10 am and 10:30 am.
  • the type parameter configuration control is used to configure the type parameters of the target simulation environment.
  • the cloud server cluster 100 can generate the target simulation environment according to the user-configured data and the type parameters of the target simulation environment.
  • the type parameter may include several parameters, such as indoor scenes, outdoor scenes, and weather types.
  • the weather type may be sunny, cloudy, rainy, overcast, foggy, dusty, windy, etc.
  • different type parameters have different requirements for modeling.
  • For indoor scenes, it is generally not necessary to pay attention to light changes, but in outdoor scenes, light changes and weather types are very important, and more resources are needed to simulate outdoor light.
  • For example, outdoor scenes and weather types are usually associated. Therefore, when the type parameter is an outdoor scene, on the one hand, the weather type can be directly configured; on the other hand, the weather type does not need to be configured, and instead the weather type can be determined in combination with weather forecast information to achieve intelligent modeling.
  • the type parameters are merely examples and do not constitute specific limitations on the type parameters.
  • the type parameters may include more or fewer parameters than the above; for example, the type parameters may also include feature complexity levels, objects with transparent features, and dynamic object complexity levels.
  • the feature complexity level is used to indicate the number of features of the simulated environment that needs to be constructed; the higher the level, the more features there are.
  • the feature complexity level is used to select different feature selection methods, so as to reduce the difference between the constructed simulated scene and the real scene. For example, for a scene with relatively simple features such as a corridor, the optical flow method can be used to realize feature extraction. For example, for a scene with relatively complex features such as a park, an artificial intelligence algorithm can be used to realize feature extraction, for example, a suitable artificial intelligence algorithm is selected from the algorithm library 119 to realize feature extraction.
  • the feature complexity level can be divided into 3 levels, such as simple, medium, and complex; for example, the feature complexity level can be divided into 5 levels, such as very simple, simple, medium, complex, and very complex. It is worth noting that different feature extraction methods can be selected for data collected by different sensors based on the feature complexity level, or the same feature extraction method, for example, images and sensors that perceive the environment by laser, such as laser point cloud data collected by lidar, can all use artificial intelligence algorithms to extract features.
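A minimal sketch of how a feature complexity level might select an extraction method (the level names follow the three-level division above; the function itself is an illustrative assumption):

```python
# Hypothetical mapping from feature complexity level to extraction method.
def select_feature_extractor(level):
    if level == "simple":
        # e.g. a corridor: the optical flow method suffices.
        return "optical_flow"
    # e.g. a park ("medium" or "complex"): select a suitable artificial
    # intelligence algorithm from the algorithm library 119.
    return "ai_algorithm"
```

The same selector could be applied per sensor, or one extractor could be shared by all sensors, as the passage above notes.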
  • objects with transparent features can be objects with high refractive index and reflectivity such as glass and transparent buckets.
  • Such objects have a great impact on the data collected by sensors such as laser radar that perceive the environment in a laser manner, which may affect the construction of the simulated environment and increase the difference between the simulated environment and the real environment.
  • the data of the area where the object with transparent features is located in the laser point cloud data collected by the sensor such as laser radar that perceives the environment in a laser manner can be deleted to reduce the difference between the simulated environment and the real environment.
  • the area where the object with transparent features is located can be perceived by image.
  • the third configuration interface can be configured with a list of objects with transparent features, and the user can select an object with transparent features from the list, such as glass, transparent bucket, etc.
  • the identification of the simulated object stored in the simulated object library 115 in the cloud service cluster 100 includes the identification of the object with transparent features.
  • the third configuration interface associates the identification of the object with transparent features in the simulated object library 115. It is worth noting that the simulated object has description information of the object, and the description information can include whether it is transparent, so whether the object has transparent features can be judged based on the description information of the object.
  • the dynamic object complexity level is used to describe the number of dynamic objects in the actual environment corresponding to the simulated environment to be constructed. The higher the level, the more active dynamic objects there are, the more blocked areas there are, and the more complex the reconstruction of the simulated environment is. Therefore, different simulation environment reconstruction methods can be selected based on the dynamic object complexity level to reduce the difference between the constructed simulated scene and the real scene. For example, for a scene with many dynamic objects such as an intersection, the camera that perceives the environment by image can be used as the main sensor, and the laser point cloud data collected by a sensor that perceives the environment by laser, such as a lidar, can be used as an auxiliary to realize the reconstruction of the simulated environment.
  • the complexity level of dynamic objects can be divided into 3 levels, such as simple, medium, and complex; for example, the complexity level of dynamic objects can be divided into 5 levels, such as very simple, simple, medium, complex, and very complex.
  • the laser point cloud data collected by sensors such as laser radar that perceive the environment by laser in the data warehouse 111 can be processed based on objects with transparent features, and a suitable method of simulating scene reconstruction can be selected based on the complexity level of dynamic objects, and different weights can be assigned to different sensors. Based on the weights of different sensors and based on the feature complexity level, a suitable feature extraction method is selected, and the data collected by different sensors are extracted and fused, and the position, color, texture, etc. of the surface of the object in the real scene are analyzed to achieve the simulation of the real environment and reduce the difference between the simulated environment and the real environment.
  • Specifically, an appropriate feature extraction method is selected based on the feature complexity level, features are extracted from the data collected by different sensors, and the position, color, texture, etc. of object surfaces in the real scene are analyzed; thereafter, the features extracted from the data collected by different sensors are fused based on the weights of the different sensors to obtain the position, color, texture, etc. of object surfaces that more accurately reflect the real environment, thereby realizing the simulation of the environment.
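The weighted fusion of per-sensor estimates described above can be sketched as a simple weighted average over scalar estimates (the function and sensor names are illustrative assumptions, not the claimed fusion method):

```python
# Hypothetical sketch: fuse one surface property (e.g. an estimated surface
# position coordinate) from several sensors using per-sensor weights.
def fuse_features(estimates, weights):
    total = sum(weights.values())
    return sum(estimates[s] * w for s, w in weights.items()) / total

# A camera-weighted fusion, as suggested for scenes with many dynamic
# objects where the camera is the main sensor and the lidar is auxiliary.
fused = fuse_features({"camera": 1.0, "lidar": 3.0},
                      {"camera": 3.0, "lidar": 1.0})
```

With a 3:1 camera-to-lidar weighting, the fused estimate lands closer to the camera's value, reflecting how the weights bias the reconstruction toward the more trusted sensor.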
  • Step 303 The terminal device 300 sends the identifier of the target simulation environment and the identifier of the target simulation device to the cloud server cluster 100 .
  • Step 304 The cloud server cluster 100 provides a second configuration interface.
  • Step 305 The terminal device 300 displays the second configuration interface, and obtains the user's operation on the second configuration interface to obtain the first task instruction.
  • the second configuration interface may be a task scheduling interface.
  • the cloud server cluster 100 may publish the task scheduling interface, and correspondingly, the terminal device 300 may display the task scheduling interface, so that the user may determine the first task instruction through the task scheduling interface and upload it to the cloud server cluster 100.
  • the first task instruction indicates the task that the simulation device needs to complete.
  • a user logs in to the client of the cloud management platform by entering the target account and password on the terminal device 300. After that, the user operates the terminal device 300, and the terminal device 300 displays the task scheduling interface indicated by the second configuration interface. The user can schedule tasks in the task scheduling interface and obtain the first task instruction.
  • FIG5b is a schematic diagram of a task scheduling page provided by an embodiment of the present application.
  • the task scheduling page includes a task description control, such as an input box.
  • the task description can be entered in the task scheduling page, such as transporting materials to the target point through the disinfection point.
  • the task scheduling page also includes a task flow creation control.
  • the user can operate the task flow creation control to create a task flow in the area where the task flow is created. Specifically, the task start node can be created first, and then the node of the subtask can be created, and the description of the subtask can be added. The subtasks are continuously added to finally obtain the task flow.
  • the task scheduling page can also display a thumbnail of the target simulation environment, usually a two-dimensional map, and the user can mark the navigation points in the thumbnail of the target simulation environment. For example, for transporting materials to the target point through the disinfection point, it is necessary to mark the material point (where the materials are stored), the disinfection point, and the target point.
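The task flow built on the scheduling page — a start node followed by subtask nodes, with navigation points marked on the 2D map — might be represented as follows (all node and point names are hypothetical illustrations):

```python
# Hypothetical task flow for "transport materials to the target point
# through the disinfection point"; each node may reference a navigation
# point marked on the 2D thumbnail of the target simulation environment.
task_flow = [
    {"node": "start", "point": None},
    {"node": "pick up materials", "point": "material_point"},
    {"node": "pass through disinfection", "point": "disinfection_point"},
    {"node": "deliver materials", "point": "target_point"},
]

def navigation_points(flow):
    """Collect the navigation points the task flow passes through, in order."""
    return [n["point"] for n in flow if n["point"] is not None]
```

Such a structure makes it straightforward for the cloud server cluster to translate the user's task flow into an ordered sequence of navigation targets.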
  • Step 306 The terminal device 300 sends a first task instruction to the cloud server cluster 100 .
  • Step 307 The cloud server cluster 100 executes the task in the target simulation scenario using the target simulation device according to the first task instruction, the identifier of the target simulation environment and the identifier of the target simulation device, to obtain a first execution result.
  • In one implementation, the cloud server cluster 100 can parse the task that the first task instruction needs to complete, and thus convert the first task instruction into a simulation instruction, which is in a computer-readable format. Afterwards, based on the simulation instruction, the identifier of the target simulation environment, and the identifier of the target simulation device, the cloud server cluster 100 loads the target simulation device and the target simulation environment, and uses the target simulation device to perform the task in the target simulation scenario.
  • In one implementation, the first configuration interface can also determine the identification of the target simulator, and the target simulation environment and the target simulation device are loaded in the target simulator.
  • the model configuration interface includes a simulator configuration control.
  • the user clicks on the simulator configuration control in the model configuration interface to display a list of simulators, and then the user selects a simulator in the list, so that the first configuration interface can obtain the identification of the target simulator configured by the user.
  • There are many types of simulators, and different simulators have different emphases. For example, some simulators have more realistic surface details and a better visual experience, and some simulators have more realistic details of three-dimensional models. Users can choose different simulators to achieve the simulation.
  • the second configuration interface is also used to obtain the number of processes.
  • the cloud server cluster 100 creates processes matching the configured number of processes, thereby using the target simulation device in parallel to execute tasks in the target simulation scenario to ensure the efficiency of task execution. For example, a task can be decomposed into multiple subtasks, and different subtasks are completed by different processes, thereby achieving parallel processing.
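The parallel execution of decomposed subtasks could be sketched as follows (threads stand in for the cluster's processes purely for illustration; the function and subtask names are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: decompose a task into subtasks and run them with a
# worker count matching the configured number of processes.
def run_subtasks(subtasks, num_workers):
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        # map preserves the order of the subtasks in the results.
        return list(pool.map(lambda subtask: subtask(), subtasks))
```

For instance, `run_subtasks([navigate, disinfect, deliver], 2)` would execute the three subtasks on two workers while keeping the results in submission order.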
  • the configuration of simulated devices, simulated environments, and tasks is achieved through the first configuration interface and the second configuration interface. After that, the simulation of the movable device and its environment can be realized, and the task can be performed in the simulated environment, reducing the difficulty and cost of completing the task on the physical device.
  • In practical applications, the outdoor open environment where the movable device 210 is active is generally large, and the generated simulation environment is also large. Loading the complete simulation environment at one time takes a long time, consumes a lot of resources, makes operation not smooth, and rendering the entire simulation environment also takes a long time. This is a big challenge for current mainstream simulators.
  • the embodiments of the present invention use the abundant server resources on the cloud to divide the large model into several parts, which are distributed and loaded by different servers, greatly reducing the resource requirements for a single machine.
  • the first configuration interface can configure the number of instances.
  • the instance can include at least one of a physical host (computing device), a virtual machine, and a container.
  • step 307 can include the following content:
  • the cloud server cluster 100 loads the target simulation environment and the target simulation device on a number of instances matching the configured number of instances, so that the target simulation device and the target simulation environment can be loaded in a distributed manner by different servers, greatly reducing the resource requirements for a single machine.
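Dividing the large model into parts loaded by different servers might be sketched as a simple round-robin partition (the function and tile names are illustrative assumptions; real partitioning would likely follow spatial layout):

```python
# Hypothetical sketch: split environment tiles across a configured number
# of instances so that no single machine loads the whole model.
def partition_model(tiles, num_instances):
    shards = [[] for _ in range(num_instances)]
    for i, tile in enumerate(tiles):
        shards[i % num_instances].append(tile)
    return shards
```

Each shard would then be loaded by one instance (physical host, virtual machine, or container), matching the distributed loading described above.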
  • the local real-time model loading range can be adaptively determined based on the perception range of the simulation sensors on the simulation equipment, which can avoid loading irrelevant areas of the simulation environment to a certain extent, reduce the overall loading time of the model, and improve the smoothness of operation, thus achieving efficient loading and operation of ultra-large-scale models.
  • step 307 may include the following content:
  • the cloud server cluster 100 loads the target simulation environment within the perception range based on the perception range of the simulation sensor on the simulation device during the process of executing the task based on the first task instruction.
  • the position of the target simulation device in the target simulation environment can be initialized.
  • the environment of the target simulation environment within the perception range is loaded.
  • part of the target simulation environment can be loaded first, and then the target simulation device can be randomly loaded in the loaded target simulation environment. Afterwards, based on the perception range of the simulation sensor on the simulation device, the target simulation environment and target simulation device within the perception range are loaded.
  • this solution reduces the overall loading time of the model and improves the smoothness of operation through distributed loading and adaptive determination of the local real-time model loading range, thereby achieving efficient loading and operation of ultra-large-scale models.
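Determining the local real-time loading range from the simulated sensor's perception range might look like the following sketch (a circular range over tile centers; all names and the tile format are assumptions):

```python
# Hypothetical sketch: load only the tiles of the target simulation
# environment whose centers fall within the simulated sensor's range.
def tiles_in_range(device_pos, tiles, sensing_radius):
    px, py = device_pos
    loaded = []
    for (cx, cy), tile_id in tiles:
        # Compare squared distances to avoid a square root.
        if (cx - px) ** 2 + (cy - py) ** 2 <= sensing_radius ** 2:
            loaded.append(tile_id)
    return loaded
```

As the target simulation device moves, re-evaluating this range lets irrelevant areas of the simulation environment stay unloaded, consistent with the adaptive loading described above.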
  • the target simulation environment includes at least one three-dimensional model and the physical parameters carried by each three-dimensional model.
  • the first configuration interface can also determine the physical parameters corresponding to at least one three-dimensional model in the target simulation environment.
  • the model configuration page indicated by the first configuration interface also includes a physical parameter configuration control of a three-dimensional model in a target simulation environment.
  • the physical parameter configuration control can display the physical parameters corresponding to the three-dimensional model, so that the first configuration interface can select the target physical parameters from the candidate physical parameters of the three-dimensional model.
  • the three-dimensional model is usually a road surface.
  • the cloud server cluster 100 uses the target simulation device and the physical parameters of the at least one three-dimensional model to perform the task in the target simulation scene.
  • when the three-dimensional model is a material, its physical parameters may include the friction coefficient and the weight of the material.
  • when the target simulation device needs to grab the material, the friction must be calculated from the friction coefficient and the weight of the material to determine how much force the target simulation device needs to apply to grab it.
  • when the three-dimensional model is a road surface, its physical parameters may include the friction coefficient.
  • when the target simulation device performs an inspection task while carrying materials, the maximum speed must also be considered to ensure that the materials will not fall.
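The two calculations above can be sketched with standard friction physics: the grip force needed so friction supports the material's weight, and a speed limit so friction keeps carried materials from sliding through a turn. The formulas are textbook mechanics, not text from the patent, and the numbers are illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def min_grip_force(mass, mu):
    """Minimum normal (squeeze) force N such that friction mu*N
    supports the material's weight m*g when lifted vertically."""
    return mass * G / mu

def max_turn_speed(mu, radius):
    """Highest speed at which friction can still hold carried
    material through a turn of the given radius: mu*g >= v^2/r."""
    return math.sqrt(mu * G * radius)

print(round(min_grip_force(2.0, 0.5), 2))  # 39.24 (N)
print(round(max_turn_speed(0.6, 5.0), 2))  # 5.42 (m/s)
```

A simulation test could sweep these parameters per material and per road surface to decide feasible grip forces and speed limits.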
  • simulated objects can be added to or removed from the simulated environment to generate a complex simulated environment for robot training, thereby addressing the low probability of unexpected situations and the small amount of data in the real environment.
  • step 302 may also include the following contents:
  • the terminal device 300 displays a thumbnail of the target simulation scene, obtains the user's operation on the first configuration interface, and obtains the identifier of the target simulation object and the first position of the target simulation object in the target simulation scene.
  • the model configuration page indicated by the first configuration interface can also display a thumbnail of the target simulation scene (in the display area of the environment thumbnail) and a simulation object configuration control.
  • the simulation object configuration control is used to display a list of thumbnails of simulation objects in the simulation object library 115. The user can drag the thumbnail to the thumbnail of the target simulation scene, so that the first configuration interface can obtain the identifier of the target simulation object and the first position of the target simulation object in the target simulation scene.
  • step 307 specifically includes the following contents:
  • the cloud server cluster 100 loads the target simulation environment and loads the target simulation object at a first position in the target simulation environment.
  • the first position is within the sensing range of the simulation sensor of the target simulation device. At this time, the target simulation object can be loaded at the first position.
  • the user can place the target simulation device at a certain position in the target simulation environment, and the position is used as the initial position of the target simulation device.
  • the model configuration page displays the thumbnail of the target simulation device in the display area of the device thumbnail, and the user can drag the thumbnail of the target simulation device onto the thumbnail of the target simulation scene to determine the initial position of the target simulation device in the target simulation scene.
  • the real movement behaviors of objects can be simulated in the simulation environment, and various cases can be generated for robot training, thereby solving the problem of low probability of case occurrence and small amount of data in the real environment.
  • step 302 may further include the following contents:
  • the terminal device 300 displays a thumbnail of the target simulation scene, obtains the user's operation on the first configuration interface, and obtains the identifier of the target behavior of the target simulation object.
  • the model configuration page indicated by the first configuration interface can display a thumbnail of the target simulation scene (displayed in the display area of the environment thumbnail), a simulation object configuration control, and a simulation behavior configuration control.
  • the simulation behavior configuration control is used to display a list of thumbnails of simulation behaviors of the target simulation object in the simulation behavior library 116, and the user can click on the thumbnail of the simulation behavior, so that the first configuration interface can obtain the identification of the target behavior of the target simulation object.
  • step 307 specifically includes the following contents:
  • the cloud server cluster 100 loads the target simulation environment, and loads the target simulation object at the first position in the target simulation environment, and controls the target simulation object to move according to the target behavior to obtain the final target simulation environment.
  • the first position is within the perception range of the simulation sensor of the target simulation device.
  • the target simulation object can be loaded at the first position and controlled to move according to the target behavior.
  • the real weather behavior can be simulated in the simulation environment, and various simulation environments can be generated for robot training, thereby solving the problem of low probability of abnormal weather in the real environment and small amount of data.
  • step 307 specifically includes the following contents:
  • the cloud server cluster 100 loads the target simulation environment, and loads the target weather behavior in the target simulation environment.
  • weather behaviors can include changes in wind speed, rainfall, etc., so that the execution results of the target simulation device under different weather conditions can be obtained to adapt to the complexity of the real scene.
  • the embodiment of the present invention can perform predictive maintenance on the movable device 210 through simulation equipment and simulation environment, thereby extending the service life of the movable device 210 as much as possible and reducing the risk of accidents.
  • the task indicated by the first task instruction may be a predictive maintenance task; correspondingly, the task includes at least one maintenance indicator, and the execution result includes the indicator value of the at least one maintenance indicator.
  • the at least one maintenance indicator includes a temperature threshold of a component in the movable device, a power threshold of the movable device, and/or an operating time threshold of the movable device.
  • the component here can be a central processing unit (CPU for short).
  • step 307 may specifically include the following contents:
  • the cloud server cluster 100 performs a simulation test on a target simulation device in a target simulation environment to determine an indicator value of at least one maintenance indicator of the target simulation device.
  • the cloud server cluster 100 can receive operation information sent by the target device corresponding to the target simulation device; based on the operation information and the first execution result, it determines the wear of the target device and issues an alarm when the wear is high.
  • the user can configure multiple target simulation environments. Subsequently, the target simulation equipment is simulated and tested in the multiple target simulation environments to determine the indicator value of at least one maintenance indicator, so as to more accurately analyze the loss of the movable device 210.
  • the operation information of the target device may include power, CPU temperature and/or total operation time, etc., and the degree of wear and tear may be determined by one maintenance indicator or by multiple maintenance indicators.
  • the CPU temperature in the running information can be compared with the CPU temperature threshold. For example, the ratio of the CPU temperature in the running information to the CPU temperature threshold can be used as the degree of loss. Once the CPU temperature in the running information is close to the CPU temperature threshold, an alarm is issued. The power and running time are similar and will not be repeated.
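The threshold comparison described above (the ratio of an observed value to its threshold taken as the degree of wear, with an alarm once the value approaches the threshold) might look as follows; the 0.9 alarm level and the field names are illustrative assumptions, not from the patent:

```python
def wear_ratio(value, threshold):
    """Degree of wear for one maintenance indicator: how close the
    observed value is to its threshold (1.0 = threshold reached)."""
    return value / threshold

def check_maintenance(running_info, thresholds, alarm_at=0.9):
    """Compare each indicator (e.g. CPU temperature, power, running
    time) with its threshold and collect the ones that warrant an
    alarm."""
    alarms = []
    for name, value in running_info.items():
        if wear_ratio(value, thresholds[name]) >= alarm_at:
            alarms.append(name)
    return alarms

info = {"cpu_temp": 88.0, "power_used": 60.0, "run_hours": 1900.0}
limits = {"cpu_temp": 95.0, "power_used": 100.0, "run_hours": 2000.0}
print(check_maintenance(info, limits))  # ['cpu_temp', 'run_hours']
```

Running the same check across multiple target simulation environments, as the text suggests, simply means feeding in one `running_info` per simulated test.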
  • the embodiment of the present invention can realize skill training tests of the simulation device; the movement of the simulation device depends on the dynamic parameters of the joints in the simulation device.
  • the dynamic parameters are required, for example, when the target simulation device needs to grab objects, dance, say hello, mop the floor, spread disinfectant, unload goods, or carry stones.
  • the target simulation device includes at least one joint
  • the first configuration interface can also determine dynamic parameters corresponding to at least one joint in the target simulation device.
  • the model configuration page indicated by the first configuration interface may also include a dynamic parameter configuration control for the target simulation device.
  • the dynamic parameter configuration control may display multiple sets of dynamic parameters for the target simulation device, so that the first configuration interface may select dynamic parameters from multiple sets of alternative dynamic parameters for the target simulation device.
  • each set of dynamic parameters includes dynamic parameters for all joints in the target simulation device.
  • the user does not need to configure the dynamic parameters of at least one joint in the target simulation device, and can directly use the default dynamic parameters.
  • the cloud server cluster 100 uses the dynamic parameters of at least one joint in the target simulation device to control the target simulation device to perform the task in the target simulation environment.
  • the dynamic parameters of the joints in the target simulation device can be used to control the force required for joint movement, thereby realizing the joint activity of the target simulation device, so that the target simulation device can complete a variety of actions, such as dancing, greeting, walking, carrying, etc.
  • when performing the task of crossing a dirt pile, the cloud server cluster 100 can set multiple foot-lifting heights and a foot-lifting speed for each height. Based on the dynamic parameters of the joints required when the robot lifts its feet, it determines how to control the joint movements; it then runs a simulation test for each height and its corresponding foot-lifting speed to see whether the robot can cross the dirt pile.
  • the task indicated by the first task instruction includes several skills.
  • the skills indicate the activities that the movable device 210 can perform.
  • the first execution result includes the skill implementation strategies of the several skills.
  • the target device corresponding to the target simulation device executes the skill implementation strategy to implement the skill.
  • the several skills include navigation, obstacle avoidance, shooting, dancing, greeting, mopping, spreading disinfectant, unloading, grabbing goods, carrying stones, crossing a pile of earth, etc.
  • the skill implementation strategy includes a motion trajectory, in other words, the execution result includes a motion trajectory.
  • the cloud server cluster 100 can also display the motion trajectory. When the skill is navigation, a number of task points must be marked, including a starting point and an end point; in some possible scenarios, there are several task points between the starting point and the end point. A task point can be associated with one or more skills, that is, these skills need to be completed at the task point.
  • the skills of the task point can be shooting, dancing, greeting, unloading, grabbing goods, carrying stones, etc.
  • the task indicated by the first task instruction includes N skills and M task points.
  • the task scheduling page can also display a thumbnail of the target simulation environment, usually a two-dimensional map.
  • the user can indicate several task points in the thumbnail of the target simulation environment, such as a material point, a disinfection point, and a target point (where the materials are stored).
  • the material point can be the starting point and the target point can be the end point.
  • the skill associated with the material point is grabbing materials
  • the skill associated with the disinfection point is material disinfection
  • the skill associated with the destination point is unloading materials.
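The task-point structure described above, in which each point carries one or more skills to be completed there, could be represented like this; the identifiers are illustrative, not from the patent:

```python
# A task as M task points, each associated with one or more skills.
task = {
    "points": [
        {"name": "material_point", "skills": ["grab_materials"]},     # starting point
        {"name": "disinfection_point", "skills": ["disinfect"]},
        {"name": "target_point", "skills": ["unload_materials"]},     # end point
    ]
}

def skills_at(task, point_name):
    """Skills that must be completed at a given task point."""
    for p in task["points"]:
        if p["name"] == point_name:
            return p["skills"]
    return []

print(skills_at(task, "disinfection_point"))  # ['disinfect']
```

Adding a new skill mid-task (e.g. obstacle avoidance when a moving object is encountered) then amounts to appending to the relevant point's `skills` list or inserting a new point.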
  • the tasks indicated by the first task instruction can be flexibly extended in combination with the actual situations the target simulation device encounters in the target simulation environment, and new skills can be continuously added.
  • the first task instruction is to transport materials to the target point through the disinfection point. Assuming that there is a moving object in the target simulation environment, the target simulation device encounters the object in the process of executing the task. At this time, a new skill can be added for obstacle avoidance; for another example, there is a mound of earth in the target simulation environment, and the target simulation device encounters the mound of earth in the process of executing the task. At this time, a new skill can be added for crossing the mound of earth.
  • the skill realization strategy in the first execution result is a strategy with better performance.
  • the target simulation device is a robot and the task is to cross a dirt pile
  • the cloud server cluster 100 can set multiple foot-lifting heights and a foot-lifting speed for each height, then simulate each combination to see whether the robot crosses the dirt pile. For each combination it analyzes whether the crossing succeeds, the energy consumed, the time taken, and the smoothness of the action, and finally selects the combination that crosses successfully with lower energy consumption, lower time cost, and a smoother action.
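The selection procedure above amounts to filtering out failed crossings and scoring the remaining (height, speed) pairs. A minimal sketch, assuming equal weighting of energy, time and smoothness (the patent does not fix the weights, and the trial values are invented):

```python
def pick_foot_lift(candidates):
    """From simulated (height, speed) trials, keep the ones that
    crossed successfully, then prefer low energy, low time and high
    smoothness via a simple additive score."""
    ok = [c for c in candidates if c["crossed"]]
    if not ok:
        return None  # no combination crossed the dirt pile
    return min(ok, key=lambda c: c["energy"] + c["time"] + (1.0 - c["smooth"]))

trials = [
    {"height": 0.10, "speed": 0.5, "crossed": False, "energy": 2.0, "time": 3.0, "smooth": 0.9},
    {"height": 0.15, "speed": 0.5, "crossed": True,  "energy": 2.5, "time": 3.2, "smooth": 0.8},
    {"height": 0.20, "speed": 0.4, "crossed": True,  "energy": 3.0, "time": 4.0, "smooth": 0.7},
]
best = pick_foot_lift(trials)
print(best["height"], best["speed"])  # 0.15 0.5
```

A real deployment would normalize the criteria before summing; the additive score is only meant to show the shape of the multi-criteria selection.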
  • based on steps 301 to 307 shown in FIG. 3 , at least the following steps may be included:
  • Step 308 The cloud server cluster 100 sends the first execution result to the target device corresponding to the target simulation device.
  • the cloud server cluster 100 can send the first execution result to the target device corresponding to the target simulation device, that is, the real device, so that the target device can execute the task according to the first execution result.
  • the target device is a mobile device 210 .
  • the target device can run according to the motion trajectory in the execution result, and transport the materials when it reaches the material point, transport the transported materials to the disinfection point for disinfection, then transport the disinfected materials to the target point, and finally place the disinfected materials at the target point.
  • the skill implementation strategy in the first execution result is a strategy that can implement the skill.
  • the user can decide the skill implementation strategy required to implement the skill.
  • the first execution result also includes the task execution status, which indicates the execution status of the target simulation device when executing according to the skill implementation strategy, such as execution time, resource consumption, whether the target device is stable during the skill implementation process, etc.
  • based on steps 301 to 307 shown in FIG. 3 , at least the following steps may be included:
  • Step 309 The cloud server cluster 100 sends the task execution status in the first execution result to the terminal device 300 .
  • the cloud server cluster 100 may send the skill implementation strategy to the target device, and receive, from the target device, the execution status of the strategy as executed by the device.
  • the cloud server cluster 100 can control the target simulation device to implement the skill in the target simulation environment according to the skill implementation strategy to obtain the execution status.
  • Step 310 The terminal device 300 displays the task execution status and determines the task deployment strategy.
  • the terminal device 300 can display the task execution status.
  • the task execution status can also include a thumbnail video of the target simulation device executing each skill implementation strategy.
  • the terminal device 300 can also display a thumbnail video of the target simulation device executing the skill implementation strategy, thereby facilitating user decision-making.
  • the task deployment strategy includes identifiers corresponding to the multiple skills in the task execution status, a deployment strategy, and an identifier of a target skill implementation strategy.
  • the deployment strategy can be: deploy on the target device, deploy on the cloud server cluster 100, or make an adaptive decision, that is, determine whether the target device or the cloud server cluster 100 executes the skill based on the resource situation of the target device.
  • after seeing the execution status of different skills, the user can decide whether to deploy each skill on the cloud server cluster 100, on the target device, or as an adaptive decision.
  • the task deployment strategy also includes the execution order of the multiple target skill implementation strategies for the skill, thereby ensuring the implementation of the skill in the actual application process.
  • Step 311 The terminal device 300 sends a task deployment strategy to the cloud server cluster 100.
  • Step 312 The cloud server cluster 100 determines the task execution strategy based on the task deployment strategy.
  • the task execution strategy indicates the deployment strategy of each of the multiple skills and the target skill implementation strategy.
  • it also includes the execution order of the target skill implementation strategies.
  • the identifier of the skill implementation strategy in the task deployment strategy is replaced with the skill implementation strategy in the first execution result to obtain the task execution strategy.
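The identifier substitution described above can be sketched as a simple lookup that turns the task deployment strategy into the task execution strategy; the field names and deployment values are illustrative assumptions:

```python
def build_execution_strategy(deployment, strategies):
    """Replace each skill-strategy identifier in the task deployment
    strategy with the concrete skill implementation strategy taken
    from the first execution result."""
    plan = []
    for item in deployment:
        plan.append({
            "skill": item["skill"],
            "deploy_on": item["deploy_on"],      # 'device' | 'cloud' | 'adaptive'
            "strategy": strategies[item["strategy_id"]],
        })
    return plan

# Strategies from the first execution result, keyed by identifier
strategies = {"s1": {"trajectory": [(0, 0), (3, 4)]}, "s2": {"trajectory": [(3, 4), (6, 4)]}}
deployment = [{"skill": "navigation", "deploy_on": "adaptive", "strategy_id": "s1"}]
print(build_execution_strategy(deployment, strategies)[0]["strategy"]["trajectory"])  # [(0, 0), (3, 4)]
```

When a skill has multiple target strategies, the execution order mentioned in the text would be preserved by the order of entries in `deployment`.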
  • Step 313 The cloud server cluster 100 sends a task execution policy to the target device.
  • the cloud server cluster 100 can send the task execution strategy to the target device corresponding to the target simulation device, that is, the real device, so that the target device can execute the task according to the task execution strategy.
  • users can predict the execution time and execution effect of related skills during the entire task execution process, provide a reference for workload planning and allocation, and ensure user experience.
  • the embodiment of the present invention can achieve real-time interaction with the mobile device 210 through the simulation device and the simulation environment, thereby reducing the risk of accidents occurring in the mobile device 210.
  • the real-time interaction may be posture synchronization.
  • the task indicated by the first task instruction may be interactive positioning.
  • the embodiment of the present invention further includes:
  • Step 401 The terminal device 300 uploads the first pose.
  • step 307 may specifically include the following contents:
  • Step 307a the cloud server cluster 100 executes the task in the target simulation scene using the target simulation device according to the first task instruction, the identifier of the target simulation environment and the identifier of the target simulation device, and the first posture, to obtain a first execution result.
  • the cloud server cluster 100 obtains the first pose of the target device corresponding to the target simulation device; based on the first pose, updates the pose of the target simulation device in the target simulation environment, and correspondingly, the first execution result is the updated target simulation environment and target simulation device.
  • by the time the cloud server cluster 100 has processed the pose data, the actual pose of the mobile device 210 in reality may already have changed.
  • the actual posture of the mobile device 210 in reality can be predicted based on the first posture and the simulated acceleration time.
  • the second configuration interface can also upload the acceleration duration; the cloud server cluster 100 can then update the pose of the target simulation device in the simulation scene according to the first pose and the acceleration duration.
  • the acceleration time can be manually input by the user.
  • the second configuration interface may also indicate the use of a default acceleration duration.
  • the cloud server cluster 100 may calculate the communication delay between itself and the movable device 210, and use the communication delay as the default acceleration duration.
  • the cloud server cluster 100 may update the pose of the target simulation device in the simulation scene according to the first pose and the default acceleration duration.
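The pose catch-up described above, advancing the reported pose by the acceleration duration (e.g. the measured communication delay), can be sketched as dead reckoning; constant velocity over the delay is an illustrative assumption, not stated in the patent:

```python
def predict_pose(first_pose, velocity, accel_duration):
    """Advance the reported pose (x, y, heading) by the acceleration
    duration so the digital twin reflects where the device is now
    rather than where it was when the pose was sent."""
    x, y, theta = first_pose
    vx, vy, omega = velocity
    return (x + vx * accel_duration,
            y + vy * accel_duration,
            theta + omega * accel_duration)

# Device reported at (1, 2) heading 0, moving 0.5 m/s in x, turning
# 0.1 rad/s, with a measured 0.2 s communication delay.
p = predict_pose((1.0, 2.0, 0.0), (0.5, 0.0, 0.1), 0.2)
print([round(v, 3) for v in p])  # [1.1, 2.0, 0.02]
```

With the first and second poses both available (step described later), the twin can interpolate the trajectory between them instead of extrapolating.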
  • after step 307a, at least the following steps are included:
  • Step 402 The terminal device 300 displays the first execution result.
  • the cloud server cluster 100 displays the target simulation device and the target simulation scene, as well as the fourth configuration interface.
  • the fourth configuration interface can be a semantic display interface of the target simulation environment, a geometric update interface of the target simulation environment, and an update interface of the physical parameters of the target simulation environment.
  • the terminal device 300 will also display the target simulation device and the target simulation scene, as well as the fourth configuration interface.
  • the user can operate the fourth configuration interface through the terminal device 300.
  • the cloud server cluster 100 can realize the semantic display, geometric update, and physical parameter update of the target simulation environment based on the information provided by the fourth configuration interface.
  • the update of physical parameters not only includes the update of the physical parameters of the simulated object, but also includes the update of the physical parameters of the weather.
  • the fourth configuration interface can also include an update interface for the dynamic parameters of the target simulation device.
  • the cloud server cluster 100 can obtain the second pose of the target device corresponding to the target simulation device, and update, according to the first pose and the second pose, the pose of the target simulation device in the target simulation environment and the motion trajectory between the first pose and the second pose, thereby achieving real-time pose updates.
  • Step 403 The terminal device 300 obtains a third task instruction.
  • the sensing range of the sensor of the movable device 210 is limited and it may not be able to sense potentially dangerous actions (e.g., going downhill) or moving obstacles in the blind spot of the monitoring.
  • in such cases the movable device 210 is at high risk, and the risk needs to be avoided.
  • the future movement of the movable device 210 can be predicted and evaluated, so as to control the operation of the robot to avoid the risk.
  • the task indicated by the third task instruction is a prediction task.
  • the fourth configuration interface may also include a prediction interface.
  • the terminal device 300 displays the prediction interface, and the user may operate the prediction interface through the terminal device 300 to set information such as the prediction duration.
  • the cloud server cluster 100 may implement motion trajectory prediction based on the information provided by the prediction interface.
  • Step 404 The cloud server cluster 100 accelerates the simulation time of the target simulation scene based on the third task instruction, predicts the motion trajectory of the target simulation device, and obtains the predicted motion trajectory.
  • Step 405 The cloud server cluster 100 determines the monitoring data corresponding to the predicted motion trajectory.
  • the monitoring data can be collected by the side device 200.
  • the monitoring data can be determined by comprehensively collecting the data collected by the multiple robots operating in the park and the data collected by the monitoring equipment.
  • Step 406 The cloud server cluster 100 sends the predicted motion trajectory and its corresponding monitoring data to the terminal device 300 .
  • Step 407 The terminal device 300 displays the predicted motion trajectory and its corresponding monitoring data.
  • the user can combine the predicted motion trajectory and the monitoring data corresponding to the predicted motion trajectory to analyze whether there are potential moving obstacles in the blind spot of the movable device 210. If it is determined that there is a risk of collision, the user will be prompted to issue a deceleration or stop command in advance to avoid it.
  • Step 408 The terminal device 300 determines the operation instruction of the target device.
  • operation instructions such as acceleration, deceleration, and stop may be issued to the target simulation device, thereby controlling the movable device 210 to avoid risks.
  • Step 409 The terminal device 300 sends an operation instruction of the target device to the cloud server cluster 100 .
  • the cloud server cluster 100 sends an operation instruction for the target simulation device to the target device, so that the target device operates according to the operation instruction.
  • the posture of the movable device in the real environment and the simulated environment can be synchronized.
  • the motion trajectory of the simulated device can be predicted in the simulated environment to avoid potential risks.
  • the real-time interaction may be a side task uploaded by the mobile device 210 to the cloud server cluster 100 during the actual operation process.
  • the mobile device 210 may further include the following steps during actual operation:
  • Step 314 The target device corresponding to the target simulation device sends a second task instruction and task data to the cloud server cluster 100 .
  • Step 315 The cloud server cluster 100 processes the task data and executes the task according to the second task instruction to obtain a second execution result.
  • the real scene is relatively complex, and the simulation environment constructed by the embodiment of the present invention generally only takes into account objects in the environment that hardly change. Therefore, tasks with high real-time requirements, such as real-time obstacle avoidance and local path planning, need to be deployed on the mobile device 210 body for execution.
  • as a result, the resources of the mobile device 210 body are increasingly consumed, leading to a resource shortage on the mobile device 210 body.
  • the mobile device 210 may have no additional resources to perform tasks that consume more resources.
  • the embodiment of the present invention can use the cloud server cluster 100 to execute tasks that consume more resources on the mobile device 210 body.
  • the task indicated by the second task instruction may be content recognition.
  • the task data may be data collected by the sensor.
  • the content recognition may be voiceprint recognition, speech recognition, or gesture recognition.
  • the resource consumption of the task indicated by the second task instruction is greater than a set resource consumption threshold.
  • the target device corresponding to the target simulation device can monitor the resource consumption of executing non-essential tasks in real time, and when the resource consumption is greater than the set resource consumption threshold, the second task instruction of the task can be uploaded to the cloud server cluster 100.
  • non-essential tasks can be understood as tasks with lower real-time requirements, such as tasks other than real-time obstacle avoidance and local path planning.
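The offloading rule described above (real-time-critical tasks stay on the device; other tasks are uploaded to the cloud once their resource consumption exceeds the set threshold) might be sketched as follows; the threshold value and field names are assumptions:

```python
RESOURCE_THRESHOLD = 0.7  # illustrative fraction of on-board resources

def should_offload(task, current_load):
    """Tasks with high real-time requirements (e.g. real-time obstacle
    avoidance, local path planning) always run on the device; a
    non-essential task is uploaded to the cloud server cluster once
    running it would push resource consumption past the threshold."""
    if task["realtime"]:
        return False
    return current_load + task["cost"] > RESOURCE_THRESHOLD

print(should_offload({"name": "local_path_planning", "realtime": True, "cost": 0.3}, 0.6))    # False
print(should_offload({"name": "voiceprint_recognition", "realtime": False, "cost": 0.3}, 0.6))  # True
```

The device would evaluate this check continuously against its monitored resource usage, sending the second task instruction and task data only when the check passes.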
  • the task indicated by the second task instruction may be issued for a local map.
  • the local map required for the current operation of the robot is adaptively determined based on the sensing range of the robot sensor, and is sent to the robot from the cloud, thereby avoiding the resource consumption caused by large-scale map deployment to the robot body.
  • the cloud server cluster 100 adaptively determines the local map required for the current operation of the mobile device 210 based on the sensing range of the sensor of the mobile device 210, and then sends it to the mobile device 210, thereby avoiding the resource consumption caused by large-scale map deployment to the mobile device 210 body.
  • the target device corresponding to the target simulation device only needs to pay attention to the moving obstacles in the environment, which can reduce resource consumption to a certain extent.
  • the task data may be data collected by a sensor of the target device corresponding to the target simulation device (which may indicate the maximum area the target device can perceive); the cloud server cluster 100 receives the task data sent by the target device, matches it against the target simulation environment, and determines the local area; thereafter, the local map of the target simulation scene within that local area may be determined.
  • the local map of the target device corresponding to the target simulation device can be determined based on the sensing range of the current simulation sensor of the target simulation device, and the local map can be sent to the target device corresponding to the target simulation device. If this method is adopted, no task data is required.
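Determining a local map from the sensing range amounts to cropping a window out of the global map around the device's position. A minimal sketch on an occupancy grid, assuming one cell per meter (the patent does not specify the map representation):

```python
def crop_local_map(global_map, center, sensor_range):
    """Cut out of the global occupancy grid the square window that the
    device's sensors can currently perceive, clamped to map bounds."""
    cx, cy = center
    r = sensor_range
    return [row[max(0, cx - r):cx + r + 1]
            for row in global_map[max(0, cy - r):cy + r + 1]]

# 10 x 10 global grid with one obstacle near the device
grid = [[0] * 10 for _ in range(10)]
grid[5][5] = 1
local = crop_local_map(grid, (5, 5), 2)
print(len(local), len(local[0]))  # 5 5
print(local[2][2])                # 1 (the obstacle, now at the window center)
```

Only this window is sent to the device, so it need only track moving obstacles inside it, which is the resource saving the text describes.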
  • Step 316 The cloud server cluster 100 sends the second execution result to the target device.
  • mobile devices can upload tasks that consume a lot of resources or are difficult to execute to the cloud. After the cloud platform processes them, it returns the results to the mobile devices on the side, reducing the hardware requirements for the mobile devices.
  • cloud resources are elastic, allowing users to use them on demand, further reducing economic costs.
  • Scenario 1 The robot conducts inspections in the park.
  • the traditional approach is to first map the park, obtain a two-dimensional grid map or a three-dimensional point cloud map, mark the task points on the map, and then assign the task to the robot.
  • the robot develops and tests its navigation skills in a real environment or a simulated world in advance, and then conducts navigation inspections according to the map.
  • the embodiment of the present invention aims to establish a simulation environment and simulation equipment (collectively referred to as a digital twin world) to enable the robot to efficiently perform skill training, thereby completing the park inspection task.
  • A1: Multiple robots work together to collect multi-source heterogeneous data of the park, such as laser point clouds, visual images, robot speed, and robot model and structure data, and upload them to the cloud server cluster 100.
  • the cloud server cluster 100 performs noise reduction processing and puts the data into the data warehouse 111.
  • the cloud server cluster 100 performs time/space alignment processing on the point cloud, video, image and other data in the data warehouse 111, performs SLAM mapping based on the laser point cloud data, generates a point cloud model, combines the image data, performs mesh texture reconstruction, generates a simulated environment of the park, and adds the simulated environment to the simulated environment library 112; in addition, simulated objects of objects in the park can also be generated, and the simulated objects are added to the simulated object library 115.
  • the description information of the objects in the point cloud and image data is extracted and stored in the semantic database 118. Based on the matching of the description information of the objects in the semantic database 118 and the three-dimensional model in the simulated environment, the semantic map of the simulated environment is obtained.
  • the simulated environment and the robot model are used as the digital twin world.
  • the cloud server cluster 100 performs dynamic identification, air resistance, and interaction force analysis on the collected robot motion data and environmental data based on the algorithms in the algorithm library 119.
  • the dynamic parameters of each joint in the robot can be obtained, making the simulation device closer to the real robot;
  • the physical parameters such as the friction coefficient and air resistance coefficient of the three-dimensional model in the simulated environment can be obtained, so that the simulated objects in the simulated environment obtained in the previous step and/or the weather in the simulated environment have physical parameters close to reality, further reducing the difference between the simulated environment and the real environment.
  • the cloud server cluster 100 extracts behavior segments of moving objects from the collected video data and generates behavior patterns statistically, thereby obtaining the behavior pattern library 116. Modeling is then performed based on the behavior patterns in the behavior pattern library to obtain the simulation behavior library 117, thereby simulating real object behaviors and weather behaviors, generating different simulation environments for robot training, and improving the robot's ability to cope with complex environments in the real world.
  • the user operates the model configuration interface (the above-mentioned first configuration interface) provided by the cloud server cluster 100 through the terminal device 300 to determine the identification of the simulation device, the identification of the simulation environment, and the identification of some newly added simulated objects in the simulation environment, the identification of the simulated behavior of the simulated object, and the identification of the simulated weather behavior.
  • the identification of the simulation device can be determined by the robot model and its structural parameters.
  • the cloud server cluster 100 can match the simulation devices in the simulation device library 113 based on the robot model and its structural parameters, and select the closest simulation device as the robot model.
  • the cloud server cluster 100 builds a data twin world based on the identifier provided by the model configuration interface (the above-mentioned first configuration interface).
  • the digital twin world can be segmented and loaded in a distributed manner across simulated devices, simulated environments, simulated objects, simulated object behaviors, and simulated weather behaviors, which reduces the loading time of the entire model and improves the fluency of the simulation.
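The segmented, distributed loading described above can be sketched as fetching the model segments concurrently rather than loading one monolithic model. The segment names and loader below are illustrative assumptions, not the patented mechanism.

```python
# Hypothetical sketch: load the digital twin world in segments (simulation
# device, environment, objects, behaviors) in parallel, then assemble them.
from concurrent.futures import ThreadPoolExecutor


def load_segment(name):
    # Stand-in for fetching one model segment from the model libraries.
    return {"segment": name, "loaded": True}


def load_twin_world(segments):
    """Load all segments concurrently and assemble them into one world dict."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        loaded = list(pool.map(load_segment, segments))
    return {item["segment"]: item for item in loaded}
```

Because the segments are independent, the slowest segment, rather than the sum of all segments, bounds the loading time.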
  • the user operates the task scheduling page (the second configuration interface mentioned above) provided by the cloud server cluster 100 through the terminal device 300 to determine the inspection task.
  • the cloud server cluster 100 uses the artificial intelligence algorithm in the algorithm library 119 to execute the inspection task uploaded on the task scheduling page (the above-mentioned second configuration interface) in the digital twin world constructed by A6 and obtains the first execution result.
  • the cloud server cluster 100 forms a closed-loop simulation training system of perception, decision-making, planning, and control, thereby improving the efficiency of robot skill development and testing.
  • the cloud server cluster 100 packages the first execution result as the task execution strategy for application deployment, and sends the packaged robot application to multiple robots on the side.
  • the robot application, also known as a robot-native application, refers to an application designed and developed for the robot platform and scenario.
  • the task execution strategy may be a planned motion path.
  • the multi-source heterogeneous data collected by the multiple robots on the side during the inspection process are uploaded to the cloud server cluster 100 again, and the cloud server cluster 100 updates the existing simulation environment, reconstructs the mesh texture of the changed parts, and updates the semantic database.
  • a simulation environment is established by collecting multi-source sensor data from the park, and the difference between the two is further reduced by continuously updating the simulation environment and giving the simulation environment physical parameters close to the real environment, providing the robot with an accurate navigation map.
  • the generated digital twin world is used to enable closed-loop simulation, reducing the time and economic costs of field testing of robots.
  • the use of edge-cloud collaborative technology greatly improves the efficiency of local and cloud resource utilization of robots, and accelerates the development and testing of robot skills.
  • Scenario 2: In some special places, such as quarantine areas or shelter hospitals, manual tasks carry a risk of infection, and various disinfection and epidemic prevention processes must be followed, which incurs great time and economic costs. Therefore, robots can be used to replace manual tasks. Unlike outdoor park inspections, shelter hospitals are complex places, and the tasks that robots must perform are more difficult than traditional inspections, including carrying materials, entering designated areas for disinfection, unloading at the target point, and interacting with people in the quarantine area. The quality and completion time of the robot's tasks must also be evaluated to determine the daily material delivery volume.
  • the robot collects multi-source heterogeneous data in the square cabin hospital and uploads the data to the cloud server cluster 100. Based on the process of building a digital twin world described in A1 to A6 above, the first digital twin world of the square cabin hospital is generated.
  • the first digital twin world is a digital twin world configured by users for simulation training, which can construct a variety of complex environments that are rarely seen in the real world.
  • the user performs semantic analysis and decomposition of the material disinfection and transportation task through the terminal device 300, for example, converting "transporting materials from the disinfection point to the target point" into a task flow: grabbing materials, arriving at the disinfection point, disinfection, arriving at the target point, and unloading.
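The semantic decomposition described above can be sketched as a mapping from a task phrase to an ordered sequence of atomic skills. The rule table and skill names below are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch of task-flow decomposition: a natural-language task is
# mapped to an ordered list of atomic skills, matching the example above.

TASK_FLOWS = {
    "transport materials from the disinfection point to the target point": [
        "grab_materials",
        "goto_disinfection_point",
        "disinfect",
        "goto_target_point",
        "unload",
    ],
}


def decompose(task):
    """Return the skill sequence for a known task, or raise if unknown."""
    try:
        return list(TASK_FLOWS[task.lower()])
    except KeyError:
        raise ValueError(f"no task flow registered for: {task!r}")
```

Each skill in the resulting sequence can then be simulation-trained independently, as in step B3.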
  • the user edits the decomposed task flow through the task scheduling page (the second configuration interface) provided by the cloud server cluster 100.
  • the cloud server cluster 100 performs simulation training on multiple skills in the task flow uploaded on the task scheduling page (the above-mentioned second configuration interface) in the first digital twin world determined in B1 to determine a first execution result.
  • the cloud server cluster 100 can decompose the tasks of the task flow into multiple skills (which cannot be further divided) and perform simulation training on each skill.
  • the first execution result may include several skill implementation strategies for each of the multiple skills.
  • the cloud server cluster 100 sends the task execution status in the first execution result to the terminal device 300.
  • the terminal device 300 displays the task execution status and determines the task deployment strategy.
  • the cloud server cluster 100 determines the task execution strategy based on the task deployment strategy, performs packaging preparation for application deployment based on the task execution strategy, and sends the packaged robot application to the side robot.
  • the robot on the side runs the received robot application, carries out material disinfection and transportation in the square cabin hospital, and monitors the resource consumption and execution difficulty of the task in real time. If a task with high resource consumption or difficult processing is encountered, the task on the side is uploaded to the cloud server cluster 100 for execution.
  • the side tasks can be: interacting with people in the isolation point, gesture recognition, and other tasks.
  • After completing the task, the cloud server cluster 100 sends the second execution result to the robot at the side.
  • the cloud server cluster 100 generates a second digital twin world of the square cabin hospital based on the process of building a digital twin world described in A1 to A6 above, and the terminal device 300 displays the second digital twin world.
  • the second digital twin world is a digital twin world configured by users to achieve real-time interaction, which is closer to the real environment.
  • the cloud server cluster 100 continuously updates the posture of the robot model in the second digital twin world determined by B9 by receiving the real-time posture data of the robot on the side, so as to achieve synchronization with the posture of the robot in reality.
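The pose synchronization described here can be sketched as applying each real-time pose report from the edge-side robot to the twin and tracking the report timestamp so the simulation-to-reality lag can be measured. Field names are illustrative assumptions.

```python
# Hypothetical sketch of pose synchronization between the physical robot and
# its model in the second digital twin world.

class TwinPose:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.orientation = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)
        self.last_update = 0.0

    def sync(self, report):
        """Apply one real-time pose report from the edge-side robot."""
        self.position = tuple(report["position"])
        self.orientation = tuple(report["orientation"])
        self.last_update = report["timestamp"]

    def lag(self, now):
        """Seconds since the twin last matched the physical robot."""
        return now - self.last_update
```

Keeping `lag` small is what allows the twin-world clock acceleration in B11 to make real-time teleoperation feasible.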
  • the cloud server cluster 100 accelerates the time of the current digital twin world according to the actual posture data of the robot, reduces the delay between the simulation and the real world, and achieves the purpose of real-time teleoperation. For details, see the description of step 307a above.
  • the cloud server cluster 100 can predict the motion trajectory of the robot model in the second digital twin world based on the operation of the prediction interface of the terminal device 300 on the second digital twin world, obtain the predicted motion trajectory, and determine the monitoring data of the square cabin hospital corresponding to the predicted motion trajectory.
  • the cloud server cluster 100 can analyze whether there are potential mobile obstacles in the robot's blind spot. If it is determined that there is a risk of collision, it will ask the terminal device 300 whether it agrees to the robot on the side to execute the operation instruction (deceleration or stop instruction) to avoid it. If the terminal device 300 agrees to the operation instruction, the cloud server cluster 100 can send the operation instruction to the robot on the side.
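The blind-spot risk analysis described here can be sketched as sampling the predicted trajectory and flagging a risk when any point comes within a safety radius of a known obstacle; geometry and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the blind-spot collision check on the predicted
# motion trajectory, mirroring the confirm-then-send flow described above.
import math


def collision_risk(trajectory, obstacles, safety_radius):
    """Return True if any trajectory point is within safety_radius of any
    obstacle; both are iterables of (x, y) tuples."""
    for px, py in trajectory:
        for ox, oy in obstacles:
            if math.hypot(px - ox, py - oy) < safety_radius:
                return True
    return False


def avoidance_instruction(risky):
    # On risk, propose a deceleration/stop instruction for the terminal
    # device to confirm before it is sent to the edge-side robot.
    return "decelerate_or_stop" if risky else None
```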
  • the cloud server cluster 100 can send the predicted motion trajectory and the corresponding monitoring data of the square cabin hospital to the terminal device 300, so as to analyze whether there are potential mobile obstacles in the robot's blind spot. If the user determines that there is a risk of collision, the operation instruction of the robot on the side can be uploaded to the cloud server cluster 100, and the cloud server cluster 100 can send the operation instruction to the corresponding robot on the side.
  • robots perform tasks in the shelter hospital, avoiding the risk of virus infection in manual execution, simplifying the disinfection and epidemic prevention process, and reducing time and economic costs.
  • the digital twin world of the shelter hospital established based on the present invention not only provides a simulation environment that conforms to the real world for robot skill training and improves development and testing efficiency, but also supports users to semi-autonomously remotely operate robots, simulate the actual movement state of robots in the simulation environment, and predict the movement trajectory of robots through the large computing power on the cloud, thereby avoiding potential risks and further improving work efficiency.
  • the data collected by the edge devices is used to build simulated environments, simulated objects, simulated behaviors, etc. in the cloud server cluster.
  • Various complex simulation environments can be built for simulation training and testing to solve the problem of low probability of unexpected situations in real environments and large amount of data.
  • simulated devices with the same physical properties as the actual movable devices are used to perform tasks in the simulation environment, thereby reducing the difference between executing tasks in the real environment and in the simulation environment.
  • the position and posture synchronization of the physical movable device and the simulation device can be achieved.
  • the subsequent motion trajectory of the simulation device can be predicted, and the predicted motion trajectory and the monitoring data of the trajectory can be fed back to the user, so that the user can understand the operation status of the movable device and operate the physical movable device to avoid risks, thereby reducing the difficulty and cost of operating the movable device to complete tasks.
  • cloud server clusters can handle tasks that consume large amounts of mobile device resources or are difficult to handle, thereby reducing the hardware requirements for physical mobile devices.
  • predictive maintenance is performed on the movable device 210 through simulation equipment and simulation environment, so as to extend the service life of the movable device 210 as much as possible and reduce the risk of accidents.
  • the present application also provides a simulation training device, as shown in FIG. 7, comprising:
  • a first interface providing module 701 is used to provide a first configuration interface, wherein the first configuration interface is used to obtain an identifier of a target environment and an identifier of a target simulation device;
  • a second interface providing module 702 configured to provide a second configuration interface, wherein the second configuration interface is used to obtain task instructions;
  • the task execution module 703 is used to execute the task in the simulation scenario using the target simulation device according to the task instruction to obtain the execution result.
  • the first interface providing module 701, the second interface providing module 702 and the task execution module 703 can all be implemented by software, or can be implemented by hardware.
  • the implementation of the first interface providing module 701 is introduced below by taking the first interface providing module 701 as an example.
  • the implementation of the second interface providing module 702 and the task execution module 703 can refer to the implementation of the first interface providing module 701.
  • the first interface providing module 701 may include code running on a computing instance.
  • the computing instance may include at least one of a physical host (computing device), a virtual machine, and a container. Further, the above-mentioned computing instance may be one or more.
  • the first interface providing module 701 may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers used to run the code may be distributed in the same region (region) or in different regions.
  • the multiple hosts/virtual machines/containers used to run the code may be distributed in the same availability zone (AZ) or in different AZs, each AZ including a data center or multiple data centers with similar geographical locations. Among them, usually a region may include multiple AZs.
  • multiple hosts/virtual machines/containers used to run the code can be distributed in the same virtual private cloud (VPC) or in multiple VPCs.
  • a VPC is set up in a region.
  • a communication gateway needs to be set up in each VPC to achieve interconnection between VPCs through the communication gateway.
  • the first interface providing module 701 may include at least one computing device, such as a server, etc.
  • the first interface providing module 701 may also be a device implemented using an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL) or any combination thereof.
  • the multiple computing devices included in the first interface providing module 701 can be distributed in the same region or in different regions.
  • the multiple computing devices included in the first interface providing module 701 can be distributed in the same AZ or in different AZs.
  • the multiple computing devices included in the first interface providing module 701 can be distributed in the same VPC or in multiple VPCs.
  • the multiple computing devices can be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs.
  • the first interface providing module 701 can be used to execute any step in the simulation training method
  • the second interface providing module 702 can be used to execute any step in the simulation training method
  • the task execution module 703 can be used to execute any step in the simulation training method.
  • the steps that the first interface providing module 701, the second interface providing module 702, and the task execution module 703 are responsible for implementing can be specified as needed.
  • the first interface providing module 701, the second interface providing module 702, and the task execution module 703 respectively implement different steps in the simulation training method to realize the full functions of the simulation training device.
  • the present application also provides a computing device 800.
  • the computing device 800 includes: a bus 802, a processor 804, a memory 806, and a communication interface 808.
  • the processor 804, the memory 806, and the communication interface 808 communicate with each other through the bus 802.
  • the computing device 800 can be a server or a terminal device. It should be understood that the present application does not limit the number of processors and memories in the computing device 800.
  • the bus 802 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus may be divided into an address bus, a data bus, a control bus, etc.
  • for ease of illustration, the bus is represented by only one line in FIG. 8, but this does not mean that there is only one bus or only one type of bus.
  • the bus 802 may include a path for transmitting information between various components of the computing device 800 (e.g., the memory 806, the processor 804, and the communication interface 808).
  • Processor 804 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
  • the memory 806 may include a volatile memory (volatile memory), such as a random access memory (RAM).
  • the memory 806 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 806 stores executable program codes, and the processor 804 executes the executable program codes to respectively implement the functions of the first interface providing module 701, the second interface providing module 702, and the task execution module 703, thereby implementing the simulation training method. That is, the memory 806 stores instructions for executing the simulation training method.
  • the communication interface 808 uses a transceiver module such as, but not limited to, a network interface card or a transceiver to implement communication between the computing device 800 and other devices or communication networks.
  • the embodiment of the present application also provides a computing device cluster corresponding to the above-mentioned cloud server cluster 100.
  • the computing device cluster includes at least one computing device.
  • the computing device can be a server, such as a central server, an edge server, or a local server in a local data center.
  • the computing device can also be a terminal device such as a desktop, a laptop, or a smart phone.
  • the computing device cluster includes at least one computing device 800.
  • the memory 806 in one or more computing devices 800 in the computing device cluster may store the same instructions for executing the simulation training method.
  • the memory 806 of one or more computing devices 800 in the computing device cluster may also store partial instructions for executing the simulation training method.
  • the combination of one or more computing devices 800 may jointly execute instructions for executing the simulation training method.
  • the memory 806 in different computing devices 800 in the computing device cluster can store different instructions, which are respectively used to execute part of the functions of the simulation training device. That is, the instructions stored in the memory 806 in different computing devices 800 can be implemented.
  • one or more computing devices in the computing device cluster can be connected via a network.
  • the network can be a wide area network or a local area network, etc.
  • FIG. 10 shows a possible implementation. As shown in FIG. 10, two computing devices 800A and 800B are connected via a network; specifically, the connection is made through the communication interface in each computing device.
  • the memory 806 in the computing device 800A stores instructions for executing the functions of the first interface providing module 701 and the second interface providing module 702. At the same time, the memory 806 in the computing device 800B stores instructions for executing the functions of the task execution module 703.
  • a rationale for the connection shown in FIG. 10 is that the simulation training method provided in this application requires a large amount of resources to load the simulation environment and simulation equipment and to perform simulation training; therefore, the functions implemented by the task execution module 703 are handed over to the computing device 800B for execution.
  • the functionality of the computing device 800A shown in FIG. 10 may also be provided by multiple computing devices 800.
  • functionality of the computing device 800B may also be completed by multiple computing devices 800.
  • the present application also provides a computer program product including instructions.
  • the computer program product may be software or a program product including instructions that can be run on a computing device or stored in any available medium.
  • the computer program product is run on at least one computing device, the at least one computing device performs a simulation training method.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium can be any available medium that can be stored by a computing device or a data storage device such as a data center containing one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive).
  • the computer-readable storage medium includes instructions that instruct the computing device to perform a simulation training method.
  • each component or each step can be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.


Abstract

A simulation training method, apparatus, and computing device cluster. In an embodiment, the method is applied to a cloud management platform and includes: providing a first configuration interface, where the first configuration interface is used to obtain an identifier of a target simulation environment and an identifier of a target simulation device; providing a second configuration interface, where the second configuration interface is used to obtain a task instruction; and, according to the task instruction, executing a task in the target simulation environment using the target simulation device to obtain an execution result. In this way, a movable device and the environment it is in can be simulated on the cloud: the cloud uses a user-configured simulation device to execute a user-configured task in a user-configured simulation environment, achieving intelligent task execution.

Description

Simulation Training Method, Apparatus, and Computing Device Cluster
This application claims priority to Chinese Patent Application No. 202211237632.0, entitled "Simulation training method, apparatus, and computing device cluster", filed with the China National Intellectual Property Administration on October 10, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of simulation technology, and in particular to a simulation training method, apparatus, and computing device cluster.
Background
At present, virtual robots are being used more and more widely among robot implementations. In application scenarios that are dangerous, dirty, tedious, or otherwise difficult to realize, the requirements on the task execution of virtual robots are correspondingly higher.
Existing schemes for task execution by virtual robots are not intelligent enough and cannot yet meet user needs. How to construct an execution scheme for virtual robots is a problem that urgently needs to be solved.
Summary
Embodiments of this application provide a simulation training method, apparatus, and computing device cluster, which can simulate a movable device and the environment it is in on the cloud: the cloud uses a user-configured simulation device to execute a user-configured task in a user-configured simulation environment, achieving intelligent task execution.
In a first aspect, an embodiment of this application provides a simulation training method applied to a cloud management platform. The method includes: providing a first configuration interface, where the first configuration interface is used to obtain an identifier of a target simulation environment and an identifier of a target simulation device; providing a second configuration interface, where the second configuration interface is used to obtain a task instruction; and, according to the task instruction, executing a task in the simulation scenario using the target simulation device to obtain an execution result.
In this scheme, a movable device and the environment it is in can be simulated on the cloud: the cloud uses a user-configured simulation device to execute a user-configured task in a user-configured simulation environment, achieving intelligent task execution.
In a possible implementation, before obtaining the identifier of the target simulation scenario, the method includes: obtaining collected data corresponding to the target simulation environment; providing a third configuration interface, where the third configuration interface is used to obtain a type parameter of the target simulation environment; and generating the target simulation environment according to the collected data corresponding to the target simulation environment and the type parameter of the target simulation environment.
In this scheme, the simulation can be performed based on the type of the environment, ensuring that the simulated environment is close to the real environment.
In one example, the type parameter includes one or more of the following: indoor scene, outdoor scene, and weather type.
In one example, the collected data corresponding to the target environment includes data perceived by movable devices and/or site devices in the real environment corresponding to the target environment.
In a possible implementation, the target environment includes at least one three-dimensional model and its corresponding physical parameters, and executing the task in the target simulation environment using the target simulation device according to the task instruction includes: executing the task in the target simulation environment using the target simulation device and the physical parameters of the at least one three-dimensional model, according to the task instruction.
In this scheme, the task can be executed based on the physical parameters of the objects in the simulated environment, ensuring that the task execution result is close to real task execution.
In one example, the physical parameters are determined based on the collected data corresponding to the target environment.
In one example, the physical parameters include a friction coefficient and/or an air resistance coefficient.
In a possible implementation, the second configuration interface is also used to obtain the number of processes corresponding to the task.
In this scheme, the number of processes can be set so that tasks are executed in parallel, ensuring task execution efficiency.
In a possible implementation, the task includes a starting point and an end point, and the second configuration interface is also used to obtain the starting point and the end point set by the user.
In a possible implementation, obtaining the identifier of the target simulation environment includes: the first configuration interface is used to obtain the identifier of the target simulation environment selected by the user from multiple candidate simulation environments.
In one example, each of the multiple candidate environments is determined based on its corresponding collected data, and the collected data includes data perceived by movable devices and/or site devices in the corresponding real environment.
In a possible implementation, obtaining the identifier of the target simulation device includes: the first configuration interface is used to obtain the identifier of the target simulation device selected by the user from multiple candidate simulation devices.
In one example, the multiple candidate devices include preset candidate devices or candidate devices generated by modeling based on appearance data of real devices.
In a possible implementation, the method further includes: delivering the execution result to the target device corresponding to the simulation device.
In a possible implementation, executing the task in the target simulation environment using the target simulation device according to the task instruction includes: converting the task instruction into a simulation instruction based on semantic recognition, where the simulation instruction is in a computer-readable format; and executing the task in the target simulation environment using the target simulation device based on the simulation instruction.
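A minimal sketch of the semantic conversion just described: a task instruction is parsed into a computer-readable simulation instruction by matching recognized verbs to skill identifiers. The keyword table and output schema are illustrative assumptions; the application does not fix a concrete format.

```python
# Hypothetical sketch: convert a user task instruction into a
# computer-readable simulation instruction via simple keyword matching.

KEYWORDS = {
    "navigate": "skill.navigation",
    "grab": "skill.grasp",
    "disinfect": "skill.disinfect",
}


def to_simulation_instruction(task_instruction):
    """Map recognized verbs in the instruction to skill identifiers."""
    words = task_instruction.lower().split()
    skills = [KEYWORDS[w] for w in words if w in KEYWORDS]
    if not skills:
        raise ValueError("no recognizable skill in instruction")
    return {"instruction": task_instruction, "skills": skills}
```

A production system would use real semantic recognition rather than a keyword table, but the output shape (an instruction resolved to executable skills) is the same.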
In a possible implementation, the task includes at least one skill.
In one example, the skill includes navigation, the execution result includes a motion trajectory, and the method further includes displaying the motion trajectory.
In one example, the target simulation device includes at least one joint, and the at least one joint has corresponding dynamic parameters. Executing the task in the target simulation environment using the target simulation device according to the task instruction includes: according to the task instruction, controlling the target simulation device to execute the task in the target simulation environment using the dynamic parameters of the at least one joint in the target simulation device.
In this scheme, the simulation device can be controlled to execute the task based on the dynamic parameters of its joints, ensuring that the task execution result is close to real task execution.
In one example, the method further includes: displaying the resource consumption of each of the at least one skill during execution.
In one example, the method further includes: determining, among the at least one skill, a target skill to be deployed on the target device corresponding to the target simulation device.
In a possible implementation, the task includes a prediction metric, and the execution result includes a value of the prediction metric.
In one example, the prediction metric includes a temperature threshold, a running-time threshold, and a battery-level threshold of a component in the target device corresponding to the target simulation device.
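The predictive-maintenance check implied by these thresholds can be sketched as comparing predicted metric values for a component against its temperature, running-time, and battery thresholds and reporting which ones are crossed. All names are illustrative assumptions.

```python
# Hypothetical sketch: flag component metrics whose predicted values cross
# their maintenance thresholds (high temperature/runtime is bad, low battery
# is bad).

def exceeded_thresholds(predicted, thresholds):
    """Return the names of metrics whose predicted value crosses its threshold."""
    alerts = []
    if predicted["temperature"] > thresholds["temperature_max"]:
        alerts.append("temperature")
    if predicted["runtime_hours"] > thresholds["runtime_max_hours"]:
        alerts.append("runtime")
    if predicted["battery_pct"] < thresholds["battery_min_pct"]:
        alerts.append("battery")
    return alerts
```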
In a possible implementation, the target environment in the simulation scenario has semantic information.
In a possible implementation, the task includes a prediction metric, and the execution result includes a value of the prediction metric.
In a possible implementation, the first configuration interface is used to obtain an identifier of a static simulated object and a first position of the static simulated object in the target simulation environment, and/or an identifier of a simulated object behavior and a second position of the simulated object behavior in the target simulation environment.
The first position in the target simulation environment includes the static simulated object, and the second position includes the simulated object behavior.
In a second aspect, an embodiment of this application provides a simulation training apparatus applied to a cloud management platform. The apparatus includes:
a first interface providing module, configured to provide a first configuration interface, where the first configuration interface is used to obtain an identifier of a target simulation environment and an identifier of a target simulation device;
a second interface providing module, configured to provide a second configuration interface, where the second configuration interface is used to obtain a task instruction;
a task execution module, configured to execute a task in the simulation scenario using the target simulation device according to the task instruction to obtain an execution result.
In a possible implementation, the apparatus further includes an environment generation module, which includes a data collection unit, an interface providing unit, and a generation unit; where
the data collection unit is configured to obtain collected data corresponding to the target simulation environment;
the interface providing unit is configured to provide a third configuration interface, where the third configuration interface is used to obtain a type parameter of the target simulation environment;
the generation unit is configured to generate the target simulation environment according to the collected data corresponding to the target simulation environment and the type parameter of the target simulation environment.
In one example, the type parameter includes one or more of the following: indoor scene, outdoor scene, and weather type.
In one example, the collected data corresponding to the target simulation environment includes data perceived by movable devices and/or site devices in the real environment corresponding to the target simulation environment.
In a possible implementation, the target simulation environment includes at least one three-dimensional model and its corresponding physical parameters, and the task execution module is configured to execute the task in the target simulation environment using the target simulation device and the physical parameters of the at least one three-dimensional model, according to the task instruction.
In one example, the physical parameters are determined based on the collected data corresponding to the target environment.
In one example, the physical parameters include a friction coefficient and/or an air resistance coefficient.
In a possible implementation, the second configuration interface is also used to obtain the number of processes corresponding to the task.
In a possible implementation, the task includes a starting point and an end point, and the second configuration interface is also used to obtain the starting point and the end point set by the user.
In a possible implementation, the first configuration interface is used to obtain the identifier of the target simulation environment selected by the user from multiple candidate simulation environments.
In a possible implementation, the first configuration interface is used to obtain the identifier of the target simulation device selected by the user from multiple candidate simulation devices.
In one example, the multiple candidate simulation devices include preset candidate simulation devices or candidate simulation devices generated by modeling based on appearance data of real devices.
In a possible implementation, the apparatus further includes a delivery module, where the delivery module is configured to deliver the execution result to the target device corresponding to the target simulation device.
In a possible implementation, the task execution module is configured to convert the task instruction into a simulation instruction based on semantic recognition, where the simulation instruction is in a computer-readable format, and to execute the task in the target simulation environment using the target simulation device based on the simulation instruction.
In a possible implementation, the task includes at least one skill.
In one example, the skill includes navigation, the execution result includes a motion trajectory, and the method further includes displaying the motion trajectory.
In one example, the simulation device includes at least one joint and corresponding dynamic parameters, and the task execution module is configured to control the target simulation device to execute the task in the simulation scenario using the dynamic parameters of the at least one joint in the target simulation device.
In one example, the apparatus further includes a display module, where the display module is configured to display the resource consumption of each of the at least one skill during execution.
In one example, the apparatus further includes a deployment module, where the deployment module is configured to determine, among the at least one skill, a target skill to be deployed on the target device corresponding to the target simulation device.
In a possible implementation, the task includes a prediction metric, and the execution result includes a value of the prediction metric.
In one example, the prediction metric includes a temperature threshold, a running-time threshold, and a battery-level threshold of a component in the target device corresponding to the target simulation device.
In a possible implementation, the target environment in the simulation scenario has semantic information.
In a possible implementation, the task includes a prediction metric, and the execution result includes a value of the prediction metric.
In a possible implementation, the first configuration interface is used to obtain an identifier of a static simulated object and a first position of the static simulated object in the target simulation environment, and/or an identifier of a simulated object behavior and a second position of the simulated object behavior in the target simulation environment.
The first position in the target simulation environment includes the static simulated object, and the second position includes the simulated object behavior.
In a third aspect, an embodiment of this application provides a simulation training apparatus, including: at least one memory for storing a program; and at least one processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to perform the method provided in the first aspect.
In a fourth aspect, an embodiment of this application provides a simulation training apparatus that runs computer program instructions to perform the method provided in the first aspect. For example, the apparatus may be a chip or a processor.
In one example, the apparatus may include a processor, which may be coupled to a memory, read instructions from the memory, and perform the method provided in the first aspect according to the instructions. The memory may be integrated in the chip or processor, or may be independent of the chip or processor.
In a fifth aspect, an embodiment of the present invention provides a computing device cluster, including at least one computing device, each including a processor and a memory; the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method provided in the first aspect.
In a sixth aspect, an embodiment of this application provides a computer storage medium storing instructions that, when run on a computer, cause the computer to perform the method provided in the first aspect.
In a seventh aspect, an embodiment of this application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method provided in the first aspect.
Brief Description of the Drawings
FIG. 1 is a system architecture diagram of a cloud system provided by an embodiment of this application;
FIG. 2 is a schematic diagram of a cloud scenario provided by an embodiment of this application;
FIG. 3 is a first flow diagram of a simulation training method provided by an embodiment of this application;
FIG. 4 is a second flow diagram of a simulation training method provided by an embodiment of this application;
FIG. 5a is a schematic diagram of a model configuration page provided by an embodiment of this application;
FIG. 5b is a schematic diagram of a task orchestration page provided by an embodiment of this application;
FIG. 6a is a schematic diagram of edge-cloud collaboration in a park inspection scenario provided by an embodiment of this application;
FIG. 6b is a schematic diagram of edge-cloud collaboration in a shelter hospital scenario provided by an embodiment of this application;
FIG. 7 is a structural diagram of a simulation training apparatus provided by an embodiment of this application;
FIG. 8 is a structural diagram of a computing device provided by an embodiment of the present invention;
FIG. 9 is a structural diagram of a computing device cluster provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of an application scenario of the computing device cluster of FIG. 9.
具体实施方式
为了使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图,对本申请实施例中的技术方案进行描述。
在本申请实施例的描述中,“示例性的”、“例如”或者“举例来说”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”、“例如”或者“举例来说”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”、“例如”或者“举例来说”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,单独存在B,同时存在A和B这三种情况。另外,除非另有说明,术语“多个”的含义是指两个或两个以上。例如,多个系统是指两个或两个以上的系统,多个终端是指两个或两个以上的终端。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
图1为本发明实施例提供的一种云端系统的架构示意图。如图1所示,该系统包括云服务器集群100、边侧设备200和终端设备300。
其中,云服务器集群100可以用独立的电子设备或者是多个电子设备组成的设备集群来实现。可选地,云服务器集群100中的电子设备可为终端也可为计算机,还可以为服务器。在一个例子中,本方案中涉及的服务器可以用于提供云服务,其可以为一种可以与其他的设备建立通信连接、且能为其他的设备提供运算功能和/或存储功能的服务器或者是超级终端。其中,本方案中涉及的服务器可以是硬件服务器,也可以植入虚拟化环境中,例如,本方案中涉及的服务器可以是在包括一个或多个其他虚拟机的硬件服务器上执行的虚拟机。
其中,边侧设备200可以为可移动设备210、场地设备220。其中,可移动设备210可以为机器人、车辆等可以移动的设备,可移动设备210上安装有传感器,比如,摄像头camera、激光雷达Laser Radar等。场地设备220可以为灯杆上安装的传感器,比如,摄像头camera、激光雷达Laser Radar、温度计、湿度计等。图2为本发明实施例提供的一种云端场景的示意图。如图2所示,可移动设备210可以为轮式机器人、无人机、四足机器狗等,场地设备220可以为园区监控设备。
其中,终端设备300可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备。本方案中涉及的终端设备300的示例性实施例包括但不限于搭载iOS、android、Windows、鸿蒙系统(Harmony OS)或者其他操作系统的电子设备。本发明实施例对电子设备的类型不做具体限定。
其中，边侧设备200分别和云服务器集群100通过网络连接，从而使得边侧设备200上的传感器采集到的数据可以上传到云服务器集群100。其中，网络可以为有线网络或无线网络。示例地，有线网络可以为电缆网络、光纤网络、数字数据网（Digital Data Network，DDN）等，无线网络可以为电信网络、内部网络、互联网、局域网络（Local Area Network，LAN）、广域网络（Wide Area Network，WAN）、无线局域网络（Wireless Local Area Network，WLAN）、城域网（Metropolitan Area Network，MAN）、公共交换电话网络（Public Switched Telephone Network，PSTN）、蓝牙网络、紫蜂网络（ZigBee）、全球移动通信系统（Global System for Mobile Communications，GSM）网络、码分多址（Code Division Multiple Access，CDMA）网络、通用分组无线服务（General Packet Radio Service，GPRS）网络等或其任意组合。可以理解的是，网络可使用任何已知的网络通信协议来实现不同客户端层和网关之间的通信，上述网络通信协议可以是各种有线或无线通信协议，诸如以太网、通用串行总线（universal serial bus，USB）、火线（firewire）、全球移动通讯系统（global system for mobile communications，GSM）、通用分组无线服务（general packet radio service，GPRS）、码分多址接入（code division multiple access，CDMA）、宽带码分多址（wideband code division multiple access，WCDMA）、时分码分多址（time-division code division multiple access，TD-SCDMA）、长期演进（long term evolution，LTE）、新空口（new radio，NR）、蓝牙（bluetooth）、无线保真（wireless fidelity，Wi-Fi）等通信协议。
如图1所示,云服务器集群100包括数据仓库111、模拟环境库112、模拟设备库113、素材库114、模拟物体库115、行为模式库116、模拟行为库117、语义资料库118、算法库119。
其中,数据仓库111用于存储对边侧设备200接收到的多种格式的数据进行处理后的数据。这里,处理可以为降噪数据、时间空间对齐等处理。这些数据为多种格式的多源数据,比如激光点云(支持ply(Polygon File Format,多边形档案)、pcd(Point Cloud Data,是一种存储点云数据的文件格式)、e57(是3D图像数据文件格式标准,汇集了点云和图像)等主流格式)、视频(mp4、rmvb(RealMedia Variable Bit Rate,RealMedia可变比特率)、mkv(是Matroska的一种媒体文件,Matroska是一种新的多媒体封装格式,也称多媒体容器(Multimedia Container))、avi(Audio Video Interleaved,音频视频交错格式)等主流格式)、图像(jpeg(Joint Photographic Experts Group,联合图像专家组)、jpg(是JPEG联合图像专家组定义的一种用于连续色调静态图像压缩的标准)、bmp(Bitmap,位图)、png(Portable Network Graphics,便携式网络图形)等主流格式)、机器人本体型号等数据(txt,xml(Extensible Markup Language,可扩展标记语言),xlsx,json(JavaScript Object Notation,JS对象简谱)等主流格式)。
其中，模拟环境库112用于存储模拟场景对应的标识（比如，可以为模拟场景的存储地址），模拟场景包括若干个三维模型各自对应的描述信息。这里，描述信息可以包括几何信息（可以形成三维模型的信息）、位置信息、物理参数、纹理信息；其中，物理参数为影响运动的参数，比如，摩擦力系数和/或空气阻力系数，示例地，模拟场景中具有物理参数的三维模型可以指示地面。纹理信息可以理解为物体表面的纹路（通常指的是材质）和图案，可以更好地表现物体表面的信息，具体可以包括材质、反射率、颜色等。需要指出，模拟环境还可以包括环境描述信息，环境描述信息可以包括天气情况、光线情况和空气的物理参数，尤其对于室外的模拟环境，需要确定不同天气下的光线情况、空气的物理参数，从而更加接近真实的环境。进一步地，模拟环境还可以包括多种地图，比如，二维地图，三维地图，语义地图，点云地图等。需要说明的是，模拟场景中的模拟物体通常是在环境中固定不变的物体，比如，树、地面、建筑物，或者在环境中固定活动范围的运动物体，比如，动物园中的老虎、狮子等。需要说明的是，模拟环境库112虽然存储的是模拟场景对应的标识，但是模拟场景还是存储在云服务器集群100，该标识可以从云服务器集群100中存储模拟场景的地方读取出模拟场景的存储数据。
其中,模拟设备库113用于存储模拟设备对应的标识(比如,可以为模拟设备的存储地址)。示例地,模拟设备可以为机器人模型。其中,模拟设备是通过对实体的可移动设备210的几何形状、结构及外观进行1:1几何外观建模,对可移动设备210每个可活动的智能关节的仿真(包括但不限于电机、加速器、阻尼参数等)构建的虚拟的可移动设备210,可支持设计模型更新、三维重建等方法实现模型构建。此外,还需要对可移动设备210的传感器进行物理仿真。其中,物理仿真包括物理重力仿真,物理碰撞仿真,应用物理材质来表达摩擦力、光反射等自身物理属性,上述物理属性将影响可移动设备210在特定环境下的行为。这里,模拟设备包括若干个关节的动力学参数,其中,动力学参数可以包括惯性参数和摩擦参数等,具体可结合实际需求确定,本发明实施例对此不做具体限定。在实际应用中,模拟设备可以预先通过三维软件构建的模型。在一个例子中,动力学参数可以为模拟设备指示的可移动设备210出厂时厂家提供的每个关节的动力学参数。需要说明的是,模拟设备库113虽然存储的是模拟设备对应的标识,但是模拟设备还是存储在云服务器集群100,该标识可以从云服务集群100中存储模拟设备的地方读取出模拟设备的存储数据。
其中,素材库114用于存储构建模拟物体的素材、模拟场景的素材的标识(比如,可以为模拟物体的素材地址)。需要说明的是,素材库114虽然存储的是素材对应的标识,但是素材还是存储在云服务器集群100,该标识可以从云服务集群100中存储素材的地方读取出素材的存储数据。
其中,模拟物体库115用于存储模拟物体对应的标识(比如,可以为模拟物体的存储地址);在实际应用中,该模拟物体可以表示实际场景中可能存在的各种物体,比如静止物体。另外,模拟物体可以包括物体的描述信息,比如几何信息、纹理信息、类别等可以模拟出真实物体的信息。需要说明的是,模拟物体库115虽然存储的是模拟物体对应的标识,但是模拟物体还是存储在云服务器集群100,该标识可以从云服务集群100中存储模拟物体的地方读取出模拟物体的存储数据。
其中，行为模式库116用于存储行为素材的标识（比如，可以为行为素材的存储地址）。示例地，行为素材可以为车辆的运动片段；示例地，行为素材可以为天气变化的运动片段，比如，光线变化，风速变化，雨量变化等。这里，光线变化可以通过物体表面的情况表现，风速变化可以通过物体的晃动表现，雨量变化可以通过对雨天的拍摄照片体现。需要说明的是，行为模式库116虽然存储的是行为素材对应的标识，但是行为素材还是存储在云服务器集群100，该标识可以从云服务器集群100中存储行为素材的地方读取出行为素材的存储数据。
其中,模拟行为库117用于存储模拟行为对应的标识(比如,可以为模拟行为的存储地址)。在实际应用中,模拟行为可以和模拟物体关联,通常同一个模拟物体可以有不同的模拟行为,则同一模拟物体可以具有多个模拟行为,对应的,模拟行为对应的标识还需要包括模拟物体的标识(比如,可以为模拟物体的存储地址)。这里,模拟物体可以为人、车辆、动物,模拟行为可以为直线、曲线、转圈、转弯等。另外,模拟行为也可以为天气变化,比如,天气的变化可以包括风速变化(比如,可以记录风速随着时间的变化)、光线变化(比如,光线强度随着时间的变化)、雨量变化(比如,可以记录雨量随着时间的变化)等,这里,模拟行为是单独存在的,不需要依附模拟物体。需要说明的是,模拟行为库117虽然存储的是模拟行为对应的标识,但是模拟行为还是存储在云服务器集群100,该标识可以从云服务集群100中存储模拟行为的地方读取出模拟行为的存储数据。
其中,语义资料库118用于存储模拟物体的语义信息。其中,语义信息可以为物体的轮廓、颜色、材质、位置信息、是否运动、类别、周边物体等。在实际应用中,一个语义资料库118可以存储若干个模拟环境的语义信息。
其中，算法库119中存储有各种算法，比如，机器人的动力学参数辨识算法、空气阻力的计算算法、摩擦力的计算算法、交互力的计算算法、人工智能算法。这里，人工智能算法可以为各种深度学习算法、机器学习算法、深度强化学习算法、运动学规划算法等。其中深度学习算法可包括卷积神经网络（Convolutional Neural Networks，CNN）、循环神经网络（Recurrent Neural Network，RNN）、深度神经网络（Deep Neural Networks，DNN）、快速区域卷积网络（Fast R-CNN）、YOLO（You Only Look Once）、单级多框预测（Single Shot MultiBox Detector，SSD）、长短期记忆网络（Long Short-Term Memory，LSTM）、深层双向语言模型（Embeddings from Language Models，ELMo）、基于Transformers的双向编码器表示（Bidirectional Encoder Representation from Transformers，BERT）、生成式预训练（Generative Pre-Training，GPT）等。
如图2所示,云服务器集群100通过语义抽取层、物体/场景重建层、行为重建层和物理参数层实现模拟环境的构建和更新。
对于语义抽取层，云服务器集群100可以对数据仓库111中的数据进行语义提取，得到多个模拟物体的语义信息，并存储到语义资料库118中。
对于物体/场景重建层，云服务器集群100可以对数据仓库111中的数据进行时间/空间对齐，得到可以建模的素材，将素材存储在素材库114中，一方面，基于素材库114中的素材进行物体建模，将构建的若干个模拟物体的标识存储在模拟物体库115中，另一方面，基于素材库中的素材进行场景建模，将构建的模拟场景的标识存储在模拟环境库112中。可选地，云服务器集群100可以基于数据仓库111中的激光点云数据，进行SLAM（Simultaneous Localization and Mapping，即时定位与地图构建）建图，生成点云模型，结合激光点云数据在时间和空间上对齐的图片数据，进行网格纹理重建，得到模拟场景。另外，可以结合语义资料库118，得到模拟场景的语义地图。
对于行为重建层，云服务器集群100可以对数据仓库111中的数据（比如视频数据）进行行为统计/抽取，得到行为模式，将行为模式存储在行为模式库116中；然后，对行为模式库116中的行为模式进行建模，得到模拟行为，将模拟行为的标识存储在模拟行为库117中。示例地，可以统计车辆的转弯的各种相似片段，然后将这些相似片段作为行为模式，之后，可以对这些行为模式进行建模，确定转弯的速度，行驶的轨迹，得到模拟行为。示例地，可以统计砖头掉落的各种相似片段，然后将这些相似片段作为行为模式，之后，可以对这些行为模式进行建模，确定砖头掉落的加速度、运动轨迹等，得到模拟行为。
对于物理参数层,云服务器集群100对于模拟场景,基于算法库119中的空气阻力、交互力等计算算法,结合构建模拟场景的数据中可移动设备210的运动数据和所在环境的环境数据,可以得到该模拟场景中若干个三维模型的物理参数。另外,云服务器集群100还可以对于模拟设备库113中的模拟设备,基于模拟设备指示的真实设备的运动数据和算法库119中的动力学辨识的计算算法,确定模拟设备中的每个关节的动力学参数。
对于更新层，云服务器集群100可以实现语义更新、几何更新、物理参数更新、动力学参数更新。值得注意的是，几何更新可以理解为外形的更新，比如在模拟环境中新增模拟物体，或者，删除模拟环境中的模拟物体，在几何更新后，需要保留原有的模拟场景，并将更新后的模拟场景的标识存储在模拟环境库112。语义更新可以理解为删除模拟物体的描述信息、新增模拟物体的描述信息和/或更新模拟物体的描述信息，比如，可以将树边有车，更新为树边无车。物理参数更新可以理解为模拟场景中特定的三维模型的物理参数的更新。假设园区中的地面的摩擦力系数为A，现在园区中下雨了，地面非常湿滑，此时就会重新计算地面的摩擦力系数并更新该系数。值得注意的是，物理参数更新后，需要存储到对应的模拟场景中。另外，物理参数更新可包括模拟环境的环境描述信息的更新，比如，之前构建晴天下的模拟环境，现在园区中下雨了，就可以更新环境描述信息为下雨天的相关情况，值得注意的是，环境描述信息需要保留之前的信息，并将更新后的环境描述信息存储到对应的模拟场景中。
本发明实施例中的模拟环境基于边侧设备200上的传感器(为了便于描述和区别,称为目标传感器)采集的数据构建。在实际应用中,云服务器集群100可以获取到指示模拟环境中发生变化情况的数据(为了便于描述和区别,称为目标数据),通过语义抽取层实现语义更新,通过物体/场景重建层的物体建模或场景建模实现几何更新,通过物理参数层实现物理参数的更新。
在一个例子中,云服务器集群100可以基于数据仓库111中的目标传感器采集的数据,判断模拟环境是否发生变化,在环境发生变化时,确定指示模拟环境中发生变化情况的数据(为了便于描述和区别,称为目标数据)。
在一个例子中,可移动设备210可以基于数据仓库111中的目标传感器采集的数据,判断模拟环境是否发生变化,在环境发生变化时,确定指示模拟环境中发生变化情况的目标数据上传到云服务器集群100,云服务器集群100获取到目标数据。
需要说明的是,在实际应用中,云管理平台分为客户端和服务端,云服务器集群100安装有云管理平台的服务端。用户可以在终端设备300上安装云管理平台的客户端,或者,安装浏览器,在浏览器输入网址从而访问云管理平台的客户端。
举例来说,用户在浏览器输入网址,进入客户端的登录页面,操作登录页面注册一个账号,手动设置或者由云管理平台的服务端分配一个账号密码,得到可以访问云管理平台的客户端的账号(为了便于描述和区别,称为目标账号)和账号密码(为了便于描述和区别,称为目标账号密码);然后,用户在登录页面输入目标账号和目标账号密码,进入云管理平台的客户端;之后,用户可以通过云管理平台的客户端使用云管理平台的服务端可以提供的各种服务。
在一个例子中,用户通过终端设备300建立模型生成任务。
示例地,用户可以从素材库114中选择某时间段可移动设备210上传的数据用于模型重建,或者自己上传素材到素材库114,这个素材指可移动设备210或用户通过别的方式采集的点云图片等数据,之后,再从素材库114中选择用于重建的数据。然后,云服务器集群100可以基于用户选择的数据创建模型。
可选地，若用于重建模拟物体，则可以将创建的模拟物体存储在目标账号下的模拟物体库115。
可选地，若用于重建模拟场景，则可以将创建的模拟场景存储在目标账号下的模拟环境库112。进一步地，还可以选择是否需要确定物理参数、语义信息和/或纹理信息。
示例地,用户可以从行为模式库116中选择某时间段的行为模式用于行为重建,则可以将创建的模拟行为存储在目标账号下的模拟行为库117。
示例地,用户可以从模拟环境库112中选择模拟环境,并为模拟环境配置是否进行几何更新、语义更新、物理参数更新。
值得注意的是,模拟环境库112、模拟设备库113、模拟物体库115、模拟行为库117可以预先存储所有用户共享的模型;素材库114可以预先存储所有用户共享的素材,行为模式库116可以预先存储所有用户共享的行为片段。
另外,在实际应用中,数据仓库111可以存储边侧设备200的监控数据。
本发明实施例提供了一种模拟训练方法。可以理解,该方法可以通过任何具有计算、处理能力的装置、设备、平台、设备集群来执行。比如,图1示出的云端系统。下面结合图1示出的云端系统对本发明实施例提供的模拟训练方法进行介绍。
图3为本发明实施例提供的一种模拟训练方法的流程示意图，如图3所示，该模拟训练方法包括：
步骤301、云服务器集群100提供第一配置接口。
步骤302、终端设备300显示第一配置接口,获取用户对第一配置接口的操作,以获取目标模拟环境的标识和目标模拟设备的标识。
根据一种可行的实现方式，第一配置接口可以为模型配置界面。具体地，云服务器集群100可以发布第一配置接口，对应的，终端设备300会向用户显示模型配置界面，以使用户通过终端设备300操作模型配置界面，确定目标模拟环境的标识和目标模拟设备的标识，并上传至云服务器集群100。
图5a是本申请实施例提供的一种模型配置页面的示意图，如图5a所示，模型配置界面包括模拟环境配置控件和模拟设备配置控件。在实际应用中，用户通过在终端设备300输入目标账号和密码登录云管理平台的客户端，之后，用户操作终端设备300，终端设备300显示模型配置界面，用户点击模型配置界面中的模拟环境配置控件，显示目标账号下的模拟环境库112中的模拟环境的缩略图的列表，然后用户选择列表中的缩略图，第一配置接口就可以从模拟环境库112中的多个备选模拟环境中确定目标模拟环境的标识；相似的，用户点击模型配置界面中的模拟设备配置控件，显示目标账号下的模拟设备库113中的模拟设备的缩略图的列表，然后用户选择列表中的缩略图，第一配置接口就可以从模拟设备库113中的多个备选模拟设备中确定目标模拟设备的标识。
根据一种可行的实现方式，在获取目标模拟环境的标识前，步骤302还包括如下内容：
获取目标模拟环境对应的采集数据；提供第三配置接口，第三配置接口用于获取目标模拟环境的类型参数；根据目标模拟环境对应的采集数据和目标模拟环境的类型参数，生成目标模拟环境。
可选地,第三配置接口为模型生成界面,用户在终端设备300通过模型生成界面建立模型生成任务时,模型生成界面包括数据配置控件、类型参数配置控件,数据配置控件用于选择目标模拟环境对应的采集数据,比如,可移动设备210和场地设备220在上午10点到10点半采集的数据,类型参数配置控件用于配置目标模拟环境的类型参数,之后,云服务器集群100可以根据用户配置的数据、目标模拟环境的类型参数,生成目标模拟环境。
其中,类型参数可以为包括若干个参数,比如室内场景、室外场景、天气类型。示例地,天气类型可以为晴天、阴天、雨天、多云、雾天、扬尘、大风等。需要说明的是,不同的类型参数对建模的要求不同,比如,在室内场景,一般不需要关注光线变化,但是在室外场景,光线变化和天气类型很重要,需要耗费更多的资源模拟出室外光线。值得注意的是,在实际应用中,通常室外场景和天气类型是关联的,因此在类型参数为室外场景时,一方面可以直接配置天气类型,另一方面,无需配置天气类型,可以结合天气预报信息等确定天气类型,实现智能化建模。
上述类型参数仅仅作为示例,并不构成对类型参数的具体限定,在一些可能的实施例中,类型参数可以包括比上述更多或更少的参数;比如,类型参数还可以包括特征复杂等级、具有透明特征的物体、动态物体复杂等级。
其中,特征复杂等级用于说明需要构建的模拟环境的特征的数目的大小,等级越高说明特征的数目越多。这里,特征复杂等级用于选择不同的特征选取方法,从而减小构建的模拟场景和真实场景之间的差异。比如,对于走廊这种特征较为简单的场景,可以采用光流法实现特征提取。比如,对于园区这种特征较为复杂的场景,可以采用人工智能算法实现特征提取,比如,从算法库119中选择合适的人工智能算法实现特征提取。示例地,特征复杂等级可以分为3个等级,比如,简单、中等、复杂;示例地,特征复杂等级可以分为5个等级,比如,非常简单、简单、中等、复杂、非常复杂。值得注意的是,基于特征复杂等级可以为不同的传感器采集的数据选择不同的特征提取方法,或者,相同的特征提取方法,比如,图像和采用激光的方式感知环境的传感器比如激光雷达采集的激光点云数据均可以采用人工智能算法的方式提取特征。
其中,具有透明特征的物体可以为玻璃、透明水桶等具有较高折射率和反射率的物体,这类物体对采用激光的方式感知环境的传感器比如激光雷达采集的数据具有较大的影响,可能会影响模拟环境的构建,增大模拟环境和真实环境的差异。则可以删除对采用激光的方式感知环境的传感器比如激光雷达采集的激光点云数据中具有透明特征的物体所在区域的数据,降低模拟环境和真实环境的差异。这里,可以通过图像感知具有透明特征的物体所在的区域。在一个例子中,第三配置接口可以配置有具有透明特征的物体的列表,用户可以从列表中选择具有透明特征的物体,比如,玻璃,透明水桶等。在实际应用中,可选地,云服务集群100中的模拟物体库115存储的模拟物体的标识中包括具有透明特征的物体的标识。此时,第三配置接口关联模拟物体库115中具有透明特征的物体的标识。值得注意的是,模拟物体具有物体的描述信息,描述信息可以包括是否透明,则可以基于物体的描述信息判断物体是否具有透明特征。
其中，动态物体复杂等级用于说明需要构建的模拟环境对应的实际环境中的动态物体的数目的大小，等级越高说明活动的动态物体的数目越多，则遮挡的地方越多，模拟环境重建越复杂，因此，可以基于动态物体复杂等级选择不同的模拟环境重建方法，从而减小构建的模拟场景和真实场景之间的差异。比如，对于十字路口这种动态物体较多的场景，可以以摄像头感知的图像为主，采用激光的方式感知环境的传感器比如激光雷达采集的激光点云数据为辅，实现模拟环境重建。示例地，动态物体复杂等级可以分为3个等级，比如，简单、中等、复杂；示例地，动态物体复杂等级可以分为5个等级，比如，非常简单、简单、中等、复杂、非常复杂。
另外,在实际应用中,可以基于具有透明特征的物体对数据仓库111中采用激光的方式感知环境的传感器比如激光雷达采集的激光点云数据进行处理,基于动态物体复杂等级选择合适的模拟场景重建的方式,为不同传感器赋予不同的权重,基于不同传感器的权重和基于特征复杂等级选择合适的特征提取方式,对不同传感器采集的数据进行特征提取和融合,分析真实场景中物体表面的位置、颜色、纹理等,实现真实环境的模拟,减小模拟环境和真实环境之间的差异。这里,基于不同传感器的权重和基于特征复杂等级选择合适的特征提取方式,对不同传感器采集的数据进行特征提取和融合,可以理解为基于特征复杂等级选择合适的特征提取方式,对不同传感器采集的数据进行特征提取,分析真实场景中物体表面的位置、颜色、纹理等;之后,基于不同传感器的权重对不同传感器采集的数据进行特征提取的特征进行融合,得到可以较为准确反映真实场景中物体表面的位置、颜色、纹理等,从而实现真实环境的模拟。
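上述“按特征复杂等级选择特征提取方法、按传感器权重融合特征”的思路，可以用如下Python草图示意（其中等级划分、权重数值与函数名均为说明用的假设，并非本申请限定的实现）：

```python
# 按特征复杂等级选择特征提取方法，并按传感器权重融合特征的示意草图。
# 等级名称、权重数值均为示例性假设。

def select_extractor(complexity_level):
    """特征复杂等级 -> 特征提取方法（示意）。"""
    mapping = {
        "简单": "光流法",        # 如走廊等特征较为简单的场景
        "中等": "光流法",
        "复杂": "人工智能算法",  # 如园区等特征较为复杂的场景
    }
    return mapping.get(complexity_level, "人工智能算法")

def fuse_features(sensor_features, weights):
    """按传感器权重对各传感器提取的特征置信度加权融合（示意）。"""
    total = sum(weights.get(name, 0.0) for name in sensor_features)
    if total == 0:
        return 0.0
    return sum(conf * weights.get(name, 0.0)
               for name, conf in sensor_features.items()) / total

# 动态物体较多的十字路口：以摄像头为主、激光雷达为辅（权重为假设值）
weights = {"camera": 0.7, "lidar": 0.3}
fused = fuse_features({"camera": 0.9, "lidar": 0.6}, weights)
```

例如在动态物体较多的十字路口场景，可以为摄像头赋予较高权重、激光雷达赋予较低权重后再融合。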
步骤303、终端设备300向云服务器集群100发送目标模拟环境的标识和目标模拟设备的标识。
步骤304、云服务器集群100提供第二配置接口。
步骤305、终端设备300显示第二配置接口,获取用户对第二配置接口的操作,以获取第一任务指令。
根据一种可行的实现方式,第二配置接口可以为任务编排界面。具体地,云服务器集群100可以发布任务编排界面,对应的,终端设备300会显示任务编排界面,以使用户通过任务编排界面,确定第一任务指令,并上传至云服务器集群100。需要说明的是,第一任务指令指示了需要让模拟设备完成的任务。
在一个例子中,用户通过在终端设备300输入目标账号和密码登录云管理平台的客户端,之后,用户操作终端设备300,终端设备300显示第二配置接口指示的任务编排界面,用户可以在任务编排界面中编排任务,得到第一任务指令。
图5b是本申请实施例提供的一种任务编排页面的示意图。如图5b所示，任务编排页面包括任务描述控件，比如输入框，示例地，可以在任务编排页面中输入任务描述，比如，搬运物资通过消毒点送到目标点。任务编排页面还包括任务流创建控件，在实际应用中，用户可以操作任务流创建控件，在任务流创建的区域内创建任务流。具体地，可以先创建任务开始节点，然后创建子任务的节点，并添加子任务的描述，不断添加子任务，最后得到任务流。比如，对于搬运物资通过消毒点送到目标点，可以创建5个子任务，依次为抓取物资，到达消毒点，物资消毒，到达目标点，卸货。值得注意的是，任务编排页面还可以显示目标模拟环境的缩略图，通常为二维地图，用户可以在目标模拟环境的缩略图中标明导航点。比如，对于搬运物资通过消毒点送到目标点，则需要标注物资点、消毒点、目标点（物资存放的地方）。
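以“搬运物资通过消毒点送到目标点”为例，任务编排得到的任务流可以用如下Python草图表示（字段名与导航点坐标均为说明用的假设）：

```python
# 任务编排的示意草图：任务流由任务描述、导航点和顺序排列的子任务组成。
# 字段名与坐标均为示例性假设。

task_flow = {
    "description": "搬运物资通过消毒点送到目标点",
    "waypoints": {  # 在目标模拟环境缩略图上标注的导航点（假设坐标）
        "物资点": (2.0, 3.0),
        "消毒点": (10.0, 3.5),
        "目标点": (18.0, 7.0),
    },
    "subtasks": ["抓取物资", "到达消毒点", "物资消毒", "到达目标点", "卸货"],
}

def next_subtask(flow, done):
    """返回已完成 done 个子任务后应执行的下一个子任务（示意）。"""
    subtasks = flow["subtasks"]
    return subtasks[done] if done < len(subtasks) else None
```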
步骤306、终端设备300向云服务器集群100发送第一任务指令。
步骤307、云服务器集群100根据第一任务指令、目标模拟环境的标识和目标模拟设备的标识,利用目标模拟设备在目标模拟场景中执行任务,获得第一执行结果。
在实际应用中，可以基于语义识别，了解第一任务指令需要完成的任务是什么，从而将第一任务指令转化成仿真指令，仿真指令为计算机可读的格式，之后，基于仿真指令、目标模拟环境的标识和目标模拟设备的标识，加载目标模拟设备和目标模拟环境，并利用目标模拟设备在目标模拟场景中执行任务。
在实际应用中,第一配置接口还可以确定目标仿真器的标识;在目标仿真器中加载目标模拟环境和目标模拟设备。如图5a所示,模型配置界面包括仿真器配置控件。在一个例子中,用户点击模型配置界面中的仿真器配置控件,显示仿真器的列表,然后用户选择列表中的仿真器,从而使得第一配置接口可以获取用户配置的目标仿真器的标识。值得注意的是,仿真器的种类多种多样,不同的仿真器的偏重有所不同,比如,有的仿真器的物体表面的细节更为真实,视觉感受好,有的仿真器的三维模型的细节表现更为真实,用户可以选择不同的仿真器实现模拟。
根据一种可行的实现方式,第二配置接口还用于获取进程数。对应的,云服务器集群100创建适配进程数的进程,从而并行利用目标模拟设备在目标模拟场景中执行任务,确保任务执行的效率。举例来说,可以将任务分解成多个子任务,不同的子任务由不同的进程完成,从而实现并行处理。
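上述“按配置的进程数并行执行子任务”的做法，可以用如下Python草图示意（此处以线程池代替真实进程池便于演示，子任务内容与函数名均为说明用的假设）：

```python
# 将任务拆分为多个子任务并按配置的并行度执行的示意草图。
# run_subtask 为假设的子任务执行函数。
from concurrent.futures import ThreadPoolExecutor

def split_task(subtasks, workers):
    """把子任务按轮询方式分配给 workers 个执行单元（示意分解）。"""
    buckets = [[] for _ in range(workers)]
    for i, sub in enumerate(subtasks):
        buckets[i % workers].append(sub)
    return buckets

def run_subtask(name):
    """执行单个子任务（示意）。"""
    return f"{name}:完成"

def run_parallel(subtasks, workers):
    """按配置的并行度并行执行所有子任务，保证结果顺序与输入一致。"""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_subtask, subtasks))

results = run_parallel(["巡检A区", "巡检B区", "巡检C区"], workers=2)
```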
本方案中,通过第一配置接口和第二配置接口,实现模拟设备、模拟环境、任务的配置,之后,可实现可移动设备和其所在环境的仿真,并在仿真出的环境中执行任务,降低了对实体设备完成任务的难度和成本。
可移动设备210活动的室外开放环境一般面积比较大，生成的模拟环境也很大，一次性加载完整的模拟环境既需要很长时间也需要消耗很多资源，而且操作时也不流畅，渲染整个模拟环境也要很长时间，对于目前的主流仿真器是个很大的挑战。
为了解决上述问题,本发明实施例通过云上丰富的服务器资源,将大模型分割成若干部分,分别由不同的服务器分布式加载,极大地降低了对单台机器的资源需求。
对应的,在上述图3所示实施例的基础上,本发明实施例中,第一配置接口可以配置实例数目。这里,实例可以包括物理主机(计算设备)、虚拟机、容器中的至少一种。对应的,步骤307可以包括如下内容:
云服务器集群100在实例数目匹配的实例上加载目标模拟环境和目标模拟设备,从而可以由不同的服务器分布式加载目标模拟设备和目标模拟环境,极大地降低了对单台机器的资源需求。
进一步地,还可以基于模拟设备上的模拟传感器的感知范围,自适应的确定局部实时模型加载范围,可以在一定程度上避免对模拟环境的无关区域的加载,减少了模型的整体加载时间,并提高了操作的流畅度,实现了超大规模模型的高效加载和操作。
对应的,步骤307可以包括如下内容:
云服务器集群100在加载目标模拟设备后,在基于第一任务指令执行任务的过程中,基于模拟设备上的模拟传感器的感知范围,加载目标模拟环境在感知范围内的环境。
根据一种可行的实现方式,在加载目标模拟设备后,可以初始化目标模拟设备在目标模拟环境的位置,在基于第一任务指令执行任务的过程中,基于模拟设备上的模拟传感器的感知范围,加载目标模拟环境在感知范围内的环境。
根据另一种可行的实现方式,可以先加载目标模拟环境的部分,然后在加载的目标模拟环境中随机加载目标模拟设备,之后,基于模拟设备上的模拟传感器的感知范围,加载感知范围内的目标模拟环境和目标模拟设备。
综上,本方案通过分布式加载和自适应的确定局部实时模型加载范围这两种方式,减少了模型的整体加载时间,并提高了操作的流畅度,实现了超大规模模型的高效加载和操作。
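上述分布式加载与按感知范围自适应加载，可以用如下Python草图示意（将环境切分为网格分块仅为演示，分块大小、感知半径与函数名均为说明用的假设）：

```python
# 分布式加载与自适应加载的示意草图：
# 将大模拟环境切分为网格分块，按实例数分配给不同实例加载，
# 并仅加载模拟传感器感知范围覆盖到的分块。
import math

def assign_tiles(num_tiles, num_instances):
    """把环境分块轮询分配给各实例（物理主机/虚拟机/容器），示意分布式加载。"""
    return {i: [t for t in range(num_tiles) if t % num_instances == i]
            for i in range(num_instances)}

def tiles_in_range(robot_xy, tile_size, sense_radius, grid_w, grid_h):
    """返回模拟传感器感知半径覆盖到的分块索引（示意自适应加载）。"""
    x, y = robot_xy
    hits = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            # 分块中心到模拟设备的距离，留出一个分块的余量
            cx, cy = (gx + 0.5) * tile_size, (gy + 0.5) * tile_size
            if math.hypot(cx - x, cy - y) <= sense_radius + tile_size:
                hits.append(gy * grid_w + gx)
    return hits
```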
在上述图3所示实施例的基础上，本发明实施例中，目标模拟环境包括至少一个三维模型和其各自携带的物理参数。可选地，第一配置接口还可以确定目标模拟环境中的至少一个三维模型对应的物理参数。
在实际应用中,如图5a所示,第一配置接口指示的模型配置页面还包括目标模拟环境中的三维模型的物理参数配置控件。在一个例子中,物理参数配置控件可以显示三维模型对应的物理参数,从而使得第一配置接口可以从三维模型的备选物理参数中,选择目标物理参数。这里,三维模型通常为路面。
对应的,在步骤307中,云服务器集群100利用目标模拟设备和至少一个三维模型的物理参数,在目标模拟场景中执行任务。
在实际应用中,对于室内场景,在涉及到目标模拟设备和三维模型交互的时候,需要物理参数。
示例地,三维模型为物资,此时物资的物理参数可以包括摩擦力系数和物资的重量,当目标模拟设备需要抓取物资时,需要基于物资的摩擦力系数和重量,计算摩擦力,确定目标模拟设备需要用多大的力才能抓取物资。
示例地,三维模型为路面,路面的物理参数可以包括摩擦力系数,假设目标模拟设备需要巡检时,此时目标模拟设备需要在路面上运动时,此时需要基于路面的摩擦力系数,目标模拟设备的重量,计算目标模拟设备和路面之间的摩擦力,然后结合目标模拟设备的空气阻力系数,以及,室内环境下的预设的空气信息,可以得到目标模拟设备在路面上运动需要的最小的速度。进一步地,若目标模拟设备搬运有物资,还需要考虑保证物资不会掉落的最大的速度。
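以上述抓取物资与路面运动为例，利用物理参数计算交互力的过程可以用如下Python草图示意（采用基本的摩擦力模型，数值均为说明用的假设）：

```python
# 利用物理参数计算交互力的示意草图：
# 抓取物资时，夹持力 N 需满足 μ·N ≥ m·g，才能靠摩擦力托住物资；
# 在路面运动时，摩擦力 f = μ·m·g。数值均为示例性假设。
G = 9.8  # 重力加速度，m/s^2

def min_grip_force(mass_kg, friction_coeff):
    """抓取质量为 mass_kg、摩擦系数为 friction_coeff 的物资所需的最小夹持力（N）。"""
    return mass_kg * G / friction_coeff

def ground_friction_force(mass_kg, friction_coeff):
    """质量为 mass_kg 的模拟设备在摩擦系数为 friction_coeff 的路面上受到的摩擦力（N）。"""
    return friction_coeff * mass_kg * G
```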
本方案中,使用物理参数辨识技术赋予模拟环境真实的物理参数,持续降低真实-数字之间的差异,提供了一个可靠的仿真环境。
本发明实施例中通过收集环境中真实物体的数据，构建接近真实物体的模拟物体，建立模拟物体库115，可以在模拟环境中增加或减少模拟物体，生成复杂的模拟环境供机器人训练，解决现实环境下意外情形发生概率低，数据量少的问题。
对应的,在上述图3所示实施例的基础上,本发明实施例中,步骤302还可以包括如下内容:
终端设备300显示目标模拟场景的缩略图,获取用户对第一配置接口的操作,以获取目标模拟物体的标识、目标模拟物体在目标模拟场景中的第一位置。
在实际应用中，如图5a所示，第一配置接口指示的模型配置页面还可以在环境缩略图的显示区域显示目标模拟场景的缩略图，以及模拟物体配置控件。模拟物体配置控件用于显示模拟物体库115中的模拟物体的缩略图的列表，用户可以拖拽缩略图到目标模拟场景的缩略图中，从而使得第一配置接口可以获取目标模拟物体的标识、目标模拟物体在目标模拟场景中的第一位置。
对应的,步骤307具体包括如下内容:
云服务器集群100加载目标模拟环境,并在目标模拟环境中的第一位置加载目标模拟物体。
值得注意的是，在分布式加载和自适应加载的过程中，在第一位置位于目标模拟设备的模拟传感器的感知范围内时，可以在第一位置加载目标模拟物体。
值得注意的是，在实际应用中，用户可以将目标模拟设备放置到目标模拟环境中的某一位置，该位置作为目标模拟设备的初始位置。具体地，如图5a所示，模型配置页面在设备缩略图的显示区域显示目标模拟设备的缩略图，用户可以拖拽目标模拟设备的缩略图到目标模拟场景的缩略图中，确定目标模拟设备在目标模拟环境中的初始位置。
进一步地，本发明实施例中通过收集环境中物体的运动行为，建立模拟行为库117，可以在模拟环境中模拟出物体的真实的运动行为，生成各种案例供机器人训练，解决现实环境下案例发生概率低，数据量少的问题。
对应的，在上述图3所示实施例的基础上，本发明实施例中，步骤302还可以包括如下内容：
终端设备300显示目标模拟场景的缩略图,获取用户对第一配置接口的操作,以获取目标模拟物体的目标行为的标识。
在实际应用中，如图5a所示，第一配置接口指示的模型配置页面可以显示目标模拟场景的缩略图（显示在环境缩略图的显示区域）、模拟物体配置控件、模拟行为配置控件。在一个例子中，模拟行为配置控件用于显示模拟行为库117中的目标模拟物体的模拟行为的缩略图的列表，用户可以点击模拟行为的缩略图，从而使得第一配置接口可以获取目标模拟物体的目标行为的标识。
对应的,步骤307具体包括如下内容:
云服务器集群100加载目标模拟环境,并在目标模拟环境中的第一位置加载目标模拟物体,控制目标模拟物体按照目标行为运动,得到最终的目标模拟环境。
值得注意的是，在分布式加载和自适应加载的过程中，在第一位置位于目标模拟设备的模拟传感器的感知范围内时，可以在第一位置加载目标模拟物体，并控制该目标模拟物体按照目标行为运动。
本发明实施例中通过收集环境中天气的运动行为,可以在模拟环境中模拟出真实的天气行为,生成各种模拟环境供机器人训练,解决现实环境下异常天气发生概率低,数据量少的问题。
对应的,在上述图3所示实施例的基础上,本发明实施例中,在步骤303之后,还可以包括如下内容:
终端设备300获取用户对第一配置接口的操作,以获取目标天气行为的标识。对应的,步骤307具体包括如下内容:
云服务器集群100加载目标模拟环境,并在目标模拟环境中加载目标天气行为。
值得注意的是,天气行为可以包括风速变化、雨量变化等,从而可以得到目标模拟设备在不同天气下的执行结果,适配真实场景的复杂性。
对于维护和保养机器人,一般都是机器人在运行过程中出现问题,比如设备损坏,然后进行维修作业,这种做法存在以下问题:
1.机器人的维修作业用户一般并不能在短时间内完成,需要联系厂商或开发者定位问题并提供解决方案,会耽误工作进度。
2.机器人的设备维修会带来一定的经济成本,尤其是激光雷达,高精度摄像头等价格动辄几千上万的传感器。
3.机器人出现问题时,往往会造成一些意外,会给周边环境和人员带来一定的风险,比如机器人直接撞到行人或车辆。
基于此,本发明实施例可以通过模拟设备和模拟环境对可移动设备210进行预测性维护,尽可能的延长可移动设备210的设备使用寿命,降低出现意外的风险。
在上述图3所示实施例的基础上,本发明实施例中,第一任务指令指示的任务可以为预测性维护任务;对应的,任务包括至少一个维护指标。执行结果包括至少一个维护指标的指标值。
示例地，至少一个维护指标包括可移动设备中部件的温度阈值、可移动设备的电量阈值和/或可移动设备的运行时长阈值。这里的部件可以为中央处理器（central processing unit，简称CPU）。
则步骤307具体可以包括如下内容:
云服务器集群100在目标模拟环境中对目标模拟设备进行模拟测试,确定目标模拟设备的至少一个维护指标的指标值。
进一步地，云服务器集群100可以接收目标模拟设备对应的目标设备发送的运行信息；基于运行信息和第一执行结果，确定目标设备的损耗情况，在损耗较高时进行告警。
值得注意的是,在预测性维护任务这一场景下,用户可以配置多个目标模拟环境。后续,在多个目标模拟环境中对目标模拟设备进行模拟测试,确定至少一个维护指标的指标值,从而能够更为精准对可移动设备210进行损耗分析。
这里,目标设备的运行信息可以包括电量、CPU温度和/或运行总时长等,则可以通过一个维护指标确定损耗程度,也可以通过多个维护指标确定损耗程度。
需要说明的是,在通过CPU温度或电量确定损耗程度时,可以将运行信息中的CPU温度和CPU的温度阈值进行比较,比如,可以将运行信息中的CPU温度和CPU温度阈值的比值作为损耗程度,一旦运行信息中的CPU温度接近CPU的温度阈值,就告警。电量和运行时长类同,不再赘述。
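上述“以运行信息与维护指标阈值的比值作为损耗程度、接近阈值即告警”的逻辑，可以用如下Python草图示意（阈值数值与告警比例均为说明用的假设）：

```python
# 预测性维护的示意草图：损耗程度 = 运行信息 / 维护指标阈值，
# 任一指标接近阈值即告警。阈值与告警比例均为示例性假设。

def wear_ratio(value, threshold):
    """单个维护指标的损耗程度（示意）。"""
    return value / threshold

def should_alarm(running_info, thresholds, alarm_at=0.9):
    """任一维护指标的损耗程度达到 alarm_at 即告警（示意）。"""
    return any(wear_ratio(running_info[k], thresholds[k]) >= alarm_at
               for k in thresholds if k in running_info)

# CPU温度阈值与运行时长阈值（假设值）
thresholds = {"cpu_temp": 85.0, "runtime_h": 5000.0}
info = {"cpu_temp": 80.0, "runtime_h": 1200.0}
alarm = should_alarm(info, thresholds)
```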
本发明实施例可以实现模拟设备的技能训练测试,而模拟设备的运动需要模拟设备中的关节的动力学参数,从而确保可以实现模拟设备的技能训练测试。在实际应用中,在涉及到目标模拟设备的关节运动的任务时,需要动力学参数。比如,目标模拟设备需要抓取物体、跳舞、打招呼、拖地、撒消毒水、卸货、搬运石头等。
在上述图3所示实施例的基础上,本发明实施例中,目标模拟设备包括至少一个关节,第一配置接口还可以确定目标模拟设备中的至少一个关节对应的动力学参数。
在实际应用中,如图5a所示,第一配置接口指示的模型配置页面还可以包括目标模拟设备的动力学参数配置控件。在一个例子中,动力学参数配置控件可以显示目标模拟设备的多套动力学参数,从而使得第一配置接口可以从目标模拟设备的备选的多套动力学参数中,选择动力学参数。这里,每套动力学参数包括目标模拟设备中所有关节的动力学参数。
可选地,用户无需配置目标模拟设备中至少一个关节的动力学参数,直接采用默认的动力学参数即可。
对应的,在步骤307中,云服务器集群100利用目标模拟设备中的至少一个关节的动力学参数,控制目标模拟设备在目标模拟环境中执行任务。
值得注意的是,可以利用目标模拟设备中的关节的动力学参数,可以控制关节运动需要的力,从而实现目标模拟设备的关节活动,使得目标模拟设备可以完成各种各样的动作,比如,跳舞,打招呼,走路,搬运等。
举例来说,假设目标模拟设备为机器人,任务为跨越土堆,在执行跨越土堆的任务时,云服务器集群100可以设置抬脚的多个高度以及每个高度各自对应的抬脚速度,然后,可以基于机器人抬脚时需要的关节的动力学参数,确定机器人抬脚时如何控制关节的活动;之后,可以对每个高度和其对应的抬脚速度进行模拟测试,看机器人是否跨越土堆。
根据一种可行的实现方式,第一任务指令指示的任务包括若干个技能。技能指示了可移动设备210可以实现的活动。对应的,第一执行结果包括若干个技能各自的技能实现策略。在实际应用中,目标模拟设备对应的目标设备执行技能实现策略实现技能。在一个例子中,若干个技能包括导航、避障、拍摄、跳舞、打招呼、拖地、撒消毒水、卸货、抓取货物、搬运石头、跨越土堆等。
在一个例子中,若干个技能包括导航,则技能实现策略包括运动轨迹,换言之,执行结果包括运动轨迹。可选地,云服务器集群100还可以显示运动轨迹。值得注意的是,在技能为导航时,此时需要标注好多个任务点,这些任务点包括起始点和终点,在一些可能的场景,起始点和终点之间还存在若干个任务点。值得注意的是,任务点可以关联一个或多个技能,即需要在任务点完成这些技能,任务点的技能可以为拍摄、跳舞、打招呼、卸货、抓取货物、搬运石头等。示例地,第一任务指令指示的任务包括N个技能和M个任务点,假设技能的数目有N个,N个技能包括导航和N-1个其他的技能,N-1个其他的技能和M个任务点关联。举例来说,如图5b所示,任务编排页面还可以显示目标模拟环境的缩略图,通常为二维地图,用户可以在目标模拟环境的缩略图中表明若干个任务点,比如,物资点、消毒点、目标点(物资存放的地方),这里,物资点可以为起始点、目标点可以为终点,物资点关联的技能为抓取物资,消毒点关联的技能为物资消毒,目的点关联的技能为卸载物资。
需要说明的是,在实际应用中,第一任务指令指示的任务需要结合目标模拟设备在目标模拟环境执行任务实际遇到的情况灵活增加,可以不断的增加新的技能。比如,第一任务指令为搬运物资通过消毒点送到目标点,假设目标模拟环境中存在正在运动的物体,目标模拟设备在执行任务的过程中遇到该物体,此时,可以新增一个技能为避障;再比如,目标模拟环境中存在土堆,目标模拟设备在执行任务的过程中遇到该土堆,此时,可以新增一个技能为跨越土堆。
在一种可能的情况,第一执行结果中的技能实现策略为表现较好的策略。举例来说,假设目标模拟设备为机器人,任务为跨越土堆,在执行跨越土堆的任务时,云服务器集群100可以设置抬脚的多个高度以及每个高度各自对应的抬脚速度,之后,可以对每个高度和其对应的抬脚速度进行模拟测试,看机器人是否跨越土堆,之后,可以分析每个高度和其对应的抬脚速度下的机器人是否跨越成功、能耗、消耗时间和动作的平滑程度,最后可以选择跨越成功、能耗较低、消耗时间较低、动作较为平滑的高度和其对应的抬脚速度。
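上述对“跨越土堆”任务中多组（抬脚高度、抬脚速度）候选策略的筛选过程，可以用如下Python草图示意（评分权重与候选数值均为说明用的假设，并非本申请限定的实现）：

```python
# 技能实现策略筛选的示意草图：淘汰未跨越成功的候选，
# 对其余候选按能耗、耗时、平滑程度加权打分选优。权重为示例性假设。

def evaluate(candidates):
    """candidates: [(是否成功, 能耗, 耗时, 平滑度), ...]，返回最优候选的下标。"""
    best_i, best_score = None, None
    for i, (ok, energy, time_s, smooth) in enumerate(candidates):
        if not ok:
            continue  # 未跨越成功的策略直接淘汰
        score = -0.5 * energy - 0.3 * time_s + 0.2 * smooth
        if best_score is None or score > best_score:
            best_i, best_score = i, score
    return best_i

candidates = [
    (False, 10.0, 3.0, 0.9),  # 抬脚过低，未跨越成功
    (True, 12.0, 4.0, 0.8),   # 成功，能耗与耗时较低
    (True, 15.0, 3.5, 0.6),   # 成功，但能耗较高、动作欠平滑
]
best = evaluate(candidates)
```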
如图3所示,在上述图3所示步骤301到步骤307的基础上,本发明实施例中,至少还可以包括如下步骤:
步骤308、云服务器集群100向目标模拟设备对应的目标设备下发第一执行结果。
在实际应用中,云服务器集群100可以将第一执行结果下发到目标模拟设备对应的目标设备即真实设备,使得目标设备可以按照第一执行结果执行任务。需要说明的是,目标设备为可移动设备210。
举例来说，假设任务为搬运物资通过消毒点送到目标点，则目标设备可以按照执行结果中的运动轨迹运行，并在到达物资点时抓取物资，将物资搬运到消毒点消毒，然后将消毒后的物资搬运到目标点，最后将消毒后的物资放到目标点。
在一种可能的情况,第一执行结果中的技能实现策略为能够实现技能的策略,为了满足用户的需求,可以由用户决定实现技能需要的技能实现策略。为了便于用户决策,第一执行结果还包括任务执行情况,任务执行情况指示了目标模拟设备按照技能实现策略执行时的执行情况,比如,执行时间,资源消耗,目标设备在实现技能过程中是否平稳等。
如图3所示,在上述图3所示步骤301到步骤307的基础上,本发明实施例中,至少还可以包括如下步骤:
步骤309、云服务器集群100向终端设备300发送第一执行结果中的任务执行情况。
在一个例子中,云服务器集群100可以将技能实现策略发送到目标设备,接收目标设备发送的自身执行技能实现策略的执行情况。
在一个例子中,云服务器集群100可以控制目标模拟设备按照技能实现策略在目标模拟环境中实现技能,得到执行情况。
之后,汇总第一执行结果中所有的技能实现策略的执行情况,得到任务执行情况。
步骤310、终端设备300显示任务执行情况,确定任务部署策略。
根据一种可行的实现方式,终端设备300可以显示任务执行情况,为了便于用户决策部署技能,任务执行情况还可以包括目标模拟设备执行每个技能实现策略时的缩略视频,对应的,终端设备300还可以显示目标模拟设备执行技能实现策略的缩略视频,从而便于用户决策。
这里,任务部署策略包括任务执行情况中的多个技能各自对应的标识、部署策略以及目标技能实现策略的标识。
其中，部署策略可以为部署在目标设备，部署在云服务器集群100，或者，自适应决策，即可以结合目标设备的资源情况，确定自身执行，还是云服务器集群100执行。在实际应用时，用户在看到不同的多个技能各自的执行情况后，可以决定技能部署在云服务器集群100，还是部署在目标设备，还是自适应决策。
对于任一技能,第一执行结果中该技能可以有多个技能实现策略,此时,用户可以选择表现较好的技能实现策略作为该技能的目标技能实现策略。进一步地,若目标技能实现策略有多个,即用户选择了多个目标技能实现策略,则任务部署策略还包括该技能的多个目标技能实现策略各自的执行顺序,从而确保在实际应用过程中技能的实现。
步骤311、终端设备300向云服务器集群100发送任务部署策略。
步骤312、云服务器集群100基于任务部署策略,确定任务执行策略。
这里,任务执行策略指示了多个技能各自的部署策略、目标技能实现策略。在目标技能实现策略有多个时,还包括目标技能实现策略的执行顺序。
在具体实现时,将任务部署策略中的技能实现策略的标识替换为第一执行结果中的技能实现策略,得到任务执行策略。
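上述“将任务部署策略中的技能实现策略的标识替换为第一执行结果中的技能实现策略”的步骤，可以用如下Python草图示意（字段名与数据结构均为说明用的假设）：

```python
# 由任务部署策略生成任务执行策略的示意草图。
# deploy_policy / first_result 的字段名均为示例性假设。

def build_execution_policy(deploy_policy, first_result):
    """deploy_policy: {技能: {"deploy": 部署策略, "strategy_ids": [策略标识...]}}，
    first_result: {策略标识: 技能实现策略内容}。"""
    policy = {}
    for skill, cfg in deploy_policy.items():
        policy[skill] = {
            "deploy": cfg["deploy"],  # 目标设备 / 云服务器集群 / 自适应决策
            # 多个目标技能实现策略时，列表顺序即为执行顺序
            "strategies": [first_result[s] for s in cfg["strategy_ids"]],
        }
    return policy

deploy = {"导航": {"deploy": "云服务器集群", "strategy_ids": ["s1"]}}
result = {"s1": {"运动轨迹": [(0, 0), (5, 5)]}}
exec_policy = build_execution_policy(deploy, result)
```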
步骤313、云服务器集群100向目标设备下发任务执行策略。
在实际应用中,云服务器集群100可以将任务执行策略下发到目标模拟设备对应的目标设备即真实设备,使得目标设备可以按照任务执行策略执行任务。
本方案中,用户可以预测整个任务的执行过程中相关技能的执行时间,执行效果,为工作量的规划和分配提供参考,确保用户体验。
本发明实施例可以通过模拟设备和模拟环境可以实现和可移动设备210的实时交互,降低可移动设备210出现意外的风险。
根据一种可行的实现方式,实时交互可以为位姿同步。第一任务指令指示的任务可以为交互式定位。
如图4所示，在上述图3所示的步骤307之前，本发明实施例中还包括：
步骤401、终端设备300上传第一位姿。
对应的,步骤307具体可以包括如下内容:
步骤307a、云服务器集群100根据第一任务指令、目标模拟环境的标识和目标模拟设备的标识、第一位姿,利用目标模拟设备在目标模拟场景中执行任务,获得第一执行结果。
具体地,云服务器集群100获取目标模拟设备对应的目标设备的第一位姿;根据第一位姿,更新目标模拟环境中目标模拟设备的位姿,对应的,第一执行结果为更新后的目标模拟环境和目标模拟设备。
考虑到可移动设备210采集的位姿数据上传到云服务器集群100存在延迟，因此，在云服务器集群100处理位姿数据时，现实中可移动设备210的位姿可能已经发生了变化，为了确保能够与现实中的可移动设备210的位姿同步，可以基于第一位姿和仿真的加速时长，预测现实中可移动设备210的真实位姿。
可选地,第二配置接口还可以上传加速时长;则云服务器集群100可以根据第一位姿和加速时长,更新仿真场景中目标模拟设备的位姿。这里,加速时长可以为用户手动输入的。
可选地,第二配置接口还可以指示采用默认加速时长。则云服务器集群100可以计算自身与可移动设备210之间的通信延迟,将该通信延迟作为默认加速时长。对应的,云服务器集群100可以根据第一位姿和默认加速时长,更新仿真场景中目标模拟设备的位姿。
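上述基于第一位姿和加速时长（如通信延迟）预测现实位姿的做法，可以用如下Python草图示意（此处采用最简单的匀速外推模型，数值均为说明用的假设）：

```python
# 位姿同步的示意草图：基于上传的第一位姿与加速时长，
# 按匀速运动外推现实中可移动设备的当前位姿。模型与数值均为示例性假设。

def predict_pose(pose, velocity, accel_duration):
    """pose: (x, y, yaw)，velocity: (vx, vy, w)，accel_duration: 加速时长（秒）。"""
    x, y, yaw = pose
    vx, vy, w = velocity
    return (x + vx * accel_duration,
            y + vy * accel_duration,
            yaw + w * accel_duration)

# 上传的第一位姿 (1.0, 2.0, 0.0)，速度 (0.5, 0.0, 0.1)，通信延迟 0.2s
synced = predict_pose((1.0, 2.0, 0.0), (0.5, 0.0, 0.1), 0.2)
```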
进一步地,在上述步骤307a的基础上,至少还包括如下步骤:
步骤402、终端设备300显示第一执行结果。
云服务器集群100显示目标模拟设备和目标模拟场景,以及第四配置接口,第四配置接口可以为目标模拟环境的语义显示接口、目标模拟环境的几何更新接口、目标模拟环境的物理参数的更新接口。在实际应用中,终端设备300也会显示目标模拟设备和目标模拟场景,以及第四配置接口,用户可以通过终端设备300对第四配置接口进行操作,对应的,云服务器集群100可以基于第四配置接口提供的信息,实现对目标模拟环境的语义显示、几何更新、物理参数的更新,这里,物理参数的更新不仅包括模拟物体的物理参数的更新,还可以包括天气的物理参数的更新。另外,若目标模拟设备的动力学参数可以更新,则第四配置接口还可以包括目标模拟设备的动力学参数的更新接口。
值得注意的是,后续,云服务器集群100可以获取目标模拟设备对应的目标设备的第二位姿;根据第一位姿和第二位姿,更新目标模拟环境中所述目标模拟设备的位姿和第一位姿和第二位姿的运动轨迹。从而实现实时位姿的更新。
步骤403、终端设备300获取第三任务指令。
需要说明的是,可移动设备210的传感器的感知范围有限,可能无法感知到潜在的危险动作(比如,下坡),或者,监控死角的移动障碍物,此时,可移动设备210有较高的风险,需要规避风险。此时,可以预测评估可移动设备210之后的运动情况,从而控制机器人的操作以规避风险。
对应的,第三任务指令指示的任务为预测任务。在一个例子中,上述第四配置接口还可以包括预测接口,在实际应用中,终端设备300显示预测接口,用户可以通过终端设备300对预测接口进行操作,设置预测时长等信息,对应的,云服务器集群100可以基于预测接口提供的信息,实现运动轨迹预测。
步骤404、云服务器集群100基于第三任务指令,加速目标模拟场景的仿真时间,预测目标模拟设备的运动轨迹,得到预测运动轨迹。
在一些可能的场景中,目标模拟场景中可以存在一些固定场所的运动物体,这些运动物体可以处于可移动设备的监控死角,此时通过仿真预测,可以提前感知到风险,从而为后续的风险规避提供依据。
在一些可能的场景中,目标模拟场景中可以存在一些剧烈变化的路段,比如上坡路段、下坡路段、地面起伏较大的路段,对于这些路段,会增加可移动设备210的风险,此时通过仿真预测,可以提前感知到风险,从而为后续的风险规避提供依据。
步骤405、云服务器集群100确定预测运动轨迹对应的监控数据。
值得注意的是,在一些可能的场景中,目标模拟场景中可以存在一些固定场所的运动物体,这些运动物体可以处于可移动设备的监控死角,此时通过监控数据,可以提前感知到风险,从而为后续的风险规避提供依据。
需要说明的是,监控数据可以为边侧设备200采集到的,比如,对于园区运行的多个机器人和园区的监控设备,监控数据可以为综合园区运行的多个机器人采集到的数据和监控设备采集到的数据综合确定。
步骤406、云服务器集群100向终端设备300发送预测运动轨迹和其对应的监控数据。
步骤407、终端设备300显示预测运动轨迹和其对应的监控数据。
在实际应用中，用户可以结合预测运动轨迹及其对应的监控数据，分析可移动设备210视野盲区是否有潜在的移动障碍物，若判定有碰撞的风险，会提示用户提前下发减速或停止指令进行规避。
步骤408、终端设备300确定目标设备的操作指令。
在实际应用中,可以对目标模拟设备下发加速、减速、停止等操作指令,从而控制可移动设备210规避风险。
步骤409、终端设备300向云服务器集群100发送目标设备的操作指令。
云服务器集群100下发对目标模拟设备的操作指令至目标设备,以使目标设备按照操作指令进行操作。
本方案中,一方面通过加速仿真时间,可以使得真实环境和模拟环境中的可移动设备的位姿同步。另一方面,通过云端丰富的计算资源,在模拟环境中可以对模拟设备的运动轨迹进行预测,规避潜在的风险。
根据一种可行的实现方式，实时交互可以为可移动设备210在实际运行过程中上传到云服务器集群100的边侧任务。
如图3所示,本发明实施例中,可移动设备210在实际运行的过程中,还可以包括如下步骤:
步骤314、目标模拟设备对应的目标设备向云服务器集群100发送第二任务指令和任务数据。
步骤315、云服务器集群100根据第二任务指令,处理任务数据执行任务,得到第二执行结果。
需要说明的是,真实的场景比较复杂,本发明实施例构建的模拟环境一般只考虑到了环境中几乎不会发生变化的物体,因此,对于实时性要求较高的任务:实时避障和局部路径规划,需要部署在可移动设备210本体执行。目前,随着大场景地图的使用和人工智能模型的引入,可移动设备210本体的资源消耗得越来越多,导致可移动设备210本体资源紧张,通常可移动设备210在执行一些必要的任务后,可能已经没有额外的资源执行资源消耗较多的任务。本发明实施例为了解决该问题,可以通过云服务器集群100执行可移动设备210本体资源消耗较大的任务。
根据一种可行的实现方式,第二任务指令指示的任务可以为内容识别。则任务数据可以为传感器采集的数据。可选地,内容识别需要识别的内容可以为声纹识别、语音识别、手势识别。
根据一种可行的实现方式,第二任务指令指示的任务的资源消耗大于设定的资源消耗阈值。在实际应用中,目标模拟设备对应的目标设备可以实时监控执行非必要的任务的资源消耗,在资源消耗大于设定的资源消耗阈值,可以将该任务的第二任务指令上传到云服务器集群100。这里,非必要的任务可以理解为实时性要求较低的任务,比如,实时避障和局部路径规划之外的任务。
根据一种可行的实现方式,第二任务指令指示的任务可以为局部地图下发。
本发明实施例中,还基于机器人传感器感知范围,自适应的确定机器人当前运行所需局部地图,并从云上下发给机器人,从而避免了大规模地图部署给机器人本体带来的资源消耗。具体地,云服务器集群100基于可移动设备210的传感器的感知范围,自适应的确定可移动设备210当前运行所需的局部地图后下发至可移动设备210,从而避免了大规模地图部署给可移动设备210本体带来的资源消耗。
需要指出,在确定局部地图的基础上,目标模拟设备对应的目标设备只需要关注环境中的移动障碍物即可,可以在一定程度上降低资源消耗。
在一个例子中，任务数据可以为目标模拟设备对应的目标设备的传感器采集的数据（可以指示目标设备可以感知的最大区域），则云服务器集群100接收目标模拟设备对应的目标设备发送的任务数据，基于任务数据和目标模拟环境进行匹配，确定出局部区域；之后，即可确定目标模拟场景在局部区域的局部地图。
在一个例子中,若云服务器集群100中目标模拟设备在目标模拟环境中的位置和真实世界适配,则可以基于目标模拟设备的当前的模拟传感器的感知范围,确定目标模拟设备对应的目标设备的局部地图,并将局部地图下发到目标模拟设备对应的目标设备中。若采用该方式,则无需任务数据。
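上述基于传感器感知范围自适应确定局部地图的思路，可以用如下Python草图示意（以点集表示全局地图仅为演示，数值与函数名均为说明用的假设）：

```python
# 局部地图下发的示意草图：从全局地图中裁剪出以设备当前位置为中心、
# 感知半径内的区域后下发。地图以点集表示，数值均为示例性假设。
import math

def crop_local_map(global_points, center, sense_radius):
    """返回全局地图点集中位于感知半径内的点，作为局部地图（示意）。"""
    cx, cy = center
    return [(x, y) for (x, y) in global_points
            if math.hypot(x - cx, y - cy) <= sense_radius]

global_map = [(0, 0), (3, 4), (10, 10), (6, 8)]
local_map = crop_local_map(global_map, center=(0, 0), sense_radius=5.0)
```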
步骤316、云服务器集群100向目标设备下发第二执行结果。
本方案中,可移动设备可以将资源消耗大或者比较难执行的任务上传到云上,云端平台处理完后,将结果返回给边侧的可移动设备,降低了对可移动设备的硬件的要求。并且云端资源是弹性的,可以让用户按需取用,进一步降低了经济成本。
下面结合具体的场景对本发明实施例提供的模拟训练方法的具体应用进行描述。下面以可移动设备210为机器人为例进行描述。
场景1:机器人在园区内进行巡检。传统的做法为:先对园区进行建图,得到二维栅格地图或三维点云地图后,在图上标注任务点,然后下发任务到机器人上。机器人预先在真实环境中或仿真世界中进行导航技能的开发与测试,随后按照地图进行导航巡检。
本发明实施例旨在通过建立一个模拟环境和模拟设备(可以统称为数字孪生世界),让机器人能高效地进行技能训练,从而完成园区巡检任务。下面结合上述内容和图6a,对巡检任务的训练流程进行描述:
A1、多机器人协同作业,采集园区的激光点云,视觉图像,机器人自身的速度,机器人的型号和结构数据等多源异构数据,并将其上传到云服务器集群100。云服务器集群100进行降噪处理后,将数据放到数据仓库111。
A2、云服务器集群100将数据仓库111中的点云、视频、图像等数据进行时间/空间对齐处理,基于激光点云数据,进行SLAM建图,生成点云模型,结合图片数据,进行网格纹理重建,生成园区的模拟环境,并将模拟环境加入到模拟环境库112;另外,还可以生成园区中的物体的模拟物体,并将模拟物体加入到模拟物体库115。提取点云和图片数据中的物体的描述信息,存储至语义资料库118,基于语义资料库118中的物体和模拟环境中的三维模型的描述信息的匹配,得到模拟环境的语义地图。将模拟环境和机器人模型作为数字孪生世界。
A3、云服务器集群100基于算法库119中的算法,对采集到的机器人的运动数据和环境数据进行动力学辨识、空气阻力、交互力的分析,一方面,可以得到机器人中各关节的动力学参数,使模拟设备更加贴近现实的机器人;另一方面,可以得到模拟环境中的三维模型的摩擦系数、空气阻力系数等物理参数,使上一步得到的模拟环境中的模拟物体和/或模拟环境的天气具有贴近现实的物理参数,进一步降低模拟环境和真实环境的差异。
A4.云服务器集群100基于采集到的视频数据抽取运动物体的行为片段，并加以统计，生成行为模式，得到行为模式库116，然后基于行为模式库中的行为模式进行建模，得到模拟行为库117，从而模拟出真实的物体行为和天气行为，生成不同的模拟环境以供机器人训练，提高机器人在现实世界中应对复杂环境的处理能力。
A5.用户通过终端设备300对云服务器集群100提供的模型配置界面(上述第一配置接口)进行操作,确定模拟设备的标识、模拟环境的标识,并可以确定模拟环境中新增的一些模拟物体的标识、模拟物体的模拟行为的标识、模拟天气行为的标识。
在一些可能的实现方式中,模拟设备的标识可以由机器人型号和其结构参数确定,则云服务器集群100可以基于机器人型号和其结构参数,匹配模拟设备库113中的模拟设备,选择最接近的模拟设备作为机器人模型。
A6.云服务器集群100基于模型配置界面（上述第一配置接口）提供的标识，构建数字孪生世界。
可选地，数字孪生世界可以由模拟设备、模拟环境、模拟物体、模拟物体的模拟行为、模拟天气行为进行切分和分布式加载得到，可以减少整个模型的加载时间，提高仿真的流畅度。
A7.用户通过终端设备300对云服务器集群100提供的任务编排页面(上述第二配置接口)进行操作,确定巡检任务。
A8.云服务器集群100通过算法库119中的人工智能算法,在A6构建的数字孪生世界中,执行任务编排页面(上述第二配置接口)上传的巡检任务,得到第一执行结果。
这里,云服务器集群100形成感知,决策,规划,控制的闭环模拟训练体系,从而提升机器人技能开发测试的效率。
A9.云服务器集群100将第一执行结果作为任务执行策略进行应用部署的打包,将打包好的机器人应用下发给边侧的多个机器人。其中,机器人应用,又称机器人原生应用,是指针对机器人平台及场景设计开发的应用程序。
这里,任务执行策略可以为规划后的运动路径。
A10.边侧的多个机器人各自运行收到的机器人应用,在园区进行巡检。
A11.边侧的多个机器人各自在巡检过程中采集到的多源异构数据再次上传到云服务器集群100,由云服务器集群100针对已有的模拟环境进行更新,对发生变化的部分进行网格纹理重建,并更新语义资料库。
本方案中,通过采集园区多源传感器数据,建立一个模拟环境,并通过对模拟环境进行持续更新,以及通过给模拟环境赋予了贴近真实环境的物理参数,进一步降低两者的差异,给机器人提供了精确的导航用的地图。另外,生成的数字孪生世界用于赋能进行闭环仿真,减少了机器人实地测试带来的时间和经济成本。利用边云协同技术,大幅提升了机器人本地和云上资源利用效率,并加速机器人技能的开发和测试。
场景2:在一些特殊场所,比如疫情隔离区,或者方舱医院,人工执行任务会带来感染的风险,并且要经过各种消毒防疫的流程,有很大的时间和经济成本。因此可以使用机器人来代替人工执行任务,与室外园区巡检不同的是,方舱医院这种场所环境复杂,机器人要执行的任务也比传统的巡检要更难,包括搬运物资,进入指定区域消毒,到达目标点卸货,和隔离场所内人员的互动,还要评估机器人执行任务的质量和完成时间,以确定每天的物资运送量。
下面结合图6b和上述内容,对方舱医院场景下的模拟训练流程进行描述:
B1、机器人在方舱医院中采集多源异构数据,并将数据上传到云服务器集群100,基于上述A1至A6描述的构建数字孪生世界的流程,生成方舱医院的第一数字孪生世界。
值得注意的是,第一数字孪生世界为用户配置的用于实现模拟训练的数字孪生世界,可以构建各种各样复杂的、真实世界少见的环境。
B2.用户通过终端设备300对物资消毒运送任务进行语义分析和拆解,比如将“搬运物资通过消毒点送到目标点”转化为一个任务流:抓取物资,到达消毒点,消毒,到达目标点,卸货。
B3、用户通过云服务器集群100提供的任务编排页面（上述第二配置接口），对拆解后的任务流进行编辑。
比如在数字孪生世界上标明起点,物资抓取点,消毒点、卸货点。
B4、云服务器集群100在B1确定的第一数字孪生世界中，对任务编排页面（上述第二配置接口）上传的任务流中的多个技能进行模拟训练，确定第一执行结果。
需要说明的是，在实际应用中，云服务器集群100可以将任务流中的任务拆解成多个技能（不可再分割），并对每个技能进行模拟训练。
这里,第一执行结果可以包括多个技能各自的若干个技能实现策略。
B5、云服务器集群100向终端设备300发送第一执行结果中的任务执行情况。
B6、终端设备300显示任务执行情况,确定任务部署策略。
详细内容参见上文对步骤309至步骤312的描述,不再赘述。
B7.云服务器集群100基于任务部署策略确定任务执行策略,基于任务执行策略进行应用部署的打包准备工作,将打包好的机器人应用下发给边侧的机器人。
B8.边侧的机器人运行收到的机器人应用,在方舱医院中进行物资消毒运送,实时监控任务的资源消耗以及执行难易,若遇到资源消耗大或较难处理的任务时,将该边侧的任务上传到云服务器集群100执行。
这里,边侧的任务可以为:与隔离点内人员交互、手势识别等任务。
B9、云服务器集群100在执行完任务后,将第二执行结果下发到边侧的机器人。
B10.云服务器集群100基于上述A1至A6描述的构建数字孪生世界的流程,生成方舱医院的第二数字孪生世界,终端设备300显示第二数字孪生世界。
值得注意的是,第二数字孪生世界为用户配置的用于实现实时交互的数字孪生世界,比较接近真实的环境。
B11、云服务器集群100通过接收边侧的机器人的实时位姿数据，不断地更新B10确定的第二数字孪生世界中机器人模型的位姿，实现与现实中机器人的位姿同步。
在实际应用中,云服务器集群100根据机器人的实际位姿数据,加速当前数字孪生世界的时间,减少仿真与实际世界的延迟,以达到一个实时遥操作的目的。详细内容参见上文对步骤307a的描述。
B12、云服务器集群100可以基于终端设备300对第二数字孪生世界的预测接口的操作，预测第二数字孪生世界中的机器人模型之后的运动轨迹，得到预测运动轨迹，并确定该预测运动轨迹对应的方舱医院的监控数据。在一个例子中，云服务器集群100可以分析机器人视野盲区是否有潜在的移动障碍物，若判定有碰撞的风险，会向终端设备300询问是否同意边侧的机器人执行操作指令（减速或停止指令）进行规避，若终端设备300同意该操作指令，则云服务器集群100可以将该操作指令下发到边侧的机器人。在一个例子中，云服务器集群100可以向终端设备300发送预测运动轨迹和其对应的方舱医院的监控数据，让用户分析机器人视野盲区是否有潜在的移动障碍物，若用户判定有碰撞的风险，可以向云服务器集群100上传边侧的机器人的操作指令，云服务器集群100可以将该操作指令下发到对应的边侧的机器人。
本方案中,机器人在方舱医院中执行任务,避免了人工执行的病毒感染风险,简化了消毒防疫流程,减少了时间和经济成本。基于本发明建立的方舱医院的数字孪生世界,除了给机器人技能训练提供了一个符合现实世界的仿真环境,提升开发测试效率,还支持用户半自主遥操作机器人,在仿真环境里模拟机器人的实际运动状态,通过云上大算力预测机器人的运动轨迹,从而规避潜在的风险,进一步提高了工作效率。
综上,本发明实施例具有如下技术效果。
第一方面，通过边侧设备采集的数据在云服务器集群构建模拟环境、模拟物体、模拟行为等，可以构建各种各样的复杂模拟环境进行模拟训练测试，解决真实环境下意外情形发生概率小，数据量少的问题。
第二方面,通过识别模拟环境中模拟物体的物理参数,在模拟环境中采用与实体可移动设备物理属性相同的模拟设备执行任务,降低任务在真实环境中执行和在模拟环境中执行的差异。
第三方面,支持感知、决策、规划、控制等闭环模拟对技能的训练测试,同时实体边侧设备采集的多源数据也反馈回云服务器集群,用于闭环模拟的技能的训练测试,实现动态闭环、持续进化的智能云端系统。
第四方面,可以实现实体的可移动设备和模拟设备的位姿同步,同时可以预测模拟设备之后的运动轨迹,并将预测运动轨迹和该轨迹的监控数据反馈给用户,使得用户可以了解可移动设备的运行情况,以操作实体的可移动设备规避风险,降低了对可移动设备完成任务的操控难度和成本。
第五方面,云服务器集群可以处理可移动设备资源消耗较大的任务或者较难处理的任务,降低了对实体的可移动设备的硬件要求。
第六方面,通过模拟设备和模拟环境对可移动设备210进行预测性维护,尽可能的延长可移动设备210的设备使用寿命,降低出现意外的风险。
本申请还提供一种模拟训练装置,如图7所示,包括:
第一接口提供模块701，用于提供第一配置接口，所述第一配置接口用于获取目标模拟环境的标识和目标模拟设备的标识；
第二接口提供模块702,用于提供第二配置接口,所述第二配置接口用于获取任务指令;
任务执行模块703，用于根据所述任务指令，利用目标模拟设备在所述目标模拟环境中执行任务，获得执行结果。
其中,第一接口提供模块701、第二接口提供模块702和任务执行模块703均可以通过软件实现,或者可以通过硬件实现。示例性的,接下来以第一接口提供模块701为例,介绍第一接口提供模块701的实现方式。类似的,第二接口提供模块702和任务执行模块703的实现方式可以参考第一接口提供模块701的实现方式。
模块作为软件功能单元的一种举例,第一接口提供模块701可以包括运行在计算实例上的代码。其中,计算实例可以包括物理主机(计算设备)、虚拟机、容器中的至少一种。进一步地,上述计算实例可以是一台或者多台。例如,第一接口提供模块701可以包括运行在多个主机/虚拟机/容器上的代码。需要说明的是,用于运行该代码的多个主机/虚拟机/容器可以分布在相同的区域(region)中,也可以分布在不同的region中。进一步地,用于运行该代码的多个主机/虚拟机/容器可以分布在相同的可用区(availability zone,AZ)中,也可以分布在不同的AZ中,每个AZ包括一个数据中心或多个地理位置相近的数据中心。其中,通常一个region可以包括多个AZ。
同样,用于运行该代码的多个主机/虚拟机/容器可以分布在同一个虚拟私有云(virtual private cloud,VPC)中,也可以分布在多个VPC中。其中,通常一个VPC设置在一个region内,同一region内两个VPC之间,以及不同region的VPC之间跨区通信需在每个VPC内设置通信网关,经通信网关实现VPC之间的互连。
模块作为硬件功能单元的一种举例，第一接口提供模块701可以包括至少一个计算设备，如服务器等。或者，第一接口提供模块701也可以是利用专用集成电路（application-specific integrated circuit，ASIC）实现、或可编程逻辑器件（programmable logic device，PLD）实现的设备等。其中，上述PLD可以是复杂可编程逻辑器件（complex programmable logical device，CPLD）、现场可编程门阵列（field-programmable gate array，FPGA）、通用阵列逻辑（generic array logic，GAL）或其任意组合实现。
第一接口提供模块701包括的多个计算设备可以分布在相同的region中,也可以分布在不同的region中。第一接口提供模块701包括的多个计算设备可以分布在相同的AZ中,也可以分布在不同的AZ中。同样,第一接口提供模块701包括的多个计算设备可以分布在同一个VPC中,也可以分布在多个VPC中。其中,所述多个计算设备可以是服务器、ASIC、PLD、CPLD、FPGA和GAL等计算设备的任意组合。
需要说明的是,在其他实施例中,第一接口提供模块701可以用于执行模拟训练方法中的任意步骤,第二接口提供模块702可以用于执行模拟训练方法中的任意步骤,和任务执行模块703可以用于执行模拟训练方法中的任意步骤,第一接口提供模块701、第二接口提供模块702、以及任务执行模块703负责实现的步骤可根据需要指定,通过第一接口提供模块701、第二接口提供模块702、以及任务执行模块703分别实现模拟训练方法中不同的步骤来实现模拟训练装置的全部功能。
本申请还提供一种计算设备800。如图8所示,计算设备800包括:总线802、处理器804、存储器806和通信接口808。处理器804、存储器806和通信接口808之间通过总线802通信。计算设备800可以是服务器或终端设备。应理解,本申请不限定计算设备800中的处理器、存储器的个数。
总线802可以是外设部件互连标准（peripheral component interconnect，PCI）总线或扩展工业标准结构（extended industry standard architecture，EISA）总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示，图8中仅用一条线表示，但并不表示仅有一根总线或一种类型的总线。总线802可包括在计算设备800各个部件（例如，存储器806、处理器804、通信接口808）之间传送信息的通路。
处理器804可以包括中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)、微处理器(micro processor,MP)或者数字信号处理器(digital signal processor,DSP)等处理器中的任意一种或多种。
存储器806可以包括易失性存储器（volatile memory），例如随机存取存储器（random access memory，RAM）。存储器806还可以包括非易失性存储器（non-volatile memory），例如只读存储器（read-only memory，ROM），快闪存储器，机械硬盘（hard disk drive，HDD）或固态硬盘（solid state drive，SSD）。
存储器806中存储有可执行的程序代码,处理器804执行该可执行的程序代码以分别实现前述第一接口提供模块701、第二接口提供模块702、以及任务执行模块703的功能,从而实现模拟训练方法。也即,存储器806上存有用于执行模拟训练方法的指令。
通信接口808使用例如但不限于网络接口卡、收发器一类的收发模块,来实现计算设备800与其他设备或通信网络之间的通信。
本申请实施例还提供了一种计算设备集群,对应上述云服务器集群100。该计算设备集群包括至少一台计算设备。该计算设备可以是服务器,例如是中心服务器、边缘服务器,或者是本地数据中心中的本地服务器。在一些实施例中,计算设备也可以是台式机、笔记本电脑或者智能手机等终端设备。
如图9所示,所述计算设备集群包括至少一个计算设备800。计算设备集群中的一个或多个计算设备800中的存储器806中可以存有相同的用于执行模拟训练方法的指令。
在一些可能的实现方式中,该计算设备集群中的一个或多个计算设备800的存储器806中也可以分别存有用于执行模拟训练方法的部分指令。换言之,一个或多个计算设备800的组合可以共同执行用于执行模拟训练方法的指令。
需要说明的是，计算设备集群中的不同的计算设备800中的存储器806可以存储不同的指令，分别用于执行模拟训练装置的部分功能。也即，不同的计算设备800中的存储器806存储的指令可以实现第一接口提供模块701、第二接口提供模块702、以及任务执行模块703中的一个或多个模块的功能。
在一些可能的实现方式中，计算设备集群中的一个或多个计算设备可以通过网络连接。其中，所述网络可以是广域网或局域网等等。图10示出了一种可能的实现方式。如图10所示，两个计算设备800A和800B之间通过网络进行连接。具体地，通过各个计算设备中的通信接口与所述网络进行连接。在这一类可能的实现方式中，计算设备800A中的存储器806中存有执行第一接口提供模块701、第二接口提供模块702的功能的指令。同时，计算设备800B中的存储器806中存有执行任务执行模块703的功能的指令。
图10所示的计算设备集群之间的连接方式，是考虑到本申请提供的模拟训练方法需要通过大量的资源加载模拟环境和模拟设备，以及进行模拟训练，因此将任务执行模块703实现的功能交由计算设备800B执行。
应理解,图10中示出的计算设备800A的功能也可以由多个计算设备800完成。同样,计算设备800B的功能也可以由多个计算设备800完成。
本申请实施例还提供了一种包含指令的计算机程序产品。所述计算机程序产品可以是包含指令的,能够运行在计算设备上或被储存在任何可用介质中的软件或程序产品。当所述计算机程序产品在至少一个计算设备上运行时,使得至少一个计算设备执行模拟训练方法。
本申请实施例还提供了一种计算机可读存储介质。所述计算机可读存储介质可以是计算设备能够存储的任何可用介质或者是包含一个或多个可用介质的数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘)等。该计算机可读存储介质包括指令,所述指令指示计算设备执行模拟训练方法。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
以上结合具体实施例描述了本申请的基本原理,但是,需要指出的是,在本申请中提及的优点、优势、效果等仅是示例而非限制,不能认为这些优点、优势、效果等是本公开的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本公开为必须采用上述具体的细节来实现。
本公开中涉及的装置、设备、系统的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、系统。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。
还需要指出的是,在本公开的装置、设备和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本公开的等效方案。
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本公开的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。

Claims (16)

  1. 一种模拟训练方法,其特征在于,应用于云管理平台,所述方法包括:
    提供第一配置接口,所述第一配置接口用于获取目标模拟环境的标识和目标模拟设备的标识;
    提供第二配置接口,所述第二配置接口用于获取任务指令;
    根据所述任务指令,利用目标模拟设备在所述目标模拟环境中执行任务,获得执行结果。
  2. 根据权利要求1所述的方法，其特征在于，所述获取目标模拟环境的标识前，所述方法包括：
    获取所述目标模拟环境对应的采集数据;
    提供第三配置接口,所述第三配置接口用于获取所述目标模拟环境的类型参数;
    根据所述目标模拟环境对应的采集数据和所述目标模拟环境的类型参数，生成所述目标模拟环境。
  3. 根据权利要求2所述的方法,其特征在于,所述类型参数包括下述的一种或多种:
    室内场景、室外场景、天气类型。
  4. 根据权利要求1至3中任一所述的方法，其特征在于，所述目标模拟环境中包括至少一个三维模型和其各自携带的物理参数，所述根据所述任务指令，利用目标模拟设备在所述目标模拟环境中执行任务，包括：
    根据所述任务指令,利用目标模拟设备和所述至少一个三维模型的物理参数,在所述目标模拟环境中执行任务。
  5. 根据权利要求4所述的方法，其特征在于，所述物理参数包括摩擦系数和/或空气阻力系数。
  6. 根据权利要求1至5中任一所述的方法,其特征在于,所述第二配置接口,还用于获取所述任务对应的进程数。
  7. 根据权利要求1至6中任一所述的方法,其特征在于,所述任务包括起始点和终点,所述第二配置接口,还用于获取用户设置的所述起始点和所述终点。
  8. 根据权利要求1至7中任一所述的方法,其特征在于,所述获取目标模拟环境的标识,包括:
    所述第一配置接口用于获取用户从多个备选模拟环境中选择的所述目标模拟环境的标识;
    和/或,所述获取模拟设备的标识,包括:
    所述第一配置接口用于获取所述用户从多个备选模拟设备中选择的所述目标模拟设备的标识。
  9. The method according to any one of claims 1 to 8, wherein the method further comprises:
    delivering the execution result to a target device corresponding to the target simulated device.
  10. The method according to any one of claims 1 to 9, wherein the executing, according to the task instruction, the task in the target simulation environment using the target simulated device comprises:
    converting the task instruction into a simulation instruction based on semantic recognition, the simulation instruction being in a computer-readable format; and
    executing the task in the target simulation environment using the target simulated device, based on the simulation instruction.
  11. The method according to any one of claims 1 to 10, wherein the execution result comprises a motion trajectory, and the method further comprises displaying the motion trajectory.
  12. The method according to any one of claims 1 to 11, wherein the target simulated device comprises at least one joint and a dynamics parameter corresponding to each joint, and the executing, according to the task instruction, a task in the target simulation environment using the target simulated device comprises:
    controlling, according to the task instruction, the target simulated device to execute the task in the target simulation environment using the dynamics parameter of the at least one joint in the target simulated device.
  13. A simulation training apparatus, applied to a cloud management platform, wherein the apparatus comprises:
    a first interface providing module, configured to provide a first configuration interface, the first configuration interface being used to obtain an identifier of a target simulation environment and an identifier of a target simulated device;
    a second interface providing module, configured to provide a second configuration interface, the second configuration interface being used to obtain a task instruction; and
    a task execution module, configured to execute, according to the task instruction, a task in the target simulation environment using the target simulated device, to obtain an execution result.
  14. A computing device cluster, comprising at least one computing device, each computing device comprising a processor and a memory;
    the processor of the at least one computing device being configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method according to any one of claims 1 to 12.
  15. A computer program product containing instructions, wherein when the instructions are run by a computing device cluster, the computing device cluster is caused to perform the method according to any one of claims 1 to 12.
  16. A computer-readable storage medium, comprising computer program instructions, wherein when the computer program instructions are executed by a computing device cluster, the computing device cluster performs the method according to any one of claims 1 to 12.
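Taken together, claims 1 and 10 describe a flow: obtain the environment and device identifiers through a first configuration interface, obtain a task instruction through a second, convert that instruction into a computer-readable simulation instruction via semantic recognition, then execute in the simulation environment and return an execution result (which per claim 11 may include a motion trajectory). The sketch below illustrates this flow only; the keyword-based "semantic recognition" and all names are invented stand-ins, since the claims do not specify how instructions are parsed or how the simulation runs.

```python
# Hedged sketch of the claimed flow (claims 1, 10, 11).
# The keyword matching below is a toy stand-in for semantic recognition;
# the trajectory is stubbed. Names are illustrative, not from the patent.

def semantic_recognition(task_instruction: str) -> dict:
    """Convert a natural-language task instruction into a
    computer-readable simulation instruction (cf. claim 10)."""
    sim_instruction = {"action": None, "params": {}}
    text = task_instruction.lower()
    if "navigate" in text or "move" in text:
        sim_instruction["action"] = "navigate"
    elif "grasp" in text:
        sim_instruction["action"] = "grasp"
    return sim_instruction


def run_simulation_task(env_id: str, device_id: str, task_instruction: str) -> dict:
    """env_id and device_id stand for the identifiers obtained via the first
    configuration interface; task_instruction for the second (cf. claim 1)."""
    sim_instruction = semantic_recognition(task_instruction)
    # Stubbed execution in the target simulation environment; a real system
    # would simulate the device and record its motion.
    trajectory = [(0, 0), (1, 1)] if sim_instruction["action"] == "navigate" else []
    return {"env": env_id, "device": device_id,
            "simulation_instruction": sim_instruction,
            "trajectory": trajectory}  # execution result (cf. claim 11)
```

The execution result could then be delivered to the physical device corresponding to the simulated one (claim 9), but that step is outside this sketch.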
PCT/CN2023/101580 2022-10-10 2023-06-21 Simulation training method and apparatus, and computing device cluster WO2024078003A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211237632.0 2022-10-10
CN202211237632.0A CN117910188A (zh) 2022-10-10 2022-10-10 Simulation training method and apparatus, and computing device cluster

Publications (1)

Publication Number Publication Date
WO2024078003A1 true WO2024078003A1 (zh) 2024-04-18

Family

ID=90668641

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/101580 WO2024078003A1 (zh) 2022-10-10 2023-06-21 Simulation training method and apparatus, and computing device cluster

Country Status (2)

Country Link
CN (1) CN117910188A (zh)
WO (1) WO2024078003A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227660A (zh) * 2016-07-21 2016-12-14 Institute of Computing Technology, Chinese Academy of Sciences Simulation data generation method for simulating real physical environments
CN108789421A (zh) * 2018-09-05 2018-11-13 Xiamen University of Technology Cloud-platform-based cloud robot interaction method, cloud robot, and cloud platform
CN109664298A (zh) * 2018-12-26 2019-04-23 Shenzhen Yuejiang Technology Co., Ltd. Robot dynamics parameter identification method and apparatus, terminal device, and storage medium
WO2021183898A1 (en) * 2020-03-13 2021-09-16 Brain Corporation Systems and methods for route synchronization for robotic devices
WO2022116716A1 (zh) * 2020-12-01 2022-06-09 CloudMinds Robotics Co., Ltd. Cloud robot system, cloud server, robot control module, and robot
WO2022141506A1 (zh) * 2020-12-31 2022-07-07 Huawei Technologies Co., Ltd. Simulation scenario construction method, simulation method, and device


Also Published As

Publication number Publication date
CN117910188A (zh) 2024-04-19
