CN114372356A - Artificial enhancement method, device and medium based on digital twins - Google Patents

Artificial enhancement method, device and medium based on digital twins

Info

Publication number
CN114372356A
Authority
CN
China
Prior art keywords
data
digital twin
space model
space
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111644024.7A
Other languages
Chinese (zh)
Other versions
CN114372356B (en)
Inventor
黄晓庆
王勇
马世奎
陈原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority to CN202111644024.7A
Publication of CN114372356A
Priority to PCT/CN2022/108917 (published as WO2023124055A1)
Application granted
Publication of CN114372356B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00: Details relating to CAD techniques
    • G06F2111/10: Numerical modelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide a digital twin-based artificial enhancement method, device and medium. The method includes: fusing optimized device data and scene artificial enhancement task data into an initial digital twin space model to update it and obtain a digital twin space model; sending the digital twin space model to an access end when a user access instruction sent by the access end is detected; acquiring operation data corresponding to the digital twin space model sent by the access end; and, when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data, sending feedback data corresponding to the operation data to the access end. With this scheme, the target scene of the cloud robot can be converted into an optimized virtual scene, improving data processing efficiency.

Description

Artificial enhancement method, device and medium based on digital twins
Technical Field
The embodiment of the application relates to the technical field of digital twins, in particular to an artificial enhancement method and device based on digital twins and a storage medium.
Background
Cloud robots are increasingly widely applied. A cloud robot typically combines artificially enhanced intelligence, multi-modal fused artificial intelligence, digital twins, continuous closed-loop learning and intelligent evolution, and is controlled over a secure 5G network to complete various tasks (for example, remotely controlling the cloud robot to pick up objects). The human-in-the-loop is an important link in cloud robot control, and is divided into online real-time or quasi-real-time artificial enhancement and offline teaching training.
However, when a robot trainer performs online real-time artificial enhancement, quasi-real-time artificial enhancement, or offline teaching training, directly displaying a virtual model or real image of the object to be operated on the monitor on the trainer's side is often unsuitable (some objects to be operated are not appropriate to display as a highly faithful virtual model or real image), and because the operation scene would need to be restored with high fidelity, data processing efficiency is reduced.
Disclosure of Invention
The embodiment of the application provides a digital twin-based artificial enhancement method, device and medium, which can convert a target scene of a cloud robot into an optimized virtual scene and improve data processing efficiency.
In a first aspect, an embodiment of the present application provides a digital twin-based artificial enhancement method from a server perspective, where the method includes:
acquiring optimized equipment data, and mapping the optimized equipment data to an initial digital twin space model through digital twin modeling so as to update to obtain a digital twin space model;
acquiring set scene artificial enhancement task data, and fusing the scene artificial enhancement task data and the digital twin space model to update the digital twin space model;
when a user access instruction sent by an access end is detected, sending the digital twin space model to the access end;
acquiring operation data which is sent by an access end and corresponds to the digital twin space model;
and when the operation data are determined to meet the triggering conditions corresponding to the scene artificial enhancement task data, sending feedback data corresponding to the operation data to an access end.
In a second aspect, an embodiment of the present application further provides an intelligent device, where the intelligent device includes: a transmitting unit, a receiving unit and a processing unit;
the processing unit is used for acquiring optimized equipment data, and mapping the optimized equipment data to an initial digital twin space model through digital twin modeling so as to update to obtain a digital twin space model;
the processing unit is further configured to acquire the set scene artificial enhancement task data, and fuse the scene artificial enhancement task data with the digital twin space model to update the digital twin space model;
the sending unit is used for sending the digital twin space model to the access end when a user access instruction sent by the access end is detected;
the receiving unit is used for acquiring operation data which is sent by the access end and corresponds to the digital twin space model;
the sending unit is further configured to send feedback data corresponding to the operation data to the access end when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data.
In a third aspect, an embodiment of the present application further provides a processing device, which includes a processor and a memory, where the memory stores a computer program, and the processor, when invoking the computer program in the memory, executes the steps in any of the digital twin-based artificial enhancement methods provided by the embodiments of the present application.
In a fourth aspect, the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to perform the steps in any one of the digital twin-based artificial enhancement methods provided by the embodiments of the present application.
According to the method, optimized device data and scene artificial enhancement task data are fused into an initial digital twin space model, which is updated to obtain a digital twin space model; the digital twin space model is then sent to an access end when a user access instruction sent by the access end is detected; next, operation data corresponding to the digital twin space model sent by the access end is acquired; and finally, when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data, feedback data corresponding to the operation data is sent to the access end. In this way, the target scene of the cloud robot can be converted into an optimized virtual scene, and data processing efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a digital twin-based artificial enhancement method of the present application;
FIG. 2 is a schematic diagram of an embodiment of the smart device of the present application;
FIG. 3 is a schematic diagram of a processing apparatus according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description that follows, specific embodiments of the present application are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. Accordingly, these steps and operations are at times referred to herein as being computer-executed; they involve the manipulation, by a computer processing unit, of electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which reconfigures or otherwise alters the computer's operation in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of the memory that have particular properties defined by the data format. However, while the principles of the application are described in the foregoing terms, this is not meant to be limiting; those of ordinary skill in the art will recognize that various of the steps and operations described below may also be implemented in hardware.
The principles of the present application may be employed in numerous other general-purpose or special-purpose computing or communication environments and configurations. Examples of well-known computing systems, environments, and configurations suitable for use with the application include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microprocessor-based systems, mainframe computers, and distributed computing environments that include any of the above systems or devices.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions.
First, before the embodiments of the present application are described, the relevant background of the present application is introduced.
The execution subject of the digital twin-based artificial enhancement method provided by the present application may be the device provided herein, or a processing apparatus integrating the device, such as a server device, a physical host, or User Equipment (UE). The device may be implemented in hardware or software, and the UE may specifically be a terminal device such as a smartphone, tablet computer, notebook computer, palmtop computer, desktop computer, or Personal Digital Assistant (PDA).
In the following, the artificial enhancement method based on digital twinning provided by the present application is introduced.
Referring to fig. 1, fig. 1 shows a schematic flow chart of the digital twin-based artificial enhancement method of the present application, which is applied to a server. The method provided by the application specifically comprises the following steps:
101. Acquiring optimized device data, and mapping the optimized device data to the initial digital twin space model through digital twin modeling, so as to obtain the digital twin space model through updating.
In the embodiment of the application, the initial digital twin space model is stored in the server in advance, and each sub-model in the initial digital twin space model is mapped into it based on digital twin modeling, so that the real world is completely or partially restored. More specifically, the initial digital twin space model mainly restores buildings in the real world, but does not restore other objects (such as garbage cans on roads, real people, automobiles, and the like).
When a target operation object of the cloud robot to be controlled needs to be added to the initial digital twin space model through digital twin technology, optimized device data obtained by conversion from the real device data of the target operation object needs to be acquired (the optimized device data generally includes size data, operating parameter data, and the like, where the size data comprises the length, width and height parameters of the target operation object, and the operating parameter data comprises the average movement speed of the target operation object, etc.). The optimized device data is then mapped to the initial digital twin space model through digital twin modeling, so as to obtain the digital twin space model through updating.
In the process of mapping the optimized device data to the initial digital twin space model through digital twin modeling, the coordinates of the optimized device data in physical space can be correspondingly mapped to the corresponding spatial positions in the 1:1-restored initial digital twin space, thereby realizing accurate mapping after modeling.
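The patent describes this coordinate mapping only in prose. The following Python sketch illustrates one way a 1:1 mapping could look, assuming the twin space shares the physical space's scale and differs only by an origin offset; the function name, signature and origin convention are illustrative, not from the patent.

```python
def map_to_twin_space(physical_coord, twin_origin=(0.0, 0.0, 0.0)):
    """Map a physical-space coordinate into the 1:1-restored twin space.

    Because the twin space restores the real world at a 1:1 scale, no
    scaling is applied; the mapping is a pure translation by the twin
    space origin (an assumed convention for this sketch).
    """
    return tuple(p - o for p, o in zip(physical_coord, twin_origin))
```

With a shared origin the coordinates carry over unchanged, which is what "accurate mapping after modeling" amounts to in this sketch.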
In one embodiment, the step 101 comprises:
acquiring real equipment data, and correspondingly converting the real equipment data into optimized equipment data according to a preset equipment conversion strategy.
In the embodiment of the application, for example, the real device data of the target operation object corresponds to waste, while the optimized device data of the target operation object corresponds to a flower; converting the waste into a flower for display in the digital twin space model not only avoids displaying the target operation object realistically, but also reduces modeling difficulty and thereby improves data processing efficiency. The scheme of the application is applicable to scenarios such as the following: a real-world task in which the cloud robot sorts various kinds of waste can be converted into a task of arranging various flowers in a beautiful garden within the digital twin space model. The scheme is also applicable to offline training data collection; for example, the collection of robot path-planning training data can be converted into a racing game, in which human players race in various scenes, and the race data is then converted into training data from which the robot learns path planning.
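The waste-to-flower conversion can be pictured as a lookup-table strategy. The sketch below is hypothetical (the patent does not specify the form of the device conversion strategy); the category names and dict schema are invented for illustration.

```python
# Hypothetical device-conversion strategy: each real device category is
# replaced by an easier-to-render optimized representation (the patent's
# example maps waste items to flowers).
CONVERSION_STRATEGY = {
    "plastic_waste": "tulip",
    "metal_waste": "rose",
    "paper_waste": "daisy",
}

def convert_real_device(real_device):
    """Convert real device data into optimized device data.

    Copies the real device record and swaps its display model according
    to the preset strategy; categories without an entry are kept as-is.
    """
    optimized = dict(real_device)
    optimized["model"] = CONVERSION_STRATEGY.get(
        real_device["category"], real_device["category"])
    return optimized
```

Size and operating-parameter data are carried over untouched, so the optimized twin still behaves like the real object; only its appearance is simplified.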
In one embodiment, the step 101 comprises:
obtaining size data included in the optimized device data, and mapping the size data to the initial digital twin space model through 1:1 digital twin modeling, so as to obtain the digital twin space model through updating, where the updated digital twin space model includes an optimized-device twin model corresponding to the optimized device data;
and acquiring operating parameter data included in the optimized device data, and fusing the operating parameter data with the optimized-device twin model in the digital twin space model to update the digital twin space model.
In the embodiment of the present application, the size data is mapped to the initial digital twin space model through 1:1 digital twin modeling; that is, the digital twin model restored at a 1:1 scale is mapped into the initial digital twin space model without any enlarging or reducing of size, and the coordinates of the optimized device data in physical space are correspondingly mapped to the corresponding spatial positions in the 1:1-restored initial digital twin space. It can be seen that, through the 1:1 digital twin modeling technique, real devices can be quickly mapped into the digital twin space model.
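The two-step update above (add a 1:1 twin from the size data, then fuse in the operating parameters) can be sketched as follows, assuming the twin space model is a plain dict keyed by device id; this schema is invented here for illustration and is not specified by the patent.

```python
def build_optimized_twin(space_model, device_id, size, position):
    """Add a 1:1 twin of the optimized device to the space model.

    The size data is used as-is (1:1, no scaling) and the twin is
    placed at the mapped spatial position.
    """
    space_model[device_id] = {"size": size, "position": position}
    return space_model

def fuse_operating_params(space_model, device_id, params):
    """Fuse operating-parameter data (e.g. average movement speed)
    into the existing optimized-device twin model."""
    space_model[device_id].update(params)
    return space_model
```

Keeping the two steps separate mirrors the patent's description: the geometric twin exists first, and behavioral attributes are fused in afterwards.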
102. Acquiring the set scene artificial enhancement task data, and fusing the scene artificial enhancement task data with the digital twin space model to update the digital twin space model.
In the embodiment of the application, after digital twin modeling is completed based on the optimized device data, the set scene artificial enhancement task data can be acquired and fused with the digital twin space model to update the digital twin space model. This operation is equivalent to selecting any one or more digital twin models from the digital twin space model and setting scene artificial enhancement task data specifically for them. The scene artificial enhancement task data can be understood as a new piece of attribute data, namely task data, added to the selected digital twin models (compare the scenario in an online game where a player operates a game character to click on an object or another character and perform a related operation, which triggers a task prompt). After the scene artificial enhancement task data of the one or more selected digital twin models is set, the scene artificial enhancement task data and the digital twin space model can be fused to update the digital twin space model. In this way, by adding scene artificial enhancement task data to the digital twin models, the data of the digital twin space model in the task-attribute dimension is enriched.
In one embodiment, the step 102 comprises:
and acquiring a target twin model corresponding to the scene artificial enhancement task data in the digital twin space model, and fusing the scene artificial enhancement task data and the target twin model to update the digital twin space model.
In this embodiment, a target twin model corresponding to the scene artificial enhancement task data is obtained from the digital twin space model. The target twin model may be a single digital twin model (for example, the digital twin model corresponding to the optimized device data) or multiple digital twin models (for example, the digital twin model corresponding to the optimized device data together with multiple other selected digital twin models). The scene artificial enhancement task data is then added to the target twin model as its attribute data, and this attribute data can be triggered by subsequent operations to be displayed or prompted to the user.
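Attaching the task data as a new attribute of the selected target twin models might look like the sketch below, reusing a hypothetical dict-based schema; this is an illustration of the described fusion, not an implementation from the patent.

```python
def fuse_task_data(space_model, target_ids, task_data):
    """Attach scene artificial-enhancement task data to each selected
    target twin model as a new 'task' attribute (assumed key name).

    Supports one or several target twin models, matching the patent's
    description that the target may be a single model or multiple.
    """
    for tid in target_ids:
        space_model[tid]["task"] = task_data
    return space_model
```

Models not listed in `target_ids` are left untouched, so only the selected twins gain the task-attribute dimension.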
103. When a user access instruction sent by the access end is detected, sending the digital twin space model to the access end.
In the embodiment of the application, when the server detects a login request from an access end (such as a VR device) and successfully verifies the login information of the access end, it acquires a user access instruction sent by the access end (which can be understood as a user access instruction generated by a robot trainer operating the access end) requesting access to the digital twin space model; at this time, the digital twin space model corresponding to the user access instruction can be acquired and sent to the access end. Thereafter, a virtual model corresponding to the digital twin space model can be viewed on the access end based on VR technology. In this way, by detecting the access request of the access end, the modeled digital twin space model can be quickly sent to the access end for display.
In one embodiment, the step 103 further comprises:
sending prompt data corresponding to the scene artificial enhancement task data to the access end, where the prompt data is text prompt data or voice prompt data.
In the embodiment of the application, after the access end receives the digital twin space model and displays it locally, if the operation object corresponding to the access end is the target twin model, the prompt data corresponding to the scene artificial enhancement task data is sent to the access end, so as to prompt the user, through text prompt data or voice prompt data, to perform the corresponding operation on the target twin model.
For example, suppose the scene artificial enhancement task data requires operating the target twin model to pick up another target twin model and move it to a specified spatial position in the digital twin space model. More vividly, the user operates the cloud robot model in the digital twin space model to pick up the model of object A, transports the model of object A to the spatial position of the model of object B in the digital twin space model, and finally places the model of object A in the model of object B. In order to intuitively prompt the user to complete this operation, the prompt data corresponding to the scene artificial enhancement task data needs to be sent to the access end. By sending text prompt data or voice prompt data corresponding to the scene artificial enhancement task data, the user can be prompted more intuitively to perform the corresponding operation.
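A minimal sketch of generating such prompt data follows, with hypothetical field names for the task data; the voice branch is a placeholder, since real voice prompts would require text-to-speech synthesis.

```python
def make_prompt(task_data, mode="text"):
    """Build text or voice prompt data for a scene task (sketch).

    Field names ('actor', 'target', 'destination') are assumptions
    for this illustration, not defined by the patent.
    """
    text = ("Please operate the {actor} to pick up the {target} "
            "and place it at the {destination}.").format(**task_data)
    if mode == "text":
        return {"type": "text", "content": text}
    return {"type": "voice", "content": text}  # placeholder for TTS output
```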
104. Acquiring operation data corresponding to the digital twin space model sent by the access end.
In the embodiment of the application, when the access end receives the digital twin space model and obtains the scene artificial enhancement task data included in it, the target twin model can be selected according to the scene artificial enhancement task data; the target twin model is then operated correspondingly to generate operation data, and the generated operation data is sent to the server through the access end. In this way, by receiving the operation data sent by the access end, the target twin model in the digital twin space model can be operated correspondingly.
In one embodiment, the step 104 further comprises:
acquiring space starting point data and space end point data corresponding to the operation data;
acquiring a space starting point object corresponding to the space starting point data and acquiring a space end point object corresponding to the space end point data;
and acquiring a current operation object set consisting of the space starting point object and the space end point object, and judging that the operation data meets a trigger condition corresponding to the scene artificial enhancement task data if the current operation object set is the same as the task object set corresponding to the scene artificial enhancement task data.
In this embodiment of the application, after the server obtains the operation data uploaded by the access end, it may first obtain the space start-point data and space end-point data corresponding to the operation data, because the operation data corresponds to a specific motion trajectory: a space start-point object (compare the model of object A and the cloud robot model in the digital twin space model mentioned above) may be selected at the start of the motion trajectory, and a space end-point object (compare the model of object B mentioned above) may be selected at its end. After obtaining the space start-point object corresponding to the space start-point data and the space end-point object corresponding to the space end-point data, it can be determined whether the set formed by them is the same as the set of task objects corresponding to the scene artificial enhancement task data. If the two sets are the same, it indicates that the user has completed the corresponding operation according to the scene artificial enhancement task data, and at this time it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data. In this way, whether to send feedback data can be decided by evaluating the operation data.
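The trigger-condition check described above reduces to a set comparison between the current operation objects and the task objects. A sketch, with hypothetical field names:

```python
def meets_trigger_condition(operation_data, task_data):
    """Check whether operation data satisfies the task's trigger condition.

    The current operation-object set (space start-point object plus
    space end-point object) must equal the task-object set defined by
    the scene artificial-enhancement task data. Field names here are
    assumptions for illustration.
    """
    current_set = {operation_data["start_object"],
                   operation_data["end_object"]}
    return current_set == set(task_data["task_objects"])
```

Using sets makes the comparison order-independent, which matches the idea that the trigger cares about which objects were involved, not the order they were named in.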
105. When it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data, sending feedback data corresponding to the operation data to the access end.
In the embodiment of the application, when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data, this indicates that the user has completed the operation corresponding to the scene artificial enhancement task data; at this time, corresponding reward prompt data and feedback data can be triggered and sent to the access end. In this way, by sending the feedback data, the user can be promptly reminded that the corresponding operation has been successfully completed according to the scene artificial enhancement task data.
In one embodiment, the step 105 comprises:
acquiring motion trail data according to the space starting point data and the space end point data;
generating motion animation data according to the space starting point object, the space end point object and the motion trail data;
acquiring task completion prompt data corresponding to the scene artificial enhancement task data;
and forming feedback data from the motion animation data and the task completion prompt data, and sending the feedback data to the access end.
In the embodiment of the application, in order to present feedback data with richer data dimensions, the space start-point data and space end-point data can be obtained from the operation data generated based on the scene artificial enhancement task data, and the motion trajectory data derived from them; motion animation data is then generated based on the space start-point object, the space end-point object, and the motion trajectory data; pre-stored task completion prompt data is then acquired; and finally the motion animation data and the task completion prompt data are combined into feedback data and sent to the access end. In this way, sending feedback data that fuses multidimensional data to the access end for display can increase interactivity and increase the amount of information the user obtains about the completed task.
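Composing the feedback data can be sketched as follows, approximating the motion trajectory as a linear interpolation between the space start point and end point; this interpolation, the step count and the dict layout are assumptions, as the patent does not specify how trajectory or animation data is represented.

```python
def build_feedback(start, end, start_obj, end_obj, prompt, steps=5):
    """Compose feedback data from trajectory, animation and prompt data.

    The trajectory is sketched as `steps` points linearly interpolated
    between the space start point and space end point; the animation
    data pairs the two objects with that trajectory.
    """
    trajectory = [
        tuple(s + (e - s) * t / (steps - 1) for s, e in zip(start, end))
        for t in range(steps)
    ]
    animation = {"objects": [start_obj, end_obj], "trajectory": trajectory}
    return {"animation": animation, "prompt": prompt}
```

The access end can then replay the animation and display the task-completion prompt together, giving the multidimensional feedback the embodiment describes.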
In order to better implement the method of the present application, the embodiment of the present application further provides an intelligent device 20. The smart device 20 may be understood as a server, and the access terminal appearing in the subsequent scenario is a VR device or a PC device.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an intelligent device 20 according to the present application, wherein the intelligent device 20 specifically includes the following structure: a transmitting unit 201, a receiving unit 202 and a processing unit 203.
The processing unit 203 is configured to obtain optimized device data, and map the optimized device data to the initial digital twin space model through digital twin modeling to obtain a digital twin space model through updating.
In the embodiment of the application, the initial digital twin space model is stored in the server in advance, and each sub-model in the initial digital twin space model is mapped into it based on digital twin modeling, so that the real world is completely or partially restored. More specifically, the initial digital twin space model mainly restores buildings in the real world, but does not restore other objects (such as garbage cans on roads, real people, automobiles, and the like).
When a target operation object of the cloud robot to be controlled needs to be added to the initial digital twin space model through digital twin technology, optimized device data obtained by conversion from the real device data of the target operation object needs to be acquired (the optimized device data generally includes size data, operating parameter data, and the like, where the size data comprises the length, width and height parameters of the target operation object, and the operating parameter data comprises the average movement speed of the target operation object, etc.). The optimized device data is then mapped to the initial digital twin space model through digital twin modeling, so as to obtain the digital twin space model through updating.
In the process of mapping the optimized device data to the initial digital twin space model through digital twin modeling, the coordinates of the optimized device data in physical space can be correspondingly mapped to the corresponding spatial positions in the 1:1-restored initial digital twin space, thereby realizing accurate mapping after modeling.
In an embodiment, the processing unit 203 is specifically configured to:
acquiring real equipment data, and correspondingly converting the real equipment data into optimized equipment data according to a preset equipment conversion strategy.
In the embodiment of the application, for example, the real device data of the target operation object corresponds to waste, while the optimized device data of the target operation object corresponds to a flower; converting the waste into a flower for display in the digital twin space model not only avoids displaying the target operation object realistically, but also reduces modeling difficulty and thereby improves data processing efficiency. The scheme of the application is applicable to scenarios such as the following: a real-world task in which the cloud robot sorts various kinds of waste can be converted into a task of arranging various flowers in a beautiful garden within the digital twin space model. The scheme is also applicable to offline training data collection; for example, the collection of robot path-planning training data can be converted into a racing game, in which human players race in various scenes, and the race data is then converted into training data from which the robot learns path planning.
In an embodiment, the processing unit 203 is specifically configured to:
obtaining size data included in the optimized device data, and mapping the size data to the initial digital twin space model through 1:1 digital twin modeling so as to obtain the updated digital twin space model, where the updated digital twin space model includes an optimized-device twin model corresponding to the optimized device data;
and acquiring operating parameter data included in the optimized device data, and fusing the operating parameter data with the optimized-device twin model in the digital twin space model to update the digital twin space model.
In the embodiment of the present application, the size data is mapped to the initial digital twin space model through 1:1 digital twin modeling, that is, the 1:1 restored digital twin model is mapped into the initial digital twin space model without any enlarging or reducing operation, and the coordinates of the optimized device data in the physical space are correspondingly mapped to the corresponding spatial positions in the 1:1 restored initial digital twin space. It can be seen that, through the 1:1 digital twin modeling technique, real devices can be quickly mapped into the digital twin space model.
The processing unit 203 is further configured to obtain the set scene artificial enhancement task data, and fuse the scene artificial enhancement task data and the digital twin space model to update the digital twin space model.
In the embodiment of the application, after the digital twin modeling is completed based on the optimized device data, the set scene artificial enhancement task data can be acquired, and the scene artificial enhancement task data can be fused with the digital twin space model to update the digital twin space model. The above operation is equivalent to selecting any one or more digital twin models from the digital twin space model and setting scene artificial enhancement task data specifically for them; that is, the scene artificial enhancement task data can be understood as a new piece of attribute data, namely task data, added to the selected digital twin models (compare a network game, in which a player operating a game character can trigger a task prompt by clicking on an object or another game character and performing a related operation). After the scene artificial enhancement task data of the one or more selected digital twin models has been set, the scene artificial enhancement task data can be fused with the digital twin space model to update it. Therefore, by adding scene artificial enhancement task data to a digital twin model, the data of the digital twin space model in the task-attribute dimension can be increased.
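Under an assumed data model (a dictionary of twin models keyed by id; `fuse_task_data` is an illustrative name, not the patent's), fusing the task data amounts to attaching it as an extra attribute on each selected model, like a quest marker in a network game:

```python
def fuse_task_data(twin_space, model_ids, task_data):
    """Attach task_data as a new attribute of each selected twin model.

    Models that were not selected are left untouched; the updated
    twin space is returned.
    """
    for model_id in model_ids:
        twin_space[model_id]["task"] = task_data
    return twin_space

space = {
    "robot_1": {"size": (0.5, 0.5, 1.6)},
    "flower_1": {"size": (0.1, 0.1, 0.4)},
}
space = fuse_task_data(
    space, ["flower_1"],
    {"action": "pick_and_place", "target": "vase_1"},
)
```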
In an embodiment, the processing unit 203 is further specifically configured to:
and acquiring a target twin model corresponding to the scene artificial enhancement task data in the digital twin space model, and fusing the scene artificial enhancement task data and the target twin model to update the digital twin space model.
In this embodiment, a target twin model corresponding to the scene artificial enhancement task data in the digital twin space model is obtained. The target twin model may be a single digital twin model (for example, the digital twin model corresponding to the optimized device data) or multiple digital twin models (for example, several other selected digital twin models in addition to the digital twin model corresponding to the optimized device data). The scene artificial enhancement task data is then added to the target twin model as attribute data of the target twin model, and this attribute data may be triggered by a subsequent operation to display or prompt the user.
The sending unit 201 is configured to send the digital twin space model to the access end when a user access instruction sent by the access end is detected.
In the embodiment of the application, when the server detects a login request from an access end (such as a VR device) and successfully verifies the login information of the access end, it acquires a user access instruction sent by the access end (which can be understood as a user access instruction generated by a robot trainer operating the access end) requesting access to the digital twin space model; at this point, the digital twin space model corresponding to the user access instruction can be acquired and sent to the access end. Thereafter, a virtual model corresponding to the digital twin space model can be viewed on the access end based on VR technology. Therefore, by detecting the access request of the access end, the modeled digital twin space model can be rapidly sent to the access end for display.
In one embodiment, the sending unit 201 is further configured to:
sending prompt data corresponding to the scene artificial enhancement task data to an access end; the prompt data is text prompt data or voice prompt data.
In the embodiment of the application, after the access end receives the digital twin space model and displays it locally, if the operation object corresponding to the access end is a target twin model, the prompt data corresponding to the scene artificial enhancement task data is sent to the access end so as to prompt the user, through text prompt data or voice prompt data, to perform the corresponding operation on the target twin model.
For example, when the scene artificial enhancement task data is used to operate one target twin model to pick up another target twin model and move it to a specified spatial position in the digital twin space model, this can be understood more vividly as the user operating the cloud robot model in the digital twin space model to pick up the model of object A, transport it to the spatial position of the model of object B, and finally place the model of object A in the model of object B. In order to prompt the user intuitively to complete the above operation, the prompt data corresponding to the scene artificial enhancement task data needs to be sent to the access end. By sending text prompt data or voice prompt data corresponding to the scene artificial enhancement task data, the user can be prompted more intuitively to perform the corresponding operation.
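The patent does not specify a prompt format, but as an assumed illustration the text prompt data could be derived directly from the task data (voice prompt data could then be produced from the same string by a separate text-to-speech step, omitted here); `build_text_prompt` and the field names are hypothetical:

```python
def build_text_prompt(task_data):
    """Render scene artificial-enhancement task data as a text prompt."""
    return (f"Operate the {task_data['actor']} to pick up the "
            f"{task_data['object']} and place it in the {task_data['target']}.")

prompt = build_text_prompt(
    {"actor": "cloud robot model", "object": "object A", "target": "object B"})
```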
The receiving unit 202 is configured to obtain operation data corresponding to the digital twin space model sent by the access terminal.
In the embodiment of the application, when the access end receives the digital twin space model and obtains the scene artificial enhancement task data included therein, a target twin model can be selected according to the scene artificial enhancement task data; the target twin model is then operated accordingly to generate corresponding operation data, and the generated operation data is sent to the server by the access end. Therefore, by receiving the operation data sent by the access end, the corresponding operation can be performed on the target twin model in the digital twin space model.
In an embodiment, the processing unit 203 is further specifically configured to:
acquiring space starting point data and space end point data corresponding to the operation data;
acquiring a space starting point object corresponding to the space starting point data and acquiring a space end point object corresponding to the space end point data;
and acquiring a current operation object set consisting of the space starting point object and the space end point object, and judging that the operation data meets a trigger condition corresponding to the scene artificial enhancement task data if the current operation object set is the same as the task object set corresponding to the scene artificial enhancement task data.
In this embodiment of the application, after the server obtains the operation data uploaded by the access end, the space starting point data and the space end point data corresponding to the operation data may be obtained first, because the operation data corresponds to a specific motion trajectory: a space starting point object (compare the above-mentioned model of object A or the cloud robot model in the digital twin space model) may be selected at the start of the motion trajectory, and a space end point object (compare the above-mentioned model of object B) may be selected at its end. After the space starting point object and the space end point object are obtained, it may be determined whether they are the same as the set of task objects corresponding to the scene artificial enhancement task data. If the space starting point object and the space end point object are the same as the task object set corresponding to the scene artificial enhancement task data, this indicates that the user has completed the corresponding operation according to the scene artificial enhancement task data, and at this point it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data. Therefore, whether to send feedback data can be decided by evaluating the operation data.
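The trigger-condition check above can be sketched as a set comparison: the start and end objects of the operation's trajectory form the current operation object set, which must equal the task object set (set equality makes the check order-independent). The function and field names below are assumptions for illustration:

```python
def meets_trigger_condition(operation, task_object_set):
    """True when {start object, end object} equals the task object set."""
    current_set = {operation["start_object"], operation["end_object"]}
    return current_set == set(task_object_set)

# The user moved object A to object B, which is exactly what the task asked.
ok = meets_trigger_condition(
    {"start_object": "object_A", "end_object": "object_B"},
    ["object_A", "object_B"])
```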
The sending unit 201 is further configured to send feedback data corresponding to the operation data to the access end when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data.
In the embodiment of the application, when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data, this indicates that the user has completed the operation corresponding to the scene artificial enhancement task data; at this point, corresponding reward prompt data and feedback data can be triggered and sent to the access end. Therefore, by sending the feedback data, the user can be reminded in a timely manner that the corresponding operation has been successfully completed according to the scene artificial enhancement task data.
In an embodiment, the sending unit 201 is further specifically configured to:
acquiring motion trail data according to the space starting point data and the space end point data;
generating motion animation data according to the space starting point object, the space end point object and the motion trail data;
acquiring task completion prompt data corresponding to the scene artificial enhancement task data;
and forming feedback data by the motion animation data and the task completion prompt data and sending the feedback data to an access end.
In the embodiment of the application, in order to present feedback data with richer data dimensions, the space starting point data and the space end point data can be acquired from the operation data generated based on the scene artificial enhancement task data, and motion trail data can be obtained from the space starting point data and the space end point data; motion animation data is then generated based on the space starting point object and the space end point object targeted by the operation data, together with the motion trail data; pre-stored task completion prompt data is then acquired; and finally the motion animation data and the task completion prompt data are combined into feedback data and sent to the access end. In this way, feedback data fusing multidimensional data is sent to the access end for display, which increases interactivity and enriches the information the user obtains about the completed task.
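The steps above can be sketched as follows, under the assumption (not stated in the patent) that the motion trail is a straight-line interpolation between the space starting point and the space end point; all names are illustrative:

```python
def build_feedback(start_point, end_point, start_obj, end_obj, prompt, steps=3):
    """Combine motion animation data and task-completion prompt data.

    The trail is linearly interpolated from start_point to end_point
    in `steps` samples, then paired with the pre-stored prompt.
    """
    trajectory = [
        tuple(s + (e - s) * t / (steps - 1) for s, e in zip(start_point, end_point))
        for t in range(steps)
    ]
    animation = {"actor": start_obj, "target": end_obj, "trajectory": trajectory}
    return {"animation": animation, "prompt": prompt}

feedback = build_feedback((0.0, 0.0), (2.0, 2.0), "object_A", "object_B",
                          "Task completed!")
```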
The present application further provides a processing device, and referring to fig. 3, fig. 3 shows a schematic structural diagram of the processing device of the present application, and specifically, the processing device of the present application includes a processor, and the processor is configured to implement the steps in the embodiment corresponding to fig. 1 when executing the computer program stored in the memory; alternatively, the processor is configured to implement the functions of the modules in the corresponding embodiment of fig. 2 when executing the computer program stored in the memory.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in a memory and executed by a processor to accomplish the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The processing device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the illustration is merely an example of a processing device and is not limiting: the processing device may include more or fewer components than those illustrated, may combine some components, or may include different components. For example, the processing device may also include input/output devices, network access devices and buses, through which the processor, memory, input/output devices and network access devices are connected.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the processing device, using various interfaces and lines to connect the parts of the overall processing device.
The memory may be used to store computer programs and/or modules, and the processor implements various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and by invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the processing device (such as audio data or video data). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The display screen is used for displaying characters of at least one character type output by the input and output unit.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus, the processing device and the corresponding modules thereof described above may refer to the description in the embodiment corresponding to fig. 1, and are not described herein again in detail.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, where a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in the embodiment corresponding to fig. 1 in the present application, and specific operations may refer to the description in the embodiment corresponding to fig. 1, and are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the embodiment corresponding to fig. 1, the beneficial effects that can be achieved in the embodiment corresponding to fig. 1 can be achieved, and the detailed description is given in the foregoing description, and will not be repeated herein.
The artificial enhancement method, apparatus and storage medium based on digital twins provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help in understanding the methods and core ideas of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for digital twin-based artificial enhancement, the method comprising:
acquiring optimized equipment data obtained based on real equipment data conversion, and mapping the optimized equipment data to an initial digital twin space model through digital twin modeling so as to update to obtain a digital twin space model;
acquiring set scene artificial enhancement task data, and fusing the scene artificial enhancement task data and the digital twin space model to update the digital twin space model;
when a user access instruction sent by an access end is detected, sending the digital twin space model to the access end;
acquiring operation data which is sent by an access end and corresponds to the digital twin space model;
and when the operation data are determined to meet the triggering conditions corresponding to the scene artificial enhancement task data, sending feedback data corresponding to the operation data to an access end.
2. The method of claim 1, wherein obtaining optimized device data transformed based on real device data comprises:
acquiring real equipment data, and correspondingly converting the real equipment data into optimized equipment data according to a preset equipment conversion strategy.
3. The method of claim 1, wherein the obtaining optimized device data and mapping the optimized device data to an initial digital twin space model through digital twin modeling to update a digital twin space model comprises:
obtaining size data included in the optimized device data, and mapping the size data to the initial digital twin space model through 1:1 digital twin modeling so as to obtain an updated digital twin space model, wherein the updated digital twin space model comprises an optimized-device twin model corresponding to the optimized device data;
and acquiring operating parameter data included in the optimized equipment data, and fusing the operating parameter data with an optimized equipment twin model in the digital twin space model to update the digital twin space model.
4. The method of claim 3, wherein the obtaining the set scene artificial enhancement task data and fusing the scene artificial enhancement task data with the digital twin space model to update the digital twin space model comprises:
and acquiring a target twin model corresponding to the scene artificial enhancement task data in the digital twin space model, and fusing the scene artificial enhancement task data and the target twin model to update the digital twin space model.
5. The method of claim 1, wherein after sending the digital twin space model to an access terminal upon detecting a user access command sent by the access terminal, further comprising:
sending prompt data corresponding to the scene artificial enhancement task data to an access end; the prompt data is text prompt data or voice prompt data.
6. The method according to claim 1, wherein said obtaining operation data corresponding to said digital twin space model transmitted by the access terminal further comprises:
acquiring space starting point data and space end point data corresponding to the operation data;
acquiring a space starting point object corresponding to the space starting point data and acquiring a space end point object corresponding to the space end point data;
and acquiring a current operation object set consisting of the space starting point object and the space end point object, and judging that the operation data meets a trigger condition corresponding to the scene artificial enhancement task data if the current operation object set is the same as the task object set corresponding to the scene artificial enhancement task data.
7. The method of claim 6, wherein sending feedback data corresponding to the operational data to an access terminal comprises:
acquiring motion trail data according to the space starting point data and the space end point data;
generating motion animation data according to the space starting point object, the space end point object and the motion trail data;
acquiring task completion prompt data corresponding to the scene artificial enhancement task data;
and forming feedback data by the motion animation data and the task completion prompt data and sending the feedback data to an access end.
8. A smart device, the smart device comprising: a transmitting unit, a receiving unit and a processing unit;
the processing unit is used for acquiring optimized equipment data, and mapping the optimized equipment data to an initial digital twin space model through digital twin modeling so as to update to obtain a digital twin space model;
the processing unit is further configured to acquire the set scene artificial enhancement task data, and fuse the scene artificial enhancement task data with the digital twin space model to update the digital twin space model;
the sending unit is used for sending the digital twin space model to the access end when a user access instruction sent by the access end is detected;
the receiving unit is used for acquiring operation data which is sent by the access end and corresponds to the digital twin space model;
the sending unit is further configured to send feedback data corresponding to the operation data to the access end when it is determined that the operation data meets the trigger condition corresponding to the scene artificial enhancement task data.
9. A processing device comprising a processor and a memory, a computer program being stored in the memory, the processor performing the method according to any of claims 1 to 7 when calling the computer program in the memory.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 7.
CN202111644024.7A 2021-12-29 2021-12-29 Artificial enhancement method, device and medium based on digital twins Active CN114372356B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111644024.7A CN114372356B (en) 2021-12-29 2021-12-29 Artificial enhancement method, device and medium based on digital twins
PCT/CN2022/108917 WO2023124055A1 (en) 2021-12-29 2022-07-29 Digital-twin-based artificial enhancement method and apparatus, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111644024.7A CN114372356B (en) 2021-12-29 2021-12-29 Artificial enhancement method, device and medium based on digital twins

Publications (2)

Publication Number Publication Date
CN114372356A true CN114372356A (en) 2022-04-19
CN114372356B CN114372356B (en) 2023-02-28

Family

ID=81142993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111644024.7A Active CN114372356B (en) 2021-12-29 2021-12-29 Artificial enhancement method, device and medium based on digital twins

Country Status (2)

Country Link
CN (1) CN114372356B (en)
WO (1) WO2023124055A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278110A (en) * 2022-07-12 2022-11-01 时空穿越(深圳)科技有限公司 Information processing method, device and system based on digital twin cabin and readable storage medium
WO2023124055A1 (en) * 2021-12-29 2023-07-06 达闼机器人股份有限公司 Digital-twin-based artificial enhancement method and apparatus, and medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN117291583B (en) * 2023-11-27 2024-02-23 贵州联广科技股份有限公司 Internet of things data management method and system
CN117475041B (en) * 2023-12-28 2024-03-29 湖南视觉伟业智能科技有限公司 Digital twin shore bridge simulation method based on RCMS
CN117781896B (en) * 2024-02-23 2024-05-10 天生桥二级水力发电有限公司 Infrared detection processing method and device suitable for carbon brush of unit and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN110047148A (en) * 2019-04-10 2019-07-23 珠海梅西互动技术有限公司 A kind of the emulation interactive visual system and implementation method of virtual robot work station
CN111161410A (en) * 2019-12-30 2020-05-15 中国矿业大学(北京) Mine digital twinning model and construction method thereof
CN112668687A (en) * 2020-12-01 2021-04-16 达闼机器人有限公司 Cloud robot system, cloud server, robot control module and robot
CN113246122A (en) * 2021-04-26 2021-08-13 广东工贸职业技术学院 Digital twin practical training method and system of industrial robot

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US11584020B2 (en) * 2018-12-04 2023-02-21 Cloudminds Robotics Co., Ltd. Human augmented cloud-based robotics intelligence framework and associated methods
EP3798747A1 (en) * 2019-09-26 2021-03-31 Siemens Aktiengesellschaft Controlling a machine based on an online digital twin
CN112428272A (en) * 2020-11-16 2021-03-02 同济大学 Robot-environment dynamic interactive rendering system and method for digital twin
CN113344505A (en) * 2021-05-11 2021-09-03 广东省科学院智能制造研究所 Sanitary ware product assembly production management system and method based on digital twinning
CN114372356B (en) * 2021-12-29 2023-02-28 达闼机器人股份有限公司 Artificial enhancement method, device and medium based on digital twins

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN110047148A (en) * 2019-04-10 2019-07-23 珠海梅西互动技术有限公司 A kind of the emulation interactive visual system and implementation method of virtual robot work station
CN111161410A (en) * 2019-12-30 2020-05-15 中国矿业大学(北京) Mine digital twinning model and construction method thereof
CN112668687A (en) * 2020-12-01 2021-04-16 达闼机器人有限公司 Cloud robot system, cloud server, robot control module and robot
CN113246122A (en) * 2021-04-26 2021-08-13 广东工贸职业技术学院 Digital twin practical training method and system of industrial robot

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2023124055A1 (en) * 2021-12-29 2023-07-06 达闼机器人股份有限公司 Digital-twin-based artificial enhancement method and apparatus, and medium
CN115278110A (en) * 2022-07-12 2022-11-01 时空穿越(深圳)科技有限公司 Information processing method, device and system based on digital twin cabin and readable storage medium
CN115278110B (en) * 2022-07-12 2023-08-25 时空穿越(深圳)科技有限公司 Information processing method, device and system based on digital twin cabin and readable storage medium

Also Published As

Publication number Publication date
CN114372356B (en) 2023-02-28
WO2023124055A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
CN114372356B (en) Artificial enhancement method, device and medium based on digital twins
US11127311B2 (en) Systems and methods for programming instruction
CN104461318B (en) Reading method based on augmented reality and system
CN111260764B (en) Method, device and storage medium for making animation
KR20210110620A (en) Interaction methods, devices, electronic devices and storage media
WO2021143278A1 (en) Image processing method and apparatus, and electronic device and storage medium
US9595202B2 (en) Programming learning center
CN112836064A (en) Knowledge graph complementing method and device, storage medium and electronic equipment
CN103548012A (en) Remotely emulating computing devices
AU2014101627A4 (en) Computer-implemented frameworks and methodologies for generating, delivering and managing adaptive tutorials
US20160328984A1 (en) Computer-implemented frameworks and methodologies for enabling adaptive functionality based on a knowledge model
CN109848985A (en) A kind of the graphical programming method, apparatus and intelligent terminal of robot
CN111643899A (en) Virtual article display method and device, electronic equipment and storage medium
CN112911052A (en) Information sharing method and device
CN116543082A (en) Digital person generation method and device and digital person generation system
CN109815557B (en) Robot model display method and device and intelligent terminal
CN115797517A (en) Data processing method, device, equipment and medium of virtual model
CN112435316B (en) Method and device for preventing mold penetration in game, electronic equipment and storage medium
CN112221124B (en) Virtual object generation method and device, electronic equipment and storage medium
CN115617429A (en) Data processing method and related equipment
CN113476833A (en) Game action recognition method and device, electronic equipment and storage medium
JPH1166351A (en) Method and device for controlling object operation inside three-dimensional virtual space and recording medium recording object operation control program
CN210119873U (en) Supervision device based on VR equipment
CN114489416A (en) Operation demonstration method and device, electronic equipment and storage medium
CN117215554A (en) Simulation method, related device, equipment and storage medium of virtual object

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant