CN112428272A - Robot-environment dynamic interactive rendering system and method for digital twin - Google Patents

Robot-environment dynamic interactive rendering system and method for digital twin Download PDF

Info

Publication number
CN112428272A
CN112428272A CN202011276714.7A CN202011276714A CN112428272A CN 112428272 A CN112428272 A CN 112428272A CN 202011276714 A CN202011276714 A CN 202011276714A CN 112428272 A CN112428272 A CN 112428272A
Authority
CN
China
Prior art keywords
environment
robot
contact
digital twin
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011276714.7A
Other languages
Chinese (zh)
Inventor
何斌
李鑫
王志鹏
朱忠攀
李刚
周艳敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202011276714.7A priority Critical patent/CN112428272A/en
Publication of CN112428272A publication Critical patent/CN112428272A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1605Simulation of manipulator lay-out, design, modelling of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a robot-environment dynamic interactive rendering system facing to digital twins, which comprises a physical space, a communication interface and a digital twins space; the physical space comprises a robot, an action environment, a data sensor and a main end controller; the data sensor is arranged in a working scene, is used for collecting data and is connected with the communication interface through the data interface; the main controller controls the operation of the robot and the information transmission of the physical space; the communication interface is used for connecting the physical space and the digital twin digital space to realize real-time data communication; the digital twin space comprises a geometric expression module, a physical expression module and a three-dimensional display and control platform.

Description

Robot-environment dynamic interactive rendering system and method for digital twin
Technical Field
The invention relates to the field of robot-environment interaction, in particular to a system and a method for rendering robot-environment dynamic interaction facing to digital twins.
Background
In a robot-environment interaction-oriented scene, due to the complexity and uncertainty of a working environment and polymorphism and variability of interaction, an accurate, stable and efficient machine-loop interaction process cannot be guaranteed. The intelligent manufacturing means that advanced manufacturing technology and new generation information technology are deeply fused, such as big data, internet and artificial intelligence technology, the digitization, networking and intelligence of the manufacturing are realized, the product quality, benefit and service level are continuously improved, countries in the world have developed respective advanced manufacturing development strategies, such as the American industrial internet and the Germany industry 4.0, one of the purposes is to borrow the new generation information technology to realize the interconnection and intelligent operation of the physical world and the information world, and further realize an intelligent system. The emergence and rapid development of the digital twin technology provides a new idea for achieving the purposes. The digital twins are a virtual model of a physical entity created in a digital mode, the behavior of the physical entity in a real environment is simulated by means of data, the monitoring, optimization and regulation of the physical system are realized by means of virtual-real interaction feedback, data fusion analysis, decision iteration and the like, a digital twins system oriented to a machine-ring interaction scene is established, the machine-ring interaction can be effectively monitored, simulated, optimized, regulated and the like, and machine-ring stability and high-efficiency interaction are realized. How to establish an accurate and real-time digital twin system to simulate robot-environment interaction is a difficult point to be solved urgently, and the key is to establish a digital twin interaction model with faithful mapping and high fidelity characteristics.
At present, dynamic interactive modeling of three-dimensional objects is mainly based on computer graphics technology to realize the deformation effect of virtual objects, and focuses on visualization effects, such as video games, virtual simulation and other fields. The main methods include free deformation, framework driven deformation, deformation based on physics and deformation based on mesh curved surface. The deformation based on the mesh curved surface only needs a small number of vertexes of the operation surface, and the mesh surface of the three-dimensional object can be directly operated, so that the deformation is realized, and the real-time interactive deformation operation is greatly facilitated. The physical modeling facing the robot-environment interactive digital twin system is to simulate the response of the environment to the applied excitation during the real machine-ring dynamic interaction, and not only needs to consider the real-time visualization requirement of the three-dimensional object interactive response, but also needs to consider the response characteristic of the real environment in the existing scene. Furthermore, the environment is dynamic, i.e. before the contact is performed, the physical properties of the environment are unknown and will change continuously during the performance as the contacting object changes.
Accordingly, those skilled in the art have endeavored to develop a digital twin-oriented robot-environment dynamic interactive rendering system and method.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the technical problem to be solved by the present invention is to provide a system and a method for digital twin-oriented robot-environment dynamic interactive rendering.
To achieve the above object, the present invention provides in a first aspect a digital twin-oriented robot-environment dynamic interactive rendering system, comprising a physical space, a communication interface, and a digital twin space; the physical space comprises a robot, an action environment, a data sensor and a main end controller; the data sensor is arranged in a working scene, is used for collecting data and is connected with the communication interface through the data interface; the main controller controls the operation of the robot and the information transmission of the physical space; the communication interface is used for connecting the physical space and the digital twin digital space to realize real-time data communication; the digital twin space comprises a geometric expression module, a physical expression module and a three-dimensional display and control platform; wherein the geometric expression module implements geometric modeling of the object in physical space; the physical expression module comprises an interactive dynamics module and a deformation rendering module, wherein the interactive dynamics module comprises an environment contact dynamics model; the environment contact dynamic model is used for providing a virtual contact force between the manipulator and the environment; the deformation rendering module introduces the environmental dynamics characteristic under a real interactive scene by adopting a deformation method based on a mesh curved surface, and directly operates the mesh surface of the three-dimensional object; the three-dimensional display and control platform is used for carrying out immersive rendering on the multi-dimensional virtual model and the calculation result.
Further, the acquired data comprises geometric information and physical information of the working scene, wherein the geometric information comprises the shape, size and position of an entity in the working scene; the physical information comprises contact force of the robot contacting with the environment; the data sensor includes an RGB visual sensor and a moment sensor.
Furthermore, the geometric expression module adopts a unified robot description format to describe a mechanical arm connecting rod and a joint of the robot and relative positions of the mechanical arm connecting rod and the joint; and completing point cloud fusion of the contact environment at multiple angles based on an ICP (iterative closed Point) registration algorithm, reconstructing a three-dimensional model of the contact environment, and increasing texture and color information by using OpenGL.
Further, the environmental contact kinetic model used a Kelvin-Voigt model:
Figure BDA0002779307480000021
wherein FP(k) For the contact force applied at point p on the surrounding surface at time k, [ theta (k) ═ k (k), B (k)]T,
Figure BDA0002779307480000022
K and B represent the stiffness and damping of the environment, x and
Figure BDA0002779307480000023
respectively representing contact point displacement and speed; and obtaining the displacement, the speed and the contact force of the contact point by using a visual sensor, realizing the online identification of model parameters by adopting a self-disturbance recursive least square method, estimating the rigidity and the damping coefficient of the environment, and finishing the dynamic modeling.
Further, in the deformation rendering module, the position of the vertex i at the time t is set as Si(t),Si∈SNThen the position at t + Δ t is expressed as
Si(t+Δt)=Si(t)+Φi(Si,Ni,SP,FP,B,K|t)*(1/r);
Where r 1/Δ t is the rendering rate, Φi(Si,Ni,SP,FPTable of B, K | t)Shows a vertex displacement function and has
Figure BDA0002779307480000024
Figure BDA0002779307480000025
Figure BDA0002779307480000026
Figure BDA0002779307480000027
Wherein N isiIs a normal line, SPIs the position of the contact point, FpIs the contact force; k is the environmental rigidity and B is the damping coefficient; dj,pThe distance from each vertex to the contact point; σ is the standard deviation and R is the radius of deformation.
The invention relates to a robot-environment dynamic interactive rendering method facing to digital twin in a second aspect, which comprises the following steps: the data sensor is arranged on the robot in a physical space working scene and in the environment, is used for collecting data and is connected with the communication interface through the data interface; the operation of the robot and the information transmission of a physical space are controlled through a main end controller; connecting the physical space and the digital twin space through a communication interface to realize real-time data communication; carrying out geometric modeling on an object in a physical space; providing a virtual contact force between the manipulator and the environment through an environment contact dynamics model; in a digital twin space, a deformation rendering module adopts a deformation method based on a mesh curved surface, introduces the environmental dynamics characteristics under a real interactive scene, and directly operates the mesh surface of a three-dimensional object; and carrying out immersive rendering on the multi-dimensional virtual model and the calculation result.
Further, the acquired data comprises geometric information and physical information of the working scene, wherein the geometric information comprises the shape, size and position of an entity in the working scene; the physical information comprises contact force of the robot contacting with the environment; the data sensor includes an RGB visual sensor and a moment sensor.
Furthermore, when geometric modeling is carried out, a unified robot description format is adopted to describe a mechanical arm connecting rod and a joint of the robot and relative positions of the mechanical arm connecting rod and the joint; and completing point cloud fusion of the contact environment at multiple angles based on an ICP (iterative closed Point) registration algorithm, reconstructing a three-dimensional model of the contact environment, and increasing texture and color information by using OpenGL.
Further, the environmental contact kinetic model used a Kelvin-Voigt model:
Figure BDA0002779307480000031
wherein FP(k) For the contact force applied at point p on the surrounding surface at time k, [ theta (k) ═ k (k), B (k)]T,
Figure BDA0002779307480000032
K and B represent the stiffness and damping of the environment, x and
Figure BDA0002779307480000033
respectively representing contact point displacement and speed; and obtaining the displacement, the speed and the contact force of the contact point by using a visual sensor, realizing the online identification of model parameters by adopting a self-disturbance recursive least square method, estimating the rigidity and the damping coefficient of the environment, and finishing the dynamic modeling.
Further, in the deformation rendering module, the position of the vertex i at the time t is set as Si(t),Si∈SNThen the position at t + Δ t is expressed as
Si(t+Δt)=Si(t)+Φi(Si,Ni,SP,FP,B,K|t)*(1/r);
Where r 1/Δ t is the rendering rate, Φi(Si,Ni,SP,FPB, K | t) represents a vertex displacement function and has
Figure BDA0002779307480000034
Figure BDA0002779307480000035
Figure BDA0002779307480000036
Figure BDA0002779307480000037
Wherein N isiIs a normal line, SPIs the position of the contact point, FpIs the contact force; k is the environmental rigidity and B is the damping coefficient; dj,pThe distance from each vertex to the contact point; σ is the standard deviation and R is the radius of deformation.
The invention obtains robot-environment interaction information through sensors such as vision, force and the like, establishes an environment contact dynamic model, and adopts an improved deformation algorithm based on a mesh curved surface to realize real-time deformation rendering of a virtual object based on the dynamic parameters of the environment.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic diagram of the overall structure of a preferred embodiment of the present invention;
FIG. 2 is a flowchart of rendering a mesh surface-based virtual object deformation in a preferred embodiment of the present invention;
FIG. 3 is a diagram illustrating the deformation rendering effect of the elastic three-dimensional film under different forces according to a preferred embodiment of the present invention;
FIG. 4 is a diagram of a robot-environment interaction digital twin in a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
As shown in fig. 1-4, a robot-environment dynamic interactive rendering system facing digital twin according to the present invention includes a physical space, a communication interface, and a digital twin digital space.
The physical space consists of a robot, an action environment, a data sensor, a main end controller and various devices. The data sensor is arranged in a working scene and used for collecting related data and is connected with the communication interface through different data interfaces. The data mainly comprises geometric information and physical information of a working scene, wherein the geometric information comprises three-dimensional information such as the shape, the size, the dimension and the position of a main entity in the working scene. The physical information is mainly the contact force of the robot in contact with the environment. Specifically, depth information and color information of an environment in a work scene are acquired from multiple angles through a plurality of depth and RGB visual sensors arranged around a work space; the method comprises the following steps of obtaining information such as movement force, moment direction and action point through sensors such as moment and the like arranged on joints of the robot; and the main controller controls the operation of the robot and the information transmission of the physical space.
The communication interface comprises a wireless communication interface and a wired communication interface, mainly comprises Wifi, 5G, TCP/IP and the like, and is used for connecting a physical space and a digital twin digital space and ensuring reliable real-time data communication. The digital space comprises a geometric expression module, a physical expression module and a three-dimensional display and control platform. The geometric representation module implements geometric modeling of the primary objects in physical space, including the robotic arms and the contact environment. Wherein, a unified robot description format is adopted to describe the mechanical arm connecting rod, the joint and the relative position of the mechanical arm connecting rod and the joint; and completing point cloud fusion of the contact environment at multiple angles based on an ICP (iterative closed Point) registration algorithm, reconstructing a three-dimensional model of the contact environment, and finally increasing texture and color information by using OpenGL to provide a display basis for the calculation result of the physical expression module. The physical expression module comprises an interactive dynamics module and a deformation rendering module, and the interactive dynamics module comprises an environment contact dynamics model and a dynamics parameter online identification algorithm. The environmental contact dynamics model provides a virtual contact force between the manipulator and the environment. A suitable contact kinetics model is selected based on the following assumptions: the end effector is rigid, and the contact area is small; the object is static and the surface is smooth. Based on this assumption, the simplicity and clear physical representation of the Kelvin-Voigt model makes it suitable for use in the present system, with its discrete representation as shown in equation (1).
Figure BDA0002779307480000051
Wherein FP(k) For the contact force applied at point p on the surrounding surface at time k, [ theta (k) ═ k (k), B (k)]T,
Figure BDA0002779307480000052
K and B represent the stiffness and damping of the environment, x and
Figure BDA0002779307480000053
respectively, contact point displacement and velocity. The displacement, the velocity quantity and the contact force of a contact point are obtained by using a vision sensor, then the online identification of model parameters is realized by adopting a self-disturbance recursive least square method, the rigidity and the damping coefficient of the environment are estimated, and the machine-environment interactive dynamics modeling under the non-static environment is completed.
The deformation rendering module adopts an improved deformation technology based on a mesh curved surface, introduces the environmental dynamics characteristic under a real interactive scene, and directly operates the mesh surface of the three-dimensional object. To simplify the analysis, the following are assumed: first, all vertices are no mass, stiffness and stand alone units, no spring connection. Secondly, the dynamic properties of the object are isotropic, including stiffness and damping parameters. Let the position of the vertex i at the time t be Si(t),Si∈SNThen the position at t + Δ t can be expressed as
Si(t+Δt)=Si(t)+Φi(Si,Ni,SP,FP,B,K|t)*(1/r) (2)
Where r 1/Δ t is the rendering rate, Φi(Si,Ni,SP,FPB, K | t) represents the vertex displacement function, which follows the normal NiContact point position SPContact force FpEnvironmental stiffness and damping coefficient. Therefore, the key to implementing morphed rendering is to establish the change function of the vertices. The displacement function of the vertices is established as follows.
(1) Determining direction of deformation
The deformation direction can be determined by the vector sum of vertex normals, and since the distances from each vertex to the contact point are different, the closer the distance is, the greater the influence of the vertex on the final deformation direction is, the deformation direction can be calculated by adopting a vertex weighted average method, and the expression is shown in formula (3). Wherein Dj,pThe distance from each vertex to the point of contact.
Figure BDA0002779307480000054
(2) Calculating a displacement factor
The displacement factor represents the effect of the contact force on each vertex and, obviously, is also related to the distance from the contact point to the vertex, the closer the distance, the greater the displacement factor. In order to realize smooth transition of the deformation surface, a displacement factor is calculated by adopting a Gaussian fuzzy algorithm, and a calculation expression is shown as an expression (4). Where σ is the standard deviation and R is the radius of deformation.
Figure BDA0002779307480000055
Thus, the force applied at the vertex j is
Fj(t)=Gj(t)*Fp(t) (5)
(3) Constructing a Displacement function
However, the above analysis only considers the influence of the contact force simply, and in order to simulate the deformation behavior of the surface of the virtual object more accurately, it is necessary to increase the environmental dynamics affecting the deformation. The environment in real scenes is a solid that resists deformation by compression or stretching as they deform. To represent this phenomenon, a spring is established connecting between the two versions of each vertex, increasing the stiffness characteristic of the environment, and furthermore, to prevent permanent oscillations, a damping effect is also taken into account. Based on this, formula (5) can be rewritten as shown in formula (6). Wherein the stiffness K and the damping B can be estimated by the interactive dynamics module.
Figure BDA0002779307480000061
According to newton's second law, the displacement function of the vertex j can be calculated by equation (7).
Figure BDA0002779307480000062
And (3) substituting the formula (7) into the formula (2) to obtain the position of any vertex at the moment t + delta t, thereby finishing the rendering of the surface deformation of the virtual object.
Due to the fact that the data are displayed in a multi-source and complex interactive environment, the three-dimensional display and control platform utilizes a Unity3D development tool to conduct immersive rendering on a multi-dimensional virtual model and a calculation result based on a virtual reality technology, and provides a human-assembly system interactive interface, so that an operator can observe the machine-environment interactive condition in real time in the three-dimensional environment and conduct manual intervention guidance on the machine-environment interactive condition when necessary.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A robot-environment dynamic interactive rendering system facing digital twin is characterized by comprising a physical space, a communication interface and a digital twin space; wherein,
the physical space comprises a robot, an action environment, a data sensor and a main end controller; wherein
The data sensor is arranged in a working scene, is used for collecting data and is connected with the communication interface through a data interface;
the main controller controls the operation of the robot and the information transmission of the physical space;
the communication interface is used for connecting the physical space and the digital twin digital space to realize real-time data communication;
the digital twin space comprises a geometric expression module, a physical expression module and a three-dimensional display and control platform; wherein
The geometric expression module realizes geometric modeling of an object in a physical space;
the physical expression module comprises an interactive dynamics module and a deformation rendering module, wherein,
the interaction dynamics module comprises an environmental contact dynamics model; the environment contact dynamic model is used for providing a virtual contact force between the manipulator and the environment; the deformation rendering module introduces the environmental dynamics characteristic under a real interactive scene by adopting a deformation method based on a mesh curved surface, and directly operates the mesh surface of the three-dimensional object;
the three-dimensional display and control platform is used for carrying out immersive rendering on the multi-dimensional virtual model and the calculation result.
2. The digital twin-oriented robot-environment dynamic interactive rendering system of claim 1, wherein the collected data comprises geometrical information, physical information of the work scene, wherein the geometrical information comprises shape, size and position of the entity in the work scene; the physical information comprises contact force of the robot contacting with the environment; the data sensor includes an RGB visual sensor and a moment sensor.
3. The digital twin-oriented robot-environment dynamic interactive rendering system as claimed in claim 2, wherein the geometric expression module describes the mechanical arm links and joints of the robot and their relative positions in a unified robot description format; and completing point cloud fusion of the contact environment at multiple angles based on an ICP (iterative closed Point) registration algorithm, reconstructing a three-dimensional model of the contact environment, and increasing texture and color information by using OpenGL.
4. The digital twin oriented robot-environment dynamic interactive rendering system of claim 3, wherein the environment contact dynamics model employs a Kelvin-Voigt model:
Figure FDA0002779307470000011
wherein FP(k) For the contact force applied at point p on the surrounding surface at time k, [ theta (k) ═ k (k), B (k)]T
Figure FDA0002779307470000012
K and B represent the stiffness and damping of the environment, x and
Figure FDA0002779307470000013
respectively representing contact point displacement and speed; and obtaining the displacement, the speed and the contact force of the contact point by using a visual sensor, realizing the online identification of model parameters by adopting a self-disturbance recursive least square method, estimating the rigidity and the damping coefficient of the environment, and finishing the dynamic modeling.
5. The digital twin-oriented robot-environment dynamic interactive rendering system according to claim 4, wherein in the deformation rendering module, the position of the vertex i at the time t is set as Si(t),Si∈SNThen the position at t + Δ t is expressed as
Si(t+Δt)=Si(t)+Φi(Si,Ni,SP,FP,B,K|t)*(1/r);
Where r 1/Δ t is the rendering rate, Φi(Si,Ni,SP,FPB, K | t) represents a vertex displacement function and has
Figure FDA0002779307470000021
Figure FDA0002779307470000022
Figure FDA0002779307470000023
Figure FDA0002779307470000024
Wherein N isiIs a normal line, SPIs the position of the contact point, FpIs the contact force; k is the environmental rigidity and B is the damping coefficient; dj,pThe distance from each vertex to the contact point; σ is the standard deviation and R is the radius of deformation.
6. A robot-environment dynamic interactive rendering method facing digital twin is characterized by comprising the following steps:
the data sensor is arranged on the robot in a physical space working scene and in the environment, is used for collecting data and is connected with the communication interface through the data interface;
the operation of the robot and the information transmission of a physical space are controlled through a main end controller;
connecting the physical space and the digital twin space through a communication interface to realize real-time data communication;
carrying out geometric modeling on an object in a physical space;
providing a virtual contact force between the manipulator and the environment through an environment contact dynamics model;
in a digital twin space, a deformation rendering module adopts a deformation method based on a mesh curved surface, introduces the environmental dynamics characteristics under a real interactive scene, and directly operates the mesh surface of a three-dimensional object;
and carrying out immersive rendering on the multi-dimensional virtual model and the calculation result.
7. The digital twin-oriented robot-environment dynamic interactive rendering method as claimed in claim 6, wherein the collected data includes geometrical information, physical information of the work scene, wherein the geometrical information includes shape, size and position of the entity in the work scene; the physical information comprises contact force of the robot contacting with the environment; the data sensor includes an RGB visual sensor and a moment sensor.
8. The digital twin-oriented robot-environment dynamic interactive rendering method as claimed in claim 7, wherein, in the geometric modeling, a unified robot description format is adopted to describe the mechanical arm connecting rod and the joint of the robot and the relative positions of the mechanical arm connecting rod and the joint; and completing point cloud fusion of the contact environment at multiple angles based on an ICP (iterative closed Point) registration algorithm, reconstructing a three-dimensional model of the contact environment, and increasing texture and color information by using OpenGL.
9. The digital twin oriented robot-environment dynamic interactive rendering method of claim 8, wherein the environment contact dynamics model adopts a Kelvin-Voigt model:
Figure FDA0002779307470000025
wherein FP(k) For the contact force applied at point p on the surrounding surface at time k, [ theta (k) ═ k (k), B (k)]T
Figure FDA0002779307470000026
K andb represents the stiffness and damping of the environment, x and
Figure FDA0002779307470000027
respectively representing contact point displacement and speed; and obtaining the displacement, the speed and the contact force of the contact point by using a visual sensor, realizing the online identification of model parameters by adopting a self-disturbance recursive least square method, estimating the rigidity and the damping coefficient of the environment, and finishing the dynamic modeling.
10. The digital twin-oriented robot-environment dynamic interactive rendering method according to claim 9, wherein in the morphing rendering module, the position of the vertex i at the time t is set as Si(t),Si∈SNThen the position at t + Δ t is expressed as
Si(t+Δt)=Si(t)+Φi(Si,Ni,SP,FP,B,K|t)*(1/r);
Where r 1/Δ t is the rendering rate, Φi(Si,Ni,SP,FPB, K | t) represents a vertex displacement function and has
Figure FDA0002779307470000031
Figure FDA0002779307470000032
Figure FDA0002779307470000033
Figure FDA0002779307470000034
Wherein N isiIs a normal line, SPIs the position of the contact point, FpIs the contact force; k is the environmental rigidity and B is the damping coefficient; dj,pThe distance from each vertex to the contact point; σ is the standard deviation and R is the radius of deformation.
CN202011276714.7A 2020-11-16 2020-11-16 Robot-environment dynamic interactive rendering system and method for digital twin Pending CN112428272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011276714.7A CN112428272A (en) 2020-11-16 2020-11-16 Robot-environment dynamic interactive rendering system and method for digital twin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011276714.7A CN112428272A (en) 2020-11-16 2020-11-16 Robot-environment dynamic interactive rendering system and method for digital twin

Publications (1)

Publication Number Publication Date
CN112428272A true CN112428272A (en) 2021-03-02

Family

ID=74700341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011276714.7A Pending CN112428272A (en) 2020-11-16 2020-11-16 Robot-environment dynamic interactive rendering system and method for digital twin

Country Status (1)

Country Link
CN (1) CN112428272A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113917851A (en) * 2021-09-16 2022-01-11 北京天玛智控科技股份有限公司 Virtual test environment construction method based on digital twinning
CN113954066A (en) * 2021-10-14 2022-01-21 国电南瑞科技股份有限公司 Distribution network operation robot control method and device based on digital twin system
CN114505852A (en) * 2021-12-07 2022-05-17 中国科学院沈阳自动化研究所 Man-machine cooperation solid fuel shaping system based on digital twin and establishment method
CN115070780A (en) * 2022-08-24 2022-09-20 北自所(北京)科技发展股份有限公司 Industrial robot grabbing method and device based on digital twinning and storage medium
CN115556112A (en) * 2022-10-28 2023-01-03 北京理工大学 Robot teleoperation method and system based on digital twins
WO2023124055A1 (en) * 2021-12-29 2023-07-06 达闼机器人股份有限公司 Digital-twin-based artificial enhancement method and apparatus, and medium
CN116843831A (en) * 2023-06-20 2023-10-03 成都信息工程大学 Agricultural product storage fresh-keeping warehouse twin data management method and system
CN116863804A (en) * 2023-09-05 2023-10-10 锱云(上海)物联网科技有限公司 Machining demonstration method and machining demonstration system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102322798A (en) * 2011-08-18 2012-01-18 大连康基科技有限公司 Industrial measuring system based on optical imaging
CN103869983A (en) * 2014-03-26 2014-06-18 南京信息工程大学 Flexible object deformation simulation method for force haptic human-computer interaction
CN110119566A (en) * 2019-05-08 2019-08-13 武汉理工大学 A kind of cutting depth prediction technique and device suitable for the grinding and polishing of complex-curved robot abrasive band
CN110851966A (en) * 2019-10-30 2020-02-28 同济大学 Digital twin model correction method based on deep neural network
EP3623884A1 (en) * 2018-09-12 2020-03-18 Siemens Aktiengesellschaft Production or machine tool and method of operating a production or machine tool
WO2020055903A1 (en) * 2018-09-10 2020-03-19 Fanuc America Corporation Robot calibration for ar and digital twin
CN111208759A (en) * 2019-12-30 2020-05-29 中国矿业大学(北京) Digital twin intelligent monitoring system for unmanned fully mechanized coal mining face of mine
CN111274671A (en) * 2019-12-31 2020-06-12 东南大学 Precise repairing and assembling method for complex product assembling process based on digital twinning and operation system thereof
CN111708332A (en) * 2020-05-28 2020-09-25 上海航天精密机械研究所 Digital twin system of production line
CN111862298A (en) * 2020-06-09 2020-10-30 山东捷瑞数字科技股份有限公司 Coating line-oriented digital twin spraying simulation system and method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102322798A (en) * 2011-08-18 2012-01-18 大连康基科技有限公司 Industrial measuring system based on optical imaging
CN103869983A (en) * 2014-03-26 2014-06-18 南京信息工程大学 Flexible object deformation simulation method for force haptic human-computer interaction
WO2020055903A1 (en) * 2018-09-10 2020-03-19 Fanuc America Corporation Robot calibration for ar and digital twin
EP3623884A1 (en) * 2018-09-12 2020-03-18 Siemens Aktiengesellschaft Production or machine tool and method of operating a production or machine tool
CN110119566A (en) * 2019-05-08 2019-08-13 武汉理工大学 A kind of cutting depth prediction technique and device suitable for the grinding and polishing of complex-curved robot abrasive band
CN110851966A (en) * 2019-10-30 2020-02-28 同济大学 Digital twin model correction method based on deep neural network
CN111208759A (en) * 2019-12-30 2020-05-29 中国矿业大学(北京) Digital twin intelligent monitoring system for unmanned fully mechanized coal mining face of mine
CN111274671A (en) * 2019-12-31 2020-06-12 东南大学 Precise repairing and assembling method for complex product assembling process based on digital twinning and operation system thereof
CN111708332A (en) * 2020-05-28 2020-09-25 上海航天精密机械研究所 Digital twin system of production line
CN111862298A (en) * 2020-06-09 2020-10-30 山东捷瑞数字科技股份有限公司 Coating line-oriented digital twin spraying simulation system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HIROYUKI ISHII: "Development and Experimental Evaluation of OralRehabilitation Robot that Provides Maxillofacial Massage to Patients withOral Disorders", 《DEVELOPMENT AND EXPERIMENTAL EVALUATION OF ORALREHABILITATION ROBOT THAT PROVIDES MAXILLOFACIAL MASSAGE TO PATIENTS WITHORAL DISORDERS》 *
XIN LI: "Multisource Model-Driven Digital Twin System of Robotic Assembly", 《MULTISOURCE MODEL-DRIVEN DIGITAL TWIN SYSTEM OF ROBOTIC ASSEMBLY》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113917851A (en) * 2021-09-16 2022-01-11 北京天玛智控科技股份有限公司 Virtual test environment construction method based on digital twinning
CN113954066A (en) * 2021-10-14 2022-01-21 国电南瑞科技股份有限公司 Distribution network operation robot control method and device based on digital twin system
CN113954066B (en) * 2021-10-14 2023-02-21 国电南瑞科技股份有限公司 Digital twin system-based distribution network operation robot control method and device
CN114505852A (en) * 2021-12-07 2022-05-17 中国科学院沈阳自动化研究所 Man-machine cooperation solid fuel shaping system based on digital twin and establishment method
WO2023124055A1 (en) * 2021-12-29 2023-07-06 达闼机器人股份有限公司 Digital-twin-based artificial enhancement method and apparatus, and medium
CN115070780A (en) * 2022-08-24 2022-09-20 北自所(北京)科技发展股份有限公司 Industrial robot grabbing method and device based on digital twinning and storage medium
CN115070780B (en) * 2022-08-24 2022-11-18 北自所(北京)科技发展股份有限公司 Industrial robot grabbing method and device based on digital twinning and storage medium
CN115556112A (en) * 2022-10-28 2023-01-03 北京理工大学 Robot teleoperation method and system based on digital twins
CN116843831A (en) * 2023-06-20 2023-10-03 成都信息工程大学 Agricultural product storage fresh-keeping warehouse twin data management method and system
CN116843831B (en) * 2023-06-20 2024-03-15 成都信息工程大学 Agricultural product storage fresh-keeping warehouse twin data management method and system
CN116863804A (en) * 2023-09-05 2023-10-10 锱云(上海)物联网科技有限公司 Machining demonstration method and machining demonstration system

Similar Documents

Publication Publication Date Title
CN112428272A (en) Robot-environment dynamic interactive rendering system and method for digital twin
Li et al. Multisource model-driven digital twin system of robotic assembly
CN110026987B (en) Method, device and equipment for generating grabbing track of mechanical arm and storage medium
CN104484522B (en) A kind of construction method of robot simulation's drilling system based on reality scene
JP6987508B2 (en) Shape estimation device and method
CN106709975B (en) A kind of interactive three-dimensional facial expression animation edit methods, system and extended method
CN110385694A (en) Action teaching device, robot system and the robot controller of robot
Shen et al. Acid: Action-conditional implicit visual dynamics for deformable object manipulation
Bi et al. Zero-shot sim-to-real transfer of tactile control policies for aggressive swing-up manipulation
Kudoh et al. Painting robot with multi-fingered hands and stereo vision
CN112512755A (en) Robotic manipulation using domain-invariant 3D representations predicted from 2.5D visual data
JP4942924B2 (en) A method of moving a virtual articulated object in a virtual environment by continuous motion
CN115578236A (en) Pose estimation virtual data set generation method based on physical engine and collision entity
Li et al. Depth camera based remote three-dimensional reconstruction using incremental point cloud compression
Taylor et al. VR props: an end-to-end pipeline for transporting real objects into virtual and augmented environments
CN115578460A (en) Robot grabbing method and system based on multi-modal feature extraction and dense prediction
JP2022184829A (en) Deep parameterization for three-dimensional shape optimization
CN114131616A (en) Three-dimensional virtual force field visual enhancement method applied to mechanical arm control
Kim et al. Digital twin for autonomous collaborative robot by using synthetic data and reinforcement learning
Yuan et al. Presim: A 3d photo-realistic environment simulator for visual ai
Frank et al. Learning deformable object models for mobile robot navigation using depth cameras and a manipulation robot
Knopf et al. Deformable mesh for virtual shape sculpting
CN115481489A (en) System and method for verifying suitability of body-in-white and production line based on augmented reality
Alenya et al. 3d object reconstruction from swissranger sensor data using a spring-mass model
Li et al. DeformNet: Latent Space Modeling and Dynamics Prediction for Deformable Object Manipulation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination