WO2024055397A1 - Digital twin-based observation perspective tracking method, system and terminal - Google Patents

Digital twin-based observation perspective tracking method, system and terminal

Info

Publication number
WO2024055397A1
WO2024055397A1 PCT/CN2022/129723 CN2022129723W WO2024055397A1 WO 2024055397 A1 WO2024055397 A1 WO 2024055397A1 CN 2022129723 W CN2022129723 W CN 2022129723W WO 2024055397 A1 WO2024055397 A1 WO 2024055397A1
Authority
WO
WIPO (PCT)
Prior art keywords
human body
digital twin
digital
information
several assembly
Prior art date
Application number
PCT/CN2022/129723
Other languages
English (en)
French (fr)
Inventor
周世林
付傲然
Original Assignee
上海智能制造功能平台有限公司
Priority date
Filing date
Publication date
Application filed by 上海智能制造功能平台有限公司
Publication of WO2024055397A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/207 Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/221 Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to the field of virtual reality technology, and more specifically to a digital twin-based observation perspective tracking method, system and terminal.
  • Digital twins map the physical world into virtual space by digital means and use Internet of Things technology to collect data to the digital twin platform in real time; working-area conditions are fed back to the digital twin platform in real time over a wireless network, which shows that digital twins have the potential to break the barriers between physical space and cyberspace in smart manufacturing.
  • In real manufacturing environments, however, the observer's position information cannot be transmitted to the digital twin platform, so the perspective of the working environment presented by the platform is relatively fixed or can only move along preset rules. This cannot faithfully reproduce the scene actually observed by workers, which may affect their assessment of workshop functions, performance, risks and the like.
  • A search found the Chinese patent with application number CN202010902457.7, which discloses a method for constructing a digital twin of the human skeleton: based on the real human skeleton, VR motion capture and sensor technology are used to collect data at important positions of the human body, and artificial intelligence is used to classify, filter, reduce and compute the data to obtain key data.
  • The spatial orientation and mechanical information of the target bone are then obtained by solving the key data with human inverse dynamics and biomechanics algorithms; part of the sensor data is merged with the computed results to simulate the target bone and obtain its biomechanical properties, and a variety of prediction algorithms predict the biomechanical properties of the bone in unknown postures; finally, the performance data is modeled and rendered to obtain a high-fidelity digital twin of the real bone, achieving a faithful twin mapping of the bone's biomechanical properties.
  • Under various human postures, that method can compute the biomechanical properties of the target bone in real time using wearable VR equipment and a small number of sensors, enabling real-time health monitoring of the target bone. Although it incorporates human pose and motion information, it does not solve the problem, required in the Internet of Things field, of feeding working-area conditions back to the digital twin platform in real time over a wireless network.
  • The purpose of the present invention is to provide a digital twin-based observation perspective tracking method, system and terminal.
  • A digital twin-based observation perspective tracking method, comprising:
  • the display interface displays the scene of observing the several assembly stations from the position of the human body.
  • Establishing several assembly stations for performing the same operation includes:
  • the assembly stations are configured identically, each comprising an assembly platform, a material transfer platform, a transfer trolley and a working robot;
  • the assembly stations and the working robots are respectively connected to a bus;
  • Realizing digital twins of the several assembly stations through the Unreal Engine includes:
  • the bus obtains the state information and the real-time pose information, packages them into a structure and returns it to the Unreal Engine;
  • the Unreal Engine parses the structure message, and adjusts and virtually displays the 3D image twin models on the display interface in real time.
  • Obtaining the position information of the human body through tracking devices combined with wearable devices includes:
  • the human body position information is transmitted to the bus in real time.
  • The display interface displaying the scene of observing the several assembly stations from the human body position includes:
  • the Unreal Engine adjusts the observation angle of the 3D image twin models according to the human body position information;
  • the scene of the assembly stations as actually observed by the observer is displayed on the display interface.
  • The Unreal Engine comprises the UE4 engine or the digital twin construction software Unity; the tracking devices are OptiTrack cameras.
  • A digital twin-based observation perspective tracking system, comprising:
  • a digital twin module, which realizes digital twins of several assembly stations;
  • a data processing module, which obtains the position information of the human body through tracking devices combined with wearable devices;
  • a display module, which displays in real time the scene of observing the several assembly stations from the position of the human body.
  • The digital twin module can obtain real-time data from the actual operation of the several assembly stations.
  • The real-time data includes data from the programmable logic controller in the electrical control system of the assembly platform, and the pose and joint information of the working robot.
  • The data processing module is a DT demonstration machine;
  • the display module is a DT display.
  • A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, it can be used to perform the method described in any one of the above, or to run the system described in any one of the above.
  • Compared with the prior art, the present invention has the following beneficial effects:
  • the digital twin-based observation perspective tracking method and system of the embodiments of the present invention use digital twin technology to model and simulate the assembly actions of the assembly stations, achieving real-time monitoring of the assembly platforms;
  • OptiTrack technology is used to track the position of the human body and return data to obtain the human body pose, which is then used to set the viewing angle of the model in the virtual scene, so that the display shows different perspectives as the worker moves. Compared with existing digital twin technology, this embodiment adds the acquisition of human body pose, which can support the further development of digital twin technology and its combination with VR and other technologies.
  • Figure 1 is a schematic structural diagram of an assembly platform provided in an embodiment of the present invention.
  • Figure 2 is an installation diagram of the OptiTrack equipment provided in another embodiment of the present invention.
  • Figure 3 is an overall rendering provided in a preferred embodiment of the present invention.
  • The present invention provides an embodiment of a digital twin-based observation perspective tracking method, including:
  • S1: establish several assembly stations for performing the same operation, and install a display interface behind the several assembly stations;
  • S2: realize digital twins of the several assembly stations through the Unreal Engine;
  • S3: obtain the position information of the human body through tracking devices combined with wearable devices;
  • S4: using the human body position obtained in S3 as input, present on the mirrored digital twin display interface the effect of observing the assembly stations from the observation position.
  • This embodiment enables the digital twin system to present the virtual state of the working area.
  • The digital twin interface updates, in real time, the working conditions presented from the workers' observation angle as they move.
  • A mirror is installed on the other side of the several assembly stations, opposite the display interface. The mirror gives an intuitive mirror image of the position of the human body, which can be compared with the virtual scene on the display interface to tune the twin effect.
  • Figure 1 shows the hardware part of an assembly station, which comprises the assembly platform underneath, one UR5 robot, one material placement platform, several materials, a transfer trolley and a material base.
  • Performing S2 in another embodiment of the present invention specifically includes:
  • S201: build models of the assembly stations, robots and related assembly materials, and use rendering tools to make the models resemble the real objects;
  • S202: the bus obtains real-time data information, packages it into a structure and returns it to the Unreal Engine UE4;
  • the data information includes the pose and joint information of the working robots and the opening and release information of the assembly platforms; packaging into a structure helps the stability and integrity of signal transmission.
  • Performing S3 includes:
  • Figure 2 shows the installation of the OptiTrack equipment in this embodiment: the marker balls can be seen from every angle by at least three devices, so the marker ball position (the observer position) can be located.
  • The number of OptiTrack devices is not limited and can be set according to actual conditions.
  • Performing S4 includes:
  • the UE4 engine uses its underlying code to change the digital twin camera position according to the observer's position information; the digital twin camera position here refers to the viewing angle of the displayed model in the virtual scene.
  • The corresponding programs are started so that the several assembly stations perform their assembly actions simultaneously, the OptiTrack positions are acquired and the corresponding data is returned by the bus; the digital twin display program is then started.
  • The worker can see on the screen, in real time, the state of the assembly stations from the angle at which they observe.
  • The specific effect is shown in Figure 3.
  • The hardware configuration of the digital twin display machine is as follows:
  • minimum server configuration: X64 computing, 8 cores, 16 GB RAM, 500 GB data disk, 30 Mbit/s bandwidth;
  • display machine configuration: Windows 10, Intel Core i9 11900K CPU, GTX 3080 Ti graphics card.
  • The present invention provides a digital twin system with a trackable observation perspective, including a digital twin module, a data processing module and a display module; the digital twin module realizes digital twins of several assembly stations; the data processing module obtains the position information of the human body through tracking devices combined with wearable devices; and the display module displays in real time the scene of observing the several assembly stations from the position of the human body.
  • The digital twin module can obtain real-time data from the actual operation of the several assembly stations.
  • The real-time data includes data from the programmable logic controller in the electrical control system of the assembly platform, and the pose and joint information of the working robot.
  • The data processing module is the UE4 engine; the display module includes a DT demonstration machine and a DT demonstration screen.
  • The information of the three modules is connected to the bus through network cables.
  • The bus packages the data and sends it to the DT demonstration machine (a high-performance workstation with the UE4 software installed).
  • The DT demonstration machine changes the model pose and observation angle by changing the corresponding data of the digital twin model, so that as the worker moves, the display shows different perspectives accordingly.
  • The UE4 engine is the software used to build the digital twin models, interact with the data, and display the final effect.
  • Unity, a current mainstream digital twin construction tool, may also be used.
  • A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, it can be used to perform the method described in any one of the above, or to run the system described in any one of the above.
  • Besides implementing the system and its devices as pure computer-readable program code, the method steps can be logically programmed so that the system and its devices achieve the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its devices provided by the present invention can be regarded as a hardware component, and the devices included in it for implementing various functions can also be regarded as structures within the hardware component; the devices for implementing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A digital twin-based observation perspective tracking method, comprising: establishing several assembly stations performing the same operation, and installing a display interface behind the several assembly stations; realizing digital twins of the several assembly stations through the Unreal Engine; obtaining position information of the human body through tracking devices combined with wearable devices; and displaying on the display interface the scene of observing the several assembly stations from the position of the human body. Compared with existing digital twin technology, this method adds the acquisition of human body pose, which can support the further development of digital twin technology and its combination with VR and other technologies.

Description

Digital twin-based observation perspective tracking method, system and terminal
Technical Field
The present invention relates to the field of virtual reality technology, and more specifically to a digital twin-based observation perspective tracking method, system and terminal.
Background Art
With multiple technical advances in cyber-physical systems (CPS), the Industry 4.0 revolution has brought the emerging concept of the digital twin. As an emerging technology, the digital twin maps the physical world into virtual space by digital means, collects data to the digital twin platform in real time using Internet of Things technology, and feeds working-area conditions back to the digital twin platform in real time over a wireless network, which shows that digital twins have the potential to break the barriers between physical space and cyberspace in smart manufacturing. In real manufacturing environments, however, the observer's position information cannot be transmitted to the digital twin platform, so the perspective of the working environment presented by the platform is relatively fixed or can only move along preset rules. It cannot faithfully reproduce the scene actually observed by workers, which may affect their assessment of workshop functions, performance, risks and the like.
A search found Chinese patent application CN202010902457.7, which discloses a method for constructing a digital twin of the human skeleton. Based on the real human skeleton, it uses VR motion capture and sensor technology to collect data at important positions of the human body, and uses artificial intelligence to classify, filter, reduce and compute the data to obtain key data. Human inverse dynamics and biomechanics algorithms are applied to the key data to obtain the spatial orientation and mechanical information of the target bone; part of the sensor data is fused with the computed results to simulate the target bone and obtain its biomechanical properties, and several prediction algorithms are used to predict the biomechanical properties of the bone in unknown postures; finally, the performance data is modeled and rendered to obtain a high-fidelity digital twin of the real bone, achieving a faithful twin mapping of the bone's biomechanical properties. Under various human postures, that method can compute the biomechanical properties of the target bone in real time using wearable VR equipment and a small number of sensors, enabling real-time health monitoring of the target bone. Although it incorporates human pose and motion information, it does not solve the problem, required in the Internet of Things field, of feeding working-area conditions back to the digital twin platform in real time over a wireless network.
In summary, for the problem of synchronizing the observation perspective of the digital twin, how to locate the human body and transfer the position information to the digital twin platform, so that the observer's position can be located under complex real working conditions, is a technical problem that urgently needs a breakthrough.
Summary of the Invention
In view of the deficiencies in the prior art, the purpose of the present invention is to provide a digital twin-based observation perspective tracking method, system and terminal.
According to one aspect of the present invention, a digital twin-based observation perspective tracking method is provided, comprising:
establishing several assembly stations performing the same operation, and installing a display interface behind the several assembly stations;
realizing digital twins of the several assembly stations through the Unreal Engine;
obtaining position information of the human body through tracking devices combined with wearable devices;
displaying on the display interface the scene of observing the several assembly stations from the position of the human body.
Preferably, establishing several assembly stations performing the same operation comprises:
the several assembly stations are configured identically, each comprising an assembly platform, a material transfer platform, a transfer trolley and one working robot;
the assembly stations and the working robots are respectively connected to a bus;
the bus controls the opening, closing and release of the assembly stations and obtains their state information;
the bus controls the operation of the working robots and obtains their real-time pose information.
Preferably, realizing digital twins of the several assembly stations through the Unreal Engine comprises:
building 3D image twin models of the several assembly stations and rendering them;
the bus obtains the state information and the real-time pose information, packages them into a structure and returns it to the Unreal Engine;
the Unreal Engine parses the structure message, and adjusts and virtually displays the 3D image twin models on the display interface in real time.
Preferably, obtaining position information of the human body through tracking devices combined with wearable devices comprises:
placing marker balls on a wearable device;
a worker wearing the wearable device;
capturing the wearable device with several identical tracking devices and, combined with the marker balls, obtaining the human body position information;
transmitting the human body position information to the bus in real time.
Preferably, displaying on the display interface the scene of observing the several assembly stations from the position of the human body comprises:
sending the human body position information obtained by the bus to the Unreal Engine;
the Unreal Engine adjusting the observation angle of the 3D image twin models according to the human body position information;
based on the observation angle, displaying on the display interface the scene of the assembly stations as actually observed by the observer.
Preferably, the Unreal Engine comprises the UE4 engine or the digital twin construction software Unity; the tracking devices are OptiTrack cameras.
According to a second aspect of the present invention, a digital twin-based observation perspective tracking system is provided, comprising:
a digital twin module, which realizes digital twins of several assembly stations;
a data processing module, which obtains position information of the human body through tracking devices combined with wearable devices;
a display module, which displays in real time the scene of observing the several assembly stations from the position of the human body.
Preferably, the digital twin module can obtain real-time data from the actual operation of the several assembly stations, the real-time data including data from the programmable logic controller in the electrical control system of the assembly platform, and the pose and joint information of the working robot.
Preferably, the data processing module is a DT demonstration machine, and the display module is a DT display.
According to a third aspect of the present invention, a terminal is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program it can be used to perform the method described in any of the above, or to run the system described in any of the above.
Compared with the prior art, the present invention has the following beneficial effects:
the digital twin-based observation perspective tracking method and system of the embodiments of the present invention use digital twin technology to model and simulate the assembly actions of the assembly stations, achieving real-time monitoring of the assembly platforms;
in addition, OptiTrack technology is used to track the position of the human body and return data to obtain the human body pose, which is then used to set the viewing angle of the model in the virtual scene, so that as the worker moves, the display shows different perspectives accordingly. Compared with existing digital twin technology, this embodiment adds the acquisition of human body pose, which can support the further development of digital twin technology and its combination with VR and other technologies.
Brief Description of the Drawings
Other features, objects and advantages of the present invention will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following drawings:
Figure 1 is a schematic structural diagram of an assembly platform provided in an embodiment of the present invention;
Figure 2 is an installation diagram of the OptiTrack equipment provided in another embodiment of the present invention;
Figure 3 is an overall rendering provided in a preferred embodiment of the present invention.
Detailed Description of Embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention.
The present invention provides an embodiment of a digital twin-based observation perspective tracking method, comprising:
S1: establish several assembly stations performing the same operation, and install a display interface behind the several assembly stations;
S2: realize digital twins of the several assembly stations through the Unreal Engine;
S3: obtain position information of the human body through tracking devices combined with wearable devices;
S4: using the human body position obtained in S3 as input, present on the mirrored digital twin display interface the effect of observing the assembly stations from the observation position.
This embodiment enables the digital twin system to present the virtual state of the working area, and the digital twin interface updates, in real time, the working conditions presented from the worker's observation angle as the worker moves. Preferably, a mirror is installed on the other side of the several assembly stations, opposite the display interface; the mirror gives an intuitive mirror image of the position of the human body, which can be compared with the virtual scene on the display interface to tune the twin effect.
S1 is performed in a preferred embodiment of the present invention. Figure 1 shows the hardware part of an assembly station, which comprises the assembly platform underneath, one UR5 robot, one material placement platform, several materials, a transfer trolley and a material base.
The above hardware is powered up and supplied with air, connected to the bus via network cables, and the several assembly stations are set to online mode. The robot is switched on and its electrical control cabinet is connected to the switch of the assembly station via a network cable. Information about the several assembly stations is obtained through the manufacturing execution system server (MESServer), and the control logic is programmed so that when a material base is detected, the robots start machining at the same time, and after assembly is completed, the stations release at the same time, so that the several assembly stations work simultaneously in real time (a coordination sketch is given below).
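The patent describes this coordination only at the level of signals. The following is a minimal sketch of such a polling loop; the Station structure, its flags and its commands are hypothetical stand-ins for the PLC/bus signals and MESServer queries, not an actual interface of this system.

```cpp
#include <algorithm>
#include <chrono>
#include <thread>
#include <vector>

// Hypothetical stand-in for one assembly station as seen over the bus. In a
// real deployment these flags would be read from the station's PLC through
// the MES server, and the commands would be written back over the bus.
struct Station {
    bool baseDetected = false;   // a material base is present on the platform
    bool assemblyDone = false;   // the robot has finished the assembly step
    void StartRobot() { /* send a start command to the station's UR5 */ }
    void Release()    { /* open the station so the finished part passes on */ }
};

// Keep several identically configured stations working in step: start every
// robot in the same cycle, and release all stations together when done.
void RunSynchronizedCycle(std::vector<Station>& stations) {
    using namespace std::chrono_literals;
    auto all = [&](bool Station::*flag) {
        return std::all_of(stations.begin(), stations.end(),
                           [flag](const Station& s) { return s.*flag; });
    };

    while (!all(&Station::baseDetected)) std::this_thread::sleep_for(50ms);
    for (Station& s : stations) s.StartRobot();

    while (!all(&Station::assemblyDone)) std::this_thread::sleep_for(50ms);
    for (Station& s : stations) s.Release();
}
```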
S2 is performed in another embodiment of the present invention and specifically comprises:
S201: build models of the assembly stations, robots and related assembly materials, and use rendering tools to make the models resemble the real objects;
S202: the bus obtains real-time data information and packages it into a structure that is returned to the Unreal Engine UE4, the data information including the pose and joint information of the working robots and the opening and release information of the assembly platforms; packaging into a structure helps the stability and integrity of signal transmission;
S203: by parsing the structure message, the model state is adjusted, realizing a real-time virtual representation of the real scene and achieving the digital twin effect. A sketch of such a structure and its parsing is given below.
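The embodiment does not disclose the actual message layout. The following is a minimal sketch of the kind of fixed-layout structure the bus could pack and the UE4 side could parse; all field names, sizes and the ParseStationPacket helper are illustrative assumptions, not the message format actually used.

```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <vector>

#pragma pack(push, 1)
struct StationTwinPacket {
    uint32_t stationId;                  // which assembly station the data belongs to
    uint8_t  stationOpen;                // assembly platform open/closed flag
    uint8_t  stationReleased;            // release (pass-through) flag
    std::array<float, 6> jointAngles;    // robot joint angles in degrees
    std::array<float, 6> toolPose;       // robot tool pose: x, y, z, rx, ry, rz
};
#pragma pack(pop)

// Parse one packet out of a raw byte buffer received from the bus.
// Returns false if the buffer is too short to hold a full packet.
bool ParseStationPacket(const std::vector<uint8_t>& buffer, StationTwinPacket& out) {
    if (buffer.size() < sizeof(StationTwinPacket)) {
        return false;
    }
    std::memcpy(&out, buffer.data(), sizeof(StationTwinPacket));
    return true;
}
```

A fixed, packed layout like this is one way to keep the transmission stable and complete, since every message has the same size and can be validated by a simple length check before use.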
S3 is performed in a preferred embodiment of the present invention and comprises:
S301: place marker balls on a wearable device such as a helmet or badge, and have the worker wear the wearable device;
S302: obtain the human body position information through eight identical OptiTrack motion capture devices combined with the marker balls;
S303: transmit the human body position information to the bus in real time.
Figure 2 shows the installation of the OptiTrack equipment in this embodiment: there are eight devices in total, arranged so that from every angle the marker balls can be seen by at least three devices, which is enough to locate the marker ball position (the observer position); a triangulation sketch is given below. Of course, the number of OptiTrack devices is not limited and can be set according to actual conditions.
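The marker reconstruction itself is performed by the OptiTrack motion-capture software. The sketch below only illustrates, assuming calibrated camera rays and the Eigen library, why three or more views suffice: the marker position can be recovered as the least-squares intersection of the viewing rays.

```cpp
#include <Eigen/Dense>
#include <vector>

// One calibrated observation of the marker: camera position plus a unit
// viewing direction toward the marker, both in workshop coordinates.
struct CameraRay {
    Eigen::Vector3d origin;
    Eigen::Vector3d direction;
};

// Solve  sum_i || (I - d_i d_i^T)(x - o_i) ||^2  -> min  for the marker
// position x. Each term measures the distance from x to ray i.
Eigen::Vector3d TriangulateMarker(const std::vector<CameraRay>& rays) {
    Eigen::Matrix3d A = Eigen::Matrix3d::Zero();
    Eigen::Vector3d b = Eigen::Vector3d::Zero();
    for (const CameraRay& r : rays) {
        const Eigen::Vector3d d = r.direction.normalized();
        const Eigen::Matrix3d P = Eigen::Matrix3d::Identity() - d * d.transpose();
        A += P;
        b += P * r.origin;
    }
    // Two non-parallel rays already make A invertible; three or more views,
    // as required in this embodiment, make the estimate robust to noise.
    return A.ldlt().solve(b);
}
```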
S4 is performed in a preferred embodiment of the present invention and comprises:
S401: send the obtained observer position information to the UE4 engine via the bus;
S402: the UE4 engine uses its underlying code to change the digital twin camera position according to the observer's position information; the digital twin camera position here refers to the viewing angle of the displayed model in the virtual scene;
S403: after the camera position is adjusted, the display interface shows the scene of the assembly stations as actually observed by the observer (a camera-update sketch is given below).
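The patent attributes the camera change to the UE4 engine's underlying code without showing it. The sketch below is a minimal UE4-style illustration of how an observer position received from the bus could drive the twin camera; the UpdateTwinCamera function, the metres-to-centimetres scale and the fixed station centre are assumptions, while SetActorLocation, SetActorRotation and FRotationMatrix::MakeFromX are standard Unreal Engine calls.

```cpp
#include "Camera/CameraActor.h"

// Move the digital-twin camera to the observer's position and aim it at the
// assembly-station area, so the rendered view matches what the worker sees.
void UpdateTwinCamera(ACameraActor* TwinCamera,
                      const FVector& ObserverPosMeters,
                      const FVector& StationCenter)
{
    if (!TwinCamera) return;

    // Workshop coordinates (metres) to Unreal units (centimetres); a real
    // setup would also apply an origin offset obtained from calibration.
    const FVector CameraLocation = ObserverPosMeters * 100.0f;

    // Orient the camera so its forward (X) axis points at the stations.
    const FRotator LookAt =
        FRotationMatrix::MakeFromX(StationCenter - CameraLocation).Rotator();

    TwinCamera->SetActorLocation(CameraLocation);
    TwinCamera->SetActorRotation(LookAt);
}
```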
That is, the corresponding programs are started so that the several assembly stations perform their assembly actions simultaneously, the OptiTrack positions are acquired, and the corresponding data is returned by the bus; the digital twin display program is then started. By wearing a helmet, badge or similar device on which marker balls have been mounted, the worker can see on the screen, in real time, the state of the assembly stations from the angle at which they are observing. The specific effect is shown in Figure 3. In this embodiment, the hardware configuration of the digital twin display machine is as follows:
minimum server configuration: X64 computing, 8 cores, 16 GB RAM, 500 GB data disk, 30 Mbit/s bandwidth;
display machine configuration: Windows 10, Intel Core i9 11900K CPU, GTX 3080 Ti graphics card.
Based on the same inventive concept as the above embodiments, the present invention provides a digital twin system with a trackable observation perspective, comprising a digital twin module, a data processing module and a display module; the digital twin module realizes digital twins of several assembly stations; the data processing module obtains position information of the human body through tracking devices combined with wearable devices; and the display module displays in real time the scene of observing the several assembly stations from the position of the human body.
Further, the digital twin module can obtain real-time data from the actual operation of the several assembly stations, the real-time data including data from the programmable logic controller in the electrical control system of the assembly platform, and the pose and joint information of the working robot.
The data processing module is the UE4 engine; the display module comprises a DT demonstration machine and a DT demonstration screen. The information of the three modules is connected to the bus via network cables; the bus packages the data and sends it to the DT demonstration machine (a high-performance workstation with the UE4 software installed), and the DT demonstration machine changes the model pose and observation angle by changing the corresponding data of the digital twin model, so that as the worker moves, the display shows different perspectives accordingly (a transmission sketch is given below).
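The embodiment only states that the bus packages the data and sends it to the DT demonstration machine over the network. As an assumption-laden illustration, a packed message could be pushed to the workstation over a plain TCP socket as sketched below; the address, port and one-connection-per-message framing are illustrative choices, not the system's actual transport.

```cpp
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>

// Push one packed message from the bus side to the DT demonstration machine.
bool SendToTwinMachine(const void* data, std::size_t size,
                       const char* twinMachineIp, std::uint16_t port) {
    const int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, twinMachineIp, &addr.sin_addr) != 1 ||
        connect(fd, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) != 0) {
        close(fd);
        return false;
    }

    // A real deployment would keep the connection open, add framing and
    // retry on partial writes; this sketch sends the buffer once.
    const ssize_t sent = send(fd, data, size, 0);
    close(fd);
    return sent == static_cast<ssize_t>(size);
}
```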
In the above embodiments, the UE4 engine is the software used to build the digital twin models, interact with the data and display the final effect. In other embodiments of the present invention, the current mainstream digital twin construction software Unity may also be used.
Based on the same inventive concept, other embodiments provide a terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program it can be used to perform the method described in any of the above, or to run the system described in any of the above.
It should be noted that the steps of the method provided by the present invention can be implemented using the corresponding modules, devices, units and the like in the system, and those skilled in the art can implement the step flow of the method with reference to the technical solution of the system; that is, the embodiments of the system can be understood as preferred examples for implementing the method, which are not repeated here.
Those skilled in the art know that, in addition to implementing the system and its devices provided by the present invention as pure computer-readable program code, the method steps can be logically programmed so that the system and its devices achieve the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its devices provided by the present invention can be regarded as a hardware component, and the devices included in it for implementing various functions can also be regarded as structures within the hardware component; the devices for implementing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the specific embodiments described above; those skilled in the art can make various variations or modifications within the scope of the claims, which do not affect the substance of the present invention. The above preferred features can be used in any combination provided they do not conflict with each other.

Claims (10)

  1. A digital twin-based observation perspective tracking method, characterized by comprising:
    establishing several assembly stations performing the same operation, and installing a display interface behind the several assembly stations;
    realizing digital twins of the several assembly stations through the Unreal Engine;
    obtaining position information of the human body through tracking devices combined with wearable devices;
    the display interface displaying the scene of observing the several assembly stations from the position of the human body.
  2. The digital twin-based observation perspective tracking method according to claim 1, characterized in that establishing several assembly stations performing the same operation comprises:
    the several assembly stations are configured identically, each comprising an assembly platform, a material transfer platform, a transfer trolley and one working robot;
    the assembly stations and the working robots are respectively connected to a bus;
    the bus controls the opening, closing and release of the assembly stations and obtains their state information;
    the bus controls the operation of the working robots and obtains their real-time pose information.
  3. The digital twin-based observation perspective tracking method according to claim 2, characterized in that realizing digital twins of the several assembly stations through the Unreal Engine comprises:
    building 3D image twin models of the several assembly stations and rendering them;
    the bus obtains the state information and the real-time pose information, packages them into a structure and returns it to the Unreal Engine;
    the Unreal Engine parses the structure message, and adjusts and virtually displays the 3D image twin models on the display interface in real time.
  4. The digital twin-based observation perspective tracking method according to claim 3, characterized in that obtaining position information of the human body through tracking devices combined with wearable devices comprises:
    placing marker balls on a wearable device;
    a worker wearing the wearable device;
    capturing the wearable device with several identical tracking devices and, combined with the marker balls, obtaining the human body position information;
    transmitting the human body position information to the bus in real time.
  5. The digital twin-based observation perspective tracking method according to claim 4, characterized in that the display interface displaying the scene of observing the several assembly stations from the position of the human body comprises:
    sending the human body position information obtained by the bus to the Unreal Engine;
    the Unreal Engine adjusting the observation angle of the 3D image twin models according to the human body position information;
    based on the observation angle, displaying on the display interface the scene of the assembly stations as actually observed by the observer.
  6. The digital twin-based observation perspective tracking method according to any one of claims 1-5, characterized in that the Unreal Engine comprises the UE4 engine or the digital twin construction software Unity, and the tracking devices are OptiTrack cameras.
  7. A digital twin-based observation perspective tracking system, characterized by comprising:
    a digital twin module, which realizes digital twins of several assembly stations;
    a data processing module, which obtains position information of the human body through tracking devices combined with wearable devices;
    a display module, which displays in real time the scene of observing the several assembly stations from the position of the human body.
  8. The digital twin-based observation perspective tracking system according to claim 7, characterized in that the digital twin module can obtain real-time data from the actual operation of the several assembly stations, the real-time data including data from the programmable logic controller in the electrical control system of the assembly platform, and the pose and joint information of the working robot.
  9. The digital twin-based observation perspective tracking system according to claim 7, characterized in that the data processing module is a DT demonstration machine, and the display module is a DT display.
  10. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that when the processor executes the program it can be used to perform the method according to any one of claims 1-6, or to run the system according to any one of claims 7-9.
PCT/CN2022/129723 2022-09-14 2022-11-04 Digital twin-based observation perspective tracking method, system and terminal WO2024055397A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211113302.0 2022-09-14
CN202211113302.0A CN115695765A (zh) 2022-09-14 2022-09-14 Digital twin-based observation perspective tracking method, system and terminal

Publications (1)

Publication Number Publication Date
WO2024055397A1 true WO2024055397A1 (zh) 2024-03-21

Family

ID=85062176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129723 WO2024055397A1 (zh) 2022-09-14 2022-11-04 基于数字孪生的观测视角可追踪方法、系统及终端

Country Status (2)

Country Link
CN (1) CN115695765A (zh)
WO (1) WO2024055397A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347131A (zh) * 2019-07-18 2019-10-18 中国电子科技集团公司第三十八研究所 面向生产的数字孪生系统
US20210201584A1 (en) * 2019-12-31 2021-07-01 VIRNECT inc. System and method for monitoring field based augmented reality using digital twin
CN112132955A (zh) * 2020-09-01 2020-12-25 大连理工大学 人体骨骼的数字孪生体构建方法
CN113204826A (zh) * 2021-05-31 2021-08-03 深圳市智慧空间平台技术开发有限公司 一种数字孪生三维场景视角操作方法及装置
CN114260893A (zh) * 2021-12-22 2022-04-01 武汉理工大学 一种工业机器人装配拾放过程数字孪生模型构建方法
CN115033137A (zh) * 2022-06-10 2022-09-09 无锡途因思网络信息技术有限公司 一种基于数字孪生的虚拟现实交互方法

Also Published As

Publication number Publication date
CN115695765A (zh) 2023-02-03


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22958585

Country of ref document: EP

Kind code of ref document: A1