WO2022160406A1 - Implementation method and system for an Internet of Things training system based on augmented reality technology - Google Patents

Implementation method and system for an Internet of Things training system based on augmented reality technology

Info

Publication number
WO2022160406A1
WO2022160406A1 PCT/CN2021/078399 CN2021078399W WO2022160406A1 WO 2022160406 A1 WO2022160406 A1 WO 2022160406A1 CN 2021078399 W CN2021078399 W CN 2021078399W WO 2022160406 A1 WO2022160406 A1 WO 2022160406A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
coordinate system
training
plane
augmented reality
Prior art date
Application number
PCT/CN2021/078399
Other languages
English (en)
French (fr)
Inventor
梁立新
沈永安
Original Assignee
深圳技术大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳技术大学
Publication of WO2022160406A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y10/00Economic sectors
    • G16Y10/55Education
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals

Definitions

  • the invention relates to the technical field of augmented reality, and in particular, to a method and system for implementing an Internet of Things training system based on augmented reality technology.
  • in order to strengthen students' hands-on ability, schools conduct practical training for students and also use an Internet of Things training system to assist students in that training.
  • the main purpose of the present invention is to provide an implementation method for an Internet of Things training system based on augmented reality technology, which aims to solve the technical problem that training systems in the prior art use relatively outdated technology, so that the assistance they give students during practical training is limited.
  • a first aspect of the present invention provides an implementation method of an IoT training system based on augmented reality technology, including: acquiring an attachment plane in a real scene; using augmented reality technology to establish a plane mapping relationship between the attachment plane and the screen, so that the scene in the attachment plane is mapped to the screen, or the scene in the screen is mapped to the attachment plane; and responding to the students' training operations on the screen and mapping the training operations to the attachment plane according to the mapping relationship.
  • the method also includes: uploading the students' training operations on the screen to the cloud computing platform; receiving the feedback results in which the cloud computing platform assesses, through big data analysis, the students' training work results, professional skills and professional qualities at the three levels of training task, stage and overall project; and identifying and analyzing the students' learning behaviors according to the feedback results, and generating a learning diagnosis and personalized tutoring plan.
  • the acquiring an attachment plane in the real scene includes: acquiring a pre-made mark in a two-dimensional plane; placing the mark in the real scene, and acquiring, as the attachment plane, the plane on which the mark lies in the real scene.
  • the method for establishing the plane mapping relationship includes: using a camera to photograph the real scene where the mark is located; in the photographed real scene, identifying the mark, determining the position of the mark, and performing pose estimation on the mark; establishing a template coordinate system with the center point of the mark as the origin; and transforming the template coordinate system to establish the pre-acquired coordinate system mapping relationship between the screen coordinate system on the screen and the template coordinate system, the coordinate system mapping relationship being used as the plane mapping relationship.
  • the transforming the template coordinate system includes: rotating or translating the template coordinate system to obtain the camera coordinate system in which the template coordinate system is mapped in the camera; and transforming the camera coordinate system into the screen coordinate system according to the pre-acquired display relationship between the image or video captured by the camera and the screen.
  • the transforming the template coordinate system further includes: transforming the camera coordinate system into an ideal screen coordinate system according to a pre-acquired ideal display relationship between the image or video captured by the camera and the screen; and transforming the ideal screen coordinate system into an actual screen coordinate system according to the pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
  • a second aspect of the present invention provides an implementation system of an Internet of Things training system based on augmented reality technology, including: an attachment plane acquisition module for acquiring an attachment plane in a real scene; a plane mapping relationship establishment module for using augmented reality technology to establish a plane mapping relationship between the attachment plane and the screen, so that the scene in the attachment plane is mapped to the screen, or the scene in the screen is mapped to the attachment plane; and a mapping module for responding to the students' training operations on the screen and mapping the training operations to the attachment plane according to the mapping relationship.
  • a data uploading module for uploading students' training operations on the screen to a cloud computing platform;
  • a feedback result receiving module for receiving the feedback results in which the cloud computing platform assesses, through big data analysis, the students' training work results, professional skills and professional qualities at the three levels of training task, stage and overall project;
  • an analysis module for identifying and analyzing students' learning behaviors according to the feedback results, and for generating a learning diagnosis and personalized tutoring plan.
  • a third aspect of the present invention provides an electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the computer program, the implementation method of the Internet of Things training system based on augmented reality technology described in any one of the above is implemented.
  • a fourth aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the implementation method of the Internet of Things training system based on augmented reality technology described in any one of the above is implemented.
  • the present invention provides an implementation method of an Internet of Things training system based on augmented reality technology, which has the following beneficial effects: by using augmented reality technology, students' training operations can be displayed in the real scene in real time during training, so that students can more intuitively recognize the actual effect brought about by a training operation; when a training operation is wrong, students can also observe it more quickly. This not only makes the interaction between the training system and the students deeper, but also lets students notice operation errors in time and correct them more quickly, thus enhancing the assisting effect of the training system on the students.
  • FIG. 1 is a schematic flowchart of an implementation method of an Internet of Things training system based on augmented reality technology according to an embodiment of the present invention
  • FIG. 2 is a schematic structural block diagram of an implementation system of an IoT training system based on augmented reality technology according to an embodiment of the present invention
  • FIG. 3 is a schematic block diagram of the structure of an electronic device according to an embodiment of the present invention.
  • referring to FIG. 1, an implementation method of an IoT training system based on augmented reality technology includes: S1, acquiring an attachment plane in a real scene; S2, using augmented reality technology to establish a plane mapping relationship between the attachment plane and the screen, so that the scene in the attachment plane is mapped to the screen, or the scene in the screen is mapped to the attachment plane; S3, responding to students' training operations on the screen, and mapping the training operations to the attachment plane according to the mapping relationship.
  • during training, a student only needs a networked terminal with a screen, such as a computer or mobile phone, to perform training operations. When the student operates on the screen, for example drawing graphics or 3D models, the above method maps the graphics or 3D models drawn by the student to the attachment plane according to the mapping relationship. Since the attachment plane exists in the real scene, the graphics or 3D models drawn by the student can be displayed in the real scene, realizing multi-faceted interaction between the virtual project and the real world, allowing more and deeper interaction with the training project, and allowing operations to be performed and feedback to be obtained in real time.
  • therefore, by using augmented reality technology, students' training operations can be displayed in the real scene in real time during training, so that students can more intuitively recognize the actual effects brought about by the training operations; when a training operation is wrong, students can also observe it more quickly. This not only makes the interaction between the training system and the students deeper, but also lets students notice operation errors in time and correct them more quickly, thus enhancing the assisting effect of the training system on the students.
  • in an embodiment, the implementation method of the Internet of Things training system based on augmented reality technology further includes: S4, uploading the students' training operations on the screen to the cloud computing platform; S5, receiving the feedback results in which the cloud computing platform assesses, through big data analysis, the students' training work results, professional skills and professional qualities at the three levels of training task, stage and overall project; S6, identifying and analyzing students' learning behaviors according to the feedback results, and generating a learning diagnosis and personalized tutoring plan.
  • a pre-trained neural network model is used in the process of generating the learning diagnosis and personalized tutoring plan; the feedback results are used as the input of the neural network model, and the neural network outputs the learning diagnosis and personalized tutoring plan.
  • the sample data used when training the neural network model are the learning diagnoses that tutors have made manually for students over the years on the basis of the existing training system, together with the personalized tutoring plans they gave.
  • by generating a learning diagnosis and personalized tutoring plan for the students' training projects, the training system can give more help to the students, reduce the workload of the students' training instructors, and improve the students' training efficiency.
  • acquiring an attachment plane in the real scene includes: acquiring a pre-made mark in a two-dimensional plane; placing the mark in the real scene, and acquiring the plane on which the mark lies in the real scene as the attachment plane.
  • displaying the drawn graphic or 3D model on the screen as if it were attached to a real object essentially means finding an attachment plane in the real scene, mapping that plane of the three-dimensional scene onto our two-dimensional screen, and then drawing on this plane the graphic or 3D model that the student wants to present.
  • the mark may be a template card on which a shape of predetermined specifications is drawn. After the mark is placed at a position in the real world, a plane in the real scene is thereby determined, and this plane can be used as the attachment plane.
  • the method for establishing the plane mapping relationship includes: using a camera to photograph the real scene where the mark is located; in the photographed real scene, identifying the mark, determining the position of the mark, and performing pose estimation on the mark; establishing the template coordinate system with the center point of the mark as the origin; and transforming the template coordinate system to establish the pre-acquired coordinate system mapping relationship between the screen coordinate system on the screen and the template coordinate system, the coordinate system mapping relationship being used as the plane mapping relationship.
  • transforming the template coordinate system includes: rotating or translating the template coordinate system to obtain the camera coordinate system in which the template coordinate system is mapped in the camera; and transforming the camera coordinate system into the screen coordinate system according to the pre-acquired display relationship between the image or video captured by the camera and the screen.
  • the camera identifies the mark, performs pose estimation on it and determines its position; the coordinate system whose origin is the center of the mark is called the template coordinate system. What is to be done next is in fact to obtain a transformation that establishes the mapping relationship between the template coordinate system and the screen coordinate system, so that a graphic drawn on the screen according to this transformation appears attached to the mark. The transformation from the template coordinate system to the real screen coordinate system requires first rotating and translating into the camera coordinate system, and then mapping from the camera coordinate system to the screen coordinate system.
  • transforming the template coordinate system further includes: transforming the camera coordinate system into an ideal screen coordinate system according to a pre-acquired ideal display relationship between the image or video captured by the camera and the screen; and transforming the ideal screen coordinate system into an actual screen coordinate system according to the pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
  • ideally, no error would arise when what the camera captures is displayed on the screen; this display relationship is the ideal display relationship. In reality, however, affected by the hardware, after the image captured by the camera is displayed on the screen there will be some errors, for example in the position and resolution of objects in the scene; this display relationship is the actual display relationship. Since students' training operations will undoubtedly be affected by the hardware, the ideal screen coordinate system needs to be converted into the actual screen coordinate system to improve the students' training experience and training accuracy.
  • an implementation system of an augmented reality technology-based IoT training system includes: an attachment plane acquisition module 1, a plane mapping relationship establishment module 2 and a mapping module 3; the attachment plane acquisition module 1 is used to obtain an attachment plane in the real scene; the plane mapping relationship establishment module 2 is used to establish a plane mapping relationship between the attachment plane and the screen using augmented reality technology, so that the scene in the attachment plane is mapped to the screen, or the scene in the screen is mapped to the attachment plane; the mapping module 3 is used to respond to students' training operations on the screen, and to map the training operations to the attachment plane according to the mapping relationship.
  • the implementation system of the Internet of Things training system based on augmented reality technology further includes: a data uploading module 4, a feedback result receiving module 5, and an analysis module 6;
  • the data uploading module 4 is used to upload the students' training operations on the screen to the cloud computing platform;
  • the feedback result receiving module 5 is used to receive the feedback results in which the cloud computing platform assesses, through big data analysis, the students' training work results, professional skills and professional qualities at the three levels of training task, stage and overall project;
  • the analysis module 6 is used to identify and analyze the learning behavior of students according to the feedback results, and to generate a learning diagnosis and personalized tutoring plan.
  • the attachment plane acquisition module 1 includes: a mark acquisition unit and a plane acquisition unit; the mark acquisition unit is used to acquire a pre-made mark in a two-dimensional plane; the plane acquisition unit is used to place the mark in a real scene and to acquire the plane on which the mark lies in the real scene as the attachment plane.
  • the plane mapping relationship establishment module 2 includes: a photographing unit, a mark recognition unit, a template coordinate system establishment unit, and a coordinate system transformation unit; the photographing unit is used to photograph, with a camera, the real scene where the mark is located; the mark recognition unit is used to identify the mark in the photographed real scene, determine the position of the mark, and perform pose estimation on the mark; the template coordinate system establishment unit is used to establish the template coordinate system with the center point of the mark as the origin; the coordinate system transformation unit is used to transform the template coordinate system to establish the pre-acquired coordinate system mapping relationship between the screen coordinate system on the screen and the template coordinate system, the coordinate system mapping relationship being used as the plane mapping relationship.
  • the coordinate system transformation unit includes: a camera coordinate system calculation subunit and a screen coordinate system transformation subunit; the camera coordinate system calculation subunit is used to rotate or translate the template coordinate system to obtain the camera coordinate system in which the template coordinate system is mapped in the camera; the screen coordinate system transformation subunit is used to transform the camera coordinate system into the screen coordinate system according to the pre-acquired display relationship between the image or video captured by the camera and the screen.
  • the coordinate system transformation unit further includes: an ideal screen coordinate system calculation unit and an actual screen coordinate system calculation unit; the ideal screen coordinate system calculation unit is used to transform the camera coordinate system into an ideal screen coordinate system according to the pre-acquired ideal display relationship between the image or video captured by the camera and the screen; the actual screen coordinate system calculation unit is used to transform the ideal screen coordinate system into an actual screen coordinate system according to the pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
  • the electronic device includes: a memory 601, a processor 602, and a computer program stored in the memory 601 and executable on the processor 602; when the processor 602 executes the computer program, the implementation method of the Internet of Things training system based on augmented reality technology described above is realized.
  • the electronic device further includes: at least one input device 603 and at least one output device 604.
  • the above-mentioned memory 601, processor 602, input device 603 and output device 604 are connected through a bus 605.
  • the input device 603 may specifically be a camera, a touch panel, a physical button, a mouse, or the like.
  • the output device 604 may specifically be a display screen.
  • the memory 601 may be a high-speed random access memory (RAM), or a non-volatile memory, such as disk memory.
  • Memory 601 is used to store a set of executable program codes, and processor 602 is coupled to memory 601 .
  • an embodiment of the present application further provides a computer-readable storage medium, which may be provided in the electronic device in each of the foregoing embodiments, and the computer-readable storage medium may be the foregoing memory 601.
  • a computer program is stored on the computer-readable storage medium, and when the program is executed by the processor 602, the implementation method of the Internet of Things training system based on the augmented reality technology described in the foregoing embodiments is implemented.
  • the computer-readable storage medium may also be a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, an optical disc, or other medium that can store program code.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the modules is only a division by logical function; in actual implementation there may be other ways of division.
  • multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.
  • modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules, that is, may be located in one place, or may be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
  • if the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and various other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An implementation method for an Internet of Things training system based on augmented reality technology. By using augmented reality technology, a student's practical training operations can be displayed in the real scene in real time, so that the student can perceive more intuitively the actual effect brought about by a training operation; when a training operation is wrong, the student can also notice it more quickly and correct the error sooner, which enhances the assisting effect of the training system on students. The implementation method for the Internet of Things training system based on augmented reality technology may further comprise: uploading the student's training operations on the screen to a cloud computing platform (S4); receiving feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project (S5); and identifying and analyzing the student's learning behavior according to the feedback results, and generating a learning diagnosis and a personalized tutoring plan (S6).

Description

Implementation method and system for an Internet of Things training system based on augmented reality technology
Technical Field
The present invention relates to the technical field of augmented reality, and in particular to an implementation method and system for an Internet of Things training system based on augmented reality technology.
Background Art
To strengthen students' hands-on ability, schools conduct practical training for students and also use Internet of Things training systems to assist students during that training.
Some foreign countries, such as the United States and Australia, have their own training systems and, having started earlier, enjoy a certain first-mover advantage. To hold independent intellectual property rights and reduce dependence on foreign training systems, most of the systems used domestically have been developed by domestic Internet talent.
However, most existing domestic training systems use relatively outdated technology, so the assistance they give students during practical training is rather limited.
Summary of the Invention
The main purpose of the present invention is to provide an implementation method for an Internet of Things training system based on augmented reality technology, aiming to solve the technical problem that training systems in the prior art use relatively outdated technology, so that the assistance they give students during practical training is rather limited.
To achieve the above purpose, a first aspect of the present invention provides an implementation method for an Internet of Things training system based on augmented reality technology, comprising: acquiring an attachment plane in a real scene; using augmented reality technology to establish a plane mapping relationship between the attachment plane and a screen, so that the scene within the attachment plane is mapped onto the screen, or the scene on the screen is mapped onto the attachment plane; and responding to a student's training operation on the screen, and mapping the training operation onto the attachment plane according to the mapping relationship.
Further, the method also includes: uploading the student's training operations on the screen to a cloud computing platform; receiving feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project; and identifying and analyzing the student's learning behavior according to the feedback results, and generating a learning diagnosis and a personalized tutoring plan.
Further, acquiring an attachment plane in the real scene includes: acquiring a pre-made mark lying in a two-dimensional plane; and placing the mark in the real scene, and acquiring, as the attachment plane, the plane on which the mark lies in the real scene.
Further, the method for establishing the plane mapping relationship includes: photographing, with a camera, the real scene where the mark is located; in the photographed real scene, recognizing the mark, determining the position of the mark, and performing pose estimation on the mark; establishing a template coordinate system with the center point of the mark as the origin; and transforming the template coordinate system to establish a coordinate-system mapping relationship between a pre-acquired screen coordinate system on the screen and the template coordinate system, the coordinate-system mapping relationship being used as the plane mapping relationship.
Further, transforming the template coordinate system includes: rotating or translating the template coordinate system to obtain the camera coordinate system onto which the template coordinate system is mapped within the camera; and transforming the camera coordinate system into a screen coordinate system according to a pre-acquired display relationship between the image or video captured by the camera and the screen.
Further, transforming the template coordinate system also includes: transforming the camera coordinate system into an ideal screen coordinate system according to a pre-acquired ideal display relationship between the image or video captured by the camera and the screen; and transforming the ideal screen coordinate system into an actual screen coordinate system according to a pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
A second aspect of the present invention provides an implementation system for an Internet of Things training system based on augmented reality technology, comprising: an attachment plane acquisition module, configured to acquire an attachment plane in a real scene; a plane mapping relationship establishment module, configured to use augmented reality technology to establish a plane mapping relationship between the attachment plane and a screen, so that the scene within the attachment plane is mapped onto the screen, or the scene on the screen is mapped onto the attachment plane; and a mapping module, configured to respond to a student's training operation on the screen, and to map the training operation onto the attachment plane according to the mapping relationship.
Further, the system also includes: a data uploading module, configured to upload the student's training operations on the screen to a cloud computing platform; a feedback result receiving module, configured to receive feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project; and an analysis module, configured to identify and analyze the student's learning behavior according to the feedback results, and to generate a learning diagnosis and a personalized tutoring plan.
A third aspect of the present invention provides an electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the implementation method for an Internet of Things training system based on augmented reality technology described in any one of the above is implemented.
A fourth aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the implementation method for an Internet of Things training system based on augmented reality technology described in any one of the above is implemented.
The implementation method for an Internet of Things training system based on augmented reality technology provided by the present invention has the following beneficial effects: by using augmented reality technology, a student's training operations can be displayed in the real scene in real time during practical training, so that the student can perceive more intuitively the actual effect brought about by a training operation; when a training operation is wrong, the student can also notice it more quickly. This not only deepens the interaction between the training system and the student, but also allows operation errors to be observed by the student in time and corrected more quickly, thereby enhancing the assisting effect of the training system on students.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are introduced briefly below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an implementation method for an Internet of Things training system based on augmented reality technology according to an embodiment of the present invention;
FIG. 2 is a schematic structural block diagram of an implementation system for an Internet of Things training system based on augmented reality technology according to an embodiment of the present invention;
FIG. 3 is a schematic structural block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the purpose, features and advantages of the present invention more apparent and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, an implementation method for an Internet of Things training system based on augmented reality technology includes: S1, acquiring an attachment plane in a real scene; S2, using augmented reality technology to establish a plane mapping relationship between the attachment plane and a screen, so that the scene within the attachment plane is mapped onto the screen, or the scene on the screen is mapped onto the attachment plane; S3, responding to a student's training operation on the screen, and mapping the training operation onto the attachment plane according to the mapping relationship.
During practical training, a student only needs a networked terminal with a screen, such as a computer or a mobile phone, to carry out training operations. When the student operates on the screen, for example by drawing a graphic or a 3D model, the above method maps the graphic or 3D model drawn by the student onto the attachment plane according to the mapping relationship. Since the attachment plane exists in the real scene, the graphic or 3D model drawn by the student can be presented in the real scene, which realizes multi-faceted interaction between the virtual project and the real world, allows more and deeper interaction with the training project, and allows operations to be performed and feedback to be obtained in real time.
Therefore, by using augmented reality technology, a student's training operations can be displayed in the real scene in real time during practical training, so that the student can perceive more intuitively the actual effect brought about by a training operation; when a training operation is wrong, the student can also notice it more quickly. This not only deepens the interaction between the training system and the student, but also allows operation errors to be observed by the student in time and corrected more quickly, thereby enhancing the assisting effect of the training system on students.
In one embodiment, the implementation method for an Internet of Things training system based on augmented reality technology further includes: S4, uploading the student's training operations on the screen to a cloud computing platform; S5, receiving feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project; S6, identifying and analyzing the student's learning behavior according to the feedback results, and generating a learning diagnosis and a personalized tutoring plan.
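As an illustration of step S4 only, a minimal client-side sketch in Python is given below. The endpoint URL and the JSON payload fields are assumptions made for the example; the patent specifies only that the on-screen operations are uploaded to a cloud computing platform (the classification H04L67/02 indicates that an HTTP-based protocol is one possible transport).

```python
import json
import time
import requests  # third-party HTTP client

# Hypothetical endpoint and payload schema; neither is defined by the patent.
CLOUD_ENDPOINT = "https://cloud.example.com/api/training/operations"

def upload_training_operation(student_id: str, task_id: str, operation: dict) -> dict:
    """Upload one on-screen training operation (step S4) and return the
    platform's acknowledgement; raises on HTTP errors."""
    payload = {
        "student_id": student_id,
        "task_id": task_id,
        "timestamp": time.time(),
        "operation": operation,   # e.g. the stroke or 3D-model data drawn on screen
    }
    resp = requests.post(
        CLOUD_ENDPOINT,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()            # the assessment feedback of S5 may arrive later

# Example usage with hypothetical identifiers:
# upload_training_operation("s001", "iot-lab-3",
#                           {"type": "draw_model", "model": "temperature_sensor"})
```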
In this embodiment, a pre-trained neural network model is used in the process of generating the learning diagnosis and personalized tutoring plan: the feedback results are used as the input of the neural network model, and the neural network outputs the learning diagnosis and personalized tutoring plan. The sample data used when training the neural network model are the learning diagnoses that tutors have made manually for students over the years on the basis of the existing training system, together with the personalized tutoring plans they gave.
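The patent does not disclose the network architecture, so the following is only a minimal sketch under stated assumptions: the three-level feedback scores are encoded as a fixed-length feature vector, and the diagnosis is treated as a classification over a hypothetical set of tutoring-plan categories.

```python
import torch
import torch.nn as nn

class DiagnosisNet(nn.Module):
    """Toy feed-forward model: feedback scores in, tutoring-plan category out.
    The 9-value feature layout and the number of plan categories are assumptions."""
    def __init__(self, n_features: int = 9, n_plans: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, n_plans),       # logits over tutoring-plan categories
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = DiagnosisNet()
# Example input: work results, skills and qualities scored at task, stage and
# project level (3 x 3 = 9 values), normalised to [0, 1].
feedback = torch.tensor([[0.8, 0.6, 0.7, 0.5, 0.9, 0.6, 0.7, 0.8, 0.5]])
plan_id = model(feedback).argmax(dim=1)   # index of the suggested tutoring plan
print(int(plan_id))
```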
By generating a learning diagnosis and personalized tutoring plan for the student's training project, the training system can give the student more help, while reducing the workload of the training tutors and improving the student's training efficiency.
In one embodiment, acquiring an attachment plane in the real scene includes: acquiring a pre-made mark lying in a two-dimensional plane; and placing the mark in the real scene, and acquiring, as the attachment plane, the plane on which the mark lies in the real scene.
Displaying the drawn graphic or 3D model on the screen as if it were attached to a real object essentially means finding an attachment plane in the real scene, then mapping this plane of the three-dimensional scene onto our two-dimensional screen, and then drawing on this plane the graphic or 3D model that the student wants to present.
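For the two-dimensional case, the mapping between the marked plane and the screen can be expressed as a homography. The sketch below is only a non-authoritative illustration using OpenCV; it assumes the four corner points of the mark have already been located in the camera frame, and it warps a drawing made on the screen onto that plane region so that it appears attached to the plane.

```python
import cv2
import numpy as np

def overlay_on_plane(frame: np.ndarray, drawing: np.ndarray,
                     plane_corners: np.ndarray) -> np.ndarray:
    """Warp a student's 2D drawing onto the attachment plane seen in `frame`.

    frame         : camera image (H x W x 3)
    drawing       : image of what was drawn on the screen (h x w x 3)
    plane_corners : 4x2 float32 array with the mark's corners in `frame`,
                    ordered top-left, top-right, bottom-right, bottom-left
    """
    h, w = drawing.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])    # drawing corners
    H = cv2.getPerspectiveTransform(src, plane_corners)   # plane homography
    warped = cv2.warpPerspective(drawing, H, (frame.shape[1], frame.shape[0]))

    # Paste the warped drawing over the plane region only.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, plane_corners.astype(np.int32), 255)
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```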
In this embodiment, the mark may be a template card on which a shape of predetermined specifications is drawn. After the mark is placed at a position in the real world, a plane in the real scene is thereby determined, and this plane can be used as the attachment plane.
In one embodiment, the method for establishing the plane mapping relationship includes: photographing, with a camera, the real scene where the mark is located; in the photographed real scene, recognizing the mark, determining the position of the mark, and performing pose estimation on the mark; establishing a template coordinate system with the center point of the mark as the origin; and transforming the template coordinate system to establish a coordinate-system mapping relationship between a pre-acquired screen coordinate system on the screen and the template coordinate system, the coordinate-system mapping relationship being used as the plane mapping relationship.
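The patent does not name a particular marker library. As one possible realization of "recognize the mark and estimate its pose", the sketch below uses the classic cv2.aruco interface from opencv-contrib-python (newer OpenCV releases expose the same functionality through cv2.aruco.ArucoDetector); the camera intrinsics, distortion coefficients and physical marker size are assumed example values that would normally come from calibration.

```python
import cv2
import numpy as np

# Assumed calibration data and marker edge length in metres.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE = 0.05

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_mark_pose(frame: np.ndarray):
    """Find the template card in `frame` and estimate its pose.
    Returns (rvec, tvec) of the first detected mark, or None if no mark is found.
    rvec/tvec express the template coordinate system (origin at the mark centre)
    in the camera coordinate system."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None or len(ids) == 0:
        return None
    # 3D corners of the mark in its own (template) coordinate system, z = 0 plane.
    half = MARKER_SIZE / 2.0
    obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```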
In one embodiment, transforming the template coordinate system includes: rotating or translating the template coordinate system to obtain the camera coordinate system onto which the template coordinate system is mapped within the camera; and transforming the camera coordinate system into a screen coordinate system according to a pre-acquired display relationship between the image or video captured by the camera and the screen.
The camera is used to recognize the mark, perform pose estimation on it and determine its position; the coordinate system whose origin is the center of the mark is called the template coordinate system. What needs to be done next is in fact to obtain a transformation that establishes a mapping relationship between the template coordinate system and the screen coordinate system, so that a graphic drawn on the screen according to this transformation appears to be attached to the mark. The transformation from the template coordinate system to the real screen coordinate system requires first rotating and translating into the camera coordinate system, and then mapping from the camera coordinate system onto the screen coordinate system.
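A minimal numerical sketch of this transformation chain is given below, assuming a pinhole camera model: a point in the template coordinate system is first taken into the camera coordinate system by the rotation R and translation t recovered from pose estimation, and then projected to pixels by the intrinsic matrix K. The display relationship between the captured image and the screen is assumed here to be the identity; scaling from image pixels to screen pixels would be one extra 2D step.

```python
import cv2
import numpy as np

def template_point_to_screen(p_template, rvec, tvec, camera_matrix):
    """Map a 3D point given in the template (mark) coordinate system to 2D
    screen pixels: template -> camera (R, t) -> screen (K, perspective division)."""
    rvec = np.asarray(rvec, dtype=float).reshape(3, 1)
    tvec = np.asarray(tvec, dtype=float).reshape(3)
    R, _ = cv2.Rodrigues(rvec)                     # rotation vector -> 3x3 matrix
    p_cam = R @ np.asarray(p_template, dtype=float) + tvec   # camera coordinates
    uvw = camera_matrix @ p_cam                    # homogeneous image coordinates
    return np.array([uvw[0] / uvw[2], uvw[1] / uvw[2]])      # pixel coordinates

# Assumed intrinsics; real values come from camera calibration.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
# The mark centre placed half a metre straight in front of the camera projects
# onto the principal point of the image: prints [320. 240.]
print(template_point_to_screen([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.5], K))
```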
In one embodiment, transforming the template coordinate system also includes: transforming the camera coordinate system into an ideal screen coordinate system according to a pre-acquired ideal display relationship between the image or video captured by the camera and the screen; and transforming the ideal screen coordinate system into an actual screen coordinate system according to a pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
Ideally, no error would be introduced when what the camera captures is displayed on the screen; this display relationship is the ideal display relationship. In reality, however, the display is affected by the hardware: after the image captured by the camera is shown on the screen, some errors arise, for example in the position and resolution of objects in the scene. This display relationship is the actual display relationship. Since the student's training operations will inevitably be affected by the hardware, the ideal screen coordinate system needs to be converted into the actual screen coordinate system to improve the student's training experience and training accuracy.
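One common source of such error is lens distortion. The sketch below is again only an illustration under assumed calibration values: it contrasts the ideal (distortion-free) projection with the actual one by letting OpenCV apply hypothetical radial and tangential distortion coefficients, and the difference between the two point sets is the per-point correction from the ideal to the actual screen coordinate system.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.07, 0.001, 0.0005, 0.0])   # assumed k1, k2, p1, p2, k3

# Four corners of the mark in the template coordinate system (z = 0 plane, metres).
obj_pts = np.array([[-0.025, 0.025, 0], [0.025, 0.025, 0],
                    [0.025, -0.025, 0], [-0.025, -0.025, 0]], dtype=np.float32)
rvec = np.zeros((3, 1))                # mark facing the camera
tvec = np.array([[0.0], [0.0], [0.5]])

ideal, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)    # ideal screen coords
actual, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)   # actual screen coords
correction = actual - ideal            # per-point error to compensate on the screen
print(correction.reshape(-1, 2))
```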
Referring to FIG. 2, an implementation system for an Internet of Things training system based on augmented reality technology provided by an embodiment of the present application includes: an attachment plane acquisition module 1, a plane mapping relationship establishment module 2 and a mapping module 3. The attachment plane acquisition module 1 is used to acquire an attachment plane in a real scene; the plane mapping relationship establishment module 2 is used to establish, using augmented reality technology, a plane mapping relationship between the attachment plane and a screen, so that the scene within the attachment plane is mapped onto the screen, or the scene on the screen is mapped onto the attachment plane; the mapping module 3 is used to respond to a student's training operation on the screen, and to map the training operation onto the attachment plane according to the mapping relationship.
In one embodiment, the implementation system for an Internet of Things training system based on augmented reality technology further includes: a data uploading module 4, a feedback result receiving module 5 and an analysis module 6. The data uploading module 4 is used to upload the student's training operations on the screen to a cloud computing platform; the feedback result receiving module 5 is used to receive feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project; the analysis module 6 is used to identify and analyze the student's learning behavior according to the feedback results, and to generate a learning diagnosis and a personalized tutoring plan.
In one embodiment, the attachment plane acquisition module 1 includes: a mark acquisition unit and a plane acquisition unit. The mark acquisition unit is used to acquire a pre-made mark lying in a two-dimensional plane; the plane acquisition unit is used to place the mark in the real scene, and to acquire, as the attachment plane, the plane on which the mark lies in the real scene.
In one embodiment, the plane mapping relationship establishment module 2 includes: a photographing unit, a mark recognition unit, a template coordinate system establishment unit and a coordinate system transformation unit. The photographing unit is used to photograph, with a camera, the real scene where the mark is located; the mark recognition unit is used to recognize the mark in the photographed real scene, determine the position of the mark, and perform pose estimation on the mark; the template coordinate system establishment unit is used to establish a template coordinate system with the center point of the mark as the origin; the coordinate system transformation unit is used to transform the template coordinate system to establish a coordinate-system mapping relationship between a pre-acquired screen coordinate system on the screen and the template coordinate system, the coordinate-system mapping relationship being used as the plane mapping relationship.
In one embodiment, the coordinate system transformation unit includes: a camera coordinate system calculation subunit and a screen coordinate system transformation subunit. The camera coordinate system calculation subunit is used to rotate or translate the template coordinate system to obtain the camera coordinate system onto which the template coordinate system is mapped within the camera; the screen coordinate system transformation subunit is used to transform the camera coordinate system into a screen coordinate system according to a pre-acquired display relationship between the image or video captured by the camera and the screen.
In one embodiment, the coordinate system transformation unit further includes: an ideal screen coordinate system calculation unit and an actual screen coordinate system calculation unit. The ideal screen coordinate system calculation unit is used to transform the camera coordinate system into an ideal screen coordinate system according to a pre-acquired ideal display relationship between the image or video captured by the camera and the screen; the actual screen coordinate system calculation unit is used to transform the ideal screen coordinate system into an actual screen coordinate system according to a pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
By using augmented reality technology, a student's training operations can be displayed in the real scene in real time during practical training, so that the student can perceive more intuitively the actual effect brought about by a training operation; when a training operation is wrong, the student can also notice it more quickly. This not only deepens the interaction between the training system and the student, but also allows operation errors to be observed by the student in time and corrected more quickly, thereby enhancing the assisting effect of the training system on students.
An embodiment of the present application provides an electronic device. Referring to FIG. 3, the electronic device includes: a memory 601, a processor 602, and a computer program stored in the memory 601 and executable on the processor 602. When the processor 602 executes the computer program, the implementation method for an Internet of Things training system based on augmented reality technology described above is carried out.
Further, the electronic device also includes: at least one input device 603 and at least one output device 604.
The memory 601, the processor 602, the input device 603 and the output device 604 are connected through a bus 605.
The input device 603 may specifically be a camera, a touch panel, a physical button, a mouse or the like. The output device 604 may specifically be a display screen.
The memory 601 may be a high-speed random access memory (RAM), or a non-volatile memory such as a disk memory. The memory 601 is used to store a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be provided in the electronic device of each of the foregoing embodiments; the computer-readable storage medium may be the memory 601 described above. A computer program is stored on the computer-readable storage medium, and when the program is executed by the processor 602, the implementation method for an Internet of Things training system based on augmented reality technology described in the foregoing embodiments is carried out.
Further, the computer-readable storage medium may also be a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, an optical disc or any other medium that can store program code.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; for instance, the division into modules is only a division by logical function, and there may be other ways of division in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or modules, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware, or in the form of software functional modules.
If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc or any other medium that can store program code.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not detailed in one embodiment, reference may be made to the relevant description of other embodiments.
The above is a description of the implementation method and system for an Internet of Things training system based on augmented reality technology provided by the present invention. Those skilled in the art, following the ideas of the embodiments of the present invention, will make changes in the specific implementation and application scope; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. An implementation method for an Internet of Things training system based on augmented reality technology, characterized in that
    it comprises:
    acquiring an attachment plane in a real scene;
    using augmented reality technology to establish a plane mapping relationship between the attachment plane and a screen, so that the scene within the attachment plane is mapped onto the screen, or the scene on the screen is mapped onto the attachment plane;
    responding to a student's training operation on the screen, and mapping the training operation onto the attachment plane according to the mapping relationship.
  2. The implementation method for an Internet of Things training system based on augmented reality technology according to claim 1, characterized in that
    the method further comprises:
    uploading the student's training operations on the screen to a cloud computing platform;
    receiving feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project;
    identifying and analyzing the student's learning behavior according to the feedback results, and generating a learning diagnosis and a personalized tutoring plan.
  3. The implementation method for an Internet of Things training system based on augmented reality technology according to claim 1, characterized in that
    the acquiring an attachment plane in the real scene comprises:
    acquiring a pre-made mark lying in a two-dimensional plane;
    placing the mark in the real scene, and acquiring, as the attachment plane, the plane on which the mark lies in the real scene.
  4. The implementation method for an Internet of Things training system based on augmented reality technology according to claim 3, characterized in that
    the method for establishing the plane mapping relationship comprises:
    photographing, with a camera, the real scene where the mark is located;
    in the photographed real scene, recognizing the mark, determining the position of the mark, and performing pose estimation on the mark;
    establishing a template coordinate system with the center point of the mark as the origin;
    transforming the template coordinate system to establish a coordinate-system mapping relationship between a pre-acquired screen coordinate system on the screen and the template coordinate system, the coordinate-system mapping relationship being used as the plane mapping relationship.
  5. The implementation method for an Internet of Things training system based on augmented reality technology according to claim 4, characterized in that
    the transforming the template coordinate system comprises:
    rotating or translating the template coordinate system to obtain the camera coordinate system onto which the template coordinate system is mapped within the camera;
    transforming the camera coordinate system into a screen coordinate system according to a pre-acquired display relationship between the image or video captured by the camera and the screen.
  6. The implementation method for an Internet of Things training system based on augmented reality technology according to claim 5, characterized in that
    the transforming the template coordinate system further comprises:
    transforming the camera coordinate system into an ideal screen coordinate system according to a pre-acquired ideal display relationship between the image or video captured by the camera and the screen;
    transforming the ideal screen coordinate system into an actual screen coordinate system according to a pre-acquired error between the actual display relationship between the image or video captured by the camera and the screen and the ideal display relationship.
  7. An implementation system for an Internet of Things training system based on augmented reality technology, characterized in that
    it comprises:
    an attachment plane acquisition module, configured to acquire an attachment plane in a real scene;
    a plane mapping relationship establishment module, configured to use augmented reality technology to establish a plane mapping relationship between the attachment plane and a screen, so that the scene within the attachment plane is mapped onto the screen, or the scene on the screen is mapped onto the attachment plane;
    a mapping module, configured to respond to a student's training operation on the screen, and to map the training operation onto the attachment plane according to the mapping relationship.
  8. The implementation system for an Internet of Things training system based on augmented reality technology according to claim 7, characterized in that
    it further comprises:
    a data uploading module, configured to upload the student's training operations on the screen to a cloud computing platform;
    a feedback result receiving module, configured to receive feedback results in which the cloud computing platform assesses, through big data analysis, the student's training work results, professional skills and professional qualities at the three levels of training task, stage and overall project;
    an analysis module, configured to identify and analyze the student's learning behavior according to the feedback results, and to generate a learning diagnosis and a personalized tutoring plan.
  9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that when the processor executes the computer program, the method according to any one of claims 1 to 6 is implemented.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the method according to any one of claims 1 to 6 is implemented.
PCT/CN2021/078399 2021-01-29 2021-03-01 基于增强现实技术的物联网实训系统的实现方法及系统 WO2022160406A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110125618.0A CN112911266A (zh) 2021-01-29 2021-01-29 基于增强现实技术的物联网实训系统的实现方法及系统
CN202110125618.0 2021-01-29

Publications (1)

Publication Number Publication Date
WO2022160406A1 true WO2022160406A1 (zh) 2022-08-04

Family

ID=76120943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078399 WO2022160406A1 (zh) 2021-01-29 2021-03-01 基于增强现实技术的物联网实训系统的实现方法及系统

Country Status (2)

Country Link
CN (1) CN112911266A (zh)
WO (1) WO2022160406A1 (zh)


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107393404A (zh) * 2017-08-11 2017-11-24 上海健康医学院 一种部件虚拟组装实训教学系统
CN209417958U (zh) * 2017-11-17 2019-09-20 国家电网公司 一种基于增强现实技术的技能培训装置
CN108833454B (zh) * 2018-03-23 2020-11-13 深圳技术大学 教学物联网实训系统及方法
US20190340821A1 (en) * 2018-05-04 2019-11-07 Microsoft Technology Licensing, Llc Multi-surface object re-mapping in three-dimensional use modes
CN110162164A (zh) * 2018-09-10 2019-08-23 腾讯数码(天津)有限公司 一种基于增强现实的学习互动方法、装置及存储介质
CN109731356A (zh) * 2018-12-13 2019-05-10 苏州双龙文化传媒有限公司 舞台效果塑造方法及舞台效果呈现系统
CN113853569A (zh) * 2019-05-22 2021-12-28 麦克赛尔株式会社 头戴式显示器
CN110632821A (zh) * 2019-09-23 2019-12-31 吉林工程技术师范学院 一种基于平面和建模为一体的环境设计创作台

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201927211U (zh) * 2010-10-29 2011-08-10 山东星科智能科技有限公司 一种炼钢生产技能训练与考核模拟仿真系统
US9007422B1 (en) * 2014-09-03 2015-04-14 Center Of Human-Centered Interaction For Coexistence Method and system for mutual interaction using space based augmentation
CN107622524A (zh) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 用于移动终端的显示方法和显示装置
CN109685905A (zh) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 基于增强现实的小区规划方法和系统
CN108198044A (zh) * 2018-01-30 2018-06-22 北京京东金融科技控股有限公司 商品信息的展示方法、装置、介质及电子设备
CN112037314A (zh) * 2020-08-31 2020-12-04 北京市商汤科技开发有限公司 图像显示方法、装置、显示设备及计算机可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664825A (zh) * 2023-06-26 2023-08-29 北京智源人工智能研究院 面向大场景点云物体检测的自监督对比学习方法及系统
CN117130488A (zh) * 2023-10-24 2023-11-28 江西格如灵科技股份有限公司 基于vr场景的训练考核方法、装置、计算机设备及介质

Also Published As

Publication number Publication date
CN112911266A (zh) 2021-06-04

Similar Documents

Publication Publication Date Title
WO2020228644A1 (zh) 基于ar场景的手势交互方法及装置、存储介质、通信终端
CN109032348B (zh) 基于增强现实的智能制造方法与设备
US20200184726A1 (en) Implementing three-dimensional augmented reality in smart glasses based on two-dimensional data
WO2018119889A1 (zh) 三维场景定位方法和装置
CN106846497B (zh) 应用于终端的呈现三维地图的方法和装置
CN110866977B (zh) 增强现实处理方法及装置、系统、存储介质和电子设备
CN109144252B (zh) 对象确定方法、装置、设备和存储介质
WO2022160406A1 (zh) 基于增强现实技术的物联网实训系统的实现方法及系统
CN109035415B (zh) 虚拟模型的处理方法、装置、设备和计算机可读存储介质
CN107256082B (zh) 一种基于网络一体化和双目视觉技术的投掷物弹道轨迹测算系统
Huang et al. An approach for augmented learning of finite element analysis
WO2020034981A1 (zh) 编码信息的生成方法和识别方法
KR20200136723A (ko) 가상 도시 모델을 이용하여 객체 인식을 위한 학습 데이터 생성 방법 및 장치
WO2018119676A1 (zh) 一种显示数据处理方法及装置
Lan et al. Development of a virtual reality teleconference system using distributed depth sensors
CN111754622B (zh) 脸部三维图像生成方法及相关设备
US20200118333A1 (en) Automated costume augmentation using shape estimation
CN113379932B (zh) 人体三维模型的生成方法和装置
CN112732075B (zh) 一种面向教学实验的虚实融合机器教师教学方法及系统
Zhao et al. Rapid offline detection and 3D annotation of assembly elements in the augmented assembly
Kohtala et al. Leveraging synthetic data from cad models for training object detection models–a vr industry application case
JP7375149B2 (ja) 測位方法、測位装置、ビジュアルマップの生成方法およびその装置
EP4086853A2 (en) Method and apparatus for generating object model, electronic device and storage medium
CN112652056B (zh) 一种3d信息展示方法及装置
CN112634439B (zh) 一种3d信息展示方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922005

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922005

Country of ref document: EP

Kind code of ref document: A1