WO2023163376A1 - Système expérimental à distance en temps réel sans contact de collaboration virtuelle - Google Patents


Info

Publication number
WO2023163376A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
face
virtual reality
image
virtual
Prior art date
Application number
PCT/KR2023/000620
Other languages
English (en)
Korean (ko)
Inventor
천우영
장준호
김승직
이종하
Original Assignee
계명대학교 산학협력단 (Keimyung University Industry Academic Cooperation Foundation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 계명대학교 산학협력단
Publication of WO2023163376A1

Classifications

    • G06Q10/103: Workflow collaboration or project management
    • G01P1/07: Indicating devices, e.g. for remote indication
    • G02B27/01: Head-up displays
    • G02B27/017: Head-up displays, head mounted
    • G06Q10/10: Office automation; Time management
    • G06T13/40: 3D animation of characters, e.g. humans, animals or virtual beings
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/003: Navigation within 3D models or images
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Definitions

  • the present invention relates to a virtual collaboration non-face-to-face real-time remote experiment system.
  • 'Metaverse' is a compound of 'meta', meaning virtual and transcendent, and 'universe', meaning world; it denotes a three-dimensional virtual world linked to reality.
  • because metaverse services are well suited to online learning, they do not, unlike offline education, require the time and money needed to travel to a specific place, and they can support varied content matched to the user's learning level or difficulty.
  • users can indirectly experience various educational environments, free of physical limitations, by using the metaverse environment.
  • the technical task of the present invention is to provide a virtual collaboration non-face-to-face real-time remote experiment system that, by utilizing a metaverse platform, can support the creation of a core research support center, the build-out of research equipment, and the activation of joint research.
  • an embodiment of the present invention is a virtual collaboration non-face-to-face real-time remote experiment system including: a VR (Virtual Reality) headset that is carried by a user, acquires user image information by photographing the user, and displays an avatar generated using the user image information together with virtual reality content for non-face-to-face real-time remote experiments; and a metaverse platform server that generates the avatar and the virtual reality content, provides them to the VR headset, and controls the implementation and/or operation of the virtual reality content based on user input information received from the VR headset.
  • the metaverse platform server may include: a server communication module for communicating with the VR headset; an image analysis module for analyzing objects in the user image information received from the VR headset; a command analysis module that analyzes a button signal and/or a user gesture in a user command and detects the corresponding execution command; a content providing module that analyzes the user image information to create an avatar made of a 3D object and generates the virtual reality content, including the virtual reality space and/or virtual reality objects, to enable virtual collaboration and non-face-to-face real-time remote experiments; and a server control module that receives analysis results from the image analysis module and/or the command analysis module and controls the content providing module so that the avatar behaves in response to the user's command.
  • the image analysis module analyzes objects using an image recognition model set up as an image-analysis artificial intelligence program for shape recognition.
  • the image recognition model includes automatic generation of test images that generate 1D and 2D barcode data for object recognition, data augmentation that automatically applies padding when image resolution changes, and an algorithm that corrects the position of bounding boxes when the resolution of training images is changed.
  • the content providing module samples scanning data and/or image data of a test object selected by the user, based on vision technology using the artificial intelligence model, extracts and synthesizes a plurality of images taken at set angles, generates the synthesized result as a 3D object, and can then produce the virtual reality content by correcting it with a 3D editing tool.
  • the VR headset includes: a sensing module for detecting the user's gaze, based on the user's movement or eye movement, relative to the origin of the virtual reality content; a display module for displaying the corresponding viewing area while the virtual reality content runs, according to the detected movement and/or line of sight; a command input module for receiving and processing user commands; a headset communication module communicating with the metaverse platform server; and a headset control module that processes the virtual reality content received through the headset communication module for display on the display module.
  • the sensing module may include at least one of an acceleration sensor, an angular velocity sensor, and a gyro sensor to detect the movement of the VR headset, or an eye tracking sensor to detect the direction or movement of the eyes.
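As a rough illustration of how such a sensing module might combine an angular-velocity (gyro) reading with the accelerometer's gravity vector to track head pitch, the sketch below uses a complementary filter. The function name, the 0.98 blend factor, and the axis conventions are illustrative assumptions, not details from the patent.

```python
import math

def fuse_pitch(prev_pitch_deg, gyro_rate_dps, accel_xyz, dt, alpha=0.98):
    """Complementary filter: blend the integrated gyro rate with the
    gravity-based pitch estimate from the accelerometer."""
    ax, ay, az = accel_xyz
    # Pitch implied by the gravity direction measured by the accelerometer.
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Pitch implied by integrating the gyro's angular rate over dt seconds.
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt
    # Trust the (low-drift, noisy) accelerometer a little, the gyro a lot.
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```

A real headset would apply the same idea per axis and typically in quaternion form; this scalar version only shows the blending step.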
  • the command input module may receive a user's button press or recognize a user's gesture so as to receive a selection signal and/or an operation signal command from the user.
  • another embodiment of the present invention is a virtual collaboration non-face-to-face real-time remote experiment system including: a guest terminal that is carried by a guest user, acquires guest user image information by photographing the guest user, and displays an avatar generated using the guest user image information together with virtual reality content for non-face-to-face real-time remote experiments; a host terminal that is used (carried) by a host user, acquires host user image information by photographing the host user, displays a host avatar generated using the host user image information together with virtual reality content for virtual collaborative non-face-to-face real-time remote experiments, and provides instruction information issued by the host to the guest terminal and/or the metaverse platform server; and a metaverse platform server that generates the avatars and the virtual reality content, provides them to the guest terminal and/or the host terminal, and controls the implementation and/or operation of the virtual reality content based on input information of the guest and/or host received from the guest terminal and/or the host terminal.
  • the metaverse platform server may include: a server communication module communicating with the guest terminal and/or the host terminal; an image analysis module for analyzing objects in the image information of the guest user and/or host user received from the guest terminal and/or the host terminal; a command analysis module that analyzes a button signal and/or a guest user gesture in a guest user command and detects the corresponding execution command; a content providing module that analyzes the image information of the guest user and/or the host user to create avatars made of 3D objects and generates the virtual reality content, including the virtual reality space and/or virtual reality objects, to enable virtual collaboration and non-face-to-face real-time remote experiments; and a server control module that receives analysis results from the image analysis module and/or the command analysis module and controls the content providing module so that the avatar behaves in response to the guest user's command.
  • the image analysis module analyzes objects using an image recognition model set up as an image-analysis artificial intelligence program for shape recognition.
  • the image recognition model includes automatic generation of test images that generate 1D and 2D barcode data for object recognition, data augmentation that automatically applies padding when image resolution changes, and an algorithm that corrects the position of bounding boxes when the resolution of training images is changed.
  • the content providing module samples scanning data and/or image data of a test object selected by the guest user, based on vision technology using the artificial intelligence model, extracts and synthesizes a plurality of images taken at set angles, generates the synthesized result as a 3D object, and can then produce the virtual reality content by correcting it with a 3D editing tool.
  • the host terminal can generate the instruction information using an experiment guide that includes at least one of test material resources, experiment equipment, experiment conditions, experiment methods, and instructions.
  • the host terminal may generate the instruction information by further including at least one of host speech, chatting, pointer, and drawing.
  • a service enabling virtual collaboration and real-time remote experimentation can thus be provided, and user satisfaction can be increased because experiment results can be checked up close and in real time.
  • FIG. 1 is a diagram showing the configuration of a virtual collaboration non-face-to-face real-time remote experiment system according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing the detailed configuration of a VR headset according to an embodiment of the present invention by way of example.
  • FIG. 3 is a diagram showing the detailed configuration of the headset control module of FIG. 2 by way of example.
  • Figure 4 is a diagram showing the detailed configuration of the metaverse platform server according to an embodiment of the present invention by way of example.
  • FIG. 5 is a diagram showing an avatar created in the metaverse platform server according to an embodiment of the present invention by way of example.
  • FIG. 6 is a diagram showing an example of experimenting using virtual reality content generated by the metaverse platform server according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing the configuration of a virtual collaboration non-face-to-face real-time remote experiment system according to another embodiment of the present invention.
  • FIG. 1 is a diagram showing the configuration of a virtual collaboration non-face-to-face real-time remote experiment system according to an embodiment of the present invention.
  • a virtual collaborative non-face-to-face real-time remote experiment system may include a VR (Virtual Reality) headset 100 and a metaverse platform server 300.
  • the VR headset 100 is carried by the user, and user image information can be acquired by photographing the user.
  • the VR headset 100 may display an avatar generated using the user image information and virtual reality content for remote experiments.
  • the VR headset 100 may include: a sensing module 110 that detects the user's gaze based on the user's rotation (movement) or eye movement relative to the origin of the virtual reality content; a display module 120 that displays the viewing area corresponding to the detected rotation (movement) and/or line of sight while the virtual reality content runs; a command input module 130 that receives and inputs user commands; a headset communication module 140 that communicates with the metaverse platform server 300; and a headset control module 150 that processes the virtual reality content received through the headset communication module 140 for display on the display module 120.
  • the sensing module 110 may include at least one of an acceleration sensor, an angular velocity sensor, and a gyro sensor to detect movement (rotation) of the VR headset 100.
  • the detection module 110 may include an eye tracking sensor for detecting a direction (line of sight) or movement of the eyes.
  • the display module 120 may be worn on the user's body (face) and display the viewing area according to the driving of the virtual reality content.
  • the display module 120 is formed in the form of goggles that can be worn on the user's face, and may include a display displaying virtual reality content in front of the user's eyes.
  • the command input module 130 may include various manipulation buttons and components capable of recognizing a user's gesture so as to receive commands such as selection signals and manipulation signals from the user.
  • the command input module 130 may include an input device that receives a user's button press or recognizes a gesture such as the user's fingers spreading and/or closing.
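A minimal sketch of how a spread-versus-closed finger gesture might be distinguished, assuming tracked palm and fingertip coordinates are already available from the input device; the threshold, normalization, and all names here are illustrative assumptions, not from the patent.

```python
def classify_hand_gesture(palm, fingertips, palm_width=1.0, open_ratio=1.5):
    """Classify a hand as 'open' (fingers spread) or 'closed' (fist) from
    the mean fingertip-to-palm distance, normalized by the palm width."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    mean_d = sum(dist(palm, tip) for tip in fingertips) / len(fingertips)
    # Far fingertips relative to palm size -> spread hand; near -> fist.
    return "open" if mean_d / palm_width >= open_ratio else "closed"
```

A production gesture recognizer would track per-finger joints over time; this only shows the selection-signal idea of mapping a hand pose to a discrete command.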
  • the headset communication module 140 is installed on one side of the display module 120 and can communicate with the metaverse platform server 300 through a communication network.
  • the headset communication module 140 can access the metaverse platform server 300 through a wired or wireless wide-area or local-area network, or a local access method, according to the communication protocol of the metaverse platform server 300.
  • the headset control module 150 may signal-process the virtual reality content received through the headset communication module 140 for display on the display module 120.
  • the headset control module 150 may also perform a set function (operation) in response to a command input by the user.
  • the headset control module 150 may include an image processing unit 152, an input command processing unit 154, and a main control unit 156, as shown in FIG. 3.
  • the image processing unit 152 may signal-process the virtual reality content received from the metaverse platform server 300 so that it is displayed on the display module 120, and provide it to the display module 120.
  • the image processing unit 152 may perform various image processing processes on the image signal included in the received virtual reality content.
  • the image processing unit 152 may output an image signal that has undergone such processing through the display of the display module 120.
  • the image processor 152 can display an image based on the corresponding image signal on the display. For example, the image processing unit 152 may extract image data and/or additional data from the received virtual reality content, adjust it to a preset resolution, and output the image data and/or additional data through the display.
  • the type of image processing performed by the image processing unit 152 is not limited, and may include, for example: decoding corresponding to the image format of the image data; de-interlacing to convert interlaced image data to a progressive format; scaling to adjust image data to a preset resolution; noise reduction to improve image quality; detail enhancement; and frame refresh rate conversion.
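Of the processes listed above, scaling to a preset resolution is the simplest to illustrate. The sketch below shows nearest-neighbour scaling over a row-major pixel list; a real image processing unit would operate on hardware-decoded frames, so this is only a conceptual sketch and the function name is an assumption.

```python
def scale_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbour scaling of a row-major pixel list from
    (src_w x src_h) to a preset (dst_w x dst_h) resolution."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h  # source row for this destination row
        for x in range(dst_w):
            sx = x * src_w // dst_w  # source column for this destination column
            out.append(pixels[sy * src_w + sx])
    return out
```

Upscaling a 2x2 image to 4x4, for example, repeats each source pixel in a 2x2 block.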
  • the input command processing unit 154 may signal-process the user command received from the command input module 130 and transmit it to the metaverse platform server 300 through the headset communication module 140.
  • the input command processing unit 154 may compare the user's gesture received from the command input module 130 with a set gesture to determine whether a command has been issued, and process the corresponding operation (function) set according to the determination result.
  • the main control unit 156 may control the input command processing unit 154 to process a user's command input through the command input module 130.
  • while the main control unit 156 processes the virtual reality content into a video signal through the image processing unit 152 and displays it on the display module 120, it may control the image processing unit 152 to execute a user command input through the command input module 130.
  • the headset control module 150 may be implemented as a system-on-chip (SoC) integrating these various functions, or as an image processing board (not shown) on which individual components, each capable of independently performing one of these processes, are mounted on a printed circuit board, and may be embedded in the VR headset 100.
  • the metaverse platform server 300 creates the avatar and the virtual reality content, provides them to the VR headset 100, and can control the implementation and/or operation of the virtual reality content based on user input information received from the VR headset. At this time, the metaverse platform server 300 can create the virtual reality space and/or virtual reality objects included in the virtual reality content based on previously held spatial data and/or object data, according to the user's settings and/or requests. In addition, the metaverse platform server 300 can receive and analyze a user's command from the VR headset 100, and provide a virtual reality object, and/or its operation, corresponding to the user's command to the VR headset 100. To this end, the metaverse platform server 300 may include a server communication module 310, an image analysis module 320, a command analysis module 330, a content providing module 340, and a server control module 350.
  • the server communication module 310 may communicate with the headset communication module 140 through a communication network.
  • the server communication module 310 may receive a user's command from the headset communication module 140 and transmit the virtual reality content as a video signal.
  • the image analysis module 320 analyzes an object from the user's image information received from the VR headset 100 and provides the analysis result to the content providing module 340 and/or the server control module 350.
  • the image analysis module 320 may segment objects from an image using a preset image recognition model, and classify each segmented object as a living thing or an inanimate object.
  • the image recognition model can be set up as an image-analysis artificial intelligence program for shape recognition, and can support items such as: automatic generation of test images that generate 1D and 2D barcode data for object recognition; data augmentation that automatically applies padding when image resolution changes; an algorithm introduced to correct the positional change of bounding boxes that occurs when the resolution of training images is changed; data-set separation for image restoration, which automatically classifies training data for upscaling images captured for product and shape recognition; and simulated recognition processing (object detection, OCR, barcode or QR code, object tracking, etc.).
  • the image recognition model can achieve an accuracy of about 94% or more on a set of about 10,000 image datasets to which this data augmentation is applied.
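Two of the augmentation steps above, automatic padding on resolution change and bounding-box position correction, can be sketched as follows. The function names and the (x1, y1, x2, y2) box format are assumptions for illustration, not details disclosed in the patent.

```python
def pad_to_square(w, h):
    """Return padding (left, top, right, bottom) that makes a w x h image
    square, mirroring an automatic-padding augmentation step."""
    side = max(w, h)
    pad_w, pad_h = side - w, side - h
    # Split the padding evenly, giving the extra pixel to the far edge.
    return (pad_w // 2, pad_h // 2, pad_w - pad_w // 2, pad_h - pad_h // 2)

def rescale_bbox(bbox, src_size, dst_size):
    """Correct a bounding box (x1, y1, x2, y2) after the training image is
    resized from src_size to dst_size (both as (width, height))."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x1, y1, x2, y2 = bbox
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)
```

Applying `rescale_bbox` to every annotation after a resize is what keeps labels aligned with the resized training images.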
  • the command analysis module 330 may analyze a button signal and/or a user's gesture from a user's command, and provide analysis results to the content providing module 340 and/or the server control module 350. For example, the command analysis module 330 may detect (confirm) the corresponding execution command by analyzing the button signal and/or the user's gesture.
  • the content providing module 340 may generate an avatar based on the user's image information; specifically, it may analyze the user's image information to generate an avatar made of a 3D object. In addition, the content providing module 340 may set up an artificial intelligence model based on preset industrial domain data (e.g., architecture/construction data) and/or experiment data, and generate from that model the virtual reality content, including the virtual reality space and/or virtual reality objects, using the spatial data and/or the object data.
  • the content providing module 340 may generate the virtual reality space and/or the virtual reality object so that virtual collaboration is possible and real-time remote experiments are possible in a non-face-to-face manner.
  • the content providing module 340 may allow the user's avatar to enter the intelligent construction system core support center and perform a shear friction test according to the strength of stirrup reinforcing bars.
  • the user can use the avatar to virtually perform an experiment in which a specimen is applied for a shear friction test, and as a result of the experiment, the crack properties of the specimen can be checked in close proximity and in real time.
  • the content providing module 340 samples the scanning data and/or image data of the test object selected by the user, based on vision technology (a live virtual reality toolkit) using the artificial intelligence model, extracts and synthesizes a plurality of images taken at set angles, generates the synthesized result as a 3D object, and can then produce the virtual reality content by correcting it with a 3D editing tool. Through this, the content providing module 340 can easily create virtual reality content optimized for various experimental environments from only scanning data and/or image data of various test objects, significantly reducing the working time for creating virtual reality content and achieving high work efficiency.
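The "images taken at set angles" step can be illustrated by computing evenly spaced camera viewpoints around the test object before the captured views are synthesized into a 3D object. The function name and the default elevation are illustrative assumptions.

```python
def sample_view_angles(n_views, elevation_deg=20.0):
    """Return (azimuth, elevation) pairs, in degrees, for n_views cameras
    evenly spaced in azimuth around a test object at a fixed elevation."""
    return [(i * 360.0 / n_views, elevation_deg) for i in range(n_views)]
```

For example, four views yield azimuths 0, 90, 180, and 270 degrees; a real capture pipeline would typically sample several elevation rings as well.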
  • the server control module 350 may receive analysis results from the image analysis module 320 and/or the command analysis module 330, and control the content providing module 340 so that the avatar behaves in response to the user's command.
  • through this, the metaverse platform server 300 can provide a service enabling virtual collaboration and non-face-to-face real-time remote experimentation, and user satisfaction can be increased because the user can check experiment results up close and in real time.
  • FIG. 7 is a diagram showing the configuration of a virtual collaboration non-face-to-face real-time remote experiment system according to another embodiment of the present invention.
  • the virtual collaboration non-face-to-face real-time remote experiment system may include a guest terminal 100, a host terminal 200, and a metaverse platform server 300.
  • the guest terminal 100 is carried by a guest user and may have substantially the same configuration and technical characteristics as the VR headset 100 described above. Accordingly, for convenience of explanation, descriptions of configurations and technical features overlapping those of the VR headset are omitted.
  • the host terminal 200 is used (carried) by the host user to conduct a virtual collaborative non-face-to-face real-time remote experiment, and may acquire host user image information by photographing the host user.
  • the host terminal 200 may display a host avatar generated using the host user image information and virtual reality content for a virtual collaborative non-face-to-face real-time remote experiment.
  • the host terminal 200 displays virtual reality content including a virtual reality space and/or virtual reality objects for conducting the non-face-to-face real-time remote experiment service, and transmits instruction information issued by the host to the guest terminal 100 and/or the metaverse platform server 300.
  • the host terminal 200 can generate the instruction information using an experiment guide that includes at least one of test material resources, research (experiment) equipment, research (experiment) conditions, research (experiment) methods, and instructions.
  • the host terminal 200 may additionally include at least one of speech, chatting, pointer, and drawing of the host to generate the instruction information.
  • the host terminal 200 may perform multi-person interaction with the guest terminal 100 in real time to conduct a virtual collaborative non-face-to-face remote experiment.
  • the metaverse platform server 300 generates avatars and virtual reality content for each of the guest user and the host user, provides them to the guest terminal 100 and the host terminal 200, and may control the implementation and/or operation of the virtual reality content based on at least one of the guest user's and/or host user's input information, commands, and instruction information received from the guest terminal 100 and/or the host terminal 200.
  • the metaverse platform server 300 may create the virtual reality space and/or virtual reality objects included in the virtual reality content based on previously held spatial data and/or object data, according to the settings and/or requests of the guest user and/or host user.
  • the metaverse platform server 300 may receive and analyze commands of the guest user and/or the host user from the guest terminal 100 and/or the host terminal 200, and provide a virtual reality object, and/or its operation, corresponding to those commands to the guest terminal 100 and/or the host terminal 200.
  • as shown in FIG. 4, the metaverse platform server 300 may include a server communication module 310, an image analysis module 320, a command analysis module 330, a content providing module 340, and a server control module 350.
  • the server communication module 310 may communicate with the guest terminal 100 and/or the host terminal 200 through a communication network.
  • the server communication module 310 may receive a guest user's command and/or a host user's instruction information from the guest terminal 100 and/or the host terminal 200, and transmit the virtual reality content as a video signal.
  • the image analysis module 320 analyzes objects in the image information of the guest user and/or host user received from the guest terminal 100 and/or the host terminal 200, and provides the analysis result to the content providing module 340 and/or the server control module 350.
  • the image analysis module 320 may segment objects from an image using a preset image recognition model, and classify each segmented object as a living thing or an inanimate object.
  • the image recognition model can be set up as an image-analysis artificial intelligence program for shape recognition, and can support items such as: automatic generation of test images that generate 1D and 2D barcode data for object recognition; data augmentation that automatically applies padding when image resolution changes; an algorithm introduced to correct the positional change of bounding boxes that occurs when the resolution of training images is changed; data-set separation for image restoration, which automatically classifies training data for upscaling images captured for product and shape recognition; and simulated recognition processing (object detection, OCR, barcode or QR code, object tracking, etc.).
  • the image recognition model can achieve an accuracy of about 94% or more on a set of about 10,000 image datasets to which this data augmentation is applied.
  • the command analysis module 330 may analyze a button signal and/or a guest user's gesture in a guest user's command, and provide the analysis results to the content providing module 340 and/or the server control module 350. For example, the command analysis module 330 may analyze the button signal and/or the user's gesture to determine the corresponding execution command.
  • the content providing module 340 may generate an avatar based on the image information of a guest user and/or a host user; in this case, it may analyze that image information to generate an avatar made of a 3D object. In addition, the content providing module 340 may set up an artificial intelligence model based on preset industrial domain data (e.g., architecture/construction data) and/or experiment data, and generate from that model the virtual reality content, including the virtual reality space and/or virtual reality objects, using the spatial data and/or the object data.
  • the content providing module 340 may generate the virtual reality space and/or the virtual reality object so that virtual collaboration and non-face-to-face real-time remote experiments are possible. For example, referring to FIG. 6, the content providing module 340 may have the guest user's avatar enter the intelligent construction system core support center and perform a shear friction test according to the strength of stirrup reinforcing bars. At this time, the guest user can use the avatar to virtually perform an experiment in which a specimen is subjected to the shear friction test and, as a result of the experiment, check the crack properties of the specimen in close proximity and in real time.
  • the content providing module 340 may generate the virtual reality content by sampling the scanning data and/or image data of the test object selected by the guest user based on vision technology (a live virtual reality toolkit) using the artificial intelligence model, extracting and synthesizing a plurality of images taken at set angles, generating a 3D object from the extracted images, and correcting it with a 3D editing tool.
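The "images taken at set angles" step can be sketched as picking, from a scan's capture list, the frame closest to each evenly spaced target azimuth. This is a simplified assumption about the sampling strategy; the patent does not disclose the actual selection algorithm.

```python
def select_views(captures, n_views):
    """Pick the frame whose azimuth is closest to each of n_views evenly
    spaced target angles over a full 360-degree scan.
    captures: list of (angle_deg, frame_id) pairs from the scanning data."""
    step = 360.0 / n_views
    targets = [i * step for i in range(n_views)]
    chosen = []
    for t in targets:
        # Angular distance wraps around at 360 degrees.
        best = min(captures,
                   key=lambda c: min(abs(c[0] - t), 360 - abs(c[0] - t)))
        chosen.append(best[1])
    return chosen
```

The selected frames would then feed a photogrammetry or 3D-reconstruction stage before correction in a 3D editing tool.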
  • the content providing module 340 can easily create virtual reality content optimized for various experimental environments from only the scanning data and/or image data of various test objects, significantly reducing the working time for creating virtual reality content and achieving very good work efficiency.
  • the server control module 350 may receive the analysis results from the image analysis module 320 and/or the command analysis module 330, and control the content providing module 340 so that the avatar acts in response to the user's command.
  • the metaverse platform server 300 can provide a service that enables virtual collaboration and non-face-to-face real-time remote experiments, and can increase guest user satisfaction by allowing the guest user to check experiment results in close proximity and in real time.
  • the virtual collaboration non-face-to-face real-time remote experiment system of the present invention uses a VR headset to create virtual reality content and provides a metaverse-based virtual collaboration non-face-to-face real-time remote service, enabling virtual collaboration without temporal or spatial constraints as well as non-face-to-face real-time remote experiments. Since it can thereby contribute to the creation of core research support centers, the construction of research equipment, and the vitalization of joint research, it has high industrial applicability.


Abstract

An embodiment of the present invention relates to a virtual collaboration non-face-to-face real-time remote experiment system. One embodiment of the present invention provides a virtual collaboration non-face-to-face real-time remote experiment system comprising: a virtual reality (VR) headset that is worn by a user, captures an image of the user to obtain user image information, and displays an avatar generated using the user image information together with virtual reality content for a non-face-to-face real-time remote experiment; and a metaverse platform server that generates the avatar and the virtual reality content, provides them to the VR headset, and controls the execution and/or operation of the virtual reality content based on user input information received from the VR headset.
PCT/KR2023/000620 2022-02-25 2023-01-13 Virtual collaboration non-face-to-face real-time remote experiment system WO2023163376A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220025386A KR20230127734A (ko) 2022-02-25 2022-02-25 Virtual collaboration non-face-to-face real-time remote experiment system
KR10-2022-0025386 2022-02-25

Publications (1)

Publication Number Publication Date
WO2023163376A1 true WO2023163376A1 (fr) 2023-08-31

Family

ID=87766206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/000620 WO2023163376A1 (fr) 2022-02-25 2023-01-13 Virtual collaboration non-face-to-face real-time remote experiment system

Country Status (2)

Country Link
KR (1) KR20230127734A (fr)
WO (1) WO2023163376A1 (fr)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101360999B1 * 2013-08-09 2014-02-10 코리아디지탈 주식회사 Augmented reality-based real-time data providing method and mobile terminal using the same
KR20170044318A * 2015-10-15 2017-04-25 한국과학기술원 Collaboration method using a head-mounted display
KR101784172B1 * 2016-08-03 2017-10-11 한국해양과학기술원 System and method for producing 3D modeling and virtual experience content of marine life
JP2020013236A * 2018-07-17 2020-01-23 アビームコンサルティング株式会社 Content reproduction control program
KR20210064830A * 2019-11-26 2021-06-03 주식회사 딥파인 Image processing system
KR20210146193A * 2020-05-26 2021-12-03 주식회사 빌리버 Method for providing remote classes using virtual reality
KR102341752B1 * 2021-09-13 2021-12-27 (주)인더스트리미디어 Method and apparatus for assisting lectures using an avatar in a metaverse

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030057507 (ko) 2003-06-18 2003-07-04 김스캇 Method of remote automobile sales consultation using an Internet video chat function (video conferencing function) and a website


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117539367A (zh) * 2023-11-20 2024-02-09 广东海洋大学 Image recognition tracking method based on an interactive intelligent experiment teaching system
CN117539367B (zh) * 2023-11-20 2024-04-12 广东海洋大学 Image recognition tracking method based on an interactive intelligent experiment teaching system

Also Published As

Publication number Publication date
KR20230127734A (ko) 2023-09-01


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23760251

Country of ref document: EP

Kind code of ref document: A1