WO2021015348A1 - Camera tracking method for providing mixed rendering content using virtual reality and augmented reality, and system using same - Google Patents

Camera tracking method for providing mixed rendering content using virtual reality and augmented reality, and system using same

Info

Publication number
WO2021015348A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
mobile device
player
camera
character
Prior art date
Application number
PCT/KR2019/009286
Other languages
English (en)
Korean (ko)
Inventor
박민지
Original Assignee
티팟스튜디오 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 티팟스튜디오 주식회사
Publication of WO2021015348A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • the present invention relates to a camera tracking method for providing mixed rendering content using Virtual Reality (VR) and Augmented Reality (AR), and a system and apparatus therefor. More specifically, the present invention relates to a method, apparatus, and system for generating and providing VR and AR mixed rendering content based on information related to motion of a VR device player user and camera tracking information of a mobile device user.
  • VR players wearing a Head Mounted Display (HMD) enjoy an immersive game, but observers can only watch the player's motion or passively stare at a screen rendered from the player's perspective, making it difficult for them to feel the immersion that is characteristic of a virtual reality environment.
  • An observer watching a VR player play a VR game can follow the player's first-person view on a screen, but the view changes so quickly that it often causes motion sickness.
  • Alternatively, a chroma key technique that composites a background around the VR player can be used, but this requires expensive facilities, equipment, and specialized third-party software.
  • Creators who produce content for Internet broadcasting services such as YouTube and Twitch, rather than for specialized broadcasting stations, can rarely afford such facilities and equipment. Unlike the existing chroma key technique, what is needed is a new technology that infers the motion of a VR player and renders it in a virtual environment built on a mobile device that users already own.
  • An object of the present invention is to provide a method and apparatus for generating and providing VR and AR mixed rendering content based on motion-related information of a VR device player user and camera tracking information of a mobile device user.
  • Another object of the present invention is to provide mixed rendering content in which a virtual reality character and environment are reconstructed on a mobile device by transmitting only the essential motion information of the VR player, rather than transmitting a VR image.
  • The present invention uses a computer for generating human body motion data, connected to the VR player's HMD and to the observer's mobile device, so that the computationally heavy motion data generation is handled by that computer, and aims to provide seamless VR video content by performing only lightweight operations on the mobile device.
  • An object of the present invention is to provide a method and apparatus for easily compositing a VR player into the VR world and streaming the resulting video without the need for specialized computer graphics (CG) equipment.
  • The present invention composites the virtual reality with the VR player so that the player can be observed from the observer's viewpoint rather than the player's own, and aims to provide VR video content that the viewer can watch from any desired viewpoint on a mobile device through augmented reality (AR).
  • A system for providing mixed rendering content using virtual reality (VR) and augmented reality (AR) may comprise: a character body motion generating device configured to receive motion-related information of a player user using a virtual reality device and to estimate the body posture of the player user based on that information; and a mobile device configured to generate image content by rendering a three-dimensional environment based on the character's motion-related information received from the character body motion generating device and the observer's camera tracking information.
  • camera tracking may be performed after finding the initial position of the camera of the mobile device from the image of the player captured by the mobile device.
  • The character body motion generating device estimates the player user's body posture using machine learning, and the mobile device can estimate the initial position of the camera by synchronizing the estimated body posture with the character's motion-related information, as in the sketch below.
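
One way to realize this initialization is sketched below, under stated assumptions: 2D joint pixel coordinates are assumed to have already been extracted from the camera image by a pose-estimation model, the matching 3D joint positions come from the character motion data, and OpenCV's solvePnP is used for the alignment. The function name and array shapes are illustrative, not part of the disclosure.

```python
import cv2
import numpy as np

def estimate_initial_camera_pose(joints_3d, joints_2d, camera_matrix, dist_coeffs=None):
    """Estimate the mobile camera's initial pose by aligning the character's 3D
    joints (from the motion data) with the player's 2D joints detected in the image."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(joints_3d, dtype=np.float64),  # Nx3 joint positions, world frame
        np.asarray(joints_2d, dtype=np.float64),  # Nx2 detected pixel coordinates
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; at least 4 well-spread correspondences are needed")
    R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 rotation matrix
    camera_position = (-R.T @ tvec).ravel()  # camera position in world coordinates
    return camera_position, R
```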
  • the motion-related information of the player user includes motion capture information obtained from a camera photographing the motion of the player user's body and the IMU (Inertial Measurement Unit) sensor data of the virtual reality device worn by the player user.
  • The character body motion generating device may learn the correlation between the IMU sensor data and the motion capture information using an Artificial Neural Network (ANN).
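
As one illustration of this correlation learning, the sketch below trains a small multilayer perceptron (PyTorch) to map IMU readings to motion-capture joint positions. The sensor count, layer sizes, and joint count are assumptions made for the example, not values from the disclosure.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 3 IMUs (HMD + two controllers) x 9 values each -> 17 joints x 3 coords.
model = nn.Sequential(
    nn.Linear(27, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 51),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # positional error is the quantity the disclosure targets (< 0.3 m)

def train_step(imu_batch, mocap_batch):
    """One supervised step: IMU sensor data in, motion-capture joint positions out."""
    optimizer.zero_grad()
    pred = model(imu_batch)            # (B, 51) predicted joint coordinates
    loss = loss_fn(pred, mocap_batch)  # compare against ground-truth mocap
    loss.backward()
    optimizer.step()
    return loss.item()
```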
  • the mobile device can reduce errors by periodically adjusting the position and direction of the camera.
  • the mobile device may adjust the position and direction of the camera based on at least one marker disposed in a space in which a player user using the virtual reality device is located.
  • According to the present invention, it is possible to provide a method and apparatus for generating and providing VR and AR mixed rendering content based on information related to the motion of a VR device player user and the camera tracking information of a mobile device user.
  • Instead of transmitting a VR image, only the essential motion information of the VR player is transmitted, and mixed rendering content in which a virtual reality character and a virtual reality environment are reconstructed on a mobile device can be provided.
  • the motion data generation part which is a task with a relatively high computational amount, is processed by the computer and can be processed quickly. It is possible to provide seamless VR video content by performing only possible operations on the mobile device.
  • The virtual reality and the VR player are composited so that the player can be observed from the viewer's point of view rather than the player's own, and VR video content that can be viewed from a desired viewpoint can be provided.
  • FIG. 1 is an exemplary view illustrating a configuration of a system for providing mixed rendering content using virtual reality and augmented reality according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of a system for providing mixed rendering content according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a method performed in a VR device according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a method performed by the character body motion generating apparatus according to an embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method performed in a mobile device according to an embodiment of the present invention.
  • FIGS. 6A and 6B are exemplary diagrams showing a screen of mixed rendering content generated according to an embodiment of the present invention.
  • Each component shown in the embodiments of the present invention is shown independently to represent different characteristic functions; this does not mean that each component consists of separate hardware or a single software unit. That is, the components are listed separately for convenience of explanation; at least two components may be combined into one, or one component may be divided into a plurality of components that together perform the function. Integrated and separated embodiments of these components are also included in the scope of the present invention as long as they do not depart from its essence.
  • FIG. 1 is an exemplary view illustrating a configuration of a system for providing mixed rendering content using virtual reality and augmented reality according to an embodiment of the present invention.
  • The VR player 10 experiencing a VR game or content may wear a VR device 100, such as a head mounted display (HMD) worn on the head, and motion controllers 101 held in both hands; through these wearable devices, information related to the position and movement of the player's head and of both hands is received, from which information on the player's body motion and posture can be obtained.
  • The mobile device 200 of the observer 20, which can film the VR player 10 while observing the player's motion, finds the camera position from the camera image and sensor data and can render a 3D environment with AR technology through camera tracking.
  • A separate computer for generating human body motion data may be connected to the HMD 100 and the motion controller 101 of the VR player 10; the computationally heavy motion data generation is processed by this PC, which has higher performance than the mobile device, and the mobile device 200 performs only operations it can process quickly, so that seamless VR video content can be provided.
  • On the mobile device 200 of the observer 20, a character 270 created as an avatar of the VR player 10 may be composited into the virtual reality scene, rendered, and displayed using the full-body motion data of the VR player 10 and the camera tracking information generated as described above, and the 3D virtual environment may change in real time according to the position and viewing angle of the observer's mobile device 200.
  • The immersion experienced by the VR player can thus be conveyed easily to an observer or game video creator by inferring the player user's motion and posture and rendering them in a virtual environment built on a mobile device.
  • The mixed rendering content providing system may be composed of the VR device 100 worn by the VR player, a character body motion generating device 300 that receives motion data from the VR device 100 and generates the character's body motion, and a mobile device 200 that receives full-body motion data from the character body motion generating device 300, rebuilds the virtual environment through camera tracking, and generates VR video content including the VR player character through real-time rendering.
  • The VR device 100 worn by a VR player who directly experiences a VR game or content may include a head mounted display (HMD) worn on the head and motion controllers held in both hands, but is not limited to these.
  • The character body motion generating device 300 may reconstruct the player's whole-body motion; the reconstructed motion-related information is transmitted to the mobile device 200, and the mobile device 200 may generate the final mixed rendering content through camera tracking technology using, for example, an augmented reality mobile application.
  • the VR device 100 may include a communication unit 110, a processing unit 120, a display unit 130, a sensor unit 140, and the like.
  • The communication unit 110 may be a module or component configured to exchange data over a network with the character body motion generating device 300; data related to the motion and posture of the player wearing the VR device 100 may be transmitted through the communication unit 110 to an external device, for example the character body motion generating device 300.
  • The network may be wired or wireless; when it is a wireless communication network, it may include cellular communication or short-range communication.
  • Cellular communication may include at least one of Long-Term Evolution (LTE), LTE Advanced (LTE-A), 5th Generation (5G), Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), Universal Mobile Telecommunications System (UMTS), Wireless Broadband (WiBro), or Global System for Mobile Communications (GSM).
  • Short-range communication may include at least one of Wireless Fidelity (Wi-Fi), Bluetooth, Zigbee, or Near Field Communication (NFC).
  • However, the communication method is not limited thereto and includes wireless communication technologies to be developed later.
  • the processing unit 120 is responsible for processing various data related to the operation of the VR device 100 and displayed information.
  • the processing unit 120 may include a central processing unit (CPU), an application processor (AP), and the like.
  • a memory capable of storing commands or data related to at least one other component may be included therein, or necessary information may be accessed by communicating with a memory unit in the device or an external memory if necessary.
  • the display unit 130 displays (outputs) information processed by the VR device 100.
  • the display unit 130 may display execution screen information of an application program driven by the VR device 100, or UI (User Interface) and GUI (Graphic User Interface) information according to such execution screen information.
  • the display unit 130 may be configured as a three-dimensional display unit that displays a three-dimensional image.
  • Three-dimensional display methods such as the stereoscopic method (with glasses), the autostereoscopic method (glasses-free), and the projection method (holographic) can be applied to the three-dimensional display unit, and various other display technologies may be used without limitation.
  • the sensor unit 140 is configured to measure motion-related data of the VR device 100, and may be, for example, an IMU (Inertial Measurement Unit) sensor built into an HMD or a motion controller.
  • the sensor unit 140 may include a camera capable of photographing the player's human body motion to collect high-resolution human body motion data for the player user separately from the HMD 100 and the motion controller 101.
  • It may include a high-resolution motion capture system such as the Vicon Mocap System.
  • The user's motion and posture can be estimated by a separate computer such as the character body motion generating device 300, and such data can be used for machine learning.
  • the character body motion generating apparatus 300 may be configured to receive motion-related information of the player user from the VR device 100 and estimate the player's body posture based on the received motion-related information.
  • The character body motion generating apparatus 300 may be, for example, any one of a desktop computer, a laptop computer, a tablet computer, a notebook, a workstation, or a smartphone, but is not limited to these.
  • The communication unit 310 of the character body motion generating device 300 is configured to receive high-resolution human body motion data, IMU sensor data, and the like from the VR device 100, and, like the communication unit 110 of the VR device 100, may be a module or component supporting various wired or wireless networks.
  • the character human body motion generation apparatus 300 may include a processing unit 320 configured to estimate a player's human body posture through machine learning, etc., and to perform a customization task of matching the estimated user's human body posture to the character's human body structure.
  • The processing unit 320 may include, for example, a central processing unit (CPU) or an application processor (AP), and may include a memory capable of storing instructions or data related to at least one other component, or may communicate with an internal memory unit or an external memory as needed to access the necessary information.
  • the processing unit 320 may include a human body posture estimating unit 321 for estimating a body posture of a player user wearing the VR device 100.
  • The human body posture estimation unit 321 may run a correlation learning algorithm between the IMU sensor information received from the VR device 100 and the high-resolution human body motion data; for example, machine learning using an artificial neural network (ANN) may be carried out.
  • The computational scope of the machine learning may be limited in consideration of the constraints of the VR game environment played through the VR device 100.
  • The human body posture estimating unit 321 may estimate a natural posture in consideration of temporal continuity, and training may be performed while varying the number of sensor inputs.
  • The learning algorithm's error target may be set to an average error of less than 0.3 m for the major body parts.
  • The processing unit 320 may also include a character posture customization unit 322 configured to adapt the player's human body posture, as estimated by the human body posture estimation unit 321, to the body structure of the character.
  • The character posture customization unit 322 retargets the estimated user posture onto the body structure of the character being rendered, thereby minimizing physical inconsistency between the virtual environment in the VR game and the character's motion.
  • Real-time avatar motion retargeting through calculation time optimization may be implemented.
  • the retargeting processing time target may be set to 33 ms or less per frame.
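
The disclosure does not specify the retargeting algorithm; the sketch below shows one common baseline under that caveat: local joint rotations are copied onto the target skeleton and the root translation is rescaled by the ratio of skeleton heights. Per-frame work here is a handful of array copies and one multiply, comfortably inside a 33 ms budget; production retargeting would additionally enforce joint limits and foot contact.

```python
import numpy as np

def retarget_pose(src_rotations, src_root_pos, src_height, dst_height):
    """Naive retargeting: copy local joint rotations onto the target skeleton and
    scale the root translation by the height ratio, so the character covers
    proportional distances despite different limb lengths."""
    scale = dst_height / src_height
    dst_rotations = {joint: rot.copy() for joint, rot in src_rotations.items()}
    dst_root_pos = np.asarray(src_root_pos, dtype=np.float64) * scale
    return dst_rotations, dst_root_pos
```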
  • The mobile device 200 is configured to generate mixed rendering content of a 3D environment from the motion data received from the character body motion generating device 300 and from camera tracking; it may be, for example, a smartphone, tablet computer, desktop computer, laptop computer, notebook, workstation, PDA (Personal Digital Assistant), portable computer, wireless phone, mobile phone, e-book reader, portable multimedia player (PMP), portable game console, digital camera, television, wearable device, or AI (artificial intelligence) speaker, but is not limited to these; a portable device is preferred.
  • The mobile device 200 may include a communication unit 210, a camera unit 220, a sensor unit 230, a processing unit 240, a display unit 250, a 3D rendering unit 260, and the like, and may include any component that performs the functions of a mobile terminal such as a smartphone or tablet computer.
  • The communication unit 210 may be a module or component configured to exchange data over a network with the character body motion generating device 300; the network may be wired or wireless, and when it is a wireless communication network it may include cellular communication or short-range communication.
  • Cellular communication may include at least one of Long-Term Evolution (LTE), LTE Advanced (LTE-A), 5th Generation (5G), Code Division Multiple Access (CDMA), Wideband CDMA (WCDMA), Universal Mobile Telecommunications System (UMTS), Wireless Broadband (WiBro), or Global System for Mobile Communications (GSM).
  • Short-range communication may include at least one of Wireless Fidelity (Wi-Fi), Bluetooth, Zigbee, or Near Field Communication (NFC).
  • However, the communication method is not limited thereto and includes wireless communication technologies to be developed later.
  • Motion-related data generated by the character body motion generating device 300 may be transmitted to a plurality of mobile devices 200 in real time, and a mobile device 200 may be configured to connect to and disconnect from the character body motion generating device 300 at any time. For example, a server using a WebSocket may be implemented in the character body motion generating device 300 to broadcast motion data to a plurality of mobile devices 200 in real time, as sketched below. In addition, for privacy protection, a security function may be provided so that only the internal network can access the server.
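
As one hedged illustration of such a broadcast server, the sketch below uses Python's websockets package (version 10+ API assumed); the address, port, and message format are placeholders, not details from the disclosure.

```python
import asyncio
import websockets  # assumes the `websockets` package, version 10+ API

CLIENTS = set()  # mobile devices currently connected; they may join or leave at any time

async def handler(ws):
    """Register a mobile device and keep the connection open until it drops."""
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.discard(ws)

async def broadcast_motion(motion_queue: asyncio.Queue):
    """Fan each serialized motion frame out to every connected mobile device."""
    while True:
        frame = await motion_queue.get()
        websockets.broadcast(CLIENTS, frame)  # non-blocking send to all clients

async def main(motion_queue: asyncio.Queue):
    # Binding to a LAN-only address is one simple way to restrict access to the internal network.
    async with websockets.serve(handler, "192.168.0.10", 8765):
        await broadcast_motion(motion_queue)
```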
  • the camera unit 220 is configured to photograph a motion of a player user wearing the VR device 100 and to generate image data for performing camera tracking to implement an augmented reality (AR).
  • The sensor unit 230 may include an acceleration sensor and a gyroscope; the camera position can be tracked from the motion data of the mobile device 200 obtained from the sensor unit 230, implementing camera tracking for AR.
  • The processing unit 240 is configured to generate image content by rendering a 3D environment based on the character's motion-related information received from the character body motion generating apparatus 300 and the observer's camera tracking information. The processing unit 240 may include, for example, a central processing unit (CPU) or an application processor (AP), and may include a memory that stores instructions or data related to at least one other component, or may communicate with an internal memory unit or an external memory as needed to access the necessary information.
  • If the character body motion generating device 300 were to transmit the screen image itself, the image would very likely be interrupted or delayed in a typical network environment; therefore, the system may be configured to transmit to the mobile device 200 only data of a small size sufficient to render the screen.
  • The mobile device 200 may receive in advance at least one of the rendering environment, the character model, and the character texture from the character body motion generating device, and may receive in real time at least one of the position, direction, and motion-related information of the character as it changes; a minimal per-frame message of this kind is sketched below.
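
The field layout below (a frame id, root position, root orientation, and 17 joint rotations as quaternions) is purely illustrative; the point is that a per-frame pose record is a few hundred bytes, far smaller than a video frame.

```python
import struct

# Heavy assets (environment, character model, textures) travel once, up front.
# Per frame: uint32 frame id + 3 floats root position + 4 floats root quaternion
# + 17 joints x 4 quaternion floats = 304 bytes total.
FRAME_FORMAT = "<I3f4f68f"

def pack_frame(frame_id, root_pos, root_quat, joint_quats):
    """Serialize one character pose frame into a compact binary record."""
    flat = [q for quat in joint_quats for q in quat]  # flatten 17 quaternions
    return struct.pack(FRAME_FORMAT, frame_id, *root_pos, *root_quat, *flat)

def unpack_frame(data):
    """Inverse of pack_frame: recover the pose fields from the binary record."""
    values = struct.unpack(FRAME_FORMAT, data)
    frame_id, root_pos, root_quat = values[0], values[1:4], values[4:8]
    joints = [values[8 + 4 * i: 12 + 4 * i] for i in range(17)]
    return frame_id, root_pos, root_quat, joints
```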
  • The motion data generation, a task with a relatively high computational load, is processed by the PC of the character body motion generating device 300, which has higher performance than the mobile device 200, and the mobile device handles only operations it can process quickly, so that seamless VR video content can be provided.
  • The processing unit 240 also implements AR camera tracking, which can be largely divided into automatic camera adjustment and camera tracking for AR.
  • Automatic camera adjustment finds the initial position of the camera from an image captured by the mobile device 200; after that, the camera can be tracked using AR technology.
  • the position of the camera may be estimated by tracking a user's posture from an image through machine learning and synchronizing this with motion data transmitted from the character body motion generating apparatus 300.
  • A machine learning method that improves accuracy without requiring excessive computation time can be used; for example, methods such as random forests and deep learning can estimate the location of the camera with very high accuracy.
  • A SLAM (Simultaneous Localization and Mapping) algorithm can estimate the movement of the mobile device 200 from the image data obtained through the camera and from the device's sensor data.
  • Using the Euler method or the like, the current position and direction of the mobile device 200 may be obtained from its estimated movement and its previous location, as in the integration sketch below.
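
A bare-bones sketch of this dead reckoning with explicit Euler integration follows; filtering, bias correction, and rotation renormalization are omitted, and the resulting drift is exactly what the periodic adjustment described next corrects.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # assumed world "up" convention

def euler_step(position, velocity, rotation, accel, gyro, dt):
    """Advance the device pose by one IMU sample with explicit Euler integration.

    rotation: 3x3 device-to-world matrix; accel (m/s^2) and gyro (rad/s) are
    measured in the device frame; dt is the sample interval in seconds."""
    # Small-angle update of orientation from the angular rate (no renormalization here).
    wx, wy, wz = gyro * dt
    skew = np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])
    rotation = rotation @ (np.eye(3) + skew)
    # Rotate the measured acceleration into the world frame, remove gravity, integrate twice.
    world_accel = rotation @ accel - GRAVITY
    velocity = velocity + world_accel * dt
    position = position + velocity * dt
    return position, velocity, rotation
```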
  • the mobile device 200 may reduce a position error by periodically adjusting the position and direction of the camera.
  • A plurality of markers may be placed in the space where the VR game is experienced, and the mobile device 200 can adjust the position and direction of its camera based on at least one marker placed in the space where the player user of the VR device 100 is located. Since the VR experience space is not very large, the camera's position and direction can be determined easily and adjusted periodically by recognizing the markers placed in each area of the space; one possible realization is sketched below.
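
One way to realize this marker-based correction, assuming ArUco fiducials and the classic cv2.aruco module from opencv-contrib-python (the dictionary choice and data layout are assumptions, not part of the disclosure):

```python
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def correct_camera_pose(gray_frame, camera_matrix, dist_coeffs, marker_world_corners):
    """Re-estimate the camera pose from any visible marker whose world-space
    corner positions are known from the room layout."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray_frame, ARUCO_DICT)
    if ids is None:
        return None  # no marker visible; keep the dead-reckoned pose for now
    marker_id = int(ids[0][0])
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_world_corners[marker_id], dtype=np.float64),  # 4x3 room corners
        corners[0].reshape(4, 2).astype(np.float64),                    # 4x2 detected pixels
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return (-R.T @ tvec).ravel(), R  # corrected camera position and orientation
```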
  • the display unit 250 is a component for visually providing VR content or mixed rendering content to a user.
  • The display unit 250 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a micro LED display, a micro electro mechanical systems (MEMS) display, or an electronic paper display, but is not limited thereto.
  • the 3D rendering unit 260 is configured to render a 3D environment from the motion data generated by the character body motion generating apparatus 300 and the position of the camera tracked by the mobile device 200.
  • The mixed rendering content screen is rendered in real time by adjusting the level of detail in consideration of the performance of the mobile device 200, and the network speed should be fast enough that the user of the mobile device 200 does not experience discomfort; a simple feedback rule for the level of detail is sketched below.
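
The level-of-detail adjustment could be as simple as the rule below, dropping to a cheaper model whenever the frame time exceeds the real-time budget; the thresholds are illustrative only, not values from the disclosure.

```python
FRAME_BUDGET_MS = 33.0  # ~30 fps target, matching the retargeting budget above

def choose_lod(current_lod, frame_time_ms, max_lod=3):
    """Lower detail when frames run over budget; cautiously raise it again when
    there is ample headroom. LOD 0 = full detail, max_lod = cheapest."""
    if frame_time_ms > FRAME_BUDGET_MS and current_lod < max_lod:
        return current_lod + 1
    if frame_time_ms < 0.6 * FRAME_BUDGET_MS and current_lod > 0:
        return current_lod - 1
    return current_lod
```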
  • the mobile device 200 may be configured to record the motion of the player through the camera unit 220.
  • Since the virtual reality is rendered on the mobile device 200, the corresponding image can be recorded and used as content; at the same time, the real world in which the VR player plays the game can also be recorded, so that two video contents can be created and utilized.
  • Performing two recordings at the same time may burden the performance of the mobile device 200; therefore, the system can be configured to first create the content recording the player's motions and afterwards create the video content by rendering the 3D environment.
  • FIG. 3 is a flowchart illustrating a method performed in a VR device according to an embodiment of the present invention.
  • the VR game screen may be provided to the player user through the display screen of the HMD of the VR device 100.
  • The VR player plays a game or experiences VR content while viewing the display screen of the VR device 100; accordingly, the user's posture changes and motions are performed.
  • motion-related data of the player may be collected through the IMU sensor of the VR device 100.
  • An IMU sensor built into the HMD of the VR device 100 or into the motion controller may be used; through this, the player user's motion-related data can be measured and collected.
  • the VR device 100 may include a motion capture camera capable of photographing the player's human body motion to collect high-resolution human body motion data for the player user separately from the HMD and the IMU sensor of the motion controller.
  • Human body motion may be photographed, and high-resolution motion capture information may be obtained (S330).
  • the motion-related information of the player collected as described above may be transmitted to the character body motion generating apparatus 300 (S340).
  • FIG. 4 is a flowchart illustrating a method performed by the character body motion generating apparatus according to an embodiment of the present invention.
  • the character body motion generating apparatus 300 may receive motion-related information of a player using the virtual reality device (S410).
  • the motion-related information may include, for example, high-resolution human body motion data and IMU sensor data.
  • the character body motion generating apparatus 300 may estimate the player's body posture based on motion-related information.
  • The body posture estimation unit 321 may run a correlation learning algorithm between the IMU sensor information received from the VR device 100 and the high-resolution human body motion data; for example, machine learning may be performed using artificial neural networks (ANNs).
  • the posture of the 3D virtual character may be customized to suit the human body structure of the character (S430).
  • the motion-related information of the character generated through this process may be transmitted to the mobile device 200 (S440).
  • FIG. 5 is a flowchart illustrating a method performed in a mobile device according to an embodiment of the present invention.
  • the mobile device 200 may receive character motion-related information from the character body motion generating device 300 (S510).
  • The mobile device 200 receives in advance from the character body motion generating device at least one of the rendering environment, the character model, and the character texture, and receives in real time at least one of the position, direction, and motion-related information of the character as it changes, thereby minimizing the amount of real-time transmission.
  • The initial position of the camera is automatically found and estimated from the image captured by the mobile device 200, and the camera position is corrected. The user's posture is tracked from the image through machine learning, and the position of the camera may be estimated by synchronizing it with the motion data transmitted from the character body motion generating device 300.
  • camera tracking is performed through motion estimation of the mobile device (S530).
  • The movement of the mobile device 200 can be estimated using the image data obtained through the camera and the sensor data values of the mobile device 200, and from this the current position and direction of the mobile device 200 can be obtained.
  • Mixed rendering content of the 3D environment may be generated and displayed from the motion data transmitted from the character body motion generating apparatus 300 and the location information of the camera tracked by the mobile device 200 (S540); this can be recorded, used as VR content, and shared.
  • FIGS. 6A and 6B are exemplary diagrams showing a screen of mixed rendering content generated according to an embodiment of the present invention.
  • An observer, that is, a mixed rendering content creator and provider, may use his or her mobile device 200 to photograph a VR player wearing the VR device 100; at this time, mixed rendering content combining the virtual reality and the VR player character 270 may be displayed on the mobile device 200 so that the VR player can be observed from the observer's viewpoint rather than the VR player's viewpoint.
  • The VR player holds motion controllers 101 in addition to wearing the VR device 100 consisting of an HMD, and a character 270 generated by estimating the player's posture and motion may be generated and displayed in real time on the screen of the mobile device 200.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a system for providing mixed rendering content using virtual reality (VR) and augmented reality (AR), which can provide a mixed rendering content providing system comprising: a character body motion generating device configured to receive motion-related information of a player user using a virtual reality device, and to estimate a body posture of the player user on the basis of the motion-related information; and a mobile device configured to generate video content by rendering a three-dimensional environment on the basis of camera tracking information of an observer and motion-related information of the character received from the character body motion generating device.
PCT/KR2019/009286 2019-07-25 2019-07-25 Camera tracking method for providing mixed rendering content using virtual reality and augmented reality, and system using same WO2021015348A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190090180A 2019-07-25 2019-07-25 Camera tracking method for providing mixed rendering content using virtual reality and augmented reality, and system using same
KR10-2019-0090180 2019-07-25

Publications (1)

Publication Number Publication Date
WO2021015348A1 true WO2021015348A1 (fr) 2021-01-28

Family

ID=74193613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/009286 WO2021015348A1 (fr) Camera tracking method for providing mixed rendering content using virtual reality and augmented reality, and system using same 2019-07-25 2019-07-25

Country Status (2)

Country Link
KR (1) KR20210012442A (fr)
WO (1) WO2021015348A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8419545B2 (en) * 2007-11-28 2013-04-16 Ailive, Inc. Method and system for controlling movements of objects in a videogame
US9898872B2 (en) * 2013-01-11 2018-02-20 Disney Enterprises, Inc. Mobile tele-immersive gameplay
US20180253152A1 (en) * 2017-01-06 2018-09-06 Adtile Technologies Inc. Gesture-controlled augmented reality experience using a mobile communications device
KR101989969B1 (ko) * 2018-10-11 2019-06-19 대한민국 증강현실 기반 건축 유적의 실감형 콘텐츠 체험시스템

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101824863B1 (ko) 2017-08-14 2018-02-02 (주)인테크 디자인 Virtual reality multi-storytelling video theater multimedia streaming device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PARK SOON-GI: "Augmented reality device technology trend", CONVERGENCE RESEARCH REVIEW, CONVERGENCE RESEARCH POLICY CENTER, KR, vol. 5, no. 4, 2 April 2019 (2019-04-02), KR, XP055784656, ISSN: 2465-8456 *

Also Published As

Publication number Publication date
KR20210012442A (ko) 2021-02-03

Similar Documents

Publication Publication Date Title
CN107911644B (zh) Method and device for video call based on virtual facial expressions
US11463628B1 (en) Systems and methods for synchronizing image sensors
WO2018174535A1 (fr) System and method for a depth map
CN102939139B (zh) Calibration of portable devices in a shared virtual space
US20150070274A1 (en) Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
US20160225188A1 (en) Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment
CN110140099 (zh) System and method for tracking a controller
CN107924584 (zh) Augmented reality
WO2017043795A1 (fr) Virtual reality image transmission method, image reproduction method, and program using same
TW201835723 (zh) Graphics processing method and apparatus, virtual reality system, and computer storage medium
WO2020262923A1 (fr) System and method for generating a mixed reality experience
WO2017052008A1 (fr) Virtual reality system comprising virtual reality glasses adapted to change the viewing position
CN110463195 (zh) Method and device for rendering timed text and graphics in virtual reality video
WO2018182190A1 (fr) Use of earcons for ROI identification in 360-degree video
CN106817508B (zh) Synchronization object determination method, device, and system
US20220277506A1 (en) Motion-based online interactive platform
WO2017099500A1 (fr) Animation creation method and device
EP3759576A1 (fr) High-speed offset binocular eye tracking systems
EP3496044A1 (fr) Image display device and image display system
US20230217004A1 (en) 360-degree virtual-reality system for dynamic events
CN108325208A (zh) Augmented reality implementation method applied to the field of games
TWI755636B (zh) Virtual reality image playback method and program using the same
WO2021015348A1 (fr) Camera tracking method for providing mixed rendering content using virtual reality and augmented reality, and system using same
WO2021015347A1 (fr) Method for providing mixed rendering content using virtual reality and augmented reality, and system using same
US11405531B2 (en) Data processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19938883

Country of ref document: EP

Kind code of ref document: A1