WO2023017890A1 - Method, apparatus and system for providing a metaverse

Method, apparatus and system for providing a metaverse

Info

Publication number
WO2023017890A1
Authority
WO
WIPO (PCT)
Prior art keywords
metaverse
user
displays
providing
content
Prior art date
Application number
PCT/KR2021/011663
Other languages
English (en)
Korean (ko)
Inventor
문연국
김민준
김지은
채승훈
원광호
Original Assignee
한국전자기술연구원 (Korea Electronics Technology Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자기술연구원 (Korea Electronics Technology Institute)
Publication of WO2023017890A1


Classifications

    • G06Q50/10 Services (information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors)
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T7/20 Analysis of motion
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04R1/08 Mouthpieces; Microphones; Attachments therefor

Definitions

  • The present invention relates to a metaverse providing method, a metaverse providing device, and a metaverse providing system. In detail, it implements a metaverse that can be accessed from a mixed-reality-based experience space, in which a plurality of users in the same experience space, a plurality of users in different experience spaces, or a single user in an experience space can access and share the experience of metaverse contents.
  • Patent Document 1: Korean Patent Publication No. 10-2014-0036555
  • Patent Document 1 discloses a configuration in which, for example, a first user at home and a second user in an outdoor space each share experiences through metaverse content displayed on a single flat-panel display.
  • However, since the metaverse contents experienced by each user, such as walking trails like the Dulle-gil and the Olle-gil, are displayed on a single flat display provided in each real space, there is a limit to how much the realism of the provided metaverse can be enhanced.
  • An object of the present invention is to provide a metaverse providing method, a metaverse providing apparatus, and a metaverse providing system capable of enhancing the realism of the metaverse.
  • Another object of the present invention is to provide a metaverse providing method, a metaverse providing device, and a metaverse providing system that allow a user to perform an active experience by accessing the metaverse without a separate wearable device for virtual-reality access, such as a head-mounted display (HMD).
  • Another object of the present invention is to provide a metaverse providing method, a metaverse providing device, and a metaverse providing system capable of continuously expanding the number of metaverse contents provided in an experience space.
  • Another object of the present invention is to provide a metaverse providing method, a metaverse providing device, and a metaverse providing system in which not only can a single user in an experience space experience metaverse content, but multiple users in the same experience space, or in different experience spaces, can access and share the experience of metaverse content.
  • According to a first feature of an embodiment of the present invention, a metaverse providing method includes: a) detecting the location of one or more users by using a camera in the experience space of a metaverse providing device, the experience space being provided by a plurality of displays; b) displaying metaverse content on one or more of the plurality of displays based on the location of the user and the metaverse content selected by the user; c) detecting the user's action by using the sensor closest to the user's location among a plurality of sensors; and d) displaying a user avatar synchronized with the user's action on the display corresponding to the closest sensor among the plurality of displays.
  • Step b) may include, when a first user among a plurality of users in the experience space has selected first metaverse content and a second user has selected second metaverse content, displaying the first metaverse content on the display closest to the location of the first user among the plurality of displays, and displaying the second metaverse content on one or more of the remaining displays.
  • A metaverse providing method according to a second feature of an embodiment of the present invention includes: a) detecting the user's location by using a camera in each experience space of a plurality of metaverse providing devices, each experience space being provided by a plurality of displays; b) displaying metaverse content on one or more of the plurality of displays based on the user's position in each experience space and the metaverse content selected by the user; c) sensing the user's action by using the sensor closest to the user's location among a plurality of sensors in each experience space; and d) displaying a user avatar synchronized with the user's action on the display corresponding to the closest sensor among the plurality of displays in each experience space.
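
Taken together, steps a) to d) form a simple sense-decide-render loop. The following is a minimal Python sketch of that loop under stated assumptions: the Camera, Sensor, and Display interfaces and all names in it are hypothetical illustrations, not part of the disclosed implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Display:
    position: tuple  # (x, y) center of the display along the experience space

    def show(self, content, avatar=None):
        label = f"{content} + {avatar}" if avatar else content
        print(f"display@{self.position}: {label}")

@dataclass
class Sensor:
    position: tuple  # (x, y) position along the displays

    def detect_action(self, user):
        # Placeholder for gaze / motion / gesture / voice / touch sensing.
        return f"action-of-{user}"

def nearest(items, point):
    # Shared helper for steps c) and d): closest item by Euclidean distance.
    return min(items, key=lambda item: math.dist(item.position, point))

def provide_metaverse(camera, displays, sensors, selections):
    # a) detect the location of each user with the camera
    #    (camera.detect_user_locations() is an assumed interface yielding
    #    (user, (x, y)) pairs)
    for user, location in camera.detect_user_locations():
        # b) show the user's selected content on the display nearest to them
        display = nearest(displays, location)
        # c) detect the user's action with the sensor nearest to them
        action = nearest(sensors, location).detect_action(user)
        # d) render the avatar synchronized with that action on the
        #    display corresponding to the nearest sensor
        display.show(selections[user], avatar=f"avatar<{action}>")
```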
  • According to embodiments of the present invention, active experiences can be performed by accessing the metaverse without a separate wearable device for virtual-reality access, such as a head-mounted display (HMD).
  • The number of metaverse contents provided in the experience space can be continuously expanded.
  • FIG. 1 is a block diagram of a metaverse providing device according to an embodiment of the present invention.
  • FIG. 2 is a schematic perspective view of a metaverse providing device for explaining an embodiment of a metaverse providing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic perspective view of a metaverse providing device for explaining another embodiment of a metaverse providing method according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a metaverse providing method according to an embodiment of the present invention.
  • FIG. 5 is a block diagram of a metaverse providing system according to an embodiment of the present invention.
  • FIG. 6 is a schematic perspective view of a metaverse providing system for explaining an embodiment of a metaverse providing method according to a modified example of an embodiment of the present invention.
  • FIG. 7 is a schematic perspective view of a metaverse providing system for explaining another embodiment of a metaverse providing method according to a modified example of an embodiment of the present invention.
  • FIG. 8 is a flowchart of a metaverse providing method according to a modified example of an embodiment of the present invention.
  • Referring to FIG. 1 (a block diagram of a metaverse providing device according to an embodiment of the present invention), FIG. 2 (a schematic perspective view for explaining an embodiment of the metaverse providing method), and FIG. 3 (a schematic perspective view for explaining another embodiment of the metaverse providing method), the metaverse providing device 100 will be described.
  • Note that the input unit 110, the control unit 150, the storage unit 160, the communication unit 170, the signal collection unit 180, and the driving unit 190 are shown in FIG. 1 but are not shown in FIGS. 2 and 3.
  • FIG. 4 is a flowchart of a metaverse providing method according to an embodiment of the present invention.
  • the metaverse providing device 100 includes a plurality of displays 121 to 123, a camera 130, a plurality of sensors 141 to 149, and a controller 150.
  • the metaverse providing device 100 may further include one or more of an input unit 110, a storage unit 160, a communication unit 170, a signal collection unit 180, and a driving unit 190.
  • the plurality of displays 121 to 123 are arranged to provide an experience space (S) and are components that display metaverse contents.
  • In this specification, “a plurality of displays 121 to 123 provide the experience space S” may mean that the displays 121 to 123 are provided on 75% or more of the total side surfaces of the experience space S. For example, referring to FIGS. 2 and 3, when the experience space S has four side surfaces, it may mean that the displays 121 to 123 are provided on three or more of those side surfaces. Of course, the plurality of displays 121 to 123 may further be provided on one or more of the ceiling surface and the floor surface of the experience space S as well as the side surfaces. In this specification, “a plurality of displays 121 to 123 provide an experience space S” may also be expressed as “a plurality of displays 121 to 123 surround the experience space S”.
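
As a check against this definition, the 75% criterion can be written down directly. The helper below is a hypothetical illustration, not part of the disclosure:

```python
def displays_provide_space(total_side_faces, faces_with_display):
    """True when displays are provided on 75% or more of the side faces
    of the experience space S."""
    return faces_with_display / total_side_faces >= 0.75

# Example from the specification: a four-sided experience space is
# "provided" by the displays when three or more sides carry one.
assert displays_provide_space(4, 3)
assert not displays_provide_space(4, 2)
```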
  • the camera 130 is a component that detects the location of one or more users U1 to U3 in the experience space S.
  • the plurality of sensors 141 to 149 are components that detect actions of the users U1 to U3.
  • the plurality of sensors 141 to 149 may be arranged along the plurality of displays 121 to 123 .
  • A configuration in which “a plurality of sensors 141 to 149 are disposed along a plurality of displays 121 to 123” may include one or more of: i) a configuration in which the sensors 141 to 149 are disposed on top of the displays 121 to 123; ii) a configuration in which the sensors 141 to 149 are disposed in front of the displays 121 to 123 (on the screen or on an extended surface of the screen); and iii) a configuration in which the sensors 141 to 149 are disposed on the portion of the floor surface of the experience space S adjacent to the displays 121 to 123.
  • each of the plurality of sensors 141 to 149 may include one or more of a gaze sensor, a motion sensor, a gesture sensor, a voice sensor, and a touch sensor.
  • the gaze sensor may detect the gaze of the users U1 to U3, and may be implemented as, for example, Tobii Inc.'s gaze sensor.
  • the motion sensors may detect motions of the users U1 to U3, and the gesture sensors may detect gestures of the users U1 to U3.
  • In this specification, a “gesture” is a smaller movement than a “motion”.
  • That is, a “motion” is a relatively large movement, such as moving an “arm”, whereas a “gesture” is a relatively small movement, such as moving a “hand” while the “arm” is substantially stationary.
  • the gesture sensor may be implemented as, for example, a gesture sensor of Leap Motion, Inc., and the motion sensor may be implemented as, for example, NUITRACK SDK of 3DiVi Inc.
  • Alternatively, the motion sensors may be implemented as receivers that receive signals transmitted from RFID tags or IMUs (inertial measurement units) worn by the users U1 to U3.
  • the motion sensor may be implemented as a pressure sensor installed on the floor of the experience space S to detect foot motions of the users U1 to U3.
  • the voice sensor may detect the voice of the users U1 to U3 and may be implemented as a microphone, for example. Also, the voice sensor may be implemented as a speech-to-text (STT) sensor.
  • The input unit 110 is a component that receives the selection signals of the users U1 to U3 for metaverse content and the user avatars A1 to A3, as well as images of the users U1 to U3.
  • the input unit 110 may be implemented by, for example, a touch screen panel and a camera disposed outside the experience space (S).
  • the users U1 to U3 may select metaverse contents and user avatars A1 to A3 through the touch screen panel of the input unit 110 .
  • the users U1 to U3 may input images of the users U1 to U3 through the camera of the input unit 110 .
  • the input unit 110 may be implemented by a portable terminal capable of communicating with the communication unit 170 of the metaverse providing device 100.
  • the communication unit 170 is a component that communicates information about one or more of metaverse contents and user avatars A1 to A3. For example, the communication unit 170 may transmit/receive information about one or more of metaverse contents and user avatars A1 to A3 with the server 200 shown in FIG. 5 .
  • the signal collector 180 may collect the first signal received from the camera 130 and the second signal received from the plurality of sensors 141 to 149 .
  • the signal collection unit 180 may transfer the collected first and second signals to the control unit 150 .
  • the driving unit 190 may activate the sensors 141 to 149 closest to the position of the user U1 to U3 among the plurality of sensors 141 to 149 according to a control signal from the control unit 150 . According to this activation, the sensors 141 to 149 closest to the location of the users U1 to U3 may detect the actions of the users U1 to U3.
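
The behavior of the driving unit 190 described above, activating only the sensor nearest each user, can be sketched as follows; the class and method names are illustrative assumptions, not the disclosed implementation:

```python
import math

class DrivingUnit:
    """Hypothetical sketch of the driving unit 190: under the control
    signal it activates the sensor closest to each user's position and
    deactivates sensors that are no longer needed."""

    def __init__(self, sensor_positions):
        self.sensor_positions = sensor_positions  # sensor id -> (x, y)
        self.active = set()

    def update(self, user_positions):
        wanted = {
            min(self.sensor_positions,
                key=lambda sid: math.dist(self.sensor_positions[sid], pos))
            for pos in user_positions
        }
        for sid in wanted - self.active:
            print(f"activate sensor {sid}")    # e.g. sensor 142 for user U1
        for sid in self.active - wanted:
            print(f"deactivate sensor {sid}")
        self.active = wanted
```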
  • Referring to FIG. 4 together with FIGS. 1 to 3, a metaverse providing method according to an embodiment of the present invention will be described.
  • In step 310, the camera 130 detects the location of one or more users U1 to U3 in the experience space S provided by the plurality of displays 121 to 123.
  • In addition, a step may be performed in which the input unit 110 receives the selection signals of the users U1 to U3 for the metaverse content and the user avatars A1 to A3, as well as images of the users U1 to U3.
  • In step 320, the control unit 150 controls the plurality of displays 121 to 123 so that the metaverse content is displayed on one or more of the displays 121 to 123, based on the locations of the users U1 to U3 and the metaverse content selected by the users U1 to U3.
  • For example, when all of the users U1 to U3 in the experience space S have selected the first metaverse content related to “marathon”, the control unit 150 may control the plurality of displays 121 to 123 so that the first metaverse content is displayed on all of the displays 121 to 123.
  • As another example, referring to FIG. 3, when a first user U1 among the plurality of users U1 to U3 in the experience space S has selected, for example, the first metaverse content related to “marathon”, and a second user U2 has selected, for example, the second metaverse content related to “dance”, the controller 150 may control the plurality of displays 121 to 123 so that the first metaverse content is displayed on the display 121 closest to the position of the first user U1 and the second metaverse content is displayed on one or more of the remaining displays 122 and 123.
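
A hypothetical sketch of this assignment rule, the first user's selection on the nearest display and the second user's selection on the remaining displays, might look as follows (all names and coordinates are illustrative assumptions):

```python
import math

def assign_two_contents(display_positions, pos_u1, content_u1, content_u2):
    """display_positions: display id -> (x, y) center position.
    The first user's content goes to the display nearest that user;
    the second user's content fills the remaining displays."""
    nearest_id = min(display_positions,
                     key=lambda d: math.dist(display_positions[d], pos_u1))
    return {d: content_u1 if d == nearest_id else content_u2
            for d in display_positions}

# Example mirroring FIG. 3: U1 selected "marathon", U2 selected "dance".
layout = assign_two_contents(
    {"display 121": (0.0, 1.0), "display 122": (1.0, 2.0), "display 123": (2.0, 1.0)},
    pos_u1=(0.3, 0.9), content_u1="marathon", content_u2="dance")
print(layout)  # display 121 -> "marathon", displays 122 and 123 -> "dance"
```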
  • Next, the controller 150 controls the sensors 141 to 149 so that each user's action is detected by the sensor closest to that user's position among the plurality of sensors 141 to 149. Depending on the metaverse content selected by the users U1 to U3, the closest sensor may sense at least one of the users' gaze, motion, gesture, voice, and touch.
  • For example, the sensors 142, 145, and 148 closest to the locations of the users U1 to U3 may detect the users' motions.
  • As another example, the sensors closest to the locations of the users U1 to U3 may detect the users' touches.
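
The content-dependent choice of sensing modality described above can be expressed as a simple lookup. The mapping below is an illustrative assumption: the "marathon" → motion pairing follows the description, while the other entries are hypothetical examples.

```python
# Illustrative assumption: which modalities the nearest sensor reads for a
# given content. Only the "marathon" -> motion pairing is from the text.
MODALITIES_BY_CONTENT = {
    "marathon": ("motion",),
    "dance": ("motion", "gesture"),
    "gallery": ("gaze", "touch", "voice"),
}

def modalities_for(content):
    """Return the modalities to sense, defaulting to all five."""
    return MODALITIES_BY_CONTENT.get(
        content, ("gaze", "motion", "gesture", "voice", "touch"))

print(modalities_for("marathon"))  # ('motion',)
```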
  • Then, the controller 150 controls the plurality of displays 121 to 123 so that, on the display corresponding to the sensor closest to each user's position, the user avatar A1 to A3 synchronized with that user's action is displayed. For example, referring to FIG. 2, the controller 150 controls the first display 121, corresponding to the second sensor 142 closest to the position of the first user U1, to display the first user avatar A1 synchronized with the action of the first user U1; controls the second display 122, corresponding to the fifth sensor 145 closest to the position of the second user U2, to display the second user avatar A2 synchronized with the action of the second user U2; and controls the third display 123, corresponding to the eighth sensor 148 closest to the position of the third user U3, to display the third user avatar A3 synchronized with the action of the third user U3.
  • Furthermore, the controller 150 may control the plurality of displays 121 to 123 so that the user avatars A1 to A3 synchronized with the actions of the users U1 to U3 are displayed in the area of each corresponding display that corresponds to the sensor closest to the user's position. For example, the controller 150 may control the first display 121 to display the first user avatar A1, synchronized with the action of the first user U1, in the central area of the first display 121 corresponding to the second sensor 142 closest to the position of the first user U1; control the second display 122 to display the second user avatar A2, synchronized with the action of the second user U2, in the central area of the second display 122 corresponding to the fifth sensor 145 closest to the position of the second user U2; and control the third display 123 to display the third user avatar A3, synchronized with the action of the third user U3, in the central area of the third display 123 corresponding to the eighth sensor 148 closest to the position of the third user U3.
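
The correspondence between sensors and display areas implied here (sensors 142, 145, and 148 mapping to the central areas of displays 121, 122, and 123) can be sketched with a simple index calculation; the assumption of three sensors per display is drawn from the numbering as described, and the function is a hypothetical illustration:

```python
def sensor_to_display_region(sensor_no, first_sensor=141, per_display=3):
    """Map a sensor number (e.g. 141-149) to (display index, region),
    assuming three sensors arranged along each of three displays."""
    offset = sensor_no - first_sensor
    display_index = offset // per_display + 1          # 1, 2 or 3
    region = ("left", "center", "right")[offset % per_display]
    return display_index, region

# Matches the description: 142, 145 and 148 are the central sensors.
print(sensor_to_display_region(142))  # (1, 'center') -> display 121
print(sensor_to_display_region(145))  # (2, 'center') -> display 122
print(sensor_to_display_region(148))  # (3, 'center') -> display 123
```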
  • In summary, the metaverse providing method according to an embodiment of the present invention includes: a) detecting the location of one or more users U1 to U3 by using the camera 130 in the experience space S provided by the plurality of displays 121 to 123 (310); b) displaying the metaverse content on one or more of the plurality of displays 121 to 123 based on the locations of the users U1 to U3 and the metaverse content selected by them; c) detecting each user's action by using the sensor closest to the user's location among the plurality of sensors 141 to 149; and d) displaying the user avatar synchronized with each user's action on the display corresponding to the closest sensor.
  • an active experience can be performed by accessing the metaverse without a separate wearable device for accessing virtual reality such as an HMD.
  • a plurality of users (U1 to U3) in the experience space (S) can access the metaverse providing device 100 and share the experience of metaverse contents.
  • In particular, step b) may include, when the first user U1 among the plurality of users U1 to U3 in the experience space S has selected the first metaverse content and the second user U2 has selected the second metaverse content, displaying the first metaverse content on the display 121 closest to the position of the first user U1 among the plurality of displays 121 to 123 and displaying the second metaverse content on one or more of the remaining displays 122 and 123. Accordingly, it is possible to continuously expand the number of metaverse contents provided in the experience space S.
  • FIG. 5 is a block diagram of a metaverse providing system according to an embodiment of the present invention, FIG. 6 is a schematic perspective view of a metaverse providing system for explaining an embodiment of a metaverse providing method according to a modified example of an embodiment of the present invention, and FIG. 7 is a schematic perspective view of a metaverse providing system for explaining another embodiment of a metaverse providing method according to a modified example of an embodiment of the present invention.
  • The input units 110a and 110b, the control units 150a and 150b, the storage units 160a and 160b, the communication units 170a and 170b, the signal collection units 180a and 180b, and the driving units 190a and 190b are shown in FIG. 5 but are not shown in FIGS. 6 and 7.
  • FIG. 8 is a flowchart of a metaverse providing method according to a modified example of an embodiment of the present invention.
  • the metaverse providing system 1000 according to an embodiment of the present invention will be described as follows.
  • the metaverse providing system 1000 includes a plurality of metaverse providing devices 100A and 100B and a server 200 .
  • Each of the plurality of metaverse providing devices 100A and 100B includes a plurality of displays 121a to 123a and 121b to 123b, cameras 130a and 130b, a plurality of sensors 141a to 149a and 141b to 149b, controllers 150a and 150b, and communication units 170a and 170b.
  • Each of the plurality of metaverse providing devices 100A and 100B may further include one or more of input units 110a and 110b, storage units 160a and 160b, signal collection units 180a and 180b, and driving units 190a and 190b.
  • the server 200 is a component that transmits and receives information on one or more of metaverse contents and user avatars A11, A12, A13, A21, and A22 to and from a plurality of metaverse providing devices 100A and 100B.
  • the server 200 may include a server storage unit 210, a server communication unit 220, and a server control unit 230.
  • the server storage unit 210 is a component that stores information on one or more of metaverse contents and user avatars A11, A12, A13, A21, and A22.
  • The information on one or more of these metaverse contents and user avatars A11, A12, A13, A21, and A22 may be information stored in the server in advance, or information received from the plurality of metaverse providing devices 100A and 100B.
  • In addition, a step may be performed in which the input units 110a and 110b receive, for each experience space S1 and S2 of the plurality of metaverse providing devices 100A and 100B, the selection signals of the users U11 to U13 and U21 to U22 for the metaverse contents and the user avatars A11 to A13 and A21 to A22, as well as the images of the users U11 to U13 and U21 to U22.
  • In step 440, in each of the experience spaces S1 and S2 of the plurality of metaverse providing devices 100A and 100B, the control units 150a and 150b control the plurality of displays 121a to 123a and 121b to 123b so that the user avatars A11 to A13 and A21 to A22, synchronized with the actions of the users U11 to U13 and U21 to U22, are displayed on the displays corresponding to the sensors 141a to 149a and 141b to 149b closest to the users' positions.
  • This can be performed by transmitting information about the user avatars A11 to A13, synchronized with the actions of the users U11 to U13 in the first experience space S1, to the second metaverse providing device 100B, and by transmitting information about the user avatars A21 to A22, synchronized with the actions of the users U21 to U22 in the second experience space S2, to the first metaverse providing device 100A.
  • For example, the user U21 of the second experience space S2 of the second metaverse providing device 100B may select the same first metaverse content (“dance” in the illustrated example) as the user U12 of the first experience space S1 of the first metaverse providing device 100A.
  • Unlike the case where metaverse contents are displayed on a single flat display provided for each real space, the sense of reality of the metaverse can thereby be enhanced.
  • Also, an active experience can be performed by accessing the metaverse without a separate wearable device for accessing virtual reality, such as an HMD.
  • In particular, in step d), when the user U12 of the first experience space S1 of the first metaverse providing device 100A among the plurality of metaverse providing devices 100A and 100B and the user U21 of the second experience space S2 of the second metaverse providing device 100B have selected the same first metaverse content, one of the plurality of displays 121a to 123a of the first metaverse providing device 100A simultaneously displays the user avatar A12 synchronized with the action of the user U12 of the first experience space S1 and the user avatar A21 synchronized with the action of the user U21 of the second experience space S2. In this way, a plurality of users U12 and U21 in different experience spaces S1 and S2 can access and share the experience of metaverse content.
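
The server-side exchange that enables this shared experience can be sketched as a content-keyed relay. The AvatarRelayServer class and message shapes below are illustrative assumptions, not the disclosed protocol:

```python
from collections import defaultdict

class AvatarRelayServer:
    """Hypothetical sketch of server 200: it groups metaverse providing
    devices by the content their users selected and relays avatar updates
    between devices in the same group."""

    def __init__(self):
        self.content_of = {}                 # device id -> selected content
        self.subscribers = defaultdict(set)  # content -> device ids

    def select_content(self, device_id, content):
        old = self.content_of.get(device_id)
        if old is not None:
            self.subscribers[old].discard(device_id)
        self.content_of[device_id] = content
        self.subscribers[content].add(device_id)

    def push_avatar_update(self, device_id, avatar_state, send):
        # Forward the update to every other device showing the same content.
        content = self.content_of.get(device_id)
        for other in self.subscribers.get(content, set()):
            if other != device_id:
                send(other, avatar_state)

# Example mirroring the description: devices 100A and 100B both show the
# same content, so avatar A12's update from 100A is relayed to 100B.
server = AvatarRelayServer()
server.select_content("100A", "dance")
server.select_content("100B", "dance")
server.push_avatar_update("100A", {"avatar": "A12", "pose": "step"},
                          send=lambda dev, state: print(f"-> {dev}: {state}"))
```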


Abstract

In one embodiment, a metaverse providing method comprises: a) detecting the location of at least one user by means of a camera in an experience space of a metaverse providing apparatus, the experience space being provided by a plurality of displays; b) displaying metaverse content on at least one of the plurality of displays, according to the location of the user and the metaverse content selected by the user; c) detecting the user's action by means of the sensor closest to the user's location among a plurality of sensors; and d) displaying a user avatar synchronized with the user's action on the display corresponding to the closest sensor among the plurality of displays.
PCT/KR2021/011663 2021-08-11 2021-08-31 Method, apparatus and system for providing a metaverse WO2023017890A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0105773 2021-08-11
KR1020210105773A KR102641519B1 (ko) 2021-08-11 2021-08-11 Metaverse providing method, metaverse providing apparatus, and metaverse providing system

Publications (1)

Publication Number Publication Date
WO2023017890A1 (fr)

Family

ID=85200746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/011663 WO2023017890A1 (fr) 2021-08-11 2021-08-31 Method, apparatus and system for providing a metaverse

Country Status (2)

Country Link
KR (1) KR102641519B1 (ko)
WO (1) WO2023017890A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130068593A (ko) * 2011-12-15 2013-06-26 한국전자통신연구원 Immersive convergence-type metaverse platform apparatus and service providing method using the same
KR20150129957A (ko) * 2014-05-12 2015-11-23 한국전자통신연구원 Experiential ride reproduction apparatus and method for multi-screen immersive media services
KR20170058817A (ko) * 2015-11-19 2017-05-29 세창인스트루먼트(주) Virtual reality system in which a transparent display is installed so that the interior is visible
US20180165864A1 (en) * 2016-12-13 2018-06-14 DeepMotion, Inc. Virtual reality system using multiple force arrays for a solver
JP2019139781A (ja) * 2013-03-11 2019-08-22 Magic Leap, Inc. Systems and methods for augmented and virtual reality

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101923723B1 (ko) 2012-09-17 2018-11-29 한국전자통신연구원 Metaverse client terminal and method for providing a metaverse space enabling interaction between users


Also Published As

Publication number Publication date
KR102641519B1 (ko) 2024-02-29
KR20230024451A (ko) 2023-02-21


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21953537

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE