WO2021075835A1 - Content provision method and system based on virtual reality sound - Google Patents
- Publication number
- WO2021075835A1 (PCT/KR2020/013968)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- content
- virtual reality
- objects
- space
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/13—Application of wave-field synthesis in stereophonic audio systems
Definitions
- The present invention relates to a method and system for providing virtual reality sound-based content in which a sound is output when a camera ray installed in a user terminal contacts a virtual reality-based sound space.
- As prior art, the sound virtual reality-based game application program recorded on a recording medium, the game system using it, and the provision method therefor (Korean Patent Publication No. 10-2019-0003152) merely disclose a method of providing game content in which the game proceeds according to sound and missions are performed through gestures made with a mobile terminal.
- The problem to be solved by the present invention is to produce and provide sound content in a virtual space based on auditory rather than visual elements, and to provide virtual reality sound-based content in which a sound is output when a camera ray installed on the user terminal contacts the virtual reality-based sound space.
- To solve this problem, the present invention includes a step of producing virtual reality sound-based content and a step of executing the content.
- The step of producing the virtual reality sound-based content includes: upon receiving a content creation request from a user, generating, by a content generation unit, a plurality of sound rooms including a plurality of sound objects; receiving, by an input unit, at least one of a starting sound, a viewing sound, and a tap sound for at least one of the plurality of sound objects; and receiving, by the input unit, at least one of a size, a position, and a distance of the sound object.
- According to the present invention, auditory-centered rather than visual-centered virtual reality content can be produced and provided, and the content can be enjoyed more simply because no VR device needs to be worn on the head.
- Using the present invention, virtual reality content in various categories, such as sound-based games and guides, can be produced.
- FIGS. 1 and 2 are conceptual diagrams illustrating a virtual reality sound-based content providing method according to an embodiment of the present invention.
- FIG. 3 is a block diagram of a virtual reality sound-based content providing system according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating a virtual reality sound-based content production method according to an embodiment of the present invention.
- FIG. 5 is a flowchart illustrating a virtual reality sound-based content execution method according to an embodiment of the present invention.
- FIGS. 6 to 8 are reference diagrams illustrating a sound-based content production method according to an embodiment of the present invention.
- Referring to FIGS. 1 and 2, the present invention can provide virtual reality sound-based content in which a user 100 searches, through a user terminal 200, for sound objects 320 and 340 preset in a virtual space 300, and an event is generated when a sound object is found.
- The locations and sizes of the sound objects 320 and 340 may be set based on the location and size input by the content creator, and different sound files may be set for each sound object.
- The first sensing area 310 may be configured with a predetermined radius around the center point of the first sound object 320; when the camera ray 210 of the user terminal 200 points toward the sound object 320 within the sensing area 310, a preset sound may be generated from the sound object.
- The user 100 may hear the generated sound through the user terminal 200.
- The second sensing area 330 may be configured with a predetermined radius around the center point of the second sound object 340.
- Depending on the content creator, additional sensing areas 350 and 360 may be set so that sound objects can be detected in a wider space.
- In the case of game content, an additional sensing area may be set to adjust the difficulty level; in the case of guide content, an additional sensing area may be set according to the user's age.
- A plurality of sound objects 320 and 340 are arranged in one sound room 300 so that the user 100 can search for the sound objects arranged in the virtual space.
- That is, sound files are arbitrarily arranged in the space where the user is standing; a sound may be output according to the direction of the camera ray installed in the user terminal, or when the user satisfies specific conditions of use.
- The specific conditions of use are interactions with the user terminal, such as a screen touch, gazing for a preset time, or a screen slide; a sound is output when such a condition is satisfied.
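The conditions above (touch, gaze held for a preset time, slide) can be sketched as a small checker. This is an illustrative sketch, not the patent's implementation; the class name, the `GAZE_HOLD_SECONDS` value, and all parameter names are assumptions.

```python
import time

GAZE_HOLD_SECONDS = 2.0  # assumed preset gaze duration


class InteractionChecker:
    """Decides whether a sound should be output for one sound object."""

    def __init__(self, gaze_hold=GAZE_HOLD_SECONDS):
        self.gaze_hold = gaze_hold
        self.gaze_started_at = None  # when the user began looking at the object

    def update_gaze(self, looking_at_object, now=None):
        """Return True once the gaze has been held for the preset time."""
        now = time.monotonic() if now is None else now
        if not looking_at_object:
            self.gaze_started_at = None  # looked away: reset the gaze timer
            return False
        if self.gaze_started_at is None:
            self.gaze_started_at = now
        return (now - self.gaze_started_at) >= self.gaze_hold

    def should_play(self, touched, looking_at_object, slid, now=None):
        """Any one satisfied condition (touch, slide, or held gaze) triggers sound."""
        return touched or slid or self.update_gaze(looking_at_object, now)
```

A gaze that is interrupted resets its timer, matching the idea that the sound only plays after an uninterrupted preset gaze duration.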
- FIG. 3 is a block diagram of a virtual reality sound-based user terminal according to an embodiment of the present invention.
- A dedicated application for executing virtual reality sound-based content may be installed on the user terminal 200.
- The user terminal 200 includes a content generation unit 210, a content execution unit 220, a sound recognition unit 230, a screen providing unit 240, a control unit 250, a sound providing unit 260, a storage unit 270, an input unit 280, and a communication unit 290.
- The user terminal 200 is a computing device, such as a smartphone or tablet PC, that can execute processes defined by operating system software and various application software; it is typically a portable mobile computing device that can be used easily while moving, but is not limited thereto.
- The content generation unit 210 may generate virtual reality sound-based content.
- The content may be composed of a plurality of sound rooms, each including a plurality of sound objects.
- A sound object is an object containing a sound that conveys a sense of three-dimensionality and distance in virtual reality, and may be placed in a sound room in the virtual reality space.
- A sound room is a space in which sound objects can be placed, and the user terminal can create multiple sound rooms within one piece of content.
- A plurality of sound rooms can be arranged, and the user terminal can move freely among them. A door opening-and-closing sound is generated when moving between sound rooms, so the user terminal can recognize the movement.
- The content generation unit 210 may receive at least one of a starting sound, a viewing sound, a tap sound, a size, a position, and a distance for each sound object from the producer, and generate the sound object in the virtual reality space based on the input.
- The content execution unit 220 can execute the generated content.
- The sound recognition unit 230 may be configured as a camera ray based on a gyroscope or accelerometer sensor; that is, when the virtual laser contacts a sound object in the direction the camera ray is pointing, the sound recognition unit 230 may output a preset sound.
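One plausible geometric reading of the camera-ray contact is: the user stands inside an object's sensing radius and the terminal's pointing direction passes close enough to the object's center. The sketch below is an assumption for illustration (the function names and the `max_angle_deg` tolerance are not from the patent); in practice the ray direction would come from gyroscope/accelerometer readings.

```python
import math


def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)


def ray_hits_object(user_pos, ray_dir, obj_center, sense_radius, max_angle_deg=10.0):
    """True when the user is inside the sensing area and the camera ray
    points at the sound object within an angular tolerance."""
    offset = tuple(o - u for o, u in zip(obj_center, user_pos))
    distance = math.sqrt(sum(c * c for c in offset))
    if distance > sense_radius:
        return False  # outside the object's sensing area (e.g. area 310)
    # angle between the pointing direction and the direction to the object
    cos_angle = sum(a * b for a, b in zip(normalize(ray_dir), normalize(offset)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg
```

When this returns True, the preset sound mapped to the object would be played.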
- The screen providing unit 240 may execute the dedicated application, display it to the user, and display a template for producing content.
- The control unit 250 controls the processing of processes related to execution of the dedicated application software, and controls the operation of each component of the terminal 200.
- The sound providing unit 260 may output at least one of the start sound, viewing sound, and tap sound mapped to the corresponding content when an event occurs.
- The storage unit 270 may store a dedicated application for producing or running virtual reality sound-based content.
- The storage unit 270 may also store the generated sound space content.
- The input unit 280 receives input events executed by an input means.
- The input unit 280 may be a touch screen that transmits touch events to the control unit 250; it can receive from the user the values for each sound object's starting sound, viewing sound, tap sound, size, position, and distance.
- The communication unit 290 is a communication module that exchanges data with external devices.
- FIG. 4 is a flowchart illustrating a virtual reality sound-based content production method according to an embodiment of the present invention.
- A method of providing virtual reality sound-based content includes a step of producing the content and a step of executing it.
- When a request for generating virtual reality sound-based content is received from a user, the content generation unit generates a plurality of sound rooms including a plurality of sound objects (S410). The sound objects and sound rooms here are as described above.
- The input unit receives at least one of a starting sound, a viewing sound, and a tap sound for at least one of the plurality of sound objects (S420).
- The input unit receives at least one of the size, position, and distance of the sound object (S430).
- The content generation unit generates the sound object in the virtual reality space based on at least one of the starting sound, viewing sound, tap sound, size, position, and distance (S440).
- The content generation unit then generates sound space content in which the plurality of sound rooms including the sound objects are arranged (S450).
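Steps S410 to S450 can be summarized as a minimal data-model sketch. The field and class names (`SoundObject`, `SoundRoom`, `SoundSpaceContent`, `create_content`) are assumptions chosen to mirror the description, not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SoundObject:
    start_sound: Optional[str] = None    # sound file played when its cycle starts
    viewing_sound: Optional[str] = None  # sound file played while looked at
    tap_sound: Optional[str] = None      # sound file played on a screen tap
    size: float = 1.0
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    distance: float = 0.0


@dataclass
class SoundRoom:
    name: str
    objects: List[SoundObject] = field(default_factory=list)


@dataclass
class SoundSpaceContent:
    name: str
    rooms: List[SoundRoom] = field(default_factory=list)


def create_content(name, room_specs):
    """S410-S450 in one pass: build the rooms, attach their sound objects,
    and return the assembled sound space content."""
    content = SoundSpaceContent(name)
    for room_name, objects in room_specs:
        content.rooms.append(SoundRoom(room_name, list(objects)))
    return content
```

The execution step (S510) would then load a stored `SoundSpaceContent` and place its objects in the virtual space.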
- FIG. 5 is a flowchart illustrating a method of executing contents based on a virtual reality sound according to an embodiment of the present invention.
- The content execution unit loads the sound space content stored in the storage unit and executes content including at least one sound object in the virtual space (S510).
- When the sound recognition unit detects the sound object in the content, an event occurs (S520).
- The sound recognition unit may be a camera ray installed in the user terminal, but is not limited thereto.
- According to an embodiment, the sound recognition unit may be configured as a gyroscope- or accelerometer-based camera ray; that is, when the virtual laser contacts a sound object in the direction the camera is pointing, the preset sound may be output.
- The control unit reflects an input value corresponding to the operation of the user terminal in the content.
- The operation of the user terminal may be at least one of a basic operation, a start operation, a gaze operation, a touch operation, and a setting operation.
- The basic operation is active when the sound object is not performing any other operation, and the start operation is activated when a cycle of the sound object starts.
- The gaze operation is activated when the user terminal moves and points in the direction in which the sound object is located. It can be set to perform another operation after a time set in the program has elapsed, and to stop the set operation when the user terminal no longer faces the direction of the sound object.
- The touch operation is activated when the user terminal points in the direction in which the sound object is located and the screen is touched. It can be set to perform different actions after the time set in the program has elapsed.
- The setting operation is activated when a time set by the producer is exceeded. Whether to use each of these conditions can be chosen when creating the content.
- The input value corresponding to the operation of the user terminal may be at least one of a sound reproduction operation, a sound stop operation, a movement operation, and a restart operation for the sound object content.
- In the sound reproduction operation, a sound set based on the coordinates at which the sound object is located is reproduced; in the sound stop operation, the sound is stopped while the sound object is reproducing it.
- The movement operation may move the user terminal to an adjacent sound room, and the restart operation may initialize a timer in the sound object and start a new cycle.
- The movement and restart operations can run in parallel with the sound reproduction and sound stop operations; in this case, the operation may be performed after a set prohibition time has elapsed.
- A cycle ends on the condition that each sound object performs a restart operation after it starts.
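The mapping from terminal operations to sound-object actions could be sketched as a small dispatch table. The operation and action names follow the description, but the table itself and the `SoundObjectCycle` class are assumptions for illustration.

```python
# Hypothetical dispatch: which action each terminal operation triggers.
ACTIONS = {
    "gaze": "play_sound",               # pointing at the object plays its sound
    "touch": "play_sound",              # touching while pointed at it also plays
    "touch_while_playing": "stop_sound",  # stop a sound that is currently playing
    "move": "change_room",              # move to an adjacent sound room
    "restart": "reset_cycle",           # reset the object's timer, start a new cycle
}


class SoundObjectCycle:
    """Tracks one sound object's playback state and cycle count."""

    def __init__(self):
        self.playing = False
        self.cycle = 1

    def handle(self, operation):
        action = ACTIONS.get(operation, "idle")  # basic operation: do nothing
        if action == "play_sound":
            self.playing = True
        elif action == "stop_sound":
            self.playing = False
        elif action == "reset_cycle":
            self.playing = False
            self.cycle += 1  # a restart ends the old cycle and begins a new one
        return action
```

A real implementation would also honor the prohibition time before allowing move/restart to run alongside playback, which is omitted here for brevity.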
- FIGS. 6 to 8 are reference diagrams illustrating a sound-based content production method according to an embodiment of the present invention, showing a GUI for content creation.
- The name of a sound space 610 can be set, and a plurality of sound rooms 620 can be selected.
- A specific image 630 or specific background music 640 may be added.
- The sounds 710 of the plurality of sound objects arranged in a sound room may be set.
- A first sound object 720, a second sound object 730, and a third sound object 740 can be set; by selecting the first sound object 720, its starting sound 750, viewing sound 760, and tap sound 770 can each be set.
- The size 810, location 820, and distance 830 of the plurality of sound objects arranged in a sound room may be set; each can be selected, and its value set through the scroll bar 840.
- The placement map 850 may show at what location and in what size each sound object is disposed in the sound room.
Abstract
The present invention comprises the steps of: producing content based on a virtual reality sound; and executing the content, wherein the step of producing the content based on the virtual reality sound comprises the steps of: when a request for generating the content based on the virtual reality sound is received from a user, generating, by a content generation unit, a plurality of sound rooms including a plurality of sound objects; receiving, as an input by an input unit, at least one of a starting sound, a viewing sound, and a tap sound of at least one of the plurality of sound objects; and receiving, as an input by the input unit, at least one of the size, position, and distance of the sound object.
Description
Specific structural or functional descriptions of the embodiments according to the concept of the present invention disclosed herein are exemplified only for the purpose of describing those embodiments; the embodiments may be implemented in various forms and are not limited to the embodiments described herein.
Since the embodiments according to the concept of the present invention can be variously modified and can take various forms, they are illustrated in the drawings and described in detail herein. However, this is not intended to limit the embodiments to the specific disclosed forms; all changes, equivalents, and substitutes included in the spirit and scope of the present invention are encompassed.
The terms used herein are only used to describe specific embodiments and are not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "comprise" or "have" designate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described herein, and do not preclude in advance the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Hereinafter, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings.
도 1과 도 2는 본 발명의 실시예에 따른 가상현실 사운드 기반의 컨텐츠 제공 방법을 설명하는 개념도이다. 도 1을 참조하면, 본 발명은 사용자(100)가 가상공간(300)에 미리 설정된 사운드객체(320, 340)를 사용자단말(200)을 통해 탐색하고, 사운드객체(320, 340)가 탐색되면 이벤트가 발생하도록 동작하는 가상현실 사운드 기반의 컨텐츠를 제공할 수 있다. 사운드객체(320,340)의 위치와 크기는 컨텐츠 제작자가 입력한 위치와 크기에 기초하여 설정될 수 있다. 사운드객체(320, 340)는 서로 다른 소리파일이 설정될 수 있다. 제1감지영역(310)은 제1사운드객체(320)의 중심점을 기준으로 일정반경으로 구성될 수 있고, 사용자단말(200)의 카메라레이(210)가 감지영역(310)내의 사운드객체(320) 방향으로 주시하게 되면, 사운드객체에 기설정된 소리가 발생할 수 있다. 사용자(100)는 사용자단말(200)을 통해 발생된 소리를 감지할 수 있다. 제2감지영역(330)은 제2사운드객체(340)의 중심점을 기준으로 일정반경으로 구성될 수 있다. 컨텐츠 제작자에 따라, 추가감지영역(350, 360)을 설정하여 보다 넓은 공간에서 사운드객체를 감지할 수 있도록 배치할 수 있다. 게임 컨텐츠의 경우 난이도를 조정하기 위해 추가감지영역을 설정하거나, 안내 컨텐츠의 경우 사용자의 연령에 따라 추가감지영역을 추가로 설정할 수 있다. 하나의 소리방(300) 내에 다수개의 사운드객체(320, 340)가 배치되어 사용자(100)가 가상공간에 배치된 사운드객체를 탐색할 수 있다.1 and 2 are conceptual diagrams illustrating a virtual reality sound-based content providing method according to an embodiment of the present invention. Referring to FIG. 1, in the present invention, when the user 100 searches for sound objects 320 and 340 preset in the virtual space 300 through the user terminal 200, the sound objects 320 and 340 are searched. It is possible to provide virtual reality sound-based content that operates to generate an event. The location and size of the sound objects 320 and 340 may be set based on the location and size input by the content creator. Different sound files may be set for the sound objects 320 and 340. The first sensing area 310 may be configured with a predetermined radius based on the center point of the first sound object 320, and the camera ray 210 of the user terminal 200 is the sound object 320 in the sensing area 310. If you look in the direction of ), a preset sound may be generated in the sound object. The user 100 may sense a sound generated through the user terminal 200. The second sensing area 330 may be configured with a predetermined radius based on the center point of the second sound object 340. 
Depending on the content creator, additional sensing areas 350 and 360 may be set and arranged to detect sound objects in a wider space. In the case of game content, an additional detection area may be set to adjust the difficulty level, or in the case of guide content, an additional detection area may be additionally set according to the user's age. A plurality of sound objects 320 and 340 are arranged in one sound room 300 so that the user 100 can search for sound objects arranged in a virtual space.
즉, 본 발명은 사용자가 서있는 공간에 임의로 배치된 소리파일들이 있고, 사용자단말에 설치된 카메라 레이의 방향에 따라 그 방향의 소리가 출력되거나, 사용자가 사용특정 조건을 만족하면 소리가 출력될 수 있다. 상기 사용특정조건은 화면 터치, 미리 설정된 시간 동안 주시하는 동작, 화면 슬라이드 조작 등의 사용자단말과 상호작용을 만족하면 소리가 출력될 수 있다. That is, in the present invention, there are sound files arbitrarily arranged in the space where the user is standing, and the sound in that direction may be output according to the direction of the camera ray installed in the user terminal, or the sound may be output when the user satisfies specific conditions of use. . The specific conditions of use may be sound output when an interaction with a user terminal such as a screen touch, a gaze operation for a preset time, and a screen slide operation is satisfied.
도 3은 본 발명의 실시예에 따른 가상현실 사운드 기반의 사용자단말의 구성도이다.3 is a block diagram of a virtual reality sound-based user terminal according to an embodiment of the present invention.
도 3을 참조하면, 사용자단말(200)은 가상현실 사운드 기반의 컨텐츠를 실행하기 위한 전용 애플리케이션이 설치될 수 있다. 사용자단말(200)은 컨텐츠생성부(210), 컨텐츠실행부(220), 소리인식부(230), 화면제공부(240), 제어부(250), 사운드제공부(260), 저장부(270), 입력부(280), 통신부(290)로 구성된다. Referring to FIG. 3, the user terminal 200 may be installed with a dedicated application for executing contents based on virtual reality sound. The user terminal 200 includes a content generation unit 210, a content execution unit 220, a sound recognition unit 230, a screen providing unit 240, a control unit 250, a sound providing unit 260, and a storage unit 270. ), an input unit 280, and a communication unit 290.
사용자단말(200)은 컴퓨팅장치로서, 스마트폰, 태블릿PC처럼 운영체제 소프트웨어와 다양한 애플리케이션 소프트웨어에 의해 정해진 프로세서를 실행할 수 있고, 휴대가 용이하여 이동중에도 쉽게 활용하 수 있는 모바일 컴퓨팅 디바이스이나 이에 대해 한정하는 것은 아니다. The user terminal 200 is a computing device, which can execute a processor determined by operating system software and various application software, such as a smartphone or tablet PC, and is a mobile computing device that can be easily used while moving because it is easy to carry. It is not.
컨텐츠생성부(210)는 가상현실 사운드 기반의 컨텐츠를 생성할 수 있다. 상기 컨텐츠는 다수개의 사운드객체들을 포함하는 다수개의 소리방들로 구성될 수 있다. 상기 사운드객체는 가상현실에서 입체감과 거리감이 느껴지는 소리를 포함하는 객체이고, 가상현실 공간의 소리방에 배치될 수 있다. 상기 소리방은 사운드객체를 배치할 수 있는 공간으로, 사용자단말은 하나의 컨텐츠 내에 여러 소리방을 생성할 수 있다. 상기 소리방은 다수개가 배치될 수 있고, 사용자단말이 자유롭게 이동할 수 있다. 상기 소리방들 간에 이동시 문을 열고 닫는 소리가 발생하도록 구성하여 사용자단말은 소리방들 간에 이동을 인식할 수 있다. The content generator 210 may generate virtual reality sound-based content. The content may be composed of a plurality of sound rooms including a plurality of sound objects. The sound object is an object including a sound in which a sense of three-dimensionality and a sense of distance are felt in virtual reality, and may be disposed in a sound room in a virtual reality space. The sound room is a space in which sound objects can be placed, and a user terminal can create multiple sound rooms within one content. A plurality of sound rooms can be arranged, and the user terminal can move freely. The sound of opening and closing the door is generated when moving between the sound rooms, so that the user terminal can recognize the movement between the sound rooms.
컨텐츠생성부(210)는 제작자로부터 사운드객체 각각의 시작소리, 볼때소리, 탭소리, 크기, 위치, 거리 중 적어도 하나를 입력받고 이에 기초하여 상기 사운드객체를 가상현실 공간에 생성할 수 있다.The content generation unit 210 may receive at least one of a starting sound, a viewing sound, a tap sound, a size, a location, and a distance of each sound object from a producer, and generate the sound object in a virtual reality space based on the input.
컨텐츠실행부(220)는 생성된 컨텐츠를 실행할 수 있고, The content execution unit 220 can execute the generated content,
소리인식부(230)는 자이로 스코프 또는 엑셀러레이터 센서 기반의 카메라 레이로 구성될 수 있다. 즉, 소리인식부(230)는 카메라 레이가 주시하는 방향에 가상의 레이저가 사운드객체에 접촉하면 기설정된 소리가 출력될 수 있다.The sound recognition unit 230 may be configured with a camera ray based on a gyroscope or an accelerator sensor. That is, the sound recognition unit 230 may output a preset sound when the virtual laser contacts the sound object in the direction the camera ray is looking at.
화면제공부(240)는 전용 애플리케이션을 실행하여 사용자에게 도시할 수 있고, 컨텐츠를 제작하기 위한 템플릿을 도시할 수 있다. The screen providing unit 240 may execute a dedicated application and display it to a user, and may display a template for producing content.
제어부(250)는 전용 애플리케이션 소프트웨어의 실행에 관련한 프로세스의 처리를 제어하며, 단말기(200)의 각 구성의 동작을 제어한다. The controller 250 controls processing of a process related to execution of the dedicated application software, and controls the operation of each component of the terminal 200.
사운드제공부(260)는 이벤트 발생에 따라 해당 컨텐츠에 매핑되어 있는 시작소리, 볼때소리, 탭소리 중 적어도 하나를 출력할 수 있다.The sound providing unit 260 may output at least one of a start sound, a viewing sound, and a tap sound mapped to the corresponding content according to the occurrence of an event.
저장부(270)는 가상현실 사운드 기반의 컨텐츠를 제작하거나 동작하기 위한 전용 애플리케이션을 저장할 수 있다. 저장부(270)는 생성된 소리공간 컨텐츠를 저장할 수 있다.The storage unit 270 may store a dedicated application for producing or operating virtual reality sound-based content. The storage unit 270 may store the generated sound space content.
입력부(280)는 입력수단에 의해 실행되는 입력 이벤트를 수신한다. 입력부(280)는 터치스크린일 수 있고, 터치 이벤트를 제어부(250)에 전달한다. 사용자로부터 사운드객체 각각의 시작소리, 볼때소리, 탭소리, 크기, 위치, 거리에 대한 값을 입력받을 수 있다. The input unit 280 receives an input event executed by an input means. The input unit 280 may be a touch screen, and transmits a touch event to the controller 250. Values for the starting sound, viewing sound, tapping sound, size, position, and distance of each sound object can be input from the user.
통신부(290)는 외부 장치와 데이터를 주고 받는 역할을 하는 통신 모듈이다. The communication unit 290 is a communication module that serves to exchange data with an external device.
도 4는 본 발명의 실시예에 따른 가상현실 사운드 기반의 컨텐츠 제작 방법을 설명하는 순서도이다.4 is a flowchart illustrating a virtual reality sound-based content production method according to an embodiment of the present invention.
도 4를 참조하면, 가상현실 사운드 기반의 컨텐츠를 제공하는 방법은 가상현실 사운드 기반의 컨텐츠를 제작하는 단계와 컨텐츠를 실행하는 단계를 포함한다. Referring to FIG. 4, a method of providing virtual reality sound-based content includes producing a virtual reality sound-based content and executing the content.
먼저, 가상현실 사운드 기반의 컨텐츠를 제작하는 단계는, 사용자로부터 가상현실 사운드 기반의 컨텐츠 생성 요청을 수신하면, 컨텐츠생성부가 다수개의 사운드객체들을 포함하는 다수개의 소리방들을 생성한다(S410). 상기 사운드객체는 가상현실에서 입체감과 거리감이 느껴지는 소리를 포함하는 객체이고, 가상현실 공간의 소리방에 배치될 수 있다. 상기 소리방은 사운드객체를 배치할 수 있는 공간으로, 사용자단말은 하나의 컨텐츠 내에 여러 소리방을 생성할 수 있다. 상기 소리방은 다수개가 배치될 수 있고, 사용자단말이 자유롭게 이동할 수 있다. 상기 소리방들 간에 이동시 문을 열고 닫는 소리가 발생하도록 구성하여 사용자단말은 소리방들 간에 이동을 인식할 수 있다First, in the step of producing virtual reality sound-based content, when receiving a request for generating virtual reality sound-based content from a user, the content generation unit generates a plurality of sound rooms including a plurality of sound objects (S410). The sound object is an object including a sound in which a sense of three-dimensionality and a sense of distance are felt in virtual reality, and may be disposed in a sound room in a virtual reality space. The sound room is a space in which sound objects can be placed, and a user terminal can create multiple sound rooms within one content. A plurality of sound rooms can be arranged, and the user terminal can move freely. The sound of opening and closing the door is generated when moving between the sound rooms, so that the user terminal can recognize the movement between the sound rooms.
The input unit receives at least one of a starting sound, a viewing sound, and a tap sound for at least one of the plurality of sound objects (S420).
The input unit receives at least one of the size, position, and distance of the sound object (S430). The content generation unit creates the sound object in the virtual reality space based on at least one of the starting sound, viewing sound, tap sound, size, position, and distance (S440). The content generation unit then generates sound-space content in which the plurality of sound rooms containing the sound objects are arranged (S450).
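The production steps S410 to S450 can be sketched as a small pipeline. The function name and dictionary keys are hypothetical, chosen only to mirror the step numbers:

```python
def create_sound_space_content(room_count: int, object_specs: list) -> dict:
    """Sketch of S410-S450: create sound rooms, attach the user-supplied
    sound and placement values to each sound object, and assemble the
    final sound-space content."""
    # S410: create the rooms, each holding its own sound objects
    rooms = [{"id": i, "objects": []} for i in range(room_count)]
    for spec in object_specs:
        obj = {
            # S420: sounds entered by the user
            "start_sound": spec.get("start_sound"),
            "view_sound": spec.get("view_sound"),
            "tap_sound": spec.get("tap_sound"),
            # S430: placement values entered by the user
            "size": spec.get("size", 1.0),
            "position": spec.get("position", (0.0, 0.0, 0.0)),
            "distance": spec.get("distance", 1.0),
        }
        # S440: place the object into the virtual space (its room)
        rooms[spec.get("room", 0)]["objects"].append(obj)
    # S450: the content is the arrangement of all sound rooms
    return {"rooms": rooms}
```

Each of the sound/placement fields is optional, matching the "at least one of" language used in steps S420 and S430.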
FIG. 5 is a flowchart illustrating a method of executing virtual reality sound-based content according to an embodiment of the present invention.
Referring to FIG. 5, in the content execution step, the content execution unit loads the sound-space content stored in the storage unit and executes content that includes at least one sound object in the virtual space (S510).
When the sound recognition unit detects the sound object within the content, an event occurs (S520). The sound recognition unit may be a camera ray installed in the user terminal, but is not limited thereto. Depending on the embodiment, the sound recognition unit may be implemented as a camera ray based on a gyroscope or accelerometer sensor. That is, when a virtual ray cast in the direction the camera is facing contacts a sound object, a preset sound can be output.
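The camera-ray contact test can be approximated as a standard ray-sphere intersection, with the ray direction supplied by the device's orientation sensors. The math below is a generic sketch, not the patent's actual implementation:

```python
import math

def ray_hits_object(origin, direction, center, radius):
    """Return True when a ray from `origin` along unit vector `direction`
    passes within `radius` of the sound object's `center`."""
    # Vector from the ray origin to the object's center
    oc = [c - o for c, o in zip(center, origin)]
    # Projection of oc onto the ray direction
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:  # the object is behind the camera
        return False
    # Closest point on the ray to the object's center
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(closest, center) <= radius

# When the ray contacts the object, the preset sound would be played
if ray_hits_object((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0):
    print("play view_sound")
```

In practice a VR framework's built-in raycast would replace this, but the geometry it performs is the same.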
The controller reflects input values corresponding to the user terminal's actions in the content. The user terminal's action may be at least one of a default action, a start action, a gaze action, a touch action, and a setting action. The default action runs when the sound object is performing no other action, and the start action runs when the sound object's cycle begins.
The gaze action runs when the user terminal is turned toward the direction in which the sound object is located. It can be configured to trigger another action after a time set in the program has elapsed, and to stop the configured action when the user terminal is no longer facing the sound object.
The touch action runs when the user terminal is oriented toward the sound object and the screen is touched. It can be configured to trigger another action after a set time has elapsed. The setting action runs when a time set by the content creator is exceeded. Whether each of these conditions is used can be configured when creating the content.
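The trigger conditions for the five action types could be dispatched as below. The condition names and the priority order (touch over gaze over setting over start over default) are illustrative assumptions; the specification does not state a precedence:

```python
def select_action(state: dict) -> str:
    """Pick which terminal action applies, given the current input state."""
    if state.get("screen_touched") and state.get("facing_object"):
        return "touch"    # terminal faces the object and the screen is tapped
    if state.get("facing_object"):
        return "gaze"     # terminal turned toward the object
    if state.get("elapsed", 0) > state.get("creator_timeout", float("inf")):
        return "setting"  # creator-set time exceeded
    if state.get("cycle_started"):
        return "start"    # the object's cycle has just begun
    return "default"      # no other action is active
```

Per the text above, each condition's use would be toggled at content-creation time, e.g. by omitting the corresponding key from `state`.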
The input value corresponding to the user terminal's action may be at least one of a sound-play action, a sound-stop action, a move action, and a restart action for the sound object content.
The sound-play action plays the sound configured for the coordinates at which the sound object is located, and the sound-stop action stops the sound when the sound object is currently playing it. The move action moves the user terminal to an adjacent sound room, and the restart action resets the timer in the sound object and starts a new cycle.
For the sound-play and sound-stop actions, a do-not-disturb interval can be set during which they will not trigger even if other interaction conditions are satisfied. The move and restart actions can run in parallel with the sound-play and sound-stop actions; in that case, they run after the configured do-not-disturb interval has elapsed. A cycle ends when the sound object performs a restart action.
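The do-not-disturb interval and the restart cycle can be sketched with a simple per-object timer. Times are in seconds, and the class structure is an assumption for illustration:

```python
class SoundObjectTimer:
    """Sketch of the do-not-disturb (DND) window and cycle restart."""

    def __init__(self, dnd_seconds: float):
        self.dnd_seconds = dnd_seconds
        self.cycle_start = 0.0  # reset by each restart action

    def can_play_or_stop(self, now: float) -> bool:
        # Sound-play/sound-stop are blocked during the DND window,
        # even if other interaction conditions are met
        return (now - self.cycle_start) >= self.dnd_seconds

    def restart(self, now: float) -> None:
        # Restart resets the object's timer and begins a new cycle
        self.cycle_start = now
```

Move and restart actions would not consult `can_play_or_stop`, since the text says they may run in parallel and only defer until the interval has passed.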
FIGS. 6 to 8 are reference diagrams illustrating a sound-based content production method according to an embodiment of the present invention.
FIGS. 6 to 8 illustrate a GUI for content creation. Referring to FIG. 6, the name of a sound space 610 can be set, and one of a plurality of sound rooms 620 can be selected. When a sound room is selected, a specific image 630 or specific background music 640 can be added.
Referring to FIG. 7, when a sound room is selected, the sounds 710 of the sound objects placed in it can be configured. A first sound object 720, a second sound object 730, and a third sound object 740 can be set; selecting the first sound object 720 allows its starting sound 750, viewing sound 760, and tap sound 770 to be set individually.
Referring to FIG. 8, the size 810, position 820, and distance 830 of the sound objects placed in a sound room can be set. Each can be selected, and its size, position, and distance adjusted via the scroll bar 840. The placement map 850 shows where, and at what size, each sound object is placed within the sound room.
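The size and distance values set in this GUI would typically feed into spatial audio rendering. A minimal gain sketch, assuming a simple inverse-distance falloff that the patent does not itself specify:

```python
def gain_for_object(size: float, distance: float) -> float:
    """Illustrative playback gain: larger objects are louder, and gain
    falls off inversely with distance, clamped to the range [0, 1]."""
    if distance <= 0:
        return 1.0  # listener is at (or inside) the object
    return min(1.0, size / distance)
```

A real implementation would hand these parameters to the platform's 3D audio engine rather than compute gain by hand.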
The present invention has been described with reference to the embodiments shown in the drawings, but these are merely illustrative; those of ordinary skill in the art will understand that various modifications and equivalent other embodiments are possible therefrom. Accordingly, the true scope of technical protection of the present invention should be determined by the technical spirit of the appended claims.
Claims (5)
- 1. A method of providing virtual reality sound-based content, comprising: producing virtual reality sound-based content; and executing the content, wherein producing the virtual reality sound-based content comprises: upon receiving a content creation request from a user, generating, by a content generation unit, a plurality of sound rooms including a plurality of sound objects; receiving, by an input unit, at least one of a starting sound, a viewing sound, and a tap sound for at least one of the plurality of sound objects; and receiving, by the input unit, at least one of a size, a position, and a distance of the sound object.
- 2. The method of claim 1, further comprising: generating, by the content generation unit, the sound object in a virtual reality space based on at least one of the starting sound, the viewing sound, the tap sound, the size, the position, and the distance; and generating, by the content generation unit, sound-space content in which a plurality of sound rooms including the sound objects are arranged.
- 3. The method of claim 1, wherein the sound object is an object including a sound that conveys a sense of space and distance in virtual reality, and is placed in a sound room of the virtual reality space.
- 4. The method of claim 1, wherein executing the content comprises: executing, by a content execution unit, content including at least one sound object in a virtual space; generating an event when a sound recognition unit detects the sound object within the content; and reflecting, by a controller, an input value corresponding to an action of the user terminal in the content.
- 5. A virtual reality sound-based content provision system, comprising: a content generation unit that is composed of a plurality of sound rooms including a plurality of sound objects, receives at least one of a starting sound, a viewing sound, a tap sound, a size, a position, and a distance for each of the sound objects, and generates the sound objects in a virtual reality space based thereon; and a sound recognition unit that outputs a preset sound when a virtual ray, cast in the direction a camera ray installed in a user terminal is facing, contacts a sound object generated by the content generation unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020190129265A KR102297532B1 (en) | 2019-10-17 | 2019-10-17 | Method and system for providing content based on virtual reality sound |
KR10-2019-0129265 | 2019-10-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021075835A1 (en) | 2021-04-22 |
Family
ID=75537910
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/013968 WO2021075835A1 (en) | 2019-10-17 | 2020-10-14 | Content provision method and system based on virtual reality sound |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102297532B1 (en) |
WO (1) | WO2021075835A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010257435A (en) * | 2009-04-03 | 2010-11-11 | Sony Computer Entertainment Inc | Device, and method for reproducing content, and program |
KR20160079788A (en) * | 2013-11-05 | 2016-07-06 | 소니 주식회사 | Information processing device, method of processing information, and program |
WO2018122449A1 (en) * | 2016-12-30 | 2018-07-05 | Nokia Technologies Oy | An apparatus and associated methods in the field of virtual reality |
JP2019101050A (en) * | 2017-11-28 | 2019-06-24 | 株式会社コロプラ | Program for assisting in performing musical instrument in virtual space, computer-implemented method for assisting in selecting musical instrument, and information processor |
KR20190072190A (en) * | 2017-12-15 | 2019-06-25 | 동의대학교 산학협력단 | Virtual reality 3D cooking game system and method |
- 2019-10-17: KR application KR1020190129265A granted as patent KR102297532B1 (active, IP Right Grant)
- 2020-10-14: WO application PCT/KR2020/013968 filed as WO2021075835A1 (active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
KR20210045806A (en) | 2021-04-27 |
KR102297532B1 (en) | 2021-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107132988B (en) | Virtual objects condition control method, device, electronic equipment and storage medium | |
US11036286B2 (en) | Information processing apparatus, information processing method, and computer-readable recording medium | |
WO2015034177A1 (en) | Method and device for executing command on basis of context awareness | |
JP2022527502A (en) | Virtual object control methods and devices, mobile terminals and computer programs | |
EP2761973A1 (en) | Method of operating gesture based communication channel and portable terminal system for supporting the same | |
CN105912241A (en) | Method and device for man-machine interaction, and terminal | |
WO2013118987A1 (en) | Control method and apparatus of electronic device using control device | |
KR20120018685A (en) | Termianl for recogniging multi user input and control method thereof | |
CN108339272A (en) | Virtual shooting main body control method and device, electronic equipment, storage medium | |
WO2011145788A1 (en) | Touch screen device and user interface for the visually impaired | |
CN110471870A (en) | Method, apparatus, electronic equipment and the storage medium of multisystem operation | |
WO2021075835A1 (en) | Content provision method and system based on virtual reality sound | |
KR20180043866A (en) | Method and system to consume content using wearable device | |
JP2018205317A (en) | Method to grasp space that electronic equipment is located by utilizing battery charger thereof, electronic equipment and battery charger | |
JP2018113031A (en) | Method and system for detecting automated input | |
CN113867873A (en) | Page display method and device, computer equipment and storage medium | |
WO2020096121A1 (en) | Force feedback method and system using density | |
WO2023106661A1 (en) | Remote control system | |
CN109104759A (en) | Exchange method, electronic equipment and the computer-readable medium of electronic equipment | |
WO2019066408A1 (en) | Device and method for providing text message on basis of touch input | |
TWM449618U (en) | Configurable hand-held system for interactive games | |
CN104049807B (en) | A kind of information processing method and electronic equipment | |
WO2012118342A2 (en) | Method and apparatus for controlling a user terminal having a touch screen, recording medium therefor, and user terminal comprising the recording medium | |
WO2024202705A1 (en) | Program, information processing device, and method | |
JP6672399B2 (en) | Electronics |
Legal Events
- Code 121: EP — the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20876612; Country: EP; Kind code: A1)
- Code NENP: Non-entry into the national phase (Ref country code: DE)
- Code 122: EP — PCT application non-entry in European phase (Ref document number: 20876612; Country: EP; Kind code: A1)