WO2020213783A1 - System and method for providing user interface of virtual interactive content, and recording medium having computer program stored therein for same
- Publication number
- WO2020213783A1 (PCT/KR2019/006028)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interactive content
- image
- user interface
- virtual interactive
- digital camera
- Prior art date: 2019-04-17
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0425—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/363—Image reproducers using image projection screens
Definitions
- the present invention relates to a technology for providing a user interface for virtual interactive content using a wall or a floor as a screen. More specifically, the present invention relates to a technology for providing a user interface for playing virtual interactive content projected on a wall or a floor using a virtual mouse object such as a soccer ball.
- A moving object is identified in an image of the content being played, captured with a digital camera, and the movement of the object is tracked so that an event corresponding to a mouse click is generated when the object touches the wall.
- the characteristic pattern of the object can be learned in advance through machine learning.
- Machine vision technology has been proposed as a way to implement these functions.
- Machine vision technology refers to a combination of hardware and software that captures and processes images and provides operating instructions to devices, and has been used primarily to manage product quality in various industries.
- One of these machine vision technologies is 3D depth camera technology using an infrared (IR) camera.
- The infrared camera includes at least one infrared light irradiation module and at least one light sensor module, and measures the distance to a moving object for every pixel of a photographed image by using the lag or phase shift of a modulated optical signal, the so-called ToF (Time-of-Flight) method.
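- As a concrete illustration of the ToF principle described above, the following minimal Python sketch computes distance from the phase shift of a modulated infrared signal; the modulation frequency and phase values are illustrative assumptions, not parameters taken from this disclosure.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of a modulated signal: d = c * phi / (4 * pi * f).

    The factor 4*pi (rather than 2*pi) accounts for the round trip
    of the light from the camera to the object and back.
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# Example: a 20 MHz modulated IR signal returning with a pi/2 phase lag
print(f"{tof_distance(math.pi / 2, 20e6):.2f} m")  # ~1.87 m
```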
- a technique for identifying and tracking an object corresponding to a virtual mouse through digital processing of a play image of a content captured using a general digital camera has also been proposed.
- Mixed reality (MR) is a technology that combines the real and the virtual to create a new environment in which real and virtual objects coexist, allowing users to experience various digital information more realistically by interacting with that environment in real time.
- Mixed reality includes augmented reality (AR) that adds virtual information based on reality and augmented virtuality (AV) that adds reality information to a virtual environment.
- For example, the virtual touch method and apparatus using a 3D camera disclosed in Korean Patent Publication No. 2013-0050672 uses both machine vision and augmented reality (AR): a 3D camera recognizes the user's shape and 3D touch motions, realizing a virtual touch that works like a real touch screen without a touch display or a special touch-recognition device.
- An object of the present invention is to provide a user interface scheme for accurately identifying an object in a content image regardless of the brightness of a place where interactive content is executed.
- Another object of the present invention is to provide a user interface scheme that provides compatibility without additional modification to interactive content applications distributed on the market.
- An embodiment of the present invention for achieving the above objects relates to a system for providing a user interface of virtual interactive content, comprising: a digital camera that photographs a virtual interactive content image displayed on a wall; and an application driving device that executes a conversion engine including an object recognition module, which identifies a predefined object in the captured image of the virtual interactive content and determines the distance and coordinates of the object, and an event module, which delivers an event including the coordinates of the object to the interactive content application when the object hits the wall surface.
- the system of this embodiment may further include an image output device that displays the image of the virtual interactive content on the wall.
- The system of this embodiment may further include a machine learning server that repeatedly analyzes a plurality of image data including the object to learn a pattern related to at least one of a shape, a size, a surface pattern, and a color for identifying the object.
- the digital camera of the present embodiment may have at least two image sensors, and in this case, the object recognition module may calculate a distance between the digital camera and the object by using a difference in angle of view of the image sensors.
- the digital camera according to the present embodiment may have at least one image sensor, and in this case, the object recognition module may calculate a distance between the digital camera and a wall surface based on the size of an object in an image captured by the digital camera.
- Another embodiment of the present invention includes the steps of: identifying a pre-learned object in a captured image of virtual interactive content; determining the distance and coordinates of the identified object; generating an event including the coordinates of a touch point when the object hits a wall; and transmitting the event to a virtual interactive content application.
- the method of providing a user interface of the present invention may further include determining that the object has touched the wall when the calculated distance of the object matches the preset distance of the wall.
- The method for providing a user interface of the present invention may further include a machine learning step of repeatedly analyzing a plurality of image data including the object to learn a pattern related to at least one of a shape, a size, a surface pattern, and a color for identifying the object.
- Another embodiment of the present invention relates to a computer program in which the above-described method for providing a user interface is implemented as an algorithm or a computer-readable recording medium in which the program is stored.
- According to the present invention, sports interactive content can be enjoyed without being affected by environmental factors of the play place, such as illumination, temperature, and humidity.
- Content can be enjoyed comfortably in an indoor space with sufficiently bright lighting even on hot or cold days or days with a high concentration of fine dust, and on an outdoor court in regions where a temperature and climate suitable for exercise are maintained.
- the recognition rate can be remarkably improved by learning in advance various characteristics of a throwing object that serves as a mouse that controls execution of content through repetitive analysis.
- Because the conversion engine that generates the event and the virtual interactive content that receives the event are executed independently, there is no need to modify the virtual interactive content to maintain compatibility between the two programs. The productivity of interactive content is therefore increased while the universality of the conversion engine is guaranteed.
- FIG. 1 is a conceptual diagram schematically showing a configuration of a system for providing a user interface according to a first embodiment.
- FIG. 2 is a block diagram of a system for providing a user interface according to the first embodiment.
- FIGS. 3 and 4 are block diagrams showing the system configurations of modified versions of the first embodiment.
- FIGS. 5A to 5D illustrate photographing scenes of an object image for machine learning.
- FIG. 6 is a block diagram of a system for providing a user interface according to a second embodiment.
- FIG. 7 is a flow chart showing step-by-step a method of providing a user interface according to the third embodiment.
- FIG. 8 is a flowchart illustrating a machine learning process step by step in a method of providing a user interface according to the third embodiment.
- The term "module" refers to a unit that processes a specific function or operation, and may mean hardware, software, or a combination of hardware and software.
- the term "moving object” or "object” refers to an object that can cause movement by a user using a part of his or her body or by using equipment such as a racket or a club. Volleyball ball, tennis ball, badminton ball, Ozami, darts, and the like are exemplified. However, the present invention is not limited thereto, and any object that maintains a certain shape and can be easily moved by a user may correspond to a “object”. These “objects” may also be referred to as “virtual mouse objects” or “virtual pointer objects” in that they serve as input means (eg, mouse, pointer, etc.) for executing or controlling virtual interactive content.
- "Interactive content" refers to content that outputs or executes various results in response to a user's real-time actions, rather than content that is unilaterally played or executed according to a predetermined plot.
- "Virtual interactive content" is not executed using conventional input means such as a mouse or a touch pad (hereinafter 'mouse, etc.'); instead, the actual content is executed on a separate computer device.
- The execution image of the content is projected directly onto a wall, floor, or ceiling (hereinafter 'wall surface') through a beam projector, onto a screen installed on a wall, or output through a display device installed on a wall (for example, a digital TV or a digital monitor), and the same effect as an input means such as a mouse is virtually implemented by touching the wall surface on which the content image is displayed with a moving object.
- Such interactive content may be implemented as media content such as a movie, a digital book, or a digital picture frame, or as an interactive game performed by a user's touch input.
- Embodiment 1 relates to a system for providing a user interface of virtual interactive content that recognizes a moving object using a stereo camera.
- FIG. 1 is a conceptual diagram schematically showing a configuration of a system for providing a user interface according to a first embodiment.
- the user plays the content by throwing the ball corresponding to the virtual mouse object toward a specific point on the wall where the content is displayed.
- a digital camera 10 for photographing a user's action and content scene is disposed on a wall or ceiling opposite to the wall surface on which the content is projected, and the interactive content is executed by an application driving device 20 provided separately.
- An image output device 30 that receives an image of interactive content from the application driving device 20 and outputs it to the wall surface is disposed on the wall or ceiling opposite the wall surface on which the content is projected.
- FIG. 2 is a block diagram showing a detailed configuration of a system for providing a user interface according to the first embodiment.
- the system of Example 1 includes a digital camera 10, an application driving device 20, and an image output device 30, and may further include a machine learning server 40.
- the digital camera 10 photographs a content scene including a moving virtual pointer object, and transmits the photographed image data to the application driving device 20.
- The digital camera 10 may be connected to the application driving device 20 through a wired communication interface such as USB or RJ-45, or through a short-range or broadband wireless communication interface such as Bluetooth, IEEE 802.11, or LTE.
- The communication interfaces and protocols mentioned here are only examples; any communication interface and protocol that transmits image data smoothly can be used.
- a stereo-type measurement algorithm may be used to identify a moving object in image data and estimate a distance between the camera 10 and the moving object.
- the same object is photographed using two camera modules (image sensors) separated from each other, and the distance to the object is estimated by using the angle difference caused by the discrepancy between the viewpoints between the two camera modules.
- the digital camera 10 of Example 1 includes at least two 2D image sensor modules (not shown).
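- A minimal sketch of the stereo-triangulation relation described above, assuming a calibrated pair of sensors, is shown below; the focal length, baseline, and disparity values are illustrative assumptions.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two image sensors in metres
    disparity_px -- horizontal shift of the object between the two views
    """
    if disparity_px <= 0:
        raise ValueError("object must appear in both views with positive disparity")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 35 px disparity -> 2.4 m
print(stereo_depth(700.0, 0.12, 35.0))
```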
- the application driving device 20 executes the conversion engine 21 and the interactive content application 22.
- the application driving device 20 may install and execute the conversion engine 21 and the interactive content application 22 together in a single device such as a desktop PC, a notebook computer, a mobile tab, a smartphone, and a server.
- the application driving device 20 may install and execute the conversion engine 21 on a single device such as a desktop PC illustrated above, and install and execute the interactive content application 22 on a separate server 20-1.
- FIG. 3 is a block diagram showing the system configuration of such a modified embodiment.
- Alternatively, the conversion engine 21 may be installed and executed on the digital camera 10 while only the interactive content application is executed on the application driving device 20; in this case, the digital camera 10 and the application driving device 20 can be connected through a local area network or an LTE or 5G broadband network.
- FIG. 4 is a block diagram showing the system configuration of this modified embodiment.
- The conversion engine 21 generates an event corresponding to a mouse click when the moving object collides with the wall, and transmits the event to the interactive content application 22.
- the conversion engine 21 may include an object recognition module 21-1 and an event module 21-2.
- The object recognition module 21-1 identifies a moving object by processing the image data sent from the camera 10, and estimates the distance between the camera 10 and the object using the stereo technique. Object identification and distance estimation will be collectively defined as "tracking". Tracking may be performed on every frame of image data sent from the camera 10, or performed intermittently on frames at preset intervals, in consideration of the load placed on the conversion engine 21 by frequent tracking.
- the object recognition module 21-1 may be included in the conversion engine 21 or installed in the digital camera 10 as firmware.
- In this case, the digital camera 10 provides the event module 21-2 of the conversion engine 21 with tracking information including the distance to the object and the coordinates of the object, instead of image data.
- The event module 21-2 determines whether the moving object has collided with the wall, converts the coordinates of the collision point into coordinates on the execution screen of the interactive content application, generates an event including the converted coordinates, and transmits the event to the interactive content application.
- the principle of the event module 21-2 determining whether a moving object has collided with a wall surface may be implemented with various algorithms.
- An example algorithm is as follows: the distance A between the camera 10 and the wall surface is measured in advance and stored in the conversion engine 21.
- The event module 21-2 continuously compares the object distance B sent by the object recognition module 21-1 with the previously stored distance A, and when the two distances become equal, the object is considered to have hit the wall.
- Another example algorithm is as follows: the event module 21-2 continuously monitors the change in the distance B to the object sent from the object recognition module 21-1, and the moment when the distance B stops increasing and turns to a decrease is determined to be the moment of collision.
- A third example algorithm is as follows: the event module 21-2 continuously monitors the change in the size of the object identified in the image data sent from the object recognition module 21-1. Since the apparent size gradually decreases as the distance from the camera 10 increases, the moment when the size of the object stops decreasing and turns to an increase is determined to be the moment of collision. A sketch of the first two tests appears below.
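- A minimal sketch of the first two example algorithms above; the pre-measured wall distance and the tolerance are illustrative assumptions.

```python
WALL_DISTANCE_M = 4.0  # distance A, measured in advance and stored in the engine
TOLERANCE_M = 0.05     # allowance for measurement noise (an assumption)

def hit_by_distance_match(object_distance_m: float) -> bool:
    """First algorithm: the tracked object distance B matches the wall distance A."""
    return abs(object_distance_m - WALL_DISTANCE_M) <= TOLERANCE_M

def hit_by_turnaround(prev_d: float, curr_d: float, next_d: float) -> bool:
    """Second algorithm: distance B increases toward the wall, then decreases
    after the bounce; the local maximum is taken as the collision moment."""
    return prev_d < curr_d and next_d < curr_d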
- the event module 21-2 has a mapping table in which the XY coordinates of the wall screen on which the content image is actually displayed and the xy coordinates on the execution screen of the content application are matched in advance.
- the event module 21-2 finds the XY coordinate of the collision point by processing the image data, and finds the xy coordinate matching the XY coordinate from the mapping table.
- the mapping table may be a database in which XY coordinates at predetermined intervals and xy coordinates at predetermined intervals are stored in advance, or an algorithm defining a correlation between the XY coordinates and the xy coordinates by an equation.
- the event module 21-2 generates an event including the converted xy coordinate and transmits it to the interactive content application.
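- As a minimal sketch of the coordinate conversion, the mapping can be expressed as a linear scaling between the wall screen and the application screen; the dimensions below are illustrative assumptions, and a calibrated lookup table could be used instead, as noted above.

```python
WALL_W_M, WALL_H_M = 3.0, 2.0    # size of the image projected on the wall (assumed)
APP_W_PX, APP_H_PX = 1920, 1080  # resolution of the content application (assumed)

def wall_to_app(X: float, Y: float) -> tuple[int, int]:
    """Convert a wall-screen touch point (X, Y) to application coordinates (x, y)."""
    x = round(X / WALL_W_M * APP_W_PX)
    y = round(Y / WALL_H_M * APP_H_PX)
    return x, y

print(wall_to_app(1.5, 1.0))  # centre of the wall image -> (960, 540)
```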
- For example, in a graphical user interface (GUI), continuously generating mouse_move_Event(A1,B1), mouse_move_Event(A2,B2), mouse_move_Event(A3,B3), ... moves the mouse cursor along the path (A1,B1), (A2,B2), (A3,B3), ... on the display, and generating mouse_left_Click(An,Bn) at the point where the cursor stops notifies the operating system or the activated application that the left mouse button has been clicked at the coordinates (An,Bn).
- Here, "event" should be understood as a concept encompassing all events for inputting a user's instruction to the interactive content application 22. Accordingly, the events transmitted from the conversion engine 21 to the interactive content application 22 may be variously defined as a left mouse click event, a right mouse click event, a mouse movement event, a mouse double-click event, and a mouse wheel click event.
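- A minimal sketch of such event injection, using the third-party pynput library as an assumed stand-in for the operating system's event interface (the disclosure does not name a specific API):

```python
from pynput.mouse import Button, Controller

mouse = Controller()

def inject_touch(path: list[tuple[int, int]]) -> None:
    """Replay the tracked object path as mouse-move events,
    then issue a left click at the final (touch) point."""
    for x, y in path:
        mouse.position = (x, y)  # one mouse-move event per tracked coordinate
    mouse.click(Button.left)     # left click at the last coordinate

# Example: cursor follows (A1,B1) -> (A2,B2) -> (A3,B3), then clicks
inject_touch([(100, 200), (150, 220), (200, 260)])
```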
- When the object recognition module 21-1 identifies a plurality of objects, the event module 21-2 may generate a left mouse click event when the first object is recognized, a right mouse click event when the second object is recognized, and a mouse wheel click event when the third object is recognized.
- Since the player can then control the virtual interactive content using three types of objects, content with a richer plot can be enjoyed.
- As described above, the present invention makes a moving object operate like a mouse or a pointer by having the conversion engine 21 generate an event and transmit the generated event to the interactive content application 22.
- As long as the event generated by the conversion engine 21 is compatible with the operating system on which the interactive content application 22 runs, Alice, the developer of the interactive content application 22, does not need to coordinate compatibility with Bob, the developer of the conversion engine 21, in advance. The conversion engine 21 of the present invention therefore has the advantage of being applicable to any interactive content sold on the market, without separate modification for interfacing.
- the image output device 30 may be any type of device as long as it has a function of outputting a content image on a wall or the like.
- a beam projector for example, a beam projector, a display device such as a large TV or monitor mounted on a wall, and an augmented reality headset may be used as the image output device 30.
- the image output device 30 is connected to the application driving device 20 through a cable or wireless communication.
- When the content image is projected, a problem may occur in which the user or the moving object casts a shadow on the image.
- When a display device is used instead, an image without a shadowed area caused by the user can be displayed.
- the machine learning server 40 includes a machine learning engine (not shown) that learns various characteristics for identifying an object based on the image data sent from the camera 10.
- For example, the machine learning server 40 can find a characteristic pattern for identifying the object based on at least one of the shape of the ball, the size of the ball, the pattern on the surface of the ball, and the color of the ball.
- the machine learning server 40 may receive image data through an application driving device 20 connected to the digital camera 10 or may be directly connected to the digital camera 10 to receive image data.
- 5A to 5D illustrate examples of photographing an object at various locations in order to pre-learn identification information of an object by machine learning.
- For example, the user holds an object such as a ball in the hand and changes its orientation front, rear, left, right, up, and down relative to the camera 10 while dozens to hundreds of images are captured.
- FIGS. 5A to 5D illustrate a case in which the user directly holds the object and shoots images one by one, but the method is not limited thereto; a scene in which the object (ball) is thrown into the shooting area of the camera 10 or thrown at the wall by the user can also be recorded as video, and machine learning can be performed on each frame constituting the video.
- the machine learning server 40 finds a specific pattern to more clearly identify an object by repeatedly analyzing dozens to hundreds of different image data captured in this way.
- The object recognition module 21-1 of the conversion engine 21 can easily identify an object in image data using the identification pattern information obtained in advance through learning by the machine learning server 40.
- As a result, the object recognition module 21-1 of the conversion engine can accurately identify the object regardless of the background of the captured images.
- the machine learning server 40 may learn only one object, but may learn in advance to identify a plurality of different objects when control is required with a plurality of objects according to the type of content.
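- A minimal sketch of this pre-learning step is shown below: a classifier is fitted on colour-histogram features of image crops containing the object and background patches. The use of OpenCV histograms and an SVM is an illustrative assumption; the disclosure does not prescribe a specific learning algorithm.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def colour_histogram(bgr_crop: np.ndarray) -> np.ndarray:
    """Hue/saturation histogram as a simple colour-and-pattern feature."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def train_object_model(crops: list[np.ndarray], labels: list[int]) -> SVC:
    """labels: 1 = virtual mouse object (e.g. the ball), 0 = background patch."""
    features = np.stack([colour_histogram(c) for c in crops])
    return SVC(kernel="rbf", probability=True).fit(features, labels)
```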
- Embodiment 2 relates to a system for providing a user interface of virtual interactive content that recognizes a moving object using a mono camera.
- Embodiment 2 assumes a case in which a mono camera such as a closed-circuit television (CCTV) camera is already installed for security purposes, or a case in which a mono camera is adopted to build the user interface providing system relatively inexpensively, although it is not necessarily limited to these cases.
- FIG. 6 is a block diagram showing a detailed configuration of a system for providing a user interface according to a second embodiment.
- In Embodiment 2, the user interface providing system includes a digital camera 100, an application driving device 200, and an image output device 300, and may further include a machine learning server 400.
- the digital camera 100 photographs a content scene including a moving virtual pointer object and transmits the photographed image data to the application driving device 200.
- The connection structure and communication protocol between the digital camera 100 and the application driving device 200 are the same as those of the digital camera 10 of the first embodiment.
- the digital camera 100 identifies a moving object from image data and uses a structured pattern measurement algorithm to estimate a distance between the camera 100 and the moving object.
- The structured-pattern digital camera 100 includes at least one light projection module and at least one image sensor module. When the light projection module projects a structured set of light patterns onto an object, the image sensor captures an image of the reflected projection, performing optical 3D scanning, and the distance between the camera 100 and the object is measured using the 3D scanning result.
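- For comparison, the summary and claims also describe a single-sensor variant that calculates distance from the apparent size of the object in the image. A minimal sketch of that pinhole relation follows; the ball diameter and focal length are illustrative assumptions.

```python
BALL_DIAMETER_M = 0.22  # assumed real diameter of the virtual mouse object
FOCAL_PX = 700.0        # assumed focal length of the mono camera, in pixels

def distance_from_apparent_size(pixel_diameter: float) -> float:
    """Pinhole relation: Z = f * D_real / d_pixels.

    The smaller the object appears in the image, the farther it is
    from the camera."""
    return FOCAL_PX * BALL_DIAMETER_M / pixel_diameter

print(f"{distance_from_apparent_size(40.0):.2f} m")  # 40 px wide -> ~3.85 m
```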
- the application driving device 200 executes the conversion engine 210 and the interactive content application 220. It is the same as described in the first embodiment that the conversion engine 210 and the interactive content application 220 may be executed in one device 200 or separately executed in a separate device.
- The conversion engine 210 generates an event corresponding to a mouse click when the moving object collides with the wall, and transmits the event to the interactive content application 220.
- the conversion engine 210 may include an object recognition module 211 and an event module 212.
- the object recognition module 211 processes image data sent from the camera 100 to identify a moving object, and estimates the distance between the camera 100 and the object using a structured pattern technique.
- The event module 212 determines whether the moving object has collided with the wall, converts the coordinates of the collision point into coordinates on the execution screen of the interactive content application, generates an event including the converted coordinates, and transmits the event to the interactive content application.
- the principle of the event module 212 transforming the coordinates is the same as described in the first embodiment.
- the image output device 300 and the machine learning server 400 are also the same as the image output device 30 and the machine learning server 40 of the first embodiment.
- Embodiment 3 relates to a method of providing a user interface for virtual interactive content.
- FIG. 7 is a flow chart showing step-by-step a method of providing a user interface according to the third embodiment.
- a virtual interactive content image is displayed on the wall by an image output device such as a beam projector, and the user throws a virtual mouse object to the wall to play the content.
- the digital camera installed on the ceiling captures an image displayed on the wall and a scene where the user throws an object on the wall, and transmits the captured image data to the application driving device in real time (S101).
- the conversion engine running in the application driving device identifies the virtual mouse object learned in advance from the image data sent from the camera (S102), and tracks the movement of the object (S103).
- “tracking” refers to a process of determining the distance between the identified object and the camera and the coordinates on the wall screen where the object is located.
- When the object is determined to have touched the wall (S104), the conversion engine converts the XY coordinates of the touch point into xy coordinates on the execution screen of the interactive content application (S105).
- a mouse event including the converted coordinates is generated, and the mouse event is transmitted to the interactive content application (S106).
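- Pulling steps S101 to S106 together, a hedged end-to-end sketch of the conversion engine's main loop might look as follows; `camera`, `content_app`, `identify_object`, and `track` are hypothetical stand-ins for the modules described above, and `hit_by_distance_match` and `wall_to_app` refer to the earlier sketches.

```python
def run_conversion_engine(camera, content_app) -> None:
    """Hedged sketch of the S101-S106 loop; not the actual implementation."""
    for frame in camera.frames():             # S101: receive captured image data
        obj = identify_object(frame)          # S102: find the pre-learned object
        if obj is None:
            continue
        distance_m, wall_xy = track(obj)      # S103: object distance and XY coords
        if hit_by_distance_match(distance_m): # S104: object has touched the wall
            x, y = wall_to_app(*wall_xy)      # S105: XY -> application xy coords
            content_app.post_left_click(x, y) # S106: deliver the mouse event
```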
- FIG. 8 is a flowchart illustrating a machine learning process step by step in a method of providing a user interface according to the third embodiment.
- For better understanding, the following description refers to the situation of FIGS. 5A to 5D, in which a user enters the shooting range of the digital camera, holds a virtual mouse object such as a ball in one hand, and performs test shots tens to hundreds of times.
- the machine learning server receives image data from a digital camera or an application driving device connected to the digital camera (S201), and processes the image data to derive at least one characteristic of the shape, size, surface pattern, and color of the object ( S202).
- For example, the user can capture tens to hundreds of images while holding an object such as a ball in the hand and changing its orientation front, rear, left, right, up, and down relative to the camera.
- the machine learning server repeatedly analyzes dozens to hundreds of different image data captured in this way, thereby defining a specific pattern to more clearly identify an object.
- the termination of the machine learning process may be automatically executed when a preset criterion is satisfied, or may be executed arbitrarily at the discretion of an administrator.
- the pattern for object identification defined through the above steps is provided to the conversion engine, so that the object can be accurately identified even if there is any kind of background in the still image of the moving object.
- It will be readily understood by those skilled in the art that all or part of the functions of the methods for providing a user interface of virtual interactive content of the third and fourth embodiments described above may be provided on a computer-readable recording medium by tangibly implementing a program of instructions for implementing them.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- The program instructions recorded on the computer-readable recording medium may be specially designed and constructed for the present invention, or may be known and available to those skilled in computer software. Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory, and USB memory.
- the computer-readable recording medium may be a transmission medium such as an optical or metal wire or a waveguide including a carrier wave for transmitting a signal specifying a program command or a data structure.
- Examples of program instructions include high-level language codes that can be executed by a computer using an interpreter or the like, in addition to machine language codes such as those produced by a compiler.
- the hardware device may be configured to operate as one or more software modules to perform the operation of the present invention and vice versa.
Abstract
Description
Claims (12)
- 1. A system for providing a user interface of virtual interactive content, comprising: a digital camera that photographs a virtual interactive content image displayed on a wall; and an application driving device that executes a conversion engine including an object recognition module, which identifies a predefined object in the captured image of the virtual interactive content and determines the distance and coordinates of the object, and an event module, which delivers an event including the coordinates of the object to an interactive content application when the object hits the wall.
- 2. The system of claim 1, further comprising an image output device that displays the image of the virtual interactive content on the wall.
- 3. The system of claim 2, wherein the image output device is any one of a beam projector, a display device mounted on a wall, and an augmented reality headset.
- 4. The system of claim 1, further comprising a machine learning server that repeatedly analyzes a plurality of image data including the object to learn a pattern related to at least one of a shape, a size, a surface pattern, and a color for identifying the object.
- 5. The system of claim 1, wherein the digital camera has at least two image sensors, and the object recognition module calculates the distance between the digital camera and the object using a difference in angle of view of the image sensors.
- 6. The system of claim 1, wherein the digital camera has at least one image sensor, and the object recognition module calculates the distance between the digital camera and the object based on the size of the object in an image captured by the digital camera.
- 7. A method of providing a user interface of virtual interactive content, comprising: identifying a pre-learned object in a captured image of virtual interactive content; determining the distance and coordinates of the identified object; generating an event including the coordinates of a touch point when the object hits a wall; and delivering the event to a virtual interactive content application.
- 8. The method of claim 7, wherein the captured image is photographed by a digital camera having at least two image sensors, and the distance is calculated based on a difference in angle of view of the image sensors.
- 9. The method of claim 7, wherein the captured image is photographed by a digital camera having one image sensor, and the distance is calculated based on the size of the object in the content image.
- 10. The method of claim 8, further comprising determining that the object has touched the wall when the calculated distance of the object matches a preset distance of the wall.
- 11. The method of claim 7, further comprising a machine learning step of repeatedly analyzing a plurality of image data including the object to learn a pattern related to at least one of a shape, a size, a surface pattern, and a color for identifying the object.
- 12. A computer-readable recording medium storing a computer program implementing the method of any one of claims 7 to 11 as an algorithm.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20190045098 | 2019-04-17 | ||
KR10-2019-0045098 | 2019-04-17 | ||
KR1020190058257A KR102041279B1 (en) | 2019-04-17 | 2019-05-17 | system, method for providing user interface of virtual interactive contents and storage of computer program therefor |
KR10-2019-0058257 | 2019-05-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020213783A1 true WO2020213783A1 (en) | 2020-10-22 |
Family
ID=68729655
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/006029 WO2020213784A1 (en) | 2019-04-17 | 2019-05-20 | Sports interactive content execution system for inducing exercise |
PCT/KR2019/006028 WO2020213783A1 (en) | 2019-04-17 | 2019-05-20 | System and method for providing user interface of virtual interactive content, and recording medium having computer program stored therein for same |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/006029 WO2020213784A1 (en) | 2019-04-17 | 2019-05-20 | Sports interactive content execution system for inducing exercise |
Country Status (2)
Country | Link |
---|---|
KR (3) | KR102041279B1 (en) |
WO (2) | WO2020213784A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102454833B1 (en) * | 2022-05-12 | 2022-10-14 | (주)이브이알스튜디오 | Display device displaying image of virtual aquarium, and control method for user terminal communicating to display device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120040818A (en) * | 2010-10-20 | 2012-04-30 | 에스케이플래닛 주식회사 | System and method for playing contents of augmented reality |
KR20120061110A (en) * | 2010-10-22 | 2012-06-13 | 주식회사 팬택 | Apparatus and Method for Providing Augmented Reality User Interface |
KR20130071059A (en) * | 2011-12-20 | 2013-06-28 | 엘지전자 주식회사 | Mobile terminal and method for controlling thereof |
US20180293442A1 (en) * | 2017-04-06 | 2018-10-11 | Ants Technology (Hk) Limited | Apparatus, methods and computer products for video analytics |
KR101963682B1 (en) * | 2018-09-10 | 2019-03-29 | 주식회사 큐랩 | Data management system for physical measurement data by performing sports contents based on augmented reality |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110013076A (en) * | 2009-08-01 | 2011-02-09 | 강병수 | Ring input device for gestural and touch interface use camera system |
KR20120114767A (en) | 2011-04-08 | 2012-10-17 | 동서대학교산학협력단 | Game display system throwing objects and a method thereof |
JP6074170B2 (en) * | 2011-06-23 | 2017-02-01 | インテル・コーポレーション | Short range motion tracking system and method |
KR101330531B1 (en) | 2011-11-08 | 2013-11-18 | 재단법인대구경북과학기술원 | Method of virtual touch using 3D camera and apparatus thereof |
KR101572346B1 (en) * | 2014-01-15 | 2015-11-26 | (주)디스트릭트홀딩스 | Service system and service method for augmented reality stage, live dance stage and live audition |
KR20150035854A (en) * | 2015-02-17 | 2015-04-07 | 주식회사 홍인터내셔날 | A dart game apparatus capable of authentification using throw line on a remote multi mode |
KR101860753B1 (en) * | 2016-06-13 | 2018-05-24 | (주)블루클라우드 | User recognition content providing system and operating method thereof |
-
2019
- 2019-05-17 KR KR1020190058257A patent/KR102041279B1/en active IP Right Grant
- 2019-05-17 KR KR1020190058258A patent/KR102054148B1/en active IP Right Grant
- 2019-05-20 WO PCT/KR2019/006029 patent/WO2020213784A1/en active Application Filing
- 2019-05-20 WO PCT/KR2019/006028 patent/WO2020213783A1/en active Application Filing
- 2019-06-17 KR KR1020190071560A patent/KR102275702B1/en active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
KR102054148B1 (en) | 2019-12-12 |
KR102275702B1 (en) | 2021-07-09 |
WO2020213784A1 (en) | 2020-10-22 |
KR102041279B1 (en) | 2019-11-27 |
KR20200122202A (en) | 2020-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013043020A2 (en) | System and method for photographing moving subject by means of multiple cameras, and acquiring actual movement trajectory of subject based on photographed images | |
CN101919241B (en) | Dual-mode projection apparatus and method for locating a light spot in a projected image | |
US8818027B2 (en) | Computing device interface | |
WO2018182321A1 (en) | Method and apparatus for rendering timed text and graphics in virtual reality video | |
WO2016044778A1 (en) | Method and system for an automatic sensing, analysis, composition and direction of a 3d space, scene, object, and equipment | |
WO2013141522A1 (en) | Karaoke and dance game | |
WO2020101094A1 (en) | Method and apparatus for displaying stereoscopic strike zone | |
WO2021177535A1 (en) | Unmanned sports relay service method using camera position control and image editing through real-time image analysis and apparatus therefor | |
JP2000352761A (en) | Video projection device and method therefor, and video projection controller | |
WO2018129792A1 (en) | Vr playing method, vr playing apparatus and vr playing system | |
EP3039476A1 (en) | Head mounted display device and method for controlling the same | |
WO2019194529A1 (en) | Method and device for transmitting information on three-dimensional content including multiple view points | |
CN106527825A (en) | Large-screen remote control interaction system and interaction method thereof | |
WO2020213783A1 (en) | System and method for providing user interface of virtual interactive content, and recording medium having computer program stored therein for same | |
WO2019078580A2 (en) | Method and device for transmitting immersive media | |
WO2017195984A1 (en) | 3d scanning device and method | |
WO2019035581A1 (en) | Server, display device and control method therefor | |
Meško et al. | Laser spot detection | |
WO2018030795A1 (en) | Camera device, display device, and method for correcting motion in device | |
JP2003346190A (en) | Image processor | |
WO2018139810A1 (en) | Sensing apparatus for calculating position information of object in motion, and sensing method using same | |
JP7315489B2 (en) | Peripheral tracking system and method | |
CN103327385B (en) | Based on single image sensor apart from recognition methods and device | |
WO2023234532A1 (en) | Data recording method, device and system for virtual production | |
WO2020213786A1 (en) | Virtual interactive content execution system using body movement recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19925510; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19925510; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 26/04/2022) |