WO2020213786A1 - Virtual interactive content execution system using body movement recognition - Google Patents
- Publication number
- WO2020213786A1 (PCT/KR2019/007315)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- movement
- player
- interactive content
- event
- digital camera
- Prior art date
Classifications
- G06F18/00—Pattern recognition
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by opto-electronic transducing means
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06N20/00—Machine learning
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/363—Image reproducers using image projection screens
Definitions
- The present invention relates to a system for executing virtual interactive content using body movement recognition and, more particularly, to a system that executes virtual interactive content by recognizing specific motions of a player's upper and/or lower body with a digital camera and generating corresponding predetermined events.
- A virtual interactive content execution technology that projects content such as a game onto a large screen such as a wall, recognizes and tracks a player's motion or a thrown object such as a ball thrown by the player, and links the result to the execution of the content has recently been in the spotlight.
- Virtual interactive content that can be enjoyed indoors regardless of environmental conditions such as outdoor temperature, fine dust concentration, rainfall, and snowfall is gradually being introduced.
- One of the conventional virtual interactive content execution systems uses an infrared (IR) camera to track the movement of a player or a thrown object.
- The IR camera module of this system includes at least one infrared light irradiation module and at least one light sensor module, and measures distance for every pixel of the captured image using the lag or phase shift of a modulated optical signal (the Time-of-Flight (ToF) method).
- Patent Document 0001 relates to an object-throwing game display system that includes an IR camera, which recognizes reflections of infrared light from an object thrown at the front of the display, and a computer, which receives the infrared information recognized by the IR camera and obtains the object's location.
- Since the technology of Patent Document 0001 identifies the position of the thrown object using infrared rays, the game space must not be exposed to daylight and must maintain illumination below a predetermined level in order to achieve a recognition rate sufficient for normal play. Play is therefore limited to a closed room under low-illumination lighting, or to a room whose windows are covered with blackout curtains so as not to be exposed to daylight.
- Moreover, owing to the nature of infrared rays, it is difficult to play a game smoothly above a predetermined temperature or humidity. For example, in a hot indoor game hall on a summer day, on an outdoor court in broad daylight, or on an indoor or outdoor court in fog or rain, the recognition rate for a thrown object drops significantly.
- Patent Document 0001 concerns tracking an object thrown by the player, but the same problems occur in technologies that track the player's own motion with an infrared camera.
- Another conventional interactive content execution system installs a touch plate with piezoelectric sensors on the floor; when the player performs various actions on the touch plate while watching interactive content projected on a wall screen, the piezoelectric sensors detect the movements of the player's feet and reflect them in the execution of the interactive content.
- the present invention has been proposed to solve the above-mentioned problems, and an object of the present invention is to provide a virtual interactive content execution system that is not affected by environmental factors of a play place such as illumination, temperature, and humidity.
- Another object of the present invention is to provide a virtual interactive content execution system that can dramatically improve the recognition rate by learning various human features in advance through repeated pre-analysis, so that the player can be identified quickly and accurately in the play video.
- An embodiment of the present invention for achieving the above objects comprises: a digital camera for photographing the movement of the player; and an application driving device that executes a conversion engine including a recognition module, which identifies a movement pattern of a part of the player's body in the image captured by the digital camera, and an event module, which, when the movement pattern of the body part matches a preset pattern, transmits an event including the identifier of that pattern to the interactive content application. Together these constitute an interactive content execution system using body movement recognition.
- The recognition module tracks the movement of a body part based on the distance to that part of the player's body, and when the movement distance and movement direction of the body part match a preset pattern, the event module generates an event including the identifier of the pattern and delivers the generated event to the interactive content application.
- the digital camera has at least two image sensors, and the recognition module estimates a distance between the digital camera and a body part of the player by using a difference in angle of view of the image sensors.
- the recognition module identifies movement of at least one of the player's left arm, right arm, left foot, and right foot.
- The event module generates, from the movement of the body part, an event for at least one of walking, jumping, and movement in any one of a plurality of preset directions.
- The digital camera may instead include at least one image sensor; in this case, the recognition module estimates the distance between the digital camera and a body part of the player based on the size of that body part in the image captured by the digital camera.
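For the single-sensor variant just described, a size-based distance estimate follows the pinhole-camera relation: the smaller a body part of known real size appears in the image, the farther it is. The sketch below is illustrative only; the focal length and the assumed real size of the body part are hypothetical calibration values, not figures from this disclosure.

```python
def estimate_distance(focal_px: float, real_size_m: float, pixel_size: float) -> float:
    """Pinhole-camera estimate: distance = focal length (px) * real size (m)
    / apparent size (px)."""
    if pixel_size <= 0:
        raise ValueError("pixel size must be positive")
    return focal_px * real_size_m / pixel_size

# Example: a 0.25 m foot spanning 50 px with a 500 px focal length
# lies about 2.5 m from the camera.
d = estimate_distance(focal_px=500.0, real_size_m=0.25, pixel_size=50.0)
```

Halving the apparent size doubles the estimated distance, which is why this variant needs a reasonably stable reference size for the tracked body part.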
- a stage installed on the floor may be further included in order to provide a visual guide to the player about the preset movement direction and movement range.
- it may further include a machine learning server that analyzes a plurality of image data including a person and learns pattern information for identifying a person from a background in the image in advance.
- virtual interactive content can be enjoyed without being affected by environmental factors such as illumination, temperature, and humidity.
- Content can be enjoyed comfortably in a sufficiently bright indoor space even on hot or cold days or on days with a high concentration of fine dust, and it can be enjoyed on an outdoor court in regions where a temperature and weather suitable for exercise are maintained.
- Since a touch sensor such as a piezoelectric element is not required to recognize the player's body motion, inconvenience caused by sensor failure is prevented in advance.
- Since the conversion engine that generates events and the virtual interactive content that receives them are executed independently, there is no need to modify the virtual interactive content to maintain compatibility between the two programs. Therefore, the productivity of interactive content development increases while the universality of the conversion engine is guaranteed.
- FIG. 1 is a conceptual diagram schematically showing the configuration of a virtual interactive content execution system according to a first embodiment.
- FIG. 2 is a block diagram showing a detailed configuration of a system for executing virtual interactive content according to the first embodiment.
- FIGS. 3 and 4 are block diagrams showing system configurations of modified embodiments of the first embodiment.
- FIGS. 5A through 5C illustrate various embodiments of a stage.
- The term "module" refers to a unit that processes a specific function or operation, and may mean hardware, software, or a combination of hardware and software.
- The term "interactive content" refers to content that outputs or executes various results in response to a user's real-time actions, as opposed to content that is played or executed unilaterally according to a predetermined plot.
- The content is not operated with conventional input means such as a mouse or touch pad (hereinafter "mouse, etc."). Instead, the content itself runs on a separate computer device while its execution image is projected directly onto a wall, floor, or ceiling (hereinafter "wall surface") through a beam projector, projected onto a screen installed on the wall, or output through a display device (for example, a digital TV or digital monitor) installed on the wall. The player, watching the wall surface on which the content image is displayed, substitutes for the mouse through various movements such as jumping, walking, or moving the right or left arm or leg on a directional bearing plate placed on the floor.
- virtual interactive content refers to interactive content that induces dynamic movement or movement of a player.
- Virtual interactive content can be understood as a concept including all kinds of content that can induce a player's kinetic action. It is therefore obvious to those skilled in the art that it may be implemented as media content such as a tap-dance game using floor touches in nine directions, or a game for experiencing virtual historical relics using walking and arm movements.
- Embodiment 1 relates to a virtual interactive content execution system that recognizes a player's body movement using a stereo camera.
- FIG. 1 is a conceptual diagram schematically showing the configuration of a virtual interactive content execution system according to a first embodiment.
- A digital camera 10 for photographing the user's actions is disposed on the wall opposite the wall on which the interactive content is projected, or on the ceiling or either side wall, and the interactive content is executed on a separate application driving device 20.
- An image output device 30 that receives an image of interactive content from the application driving device 20 and outputs it to the wall surface is disposed on the wall or ceiling opposite the wall surface on which the content is projected.
- a stage 50 is disposed on the floor to provide a visual guide to the player regarding a predetermined orientation and reach distance.
- FIG. 2 is a block diagram showing a detailed configuration of a system for executing virtual interactive content according to the first embodiment.
- The system of Embodiment 1 includes a digital camera 10, an application driving device 20, and an image output device 30, and may further include at least one of a machine learning server 40 and a stage 50.
- the digital camera 10 photographs a motion scene of the player and transmits the photographed image data to the application driving device 20.
- The digital camera 10 can be connected to the application driving device 20 through a wired communication interface such as USB or RJ-45, or through a short-range or broadband wireless communication interface or protocol such as Bluetooth, IEEE 802.11, or LTE.
- the communication interface or communication protocol mentioned here is only an example, and any communication interface and protocol for smoothly transmitting image data can be used.
- a stereo-type measurement algorithm can be used.
- The same object is photographed by two camera modules (image sensors) separated from each other, and the distance to the object is estimated from the angular difference caused by the disparity between the two viewpoints.
- the digital camera 10 of Example 1 includes at least two 2D image sensor modules (not shown).
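The stereo estimate described above reduces to standard triangulation from horizontal disparity between the two image sensors. A minimal sketch follows; the focal length and baseline are hypothetical calibration values for illustration, not parameters from this disclosure.

```python
def stereo_depth(focal_px: float, baseline_m: float,
                 x_left: float, x_right: float) -> float:
    """Triangulate depth from the horizontal disparity between matched
    pixels in the two sensors: Z = f * B / (x_left - x_right)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# 700 px focal length, 6 cm baseline, 21 px disparity -> 2.0 m depth.
z = stereo_depth(focal_px=700.0, baseline_m=0.06, x_left=320.0, x_right=299.0)
```

Nearer objects produce larger disparities, so depth resolution is best close to the camera, where foot and arm movements are tracked.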
- the application driving device 20 executes the conversion engine 21 and the interactive content application 22.
- The application driving device 20 may install and execute the conversion engine 21 and the interactive content application 22 together on a single device such as a desktop PC, notebook computer, tablet, smartphone, or server.
- Alternatively, the application driving device 20 may install and execute the conversion engine 21 on a single device such as the desktop PC illustrated above, and install and execute the interactive content application 22 on a separate server 20-1.
- FIG. 3 is a block diagram showing the system configuration of such a modified embodiment.
- As another modification, the conversion engine 21 may be installed and executed on the digital camera 10 while only the interactive content application runs on the application driving device 20; in this case, the digital camera 10 and the application driving device 20 can be connected through a local area network or an LTE or 5G broadband network.
- FIG. 4 is a block diagram showing the system configuration of this modified embodiment.
- When it detects that the player's arm or foot has moved in a predetermined pattern, the conversion engine 21 generates an event corresponding to the pattern and transmits the generated event to the interactive content application 22. To this end, the conversion engine 21 may include a recognition module 21-1 and an event module 21-2.
- The recognition module 21-1 identifies the player by processing the image data sent from the camera 10, and estimates the distance between the camera 10 and the player's moving body part (for example, a moving right foot) using the stereo technique.
- Hereinafter, the identification of the player and the estimation of the distance to the moving body part are collectively defined as tracking. Tracking may be performed on every frame of the image data sent from the camera 10, or performed intermittently on frames at preset intervals in consideration of the load that frequent tracking places on the conversion engine 21.
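The intermittent-tracking option above can be sketched as a frame-skipping loop. The interval value and the tracker callable are illustrative assumptions; the disclosure only specifies that tracking may run at preset frame intervals.

```python
from typing import Callable, Iterable, List


def track_intermittently(frames: Iterable, tracker: Callable,
                         interval: int = 3) -> List:
    """Run the (expensive) tracker only on every `interval`-th frame,
    trading temporal resolution for a lighter conversion-engine load."""
    if interval < 1:
        raise ValueError("interval must be at least 1")
    results = []
    for i, frame in enumerate(frames):
        if i % interval == 0:
            results.append(tracker(frame))
    return results


# With interval=3, frames 0, 3, 6, 9 of a 10-frame stream are tracked.
tracked = track_intermittently(range(10), tracker=lambda f: f, interval=3)
```

Setting `interval=1` recovers per-frame tracking for content that needs fast reactions, such as jump detection.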
- the recognition module 21-1 may be included in the conversion engine 21 or may be installed in the digital camera 10 as firmware.
- In this case, the digital camera 10 provides the event module 21-2 of the conversion engine 21 with tracking information including the distance to the object and the coordinates of the object, instead of raw image data.
- the event module 21-2 determines whether the player's body movement matches a predetermined pattern, generates an event including an identification flag of the movement pattern, and transmits the generated event to the interactive content application.
- The principle by which the event module 21-2 determines whether the player's body movement matches a predetermined pattern may be implemented with various algorithms. For better understanding, an example of executing interactive content using the movement of the player's legs will be described.
- an algorithm for recognizing a pattern when a player moves his or her feet in a specific direction may be implemented as follows.
- The recognition module 21-1 first estimates the distances between the camera 10 and the right foot, and between the camera 10 and the left foot, from the captured image of the player standing and waiting at the center of the stage 50, and sets them as reference values for pattern recognition.
- Alternatively, the recognition module 21-1 may first analyze only the image of the stage 50 and estimate the distance to its center point, either by using the division lines displayed on the upper surface of the stage 50 or by separating and recognizing the stage object in the image; the distance to the center point may then be regarded as the player's initial position and set as the reference value for pattern recognition.
- The recognition module 21-1 continuously tracks the movements of the player's left and right feet and transmits them to the event module 21-2. If the movement distance and movement direction of, for example, the left foot match the movement distance and direction of a preset pattern, the event module 21-2 determines that the movement of that pattern has occurred and generates the event of the pattern.
- For example, when the player's left foot moves from the reference point toward 10 o'clock by a predetermined distance (the distance to the upper-left area of the stage), the event module 21-2 recognizes this as a "left foot 10 o'clock pattern" and generates a "left foot 10 o'clock event" carrying a flag that points to the upper-left area of the stage.
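One possible realization of this directional pattern check maps a foot displacement to a clock-direction event. The clock-sector mapping, distance threshold, and identifier strings below are illustrative assumptions, not the patented algorithm itself.

```python
import math
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Event:
    pattern_id: str   # pattern identifier delivered to the content application
    flag: str         # which stage sector the event's flag points to


def classify_foot_move(dx: float, dy: float, min_dist: float = 0.3) -> Optional[Event]:
    """Map a foot displacement (metres from the reference point; +x right,
    +y forward) to a clock-direction event, or None if the move is too small."""
    if math.hypot(dx, dy) < min_dist:
        return None
    # Angle measured clockwise from straight ahead (12 o'clock = 0 degrees).
    angle = math.degrees(math.atan2(dx, dy)) % 360
    hour = round(angle / 30) % 12 or 12
    return Event(pattern_id=f"left_foot_{hour}_oclock",
                 flag=f"stage_sector_{hour}")


# A move to the left and slightly forward falls in the 10 o'clock sector.
ev = classify_foot_move(dx=-0.5, dy=0.3)
```

Dividing the circle into twelve 30-degree sectors matches the clock-face naming used in the example; a nine-sector stage layout would only change the sector table.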
- an algorithm for recognizing a pattern in which the player jumps in place may be implemented as follows.
- the recognition module 21-1 continuously tracks the movements of the player's left and right feet and transmits them to the event module 21-2.
- The event module 21-2 recognizes a "jumping pattern" and generates a "jumping event" when the movement direction of both the left foot and the right foot is vertical and the movement distance exceeds a preset height.
- an algorithm for recognizing a pattern of a player walking in place may be implemented as follows.
- the recognition module 21-1 continuously tracks the movements of the player's left and right feet and transmits them to the event module 21-2.
- The event module 21-2 recognizes a "walking pattern" and generates a "walking event" when it determines that the movement direction of the left and right feet is vertical and the movement distance exceeds a preset height, with the left foot and the right foot moving up and down alternately.
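The distinction between the jumping and walking patterns hinges on whether both feet rise together or alternately. A minimal sketch, assuming a simple per-update displacement model and an illustrative height threshold:

```python
def classify_gait(left_dy: float, right_dy: float, min_height: float = 0.1) -> str:
    """Classify one tracking update from the vertical displacement (metres)
    of each foot: both feet up together -> jump; exactly one foot up
    (alternation over successive updates) -> walking in place."""
    left_up = left_dy >= min_height
    right_up = right_dy >= min_height
    if left_up and right_up:
        return "jumping_event"
    if left_up != right_up:
        return "walking_event"
    return "no_event"


# Both feet clear the threshold simultaneously: a jump.
result = classify_gait(left_dy=0.25, right_dy=0.25)
```

A fuller implementation would also confirm that single-foot lifts alternate left/right across consecutive updates before emitting the walking event, as the description requires.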
- the event module 21-2 generates an event including an identifier of the determined pattern and transmits it to the interactive content application 22.
- For example, suppose interactive dancing content 22 is executed in which a score is counted through a graphical user interface (GUI). While an inactive footprint image at the 10 o'clock position is displayed on the screen, the content 22 may run a plot that, when a "left foot movement event" is received from the event module 21-2, converts the inactive footprint into an active footprint image and counts a predetermined score.
- As another example, suppose educational interactive content 22 is executed in which the player tours ancient relics one by one on the wall screen and listens to commentary on specific relics. While a "walking event" is received from the event module 21-2, the content 22 continuously outputs scenes of walking forward on the screen; when a "walking stop event" is received, the stationary scenery at that point is output. When a "right arm 10 o'clock event" is received, a plot narrating the description of the relic may be executed.
- If a virtual reality (VR) headset is used as the image output device 30, the enjoyment of the content may be doubled.
- The term "event" may be understood as a concept including any event for inputting a user's instruction to the interactive content application 22. Therefore, the event transmitted from the conversion engine 21 to the interactive content application 22 may be an event related to the aforementioned arm/leg movements, but it may also be variously defined as a left mouse click event, right mouse click event, mouse movement event, mouse double-click event, or mouse wheel click event.
- Preferably, the event generated by the conversion engine 21 is compatible with the operating system on which the interactive content application 22 is executed. Alice, the developer of the interactive content application 22, need not coordinate compatibility in advance with Bob, the developer of the conversion engine 21; the conversion engine 21 of the present invention therefore has the advantage that it can be applied to any interactive content sold on the market without separate modification for interfacing.
- the image output device 30 may be any type of device as long as it has a function of outputting a content image on a wall or the like.
- For example, a beam projector, a display device such as a large TV or monitor mounted on a wall, or an augmented reality headset may be used as the image output device 30.
- the image output device 30 is connected to the application driving device 20 through a cable or wireless communication.
- the machine learning server 40 includes a machine learning engine (not shown) that learns various characteristics for identifying an object based on image data sent from the camera 10.
- For example, the machine learning server 40 can find a certain pattern for identifying the object based on at least one of a typical human shape, an arm position, a leg position, and the body silhouettes that distinguish a man from a woman.
- the machine learning server 40 may receive image data through an application driving device 20 connected to the digital camera 10 or may be directly connected to the digital camera 10 to receive image data.
- the machine learning server 40 finds a specific pattern to more clearly identify a person by repeatedly analyzing dozens to hundreds of different image data captured on a person (or male and female).
- The recognition module 21-1 of the conversion engine 21 can easily identify the player from the image data using the identification pattern information obtained in advance by the machine learning server 40.
- The machine learning server 40 may learn only one object for one piece of content; however, if the content must be controlled with a plurality of objects depending on its type, the server may pre-learn to identify a plurality of different objects for that one piece of content.
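As a toy illustration of "pre-learned identification pattern information", deliberately far simpler than any real person detector, features can be averaged over many training images and later matched against a candidate region. The feature choice, tolerance, and values are all hypothetical.

```python
import statistics
from typing import List, Sequence


def learn_person_pattern(samples: List[Sequence[float]]) -> List[float]:
    """'Pre-learning' reduced to its simplest form: average each feature
    (e.g. silhouette height/width ratio) over many training images of
    people, yielding one identification pattern vector."""
    return [statistics.fmean(dim) for dim in zip(*samples)]


def matches_person(features: Sequence[float], pattern: Sequence[float],
                   tolerance: float = 0.5) -> bool:
    """A candidate region matches if every feature lies within the
    tolerance of the pre-learned value."""
    return all(abs(f - p) <= tolerance for f, p in zip(features, pattern))


# Toy features: [silhouette height/width ratio, head-to-body ratio].
pattern = learn_person_pattern([[3.1, 0.13], [2.9, 0.12], [3.0, 0.14]])
is_person = matches_person([3.05, 0.13], pattern)
```

The point of doing this work offline on the server is that the recognition module 21-1 only has to apply the learned pattern at runtime, keeping per-frame identification cheap.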
- the stage 50 provides a reference point for the player's initial position, and provides a visual direction guide so that the player can easily point a specific direction using the lower body's feet or the upper body's arms.
- 5A to 5C illustrate various embodiments of the stage 50.
- the stage 50 includes an external layout OL having a predetermined width, and includes a partition line IL for dividing and distinguishing an inner area of the external layout OL.
- the player stands on the stage 50 and moves the left foot and/or the right foot of the lower body with reference to the division line IL to perform an action corresponding to a desired event action.
- In the embodiment of FIG. 5A, the stage 50 is divided into nine areas by the division lines IL, and through these nine divisions the player can clearly indicate front, rear, left, right, center, and diagonal directions, as well as mixed motions thereof.
- the embodiment of FIG. 5B is an example of displaying the external layout OL and the division line IL using the laser light source L.
- The external layout OL and division lines IL displayed on the floor may guide the player's lower-body movements, while an external layout OL and division lines IL displayed on the wall may guide the player's upper-body motions.
- FIG. 5C is an example of displaying the external layout OL and the partition line IL in a predetermined space using the hologram device H.
- The hologram produced by the hologram device H is preferably projected into a space that the photographing device 10 can capture, and the player can take an action corresponding to a desired event action by moving the left and/or right arm of the upper body with reference to the division lines IL on one side of the hologram.
- The embodiments of FIGS. 5A to 5C may be used separately or in combination. That is, the embodiment of FIG. 5A or 5B may be selectively used to guide lower-body motion, while the embodiment of FIG. 5B applied to the wall surface, or the embodiment of FIG. 5C, may be selectively used to guide upper-body motion.
- the paint of the partition line or the material of the stage does not need to be limited to any specific ones.
- the partition line may be displayed using a general colored paint such as paint or ink, and the stage may be implemented as a mat made of artificial fiber.
- Meanwhile, the stage 50 need not be an essential component. In another embodiment, no separate physical stage 50 is placed on the floor. Instead, when the content is executed and the user stands at a location within the shooting target area of the camera 10 without moving for a preset time, the application driving device 20 recognizes that location as the player's initial position. When the player then moves an upper-body arm or lower-body foot in one of the nine directions based on experience and sense, the application driving device 20 tracks the movement of the arm and/or foot and generates the event corresponding to each movement.
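The stage-less calibration step above, fixing the initial position once the player has stood still for a preset time, can be sketched as a scan for a low-drift run of tracked positions. The frame count and drift threshold are illustrative assumptions.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]


def detect_initial_position(positions: List[Point], still_frames: int = 30,
                            max_drift: float = 0.05) -> Optional[Point]:
    """Scan tracked player positions for `still_frames` consecutive frames
    in which the player drifts less than `max_drift` metres from the start
    of the run; the first such run fixes the initial (reference) position."""
    for start in range(len(positions) - still_frames + 1):
        window = positions[start:start + still_frames]
        x0, y0 = window[0]
        if all(abs(x - x0) <= max_drift and abs(y - y0) <= max_drift
               for x, y in window):
            return window[0]
    return None


# The player wanders, then stands still: the still stretch becomes the origin.
track = [(0.5, 0.1), (0.8, 0.3)] + [(1.0, 1.0)] * 30
origin = detect_initial_position(track, still_frames=30)
```

All subsequent directional patterns (the nine-direction arm and foot moves) are then measured relative to this detected origin, exactly as the physical stage's center point would serve.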
- the entire or partial functions of the virtual interactive content execution system described above may be provided in a recording medium that can be read through a computer by tangibly implementing a program of instructions for implementing it.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the computer-readable recording medium may be specially designed and constructed for the present invention, or may be known and usable to those skilled in computer software.
- Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory, and USB memory.
- the computer-readable recording medium may be a transmission medium such as an optical or metal wire or a waveguide including a carrier wave for transmitting a signal specifying a program command or a data structure.
- Examples of program instructions include high-level language codes that can be executed by a computer using an interpreter or the like, in addition to machine language codes such as those produced by a compiler.
- the hardware device may be configured to operate as one or more software modules to perform the operation of the present invention and vice versa.
Abstract
Description
Claims (8)
- 플레이어의 움직임을 촬영하는 디지털 카메라; 및A digital camera that photographs the movement of the player; And상기 디지털 카메라의 촬영 영상에서 플레이어의 신체 일부의 움직임 패턴을 식별하는 인식 모듈과, 상기 신체 일부의 움직임 패턴이 미리 설정된 패턴과 일치하면 해당 패턴의 식별자가 포함된 이벤트를 인터렉티브 컨텐츠 애플리케이션에 전달하는 이벤트 모듈을 포함하는 변환 엔진을 실행하는 애플리케이션 구동 장치Recognition module for identifying a movement pattern of a part of the player's body in the image captured by the digital camera, and an event that transmits an event including the identifier of the pattern to the interactive content application when the movement pattern of the body part matches a preset pattern Application driving device that runs a conversion engine including a module를 포함하는 것을 특징으로 하는 신체 움직임 인식을 이용한 가상 인터렉티브 컨텐츠 실행 시스템.Virtual interactive content execution system using body movement recognition, comprising a.
- 제1항에 있어서,The method of claim 1,상기 인식 모듈은, 플레이어의 신체 일부와의 거리를 기반으로 신체 일부의 이동을 트래킹하며,The recognition module tracks movement of a part of the body based on the distance to the part of the body of the player,상기 이벤트 모듈은 신체 일부의 이동 거리 및 이동 방향이 미리 설정된 패턴과 일치하면 해당 패턴의 식별자가 포함된 이벤트를 발생시키고, 발생된 이벤트를 인터렉티브 컨텐츠 애플리케이션에 전달하는 것을 특징으로 하는 신체 움직임 인식을 이용한 가상 인터렉티브 컨텐츠 실행 시스템.The event module generates an event including an identifier of the pattern when the movement distance and movement direction of a body part coincide with a preset pattern, and transmits the generated event to an interactive content application. Virtual interactive content execution system.
- 제2항에 있어서,The method of claim 2,상기 디지털 카메라는 적어도 두 개의 이미지 센서를 가지며,The digital camera has at least two image sensors,상기 인식 모듈은, 상기 이미지 센서들의 화각 차이를 이용하여 상기 디지털 카메라와 상기 플레이어의 신체 일부와의 거리를 추정하는 것을 특징으로 하는 신체 움직임 인식을 이용한 가상 인터렉티브 컨텐츠 실행 시스템.The recognition module estimates a distance between the digital camera and a body part of the player by using a difference in angle of view of the image sensors. A system for executing virtual interactive contents using body movement recognition.
- The system of claim 3, wherein the recognition module identifies the movement of at least one of the player's left arm, right arm, left foot, and right foot.
- The system of claim 4, wherein the event module generates, from the movement of the body part, an event for at least one of walking, jumping, and movement in any one of a plurality of preset directions.
- The system of claim 1, wherein the digital camera has at least one image sensor, and the recognition module estimates the distance between the digital camera and the player's body part based on the size of the player's body part in the image captured by the digital camera.
- The system of claim 1, further comprising a stage installed on the floor to provide the player with a visual guide to a preset movement direction and movement range.
- The system of claim 1, further comprising a machine learning server that analyzes a plurality of image data sets containing a person and learns in advance the pattern information used to distinguish a person from the background of an image.
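The conversion engine the claims describe can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: all function names, pattern identifiers, and thresholds are assumptions, and the two distance estimators stand in for the claimed single-sensor (apparent-size) and dual-sensor (view-angle difference, i.e. disparity) approaches.

```python
from dataclasses import dataclass


def distance_from_size(focal_px: float, real_width_m: float, width_px: float) -> float:
    """Single-sensor estimate (claim 6 analogue): under a pinhole model,
    a body part of known real width appears smaller the farther it is."""
    return focal_px * real_width_m / width_px


def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Dual-sensor estimate (claim 3 analogue): the same body part is seen
    at slightly different positions by the two sensors; depth is inversely
    proportional to that disparity."""
    return focal_px * baseline_m / disparity_px


@dataclass
class MovementSample:
    dx: float  # tracked horizontal displacement of the body part (m)
    dy: float  # tracked vertical displacement of the body part (m)


# Preset patterns (hypothetical identifiers and thresholds): each maps a
# pattern identifier to a test on movement distance and direction.
PRESET_PATTERNS = {
    "JUMP": lambda s: s.dy > 0.30 and abs(s.dx) < 0.10,
    "STEP_LEFT": lambda s: s.dx < -0.20,
    "STEP_RIGHT": lambda s: s.dx > 0.20,
}


def event_module(sample: MovementSample, dispatch) -> None:
    """When the tracked movement matches a preset pattern, emit an event
    carrying the pattern's identifier to the interactive content
    application (represented here by the `dispatch` callback)."""
    for identifier, test in PRESET_PATTERNS.items():
        if test(sample):
            dispatch({"type": "MOVEMENT", "pattern": identifier})
            return


events = []
event_module(MovementSample(dx=0.02, dy=0.45), events.append)  # matches "JUMP"
```

The key design point mirrored from the claims is the decoupling: the recognition module only produces tracked displacements, and the content application only consumes events keyed by pattern identifier, so either side can be swapped without changing the other.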
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20190045098 | 2019-04-17 | ||
KR10-2019-0045098 | 2019-04-17 | ||
KR1020190071560A KR102275702B1 (en) | 2019-04-17 | 2019-06-17 | system for executing virtual interactive contents software using recognition of player's kinetic movement |
KR10-2019-0071560 | 2019-06-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020213786A1 true WO2020213786A1 (en) | 2020-10-22 |
Family
ID=72837405
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/007315 WO2020213786A1 (en) | 2019-04-17 | 2019-06-18 | Virtual interactive content execution system using body movement recognition |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020213786A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130071059A (en) * | 2011-12-20 | 2013-06-28 | 엘지전자 주식회사 | Mobile terminal and method for controlling thereof |
KR20140046197A (en) * | 2012-10-10 | 2014-04-18 | 주식회사 씨씨 | An apparatus and method for providing gesture recognition and computer-readable medium having thereon program |
KR20150035854A (en) * | 2015-02-17 | 2015-04-07 | 주식회사 홍인터내셔날 | A dart game apparatus capable of authentification using throw line on a remote multi mode |
US20180293442A1 (en) * | 2017-04-06 | 2018-10-11 | Ants Technology (Hk) Limited | Apparatus, methods and computer products for video analytics |
KR101963682B1 (en) * | 2018-09-10 | 2019-03-29 | 주식회사 큐랩 | Data management system for physical measurement data by performing sports contents based on augmented reality |
- 2019-06-18: WO PCT/KR2019/007315 patent/WO2020213786A1/en active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112379781A (en) * | 2020-12-10 | 2021-02-19 | 深圳华芯信息技术股份有限公司 | Man-machine interaction method, system and terminal based on foot information identification |
CN112379781B (en) * | 2020-12-10 | 2023-02-28 | 深圳华芯信息技术股份有限公司 | Man-machine interaction method, system and terminal based on foot information identification |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9015638B2 (en) | Binding users to a gesture based system and providing feedback to the users | |
CA2757057C (en) | Managing virtual ports | |
CN102257456B (en) | Correcting angle error in a tracking system | |
CN102414641B (en) | Altering view perspective within display environment | |
US20180150686A1 (en) | Systems and methods for operating a virtual reality environment using colored marker lights attached to game objects | |
US5704836A (en) | Motion-based command generation technology | |
US20180117465A1 (en) | Interactive in-room show and game system | |
US20060192852A1 (en) | System, method, software arrangement and computer-accessible medium for providing audio and/or visual information | |
CN102449641A (en) | Color calibration for object tracking | |
CN105073210A (en) | User body angle, curvature and average extremity positions extraction using depth images | |
WO2016208930A1 (en) | Automatic aiming system and method for mobile game | |
CN101919241A (en) | Dual-mode projection apparatus and method for locating a light spot in a projected image | |
WO2017105120A1 (en) | Baseball practice apparatus, sensing apparatus and sensing method utilized thereby, and method for controlling ball pitching | |
US11173375B2 (en) | Information processing apparatus and information processing method | |
CN107408003A (en) | Message processing device, information processing method and program | |
CN110559632A (en) | intelligent skiing fitness simulation simulator and control method thereof | |
WO2020213786A1 (en) | Virtual interactive content execution system using body movement recognition | |
JP7315489B2 (en) | Peripheral tracking system and method | |
KR102275702B1 (en) | system for executing virtual interactive contents software using recognition of player's kinetic movement | |
WO2016204335A1 (en) | Device for providing augmented virtual exercise space by exercise system based on immersive interactive contents and method therefor | |
KR101692267B1 (en) | Virtual reality contents system capable of interacting between head mounted user and people, and control method thereof | |
CN210583568U (en) | Wisdom skiing body-building simulator | |
US20190151751A1 (en) | Multi-dimensional movement recording and analysis method for movement entrainment education and gaming | |
WO2011115364A2 (en) | Apparatus for processing image data to track location of light source | |
WO2016072738A1 (en) | Apparatus for controlling lighting devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19925006 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19925006 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 25/04/2022) |
|