US20140342344A1 - Apparatus and method for sensory-type learning - Google Patents
Apparatus and method for sensory-type learning
- Publication number
- US20140342344A1 (application US 14/365,464)
- Authority
- US
- United States
- Prior art keywords
- video
- object domain
- domain
- blocks
- divided
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A63F13/428—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/57—Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
- A63F13/577—Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/64—Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H2200/00—Computerized interactive toys, e.g. dolls
Definitions
- Apparatuses and methods consistent with exemplary embodiments relate to a learning apparatus and method, and more specifically to an apparatus and method for sensory-type learning.
- The keyboard, mouse, and joystick are among the main apparatuses for controlling a game.
- These control devices, however, are general-purpose apparatuses that cannot fully bring out the distinctive features of each game, for example, an airplane game, an automobile game, or a fighting game.
- Moreover, these apparatuses are used in a rather static fashion, with the user manipulating them while seated in a chair, and using them this way for an extended period of time is physically stressful and easily fatigues the user.
- Various kinds of available sensory-type games include racing games that are played while watching a monitor screen from inside a real car interior, shooting games that are played by pulling a trigger toward enemies on the monitor screen using a realistic gun-shaped device, games that use a ski board to slalom down a mountain shown on the monitor screen, and fire-fighting games in which a fire engine's fire extinguisher is provided for putting out a fire shown on the monitor screen.
- Korean Utility Model 20-239844 (SIMULATION GAME SYSTEM USING MACHINE VISION AND PATTERN-RECOGNITION) discloses recording a human motion within a chroma-key screen (a background for extracting the subject's outline), comparing the player's imitation of a video character's dance, pre-set as a base dance, with a still reference image, and scoring the result of the comparison.
- The present invention provides an apparatus and a method for sensory-type learning that can enhance game features to improve a learner's learning effect, at a low cost and without wasted space, while not requiring a chroma-key screen or blue screen.
- An aspect of the present invention features an apparatus for sensory-type learning that includes: a video divider configured to divide a video of a recorded learner into a plurality of blocks and divide the video divided into the plurality of blocks into predetermined time intervals; a differential video extractor configured to extract a differential video by comparing changes in the video divided into the time intervals; an object domain generator configured to generate a first object domain, being a single object domain, by connecting the extracted differential videos; a contact determiner configured to determine whether the first object domain came into contact with a second object domain pertaining to a background object appearing on a screen; and a movement controller configured to apply a change in animation to the background object and control the apparatus for sensory-type learning to perform a predetermined operation in accordance with the change in animation, if it is determined that the first object domain came into contact with the second object domain.
- The video divider can be configured to divide a current video as an (n)th frame and the next video after the current video as an (n+1)th frame when it divides the video, already divided into the plurality of blocks, into the predetermined time intervals.
- The object domain generator can be configured to generate the single object domain by extracting a 3-dimensional vector based on a result of comparing the changes in the video extracted by the differential video extractor, and by performing domain optimization for a domain in which the differential videos are connected with one another based on the connectivity of coordinate values distributed in the 3-dimensional vector.
- The object domain generator can be configured to extract the 3-dimensional vector by searching for blocks that are identical or similar to a reference time frame by use of the blocks of the extracted differential video.
- The object domain generator can be configured to generate the second object domain by dividing an image of the background object into a plurality of blocks.
- The size of the blocks constituting the second object domain can be identical to that of the blocks constituting the first object domain.
- Alternatively, the size of the blocks constituting the second object domain can be different from that of the blocks constituting the first object domain.
- The contact determiner can be configured to determine an amount of contact by use of at least one from among a percentage value of domains where the first object domain and the second object domain overlap with each other and a percentage value of a number of overlapped images in the video divided into the predetermined time intervals.
- The movement controller can be configured to predict a movement direction of the first object domain based on the 3-dimensional vector extracted by the object domain generator, when the first object domain comes into contact with the second object domain.
- The movement controller can be configured to apply the change in animation to the background object in accordance with the predicted movement direction of the first object domain.
- Another aspect of an exemplary embodiment features a method for sensory-type learning that includes: (a) dividing a video of a recorded learner into a plurality of blocks; (b) dividing the video divided into the plurality of blocks into predetermined time intervals; (c) extracting a differential video by comparing changes in the video divided into the time intervals; (d) extracting a 3-dimensional vector based on a result of comparing the changes in the video, and generating a first object domain based on connectivity of coordinate values distributed in the 3-dimensional vector, the first object domain having differential videos connected with one another; (e) determining whether the first object domain is in contact with a second object domain, the second object domain having an image of a background object appearing on a screen divided into a plurality of blocks; and (f) applying a change in animation to the background object and having an apparatus for sensory-type learning perform a predetermined operation in accordance with the change in animation, if it is determined that the first object domain is in contact with the second object domain.
- The video divided into the plurality of blocks can be divided into the predetermined time intervals so as to have 30 frames per second.
- The operation (e) can include: (e-1) calculating a percentage value of domains where the first object domain and the second object domain overlap with each other; (e-2) calculating a percentage value of the number of overlapped images in a plurality of videos divided into the predetermined time intervals; and (e-3) determining the contact by use of at least one from among the value calculated in the operation (e-1) and the value calculated in the operation (e-2).
- The features of a game can be enhanced, and the learning effect of a learner can be improved, at a low cost and without wasting space.
- The computational load can be minimized by, for example, dividing different body parts of the learner into different codes.
- Since the learning process proceeds with enhanced game-like features in which the learner actively participates, the learning can be more fun and more engaging.
- FIG. 1 is a brief illustration of an apparatus for sensory-type learning in accordance with an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating the configuration of Kibot in accordance with an embodiment of the present invention.
- FIGS. 3 and 4 are flow diagrams illustrating a method for sensory-type learning in accordance with an embodiment of the present invention.
- FIG. 5 shows how a contact is determined by a contact determination unit in accordance with an embodiment of the present invention.
- FIG. 6 illustrates a learning screen in accordance with an embodiment of the present invention.
- FIG. 7 illustrates a learning screen in accordance with another embodiment of the present invention.
- FIG. 1 is a brief illustration of an apparatus for sensory-type learning in accordance with an embodiment of the present invention.
- An apparatus for sensory-type learning 100 in accordance with an embodiment of the present invention can allow a learner to proceed with learning through bodily motions while watching his or her own appearance displayed through a video camera, and can have a character shape that is friendly to learning children.
- The apparatus for sensory-type learning 100 in accordance with an embodiment of the present invention will hereinafter be referred to as Kids Robot, or in short, Kibot 100.
- Kibot 100 in accordance with an embodiment of the present invention can include a video camera for capturing an image of a learner and a display device for displaying the image of the learner captured by the video camera.
- Kibot 100 has the video camera installed therein or is connected to a USB-type video camera.
- The display device can be located at a front portion of Kibot 100 to display the image of the learner, or Kibot 100 can be connected with an external display device and transfer the motions of the learner captured through the video camera to the external display device.
- Kibot 100 can include a light emitting diode (LED) emitting unit and an audio output device, and can perform audio output (sound effects) and operations corresponding to the learner's movement, for example, changing the color of the LED emitting unit, adjusting the frequency of lighting, etc., while continuing with the learning through the movement of the learner.
- Kibot 100 can extract the learner's movement captured through the video camera as a 3-dimensional vector, have the movement interact with a background object displayed on a learning screen, and display the result of the interaction on the display device.
- Since Kibot 100 reacts with the various operations described above and the learning process proceeds based on the learner's movement, the learner can be encouraged to participate in the learning voluntarily and with interest.
- FIG. 2 is a block diagram illustrating the configuration of Kibot 100 in accordance with an embodiment of the present invention.
- Kibot 100 in accordance with an embodiment of the present invention includes a video camera 110 , a video division unit 120 , a differential video extraction unit 130 , an object domain generation unit 140 , a contact determination unit 150 , a movement control unit 160 and a display unit 170 .
- The video camera 110 captures images of the learner in real time, and the video division unit 120 divides the real-time captured video of the learner into a plurality of blocks.
- The video division unit 120 can divide the video of the learner captured through the video camera 110 into 8×8 blocks, or into various other block sizes, such as 4×4, 16×16, 32×32, etc.
- The video division unit 120 also divides the video, divided into the plurality of blocks, into predetermined time intervals.
- The video division unit 120 can divide the video that has been divided into 8×8 blocks into time intervals so as to have 30 frames per second; the video can also be divided into time intervals to have fewer or more than 30 frames per second.
- That is, the video division unit 120 divides each frame of the 30-frames-per-second video into 8×8 blocks.
- The differential video extraction unit 130 extracts a differential video by comparing changes in the video divided into 30 frames per second (each frame being divided into 8×8 blocks) by the video division unit 120.
- The differential video extraction unit 130 can extract the differential video by comparing the changes over time between an (n)th frame, which is the current video, and an (n+1)th frame, which is the next video after the current video, among the 30 frames per second.
- The differential video can be constituted of the changed blocks in the two videos (n, n+1), each of which is divided into 8×8 blocks.
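- A minimal sketch of this block-wise differencing, assuming the divide_into_blocks helper above and an assumed mean-absolute-difference change criterion (the text does not specify how a block counts as "changed"):

```python
import numpy as np

def extract_differential(blocks_n: np.ndarray, blocks_n1: np.ndarray,
                         threshold: float = 10.0) -> np.ndarray:
    """Compare the (n)th and (n+1)th frames block by block and mark the
    blocks that changed; the marked blocks play the role of the
    differential video for this frame pair."""
    diff = np.abs(blocks_n1.astype(np.int16) - blocks_n.astype(np.int16))
    return diff.mean(axis=(2, 3)) > threshold  # boolean mask over the 8x8 grid
```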
- The object domain generation unit 140 generates a single object domain by connecting the differential videos extracted by the differential video extraction unit 130.
- The object domain generation unit 140 extracts a 3-dimensional vector by searching for blocks that are identical or similar to a reference time frame by use of the differential video extracted by the differential video extraction unit 130.
- The object domain generation unit 140 can express the direction in which the learner's movement changes as a 3-dimensional vector that has 2-dimensional x and y values and a z value along the time axis.
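- As a rough illustration (not from the original disclosure), such a block search might be sketched as follows; the search radius and the mean-absolute-difference cost are assumptions:

```python
import numpy as np

def match_block(ref_blocks: np.ndarray, cur_blocks: np.ndarray,
                idx: tuple, search_radius: int = 1) -> tuple:
    """Find, in a small neighborhood of the reference frame, the block most
    similar to the changed block at grid position idx; the best (dx, dy)
    offset, together with the frame index on the time axis, gives one
    (x, y, z) sample of the 3-dimensional vector."""
    by, bx = idx
    ny, nx = ref_blocks.shape[:2]
    best_cost, best_offset = None, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y < ny and 0 <= x < nx:
                cost = np.abs(ref_blocks[y, x].astype(np.int16)
                              - cur_blocks[by, bx].astype(np.int16)).mean()
                if best_cost is None or cost < best_cost:
                    best_cost, best_offset = cost, (dx, dy)
    return best_offset
```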
- By doing so, the object domain generation unit 140 can generate a single object domain (the "learner object domain" hereinafter), which is the portion of the captured video in which the learner's movement has occurred and is changing.
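- Grouping the changed blocks by connectivity might be sketched as below; treating 4-connectivity and keeping the largest group as the "domain optimization" is an assumption, since the text does not spell out the optimization criteria:

```python
from collections import deque

import numpy as np

def largest_connected_domain(changed: np.ndarray) -> np.ndarray:
    """Group 4-connected changed blocks and keep the largest group as the
    single learner object domain."""
    ny, nx = changed.shape
    visited = np.zeros_like(changed, dtype=bool)
    best: list = []
    for sy in range(ny):
        for sx in range(nx):
            if changed[sy, sx] and not visited[sy, sx]:
                group, queue = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:  # breadth-first flood fill over the block grid
                    y, x = queue.popleft()
                    group.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < ny and 0 <= xx < nx
                                and changed[yy, xx] and not visited[yy, xx]):
                            visited[yy, xx] = True
                            queue.append((yy, xx))
                if len(group) > len(best):
                    best = group
    domain = np.zeros_like(changed, dtype=bool)
    for y, x in best:
        domain[y, x] = True
    return domain
```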
- The object domain generation unit 140 can also generate an object domain for a background object appearing in the game screen.
- The object domain generation unit 140 can generate this object domain by dividing an image of the background object into a plurality of blocks (the "background object domain" hereinafter), and the background object domain can be divided into 8×8 blocks or any of various block sizes, such as 4×4, 16×16, etc.
- The contact determination unit 150 determines whether the learner object domain and the background object domain came into contact.
- The contact determination unit 150 can determine the contact by use of at least one of a percentage value of the domains where the learner object domain and the background object domain overlap with each other and a percentage value of the number of overlapped images in the 30-frames-per-second video.
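- Both percentage values might be computed as in the following sketch; the 30% per-frame threshold is an assumption, as the text leaves the exact thresholds open:

```python
import numpy as np

def block_overlap_pct(learner: np.ndarray, background: np.ndarray) -> float:
    """Percentage of the background object domain's blocks that the learner
    object domain also covers (both arguments are boolean block masks)."""
    total = background.sum()
    return 100.0 * np.logical_and(learner, background).sum() / total if total else 0.0

def frame_overlap_pct(learner_per_frame: list, background: np.ndarray,
                      min_block_pct: float = 30.0) -> float:
    """Percentage of the frames in one second (e.g., 30 of them) whose
    learner domain overlaps the background domain by at least min_block_pct."""
    hits = sum(block_overlap_pct(mask, background) >= min_block_pct
               for mask in learner_per_frame)
    return 100.0 * hits / len(learner_per_frame)
```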
- The movement control unit 160 can predict a movement direction of the learner object domain based on the 3-dimensional vector extracted by the object domain generation unit 140.
- The movement control unit 160 can predict the movement direction of the learner object domain and apply a change in animation to the background object according to the predicted movement direction.
- For example, the movement control unit 160 can apply a change in animation in which the background object falls downward.
- The movement control unit 160 can apply the change in animation to the background object and then control Kibot 100 to perform a predetermined operation according to the change in animation.
- For example, the movement control unit 160 can control Kibot 100 to turn on the LED emitting unit or output an audio announcement that says "Good job! Mission accomplished!"
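- A simple stand-in for this prediction and reaction, averaging the (dx, dy) offsets gathered from the 3-dimensional vector (the averaging scheme and the printed reactions are illustrative assumptions):

```python
import numpy as np

def predict_direction(offsets: list) -> np.ndarray:
    """Average recent (dx, dy) block offsets into a unit direction vector."""
    mean = np.asarray(offsets, dtype=float).mean(axis=0)
    norm = np.linalg.norm(mean)
    return mean / norm if norm else mean

def react_to_contact(direction: np.ndarray) -> None:
    """Placeholder reactions standing in for Kibot's animation, LED and audio."""
    if direction[1] > 0:  # dominant screen-down component
        print("animation: background object falls downward")
    print("LED: blink, audio: 'Good job! Mission accomplished!'")

# Offsets gathered over a few frames, mostly pointing down-screen:
react_to_contact(predict_direction([(0, 1), (1, 2), (0, 1)]))
```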
- The display unit 170 can be placed at the front portion of Kibot 100 to display the motions of the learner captured through the video camera 110, and can also display the image of the learner by overlapping it with the game screen.
- The elements illustrated in FIG. 2 can refer to software, or to hardware such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), and perform their respective predetermined functions.
- However, these elements are not limited to such software or hardware; each element can be configured to reside in an addressable storage medium and to execute on one or more processors.
- The elements can include software elements, object-oriented software elements, class elements and task elements, processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
- FIGS. 3 and 4 are flow diagrams illustrating a method for sensory-type learning in accordance with an embodiment of the present invention.
- FIGS. 3 and 4 will be described with reference to Kibot 100 illustrated in FIG. 1 .
- Kibot 100 divides (i.e., spatially divides) a video of the learner captured in real time through the video camera 110 into 8×8 blocks (S301).
- The captured video of the learner can also be divided into various other block sizes, for example, 4×4, 16×16, 32×32, etc.
- Kibot 100 divides (i.e., temporally divides) the video, which has been divided into 8×8 blocks, into time intervals so as to have 30 frames per second (S302).
- Kibot 100 extracts a differential video by comparing changes in the video divided into 30 frames per second (each frame being divided into 8×8 blocks) (S303).
- Kibot 100 extracts a 3-dimensional vector by searching for blocks that are identical or similar to a reference time frame by use of the extracted differential video (S304).
- After S304, Kibot 100 generates a learner object domain, the portion of the captured video in which the learner's movement has occurred, by searching for a domain (blocks) in which differential videos are connected with one another, based on the connectivity of the coordinate values distributed in the 3-dimensional vector, and by performing domain optimization on the searched domain (S305).
- After S305, Kibot 100 generates a background object domain by dividing an image of a background object appearing in the game screen into 8×8 blocks (S306).
- The background object domain can also be divided into various block sizes other than 8×8, for example, 4×4, 16×16, etc.
- Kibot 100 determines whether the learner object domain came into contact with the background object domain (S307).
- Kibot 100 can determine the contact by use of at least one of a percentage value of the domains where the learner object domain and the background object domain overlap with each other and a percentage value of the number of overlapped images in the 30-frames-per-second video.
- Kibot 100 applies a change in animation to the background object according to a movement direction of the learner object domain and performs a predetermined operation according to the change in animation (S308).
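- Tying S301 through S308 together, one iteration of the method might be sketched as below, reusing the helper sketches from the earlier examples; the contact threshold and the contact direction are placeholders:

```python
import numpy as np

def sensory_learning_step(prev_frame: np.ndarray, cur_frame: np.ndarray,
                          background_domain: np.ndarray) -> np.ndarray:
    """One pass of S301-S308, reusing divide_into_blocks, extract_differential,
    largest_connected_domain, block_overlap_pct, and react_to_contact from
    the sketches above."""
    prev_blocks = divide_into_blocks(prev_frame)              # S301-S302
    cur_blocks = divide_into_blocks(cur_frame)
    changed = extract_differential(prev_blocks, cur_blocks)   # S303
    learner_domain = largest_connected_domain(changed)        # S304-S305
    # background_domain is assumed precomputed as an 8x8 boolean mask (S306).
    if block_overlap_pct(learner_domain, background_domain) >= 30.0:  # S307
        react_to_contact(np.array([0.0, 1.0]))                # S308, direction assumed
    return learner_domain
```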
- FIG. 5 shows how a contact is determined by the contact determination unit in accordance with an embodiment of the present invention.
- The learner object domain does not necessarily have the shape of a hand, since it is composed of connected differential videos; for convenience of description, however, it is illustrated in FIG. 5 with a shape similar to a hand.
- The contact between the learner object domain and the background object domain can be determined by calculating the percentage of overlapped frames, as in FIG. 5, out of the 30 frames per second.
- FIG. 6 illustrates a learning screen in accordance with an embodiment of the present invention.
- An animation can be performed in which the pineapple hanging on the tree is put into a basket at the bottom of the screen.
- Kibot 100 can then output an English voice saying "pineapple" and turn on the LED emitting unit several times per second.
- FIG. 7 illustrates a learning screen in accordance with another embodiment of the present invention.
- When Kibot 100 outputs a particular word in English pronunciation, the learner can use his or her hand to make contact with the corresponding background object and proceed with the learning.
- For example, the learner can use the hand to select the "clean" background object having a leaf drawn thereon.
- Kibot 100 can then output an audio announcement saying "Wow! Good job! Shall we go to the next step?" and continue with the learning, at which point the LED emitting unit of Kibot 100 can be turned on several times and a fanfare can be output.
- Kibot 100 can also output an audio message saying "Why don't you select something else?" to motivate the learner toward voluntary participation.
- The present invention can be utilized in the telecommunications and robotics industries.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Educational Technology (AREA)
- Educational Administration (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Processing Or Creating Images (AREA)
- Electrically Operated Instructional Devices (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110139497A KR101410410B1 (ko) | 2011-12-21 | 2011-12-21 | Sensory-type learning apparatus and method |
KR10-2011-0139497 | 2011-12-21 | ||
PCT/KR2012/001492 WO2013094820A1 (fr) | 2011-12-21 | 2012-02-28 | Apparatus and method for sensory-type learning |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140342344A1 (en) | 2014-11-20 |
Family
ID=48668679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/365,464 Abandoned US20140342344A1 (en) | 2011-12-21 | 2012-02-28 | Apparatus and method for sensory-type learning |
Country Status (3)
Country | Link |
---|---|
US (1) | US20140342344A1 (fr) |
KR (1) | KR101410410B1 (fr) |
WO (1) | WO2013094820A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101750060B1 (ko) | 2015-08-13 | 2017-06-22 | Chul Woo Lee | Reactive video generation method and generation program |
WO2017026834A1 (fr) * | 2015-08-13 | 2017-02-16 | Chul Woo Lee | Reactive video generation method and generation program |
WO2018048227A1 (fr) * | 2016-09-07 | 2018-03-15 | Chul Woo Lee | Device, method, and program for producing a multidimensional reaction-type image, and method and program for reproducing a multidimensional reaction-type image |
CN113426101B (zh) * | 2021-06-22 | 2023-10-20 | MIGU Interactive Entertainment Co., Ltd. | Teaching method, apparatus, device, and computer-readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5344804B2 (ja) * | 2007-06-07 | 2013-11-20 | Taito Corporation | Game device using projected shadow |
2011
- 2011-12-21 KR KR1020110139497A patent/KR101410410B1/ko active IP Right Grant
2012
- 2012-02-28 US US14/365,464 patent/US20140342344A1/en not_active Abandoned
- 2012-02-28 WO PCT/KR2012/001492 patent/WO2013094820A1/fr active Application Filing
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020160823A1 (en) * | 2000-02-18 | 2002-10-31 | Hajime Watabe | Game apparatus, storage medium and computer program |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
US6771277B2 (en) * | 2000-10-06 | 2004-08-03 | Sony Computer Entertainment Inc. | Image processor, image processing method, recording medium, computer program and semiconductor device |
US20050221892A1 (en) * | 2004-03-31 | 2005-10-06 | Konami Corporation | Game device, computer control method, and computer-readable information storage medium |
US7702161B2 (en) * | 2005-10-28 | 2010-04-20 | Aspeed Technology Inc. | Progressive differential motion JPEG codec |
US20080242415A1 (en) * | 2007-03-27 | 2008-10-02 | Nazeer Ahmed | Motion-based input for platforms and applications |
US20100099492A1 (en) * | 2007-06-29 | 2010-04-22 | Konami Digital Entertainment Co., Ltd. | Game device, game control method, game control program, and computer-readable recording medium on which the program is recorded |
US20090082114A1 (en) * | 2007-09-24 | 2009-03-26 | Victoria Stratford | Interactive transforming animated handheld game |
US8989262B2 (en) * | 2009-08-07 | 2015-03-24 | Electronics And Telecommunications Research Institute | Motion picture encoding apparatus and method thereof |
US20130088424A1 (en) * | 2010-04-14 | 2013-04-11 | Samsung Electronics Co., Ltd. | Device and method for processing virtual worlds |
US20120268609A1 (en) * | 2011-04-22 | 2012-10-25 | Samsung Electronics Co., Ltd. | Video object detecting apparatus, video object deforming apparatus, and methods thereof |
US20120309533A1 (en) * | 2011-06-01 | 2012-12-06 | Nintendo Co., Ltd. | Game apparatus, storage medium, game controlling method and game system |
US20150030233A1 (en) * | 2011-12-12 | 2015-01-29 | The University Of British Columbia | System and Method for Determining a Depth Map Sequence for a Two-Dimensional Video Sequence |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140292650A1 (en) * | 2013-04-02 | 2014-10-02 | Samsung Display Co., Ltd. | Optical detection of bending motions of a flexible display |
US9990004B2 (en) * | 2013-04-02 | 2018-06-05 | Samsung Dispaly Co., Ltd. | Optical detection of bending motions of a flexible display |
USD771642S1 (en) * | 2013-10-31 | 2016-11-15 | King.Com Ltd. | Game display screen or portion thereof with graphical user interface |
USD772240S1 (en) * | 2013-10-31 | 2016-11-22 | King.Com Ltd. | Game display screen or portion thereof with graphical user interface |
USD867373S1 (en) * | 2013-10-31 | 2019-11-19 | King.Com Ltd. | Game display screen or portion thereof with graphical user interface |
USD902223S1 (en) * | 2013-10-31 | 2020-11-17 | King.Com Ltd. | Game display screen or portion thereof with graphical user interface |
US20190204946A1 (en) * | 2016-09-07 | 2019-07-04 | Chul Woo Lee | Device, method and program for generating multidimensional reaction-type image, and method and program for reproducing multidimensional reaction-type image |
US11003264B2 (en) * | 2016-09-07 | 2021-05-11 | Chui Woo Lee | Device, method and program for generating multidimensional reaction-type image, and method and program for reproducing multidimensional reaction-type image |
US20220269360A1 (en) * | 2016-09-07 | 2022-08-25 | Chul Woo Lee | Device, method and program for generating multidimensional reaction-type image, and method and program for reproducing multidimensional reaction-type image |
US12086335B2 (en) * | 2016-09-07 | 2024-09-10 | Momenti, Inc. | Device, method and program for generating multidimensional reaction-type image, and method and program for reproducing multidimensional reaction-type image |
Also Published As
Publication number | Publication date |
---|---|
KR20130071978A (ko) | 2013-07-01 |
KR101410410B1 (ko) | 2014-06-27 |
WO2013094820A1 (fr) | 2013-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20140342344A1 (en) | Apparatus and method for sensory-type learning | |
US11514653B1 (en) | Streaming mixed-reality environments between multiple devices | |
US11532172B2 (en) | Enhanced training of machine learning systems based on automatically generated realistic gameplay information | |
US11373354B2 (en) | Techniques for rendering three-dimensional animated graphics from video | |
US11563998B2 (en) | Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, video distribution method, and video distribution program | |
Yannakakis | Game AI revisited | |
Miles et al. | A review of virtual environments for training in ball sports | |
US11826628B2 (en) | Virtual reality sports training systems and methods | |
CN102129343B (zh) | Directed performance in a motion capture system | |
CN102129292B (zh) | Recognizing user intent in a motion capture system | |
US8591329B2 (en) | Methods and apparatuses for constructing interactive video games by use of video clip | |
US20160314620A1 (en) | Virtual reality sports training systems and methods | |
US20170216675A1 (en) | Fitness-based game mechanics | |
EP4316618A1 (fr) | Program, method, and information processing device | |
CN102947777A (zh) | User tracking feedback | |
CN104740869A (zh) | Virtuality-reality combined interaction method and system fusing a real environment | |
US20230030260A1 (en) | Systems and methods for improved player interaction using augmented reality | |
JP2021077255A (ja) | Image processing device, image processing method, and image processing system | |
US9827495B2 (en) | Simulation device, simulation method, program, and information storage medium | |
Moll et al. | Alternative inputs for games and AR/VR applications: deep headbanging on the web | |
Cannavò et al. | Supporting motion-capture acting with collaborative Mixed Reality | |
CN112245910B (zh) | Modeling and extreme sports method and system based on a Quest head-mounted display | |
Khan et al. | Efficient navigation in virtual environments: A comparative study of two interaction techniques: The magic wand vs. the human joystick | |
KR100607046B1 (ko) | Image processing method for sensory-type game and game method using the same | |
Prakash et al. | Games technology: Console architectures, game engines and invisible interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: KT CORPORATION, KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, YOUNG HOON;KANG, CHAN HUI;KIM, JONG CHEOL;AND OTHERS;REEL/FRAME:033139/0457 Effective date: 20140409 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |