KR20170100852A - Massive behavior information processing method for crowd participating interactive contents - Google Patents
- Publication number
- KR20170100852A (application number KR1020160023244A)
- Authority
- KR
- South Korea
- Prior art keywords
- information
- behavior
- participant
- coordinate data
- position coordinate
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/067—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
- G06K19/07—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips
- G06K19/0723—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- H04L29/06034—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
Abstract
A large-capacity behavior information processing method for developing crowd-participating interactive content comprises: generating position coordinate data indicating the position of each individual participant in an experience space, using position identification data that identifies the positions of a plurality of participants located in the space; detecting behavior event discrimination information for the plurality of participants in the experience space; determining, based on the behavior event discrimination information, whether a behavior event of an individual participant has occurred; and combining the position coordinate data with the information on the behavior event to generate behavior information for the individual participant.
Description
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a large-capacity behavior information processing method for developing crowd-participating interactive content.
Crowd-participating interactive content is content displayed on a display medium using real-time rendering techniques, produced by processing various inputs, such as motion, touch input, and device input, from a large number of participants.
For example, media art images can be projected onto unstructured screens such as building walls and floors through projection mapping with multiple projectors, and interaction effects can be produced using dynamic object recognition with multiple camera sensors and real-time rendering. Smart devices and multimodal sensors can detect a user's photographs, messages, and movements, allowing the user to experience an interactive production connected with the space.
In order to implement such content, a technique is needed for processing the behavior information that serves as input from a large number of participants. This input is obtained by processing information from various sensor devices, such as camera sensors, RGB-D sensors, touch sensors, multimodal sensors (smartphones), and position sensors (RTLS).
For example, camera sensors are mainly used for object detection and object tracking, while RGB-D sensors are used for object detection, object tracking, and human skeleton detection. Multimodal sensors use the various inputs the device provides, such as acceleration, position, angle, temperature, and humidity. In addition, position sensors and touch sensors use location and touch input information.
Crowd-participating interactive content is developed using, as input, the large-capacity behavior information data obtained from these various sensors. This behavior information is real-time data transmitted at 30 or more frames per second, and when many participants take part simultaneously, the data volume grows in proportion to the number of participants.
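The proportional growth described above can be sketched with simple arithmetic. The sketch below assumes a hypothetical 64-byte record size for a single behavior-information record; the patent specifies only the 30 frames-per-second rate, not a record size:

```python
# Back-of-envelope estimate of the behavior-information data rate.
# The 30 frames-per-second figure comes from the description above;
# the 64-byte record size is an assumption for illustration only.
FRAMES_PER_SECOND = 30
RECORD_BYTES = 64  # assumed size of one (ID, timestamp, x, y, z, ...) record


def data_rate_bytes_per_sec(num_participants: int) -> int:
    """Data volume scales linearly with the number of simultaneous participants."""
    return num_participants * FRAMES_PER_SECOND * RECORD_BYTES


print(data_rate_bytes_per_sec(500))  # 500 participants -> 960000 bytes/s
```

Under these assumptions, 500 simultaneous participants already produce roughly 1 MB of position and event data per second, which motivates offloading content production to a separate system.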
SUMMARY OF THE INVENTION It is an object of the present invention to provide a large-capacity behavior information processing method capable of generating behavior information so that interactive content can be generated from that information in a separate system.
The large-capacity behavior information processing method for developing crowd-participating interactive content according to an embodiment of the present invention comprises: generating position coordinate data indicating the position of each individual participant, using position identification data that identifies the positions of a plurality of participants located in an experience space; detecting behavior event discrimination information for the plurality of participants in the experience space; determining, based on the behavior event discrimination information, whether a behavior event of an individual participant has occurred; and generating behavior information for the individual participant by combining the position coordinate data with the information on the behavior event.
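The claimed steps can be sketched end to end, purely as an illustration. Every function name, field name, and the event-decision criterion (a sensing-value threshold) below are assumptions, not taken from the patent:

```python
# Hypothetical sketch of the four claimed steps: build position coordinate
# data, scan discrimination records, decide whether an event occurred, and
# combine position and event information. All names are illustrative.
def process_behavior_information(position_identification_data, event_discrimination_info):
    # Step 1: generate position coordinate data from position identification data
    positions = {d["participant_id"]: (d["x"], d["y"], d["z"])
                 for d in position_identification_data}
    behavior_info = []
    # Steps 2-3: decide, per discrimination record, whether a behavior
    # event of an individual participant has occurred (assumed criterion:
    # sensing value meets a threshold)
    for info in event_discrimination_info:
        if info["sensing_value"] >= info.get("threshold", 1.0):
            pid = info["participant_id"]
            # Step 4: combine position coordinate data with the event information
            behavior_info.append({"participant_id": pid,
                                  "position": positions.get(pid),
                                  "event": info["event_type"]})
    return behavior_info
```

A caller would feed in one frame of sensor output and forward the returned list to the content-producing system; the actual decision logic per sensor type is left open by the patent.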
The location identification data may be generated using information received from a smart device or tag carried by the participant.
The position coordinate data may include an ID of the individual participant, a time stamp, and spatial coordinates.
The position coordinate data can be maintained for a predetermined period of time.
The location identification data may be generated using vision information received from a camera photographing a participant located in the experience space.
The vision information may include at least one of the participant's face contour, a mark, and the color and pattern of the participant's clothes.
The behavior event determination information may be sensed by a sensor installed in the experience space, and may include the position information of the sensor, a sensing value, a time stamp, and a detection radius.
The position coordinate data may include an ID of the individual participant, a time stamp, and spatial coordinates, and past position coordinate data may be retained for a predetermined period of time. When the position coordinate data and the information on the behavior event are combined, the combining error may be reduced by using the time stamp of the position coordinate data together with the time stamp of the behavior event determination information.
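One way such timestamp-based combining could work is a nearest-timestamp match against the retained position records. This is a minimal sketch under assumed data shapes (sorted `(timestamp, x, y, z)` tuples per participant and a dict-shaped event); the patent does not prescribe a matching algorithm or tolerance:

```python
from bisect import bisect_left


def combine(position_records, event, tolerance=0.05):
    """Pair a behavior event with the retained position record whose time
    stamp is closest, one way to reduce the combining error described above.
    position_records: list of (timestamp, x, y, z) tuples, sorted by time,
    kept for a predetermined retention period. tolerance is an assumed
    maximum acceptable time difference in seconds."""
    times = [r[0] for r in position_records]
    i = bisect_left(times, event["timestamp"])
    # Only the neighbors around the insertion point can be the closest match.
    candidates = position_records[max(0, i - 1):i + 1]
    best = min(candidates, key=lambda r: abs(r[0] - event["timestamp"]))
    if abs(best[0] - event["timestamp"]) > tolerance:
        return None  # no position record close enough in time
    return {"event": event, "position": best}
```

Keeping the records sorted makes each lookup a binary search, which matters at 30+ records per second per participant.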
According to the present invention, the position coordinates of a plurality of participants located in the experience space, and the specific behavior events of each participant, can be grasped in real time.
Furthermore, by combining the position coordinates of individual participants with the behavior event data, large-capacity behavior information for interactive content production can be generated and transmitted to a separate device that produces the interactive content, preventing a large load from being placed on either system.
FIG. 1 is a schematic diagram of a system for executing a large capacity behavior information processing method for developing a crowd-participating interactive content according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
A large-capacity behavior information processing method according to an embodiment of the present invention relates to a method of processing the behavior information used to develop crowd-participating interactive content. It identifies the position and behavior of participants performing various actions while moving within a predetermined experience space, and generates the behavior information used to produce interactive content accordingly. The large-capacity behavior information generated by an embodiment of the present invention is transmitted to a separate apparatus that generates the interactive content, where it is used for content generation.
The behavior information processing method according to the embodiment of the present invention specifies the real-time position coordinates of the participants located in the experience space, identifies the behavior events of each participant, and combines the behavior events with the specified position coordinates to generate the behavior information.
The experience space may be an indoor or outdoor space, and may be any space in which a plurality of participants participate, perform predetermined actions, and in which interactive content is generated and displayed accordingly.
Referring to FIG. 1, a system for performing the behavior information processing method of the present invention includes a plurality of sensors 10, a sensor signal processing module 20, a plurality of cameras 30, a vision signal processing module 40, a real-time coordinate calculation server 50, a behavior identification sensor 60, a behavior information processing module 70, and a behavior information integration server 80.
The plurality of sensors 10 may be smart devices or tags carried by the individual participants, and transmit information from which the position identification data is generated.

The sensor signal processing module 20 processes the signals received from the sensors 10 and generates position identification data for identifying the positions of the registered participants.

On the other hand, a function of specifying an unspecified participant who does not carry a registered sensor, and of tracking that participant's position, can be provided. To this end, a plurality of cameras 30 photographing the participants located in the experience space may be installed.

The vision signal processing module 40 generates position identification data from the vision information received from the cameras 30, such as a participant's face contour, a mark, or the color and pattern of clothes.

The sensor signal processing module 20 and the vision signal processing module 40 provide the position identification data to the real-time coordinate calculation server 50, which uses it to generate position coordinate data indicating the position of each individual participant in the experience space.
At this time, the position coordinate data may include IDs of individual participants, time stamps, and spatial coordinates. Here, the spatial coordinates may include the x-, y-, and z-coordinates of the point where the individual participant is located in real time, assuming that the experience space is a three-dimensional space with x, y, and z axes. The real-time position coordinates of individual participants can thus be specified from the position coordinate data.
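One record of position coordinate data as just described (participant ID, time stamp, and x/y/z spatial coordinates) could be represented as follows; the field names and types are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class PositionCoordinate:
    """Minimal sketch of one position-coordinate record: the ID of the
    individual participant, a time stamp, and the real-time x/y/z point
    in the three-dimensional experience space."""
    participant_id: str
    timestamp: float  # seconds; records arrive at 30 or more frames per second
    x: float
    y: float
    z: float
```

For example, `PositionCoordinate("P001", 12.345, 1.0, 2.0, 0.0)` would place hypothetical participant "P001" at (1.0, 2.0, 0.0) at time 12.345 s.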
Meanwhile, behavior event information of the plurality of participants in the experience space is detected. A behavior event means a predetermined action or operation, and in the interactive content a predetermined image or sound effect can be output in response to a participant's behavior event.
A behavior identification sensor 60 installed in the experience space detects the behavior event discrimination information of the participants.

The behavior event discrimination information may include the position information of the sensor, a sensing value, a time stamp, and a detection radius.

Information on the behavior event generated by the behavior identification sensor 60 is transmitted to the behavior information processing module 70, which determines, based on the behavior event discrimination information, whether a behavior event of an individual participant has occurred.
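As one hypothetical illustration, the sensor's position information and detection radius described above could be used to find which participants were close enough to the sensor to have produced an event. The data shapes below are assumptions:

```python
import math


def participants_in_range(sensor, positions):
    """Return IDs of participants whose current position lies within the
    sensor's detection radius, one way the sensor position and detection
    radius fields could be used to attribute an event to nearby
    participants. sensor: {"position": (x, y, z), "radius": r};
    positions: {participant_id: (x, y, z)}."""
    sx, sy, sz = sensor["position"]
    hits = []
    for pid, (x, y, z) in positions.items():
        # Euclidean distance in the three-dimensional experience space
        if math.dist((sx, sy, sz), (x, y, z)) <= sensor["radius"]:
            hits.append(pid)
    return hits
```

Combined with the time-stamp comparison from the claims, this narrows an event down to the participants who were both near the sensor and present at the sensing time.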
The behavior information processing module 70 passes the information on the determined behavior event to the behavior information integration server 80.

The behavior information integration server 80 combines the position coordinate data with the information on the behavior event to generate the behavior information of the individual participant; at this time, the time stamp of the position coordinate data and the time stamp of the behavior event discrimination information can be used to reduce the combining error.

The behavior information generated in this way is transmitted to a separate apparatus that produces the interactive content, so that a large load is not placed on the system generating the behavior information.
While the present invention has been particularly shown and described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments; on the contrary, it is intended to cover all changes and modifications falling within the scope of the invention.
10: Sensor
20: Sensor signal processing module
30: Camera
40: Vision signal processing module
50: Real-time coordinate calculation server
60: Behavior identification sensor
70: Behavior information processing module
80: Behavior information integration server
Claims (8)
A large-capacity behavior information processing method for developing crowd-participating interactive content, comprising: generating position coordinate data indicating the position of each individual participant in an experience space, using position identification data identifying the positions of a plurality of participants located in the space,

detecting behavior event discrimination information of the plurality of participants in the experience space,
determining, based on the behavior event discrimination information, whether a behavior event of the individual participant has occurred, and

generating behavior information of the individual participant by combining the position coordinate data and the information on the behavior event.
Wherein the location identification data is generated using information received from a smart device or a tag held by the participant.
Wherein the position coordinate data includes an ID of an individual participant, a time stamp, and a spatial coordinate.
Wherein the location coordinate data is maintained for a predetermined period of time.
Wherein the location identification data is generated using vision information received from a camera capturing a participant located in the experience space.
Wherein the vision information includes at least one of a face contour, a mark, and a color and a pattern of clothes of the participant.
The behavior event determination information is sensed by a sensor installed in the experience space,
Wherein the behavior event determination information includes position information of the sensor, a sensing value, a time stamp, and a detection radius.
Wherein the position coordinate data includes an ID of an individual participant, a time stamp, and a spatial coordinate,
The position coordinate data is maintained for a predetermined period of time,
and a combining error is reduced by using the time stamp of the position coordinate data and the time stamp of the behavior event determination information when the position coordinate data and the information on the behavior event are combined.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160023244A KR20170100852A (en) | 2016-02-26 | 2016-02-26 | Massive behavior information processing method for crowd participating interactive contents |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160023244A KR20170100852A (en) | 2016-02-26 | 2016-02-26 | Massive behavior information processing method for crowd participating interactive contents |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20170100852A true KR20170100852A (en) | 2017-09-05 |
Family
ID=59924660
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160023244A KR20170100852A (en) | 2016-02-26 | 2016-02-26 | Massive behavior information processing method for crowd participating interactive contents |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20170100852A (en) |
-
2016
- 2016-02-26 KR KR1020160023244A patent/KR20170100852A/en unknown
Legal Events
Date | Code | Title | Description |
---|---|---|---|
J201 | Request for trial against refusal decision | ||
J121 | Written withdrawal of request for trial |