KR20170100852A - Massive behavior information processing method for crowd participating interactive contents - Google Patents

Massive behavior information processing method for crowd participating interactive contents

Info

Publication number
KR20170100852A
KR20170100852A
Authority
KR
South Korea
Prior art keywords
information
behavior
participant
coordinate data
position coordinate
Prior art date
Application number
KR1020160023244A
Other languages
Korean (ko)
Inventor
양정하
Original Assignee
(주)이지위드
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)이지위드 filed Critical (주)이지위드
Priority to KR1020160023244A priority Critical patent/KR20170100852A/en
Publication of KR20170100852A publication Critical patent/KR20170100852A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/067 Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
    • G06K19/07 Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips
    • G06K19/0723 Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • H04L29/06034
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware

Abstract

A large-capacity behavior information processing method for developing crowd-participating interactive content generates position coordinate data indicating the position of each individual participant in an experience space, using position identification data that identifies the positions of a plurality of participants located in the space; detects behavior event determination information for the plurality of participants; determines, based on that information, whether a behavior event of an individual participant has occurred; and combines the position coordinate data with the information about the behavior event to generate behavior information for the individual participant.

Description

[0001] Massive behavior information processing method for crowd-participating interactive contents

BACKGROUND OF THE INVENTION 1. Field of the Invention [0002] The present invention relates to a large-capacity behavior information processing method for developing crowd-participating interactive content.

Crowd-participating interactive content is content rendered in real time on a display medium by processing the various inputs of a large number of participants, such as motion, touch input, and device input.

For example, media art images can be projected onto unstructured surfaces such as building walls and floors through projection mapping with multiple projectors, and interaction effects can be produced through dynamic object recognition using multiple camera sensors and real-time rendering. Smart devices and multimodal sensors can detect users' photographs, messages, and movements, so that users experience an interactive production connected with the space.

Implementing such software requires a technique for processing participant behavior information, which is the input produced by a large number of participants. This input is obtained by processing information from various sensor devices such as camera sensors, RGB-D sensors, touch sensors, multimodal sensors (smartphones), and position sensors (RTLS).

For example, camera sensors mainly use object detection and object tracking, while RGB-D sensors use object detection, object tracking, and human skeleton detection. Multimodal sensors use the various inputs they provide, such as acceleration, position, angle, temperature, and humidity. Location sensors and touch sensors use location and touch input information, respectively.

Crowd-participating interactive content is developed using, as input, the large-capacity behavior information obtained from these various sensors. This behavior information is real-time data transmitted at 30 frames per second or more, and when many participants take part simultaneously the data volume grows in proportion to the number of participants: for example, 1,000 simultaneous participants each sampled at 30 frames per second yield 30,000 records per second.

SUMMARY OF THE INVENTION It is an object of the present invention to provide a large-capacity behavior information processing method that generates behavior information so that interactive content can be produced from it in a separate system.

A method for processing large-capacity behavior information for developing crowd-participating interactive content according to an embodiment of the present invention includes: generating position coordinate data indicating the position of each individual participant in an experience space, using position identification data that identifies the positions of a plurality of participants located in the space; detecting behavior event determination information for the plurality of participants; determining, based on that information, whether a behavior event of an individual participant has occurred; and combining the position coordinate data with the information about the behavior event to generate behavior information for the individual participant.

The position identification data may be generated using information received from a smart device or tag carried by the participant.

The position coordinate data may include an ID of the individual participant, a time stamp, and spatial coordinates.

The position coordinate data can be maintained for a predetermined period of time.

The position identification data may be generated using vision information received from a camera that captures a participant located in the experience space.

The vision information may include at least one of the participant's face contour, a mark, and the color and pattern of the participant's clothes.

The behavior event determination information may be sensed by a sensor installed in the experience space and may include position information of the sensor, a sensing value, a time stamp, and a detection radius.

The position coordinate data may include an ID of the individual participant, a time stamp, and spatial coordinates. Past position coordinate data may be retained for a predetermined period of time, and when the position coordinate data is combined with the information about the behavior event, the combining error may be reduced using the time stamp of the position coordinate data and the time stamp of the behavior event determination information.

According to the present invention, the position coordinates of a plurality of participants located in the experience space and the specific behavior events of each participant can be identified in real time.

Furthermore, by combining the position coordinates of individual participants with the behavior event data, large-capacity behavior information for interactive content production can be generated and transmitted to a separate device that produces the interactive content, preventing a heavy load from being placed on that device.

FIG. 1 is a schematic diagram of a system for executing the large-capacity behavior information processing method for developing crowd-participating interactive content according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

A large-capacity behavior information processing method according to an embodiment of the present invention processes the behavior information used to develop crowd-participating interactive content: it identifies the position and behavior of participants who perform various actions while moving within a predetermined experience space, and generates behavior information from which interactive content is produced. The large-capacity behavior information generated by the embodiment is transmitted to a separate apparatus that generates the interactive content.

The behavior information processing method according to the embodiment specifies the real-time position coordinates of the participants located in the experience space, identifies the behavior events of each participant, and combines each participant's behavior events with the corresponding position coordinates to generate the behavior information.

The experience space may be indoor or outdoor, and may be any space in which a plurality of participants perform predetermined actions and interactive content is generated and displayed accordingly.

Referring to FIG. 1, a system for performing the behavior information processing method of the present invention includes a plurality of sensors 10, provided in the experience space, for sensing the position and/or action of a participant, whose signals are transmitted to the sensor signal processing module 20. That is, the sensors 10 generate data identifying the positions of the participants located in the experience space.

The plurality of sensors 10 may be RFID tags, beacons, smart devices, ultra-wideband wireless devices, Wi-Fi devices, and the like. A sensor 10 may be carried directly by the participant or attached to the participant's body. Before the participant enters the experience space, each sensor 10 is registered and assigned a unique ID so that the position of the participant holding it can be tracked in real time. For example, when signal receivers capable of receiving the signals output from each sensor 10 are installed at a plurality of points, the coordinates of a sensor's position can be calculated from the known positions of the receivers and the differences in the times at which they receive its signal.
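As an illustration of this positioning step, here is a minimal least-squares multilateration sketch in Python, assuming ranges have already been derived from signal timing; it is not the patent's specific algorithm, and every name in it is hypothetical.

```python
import numpy as np

def multilaterate(receivers: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Estimate a 3-D position from known receiver coordinates and measured
    ranges by linearizing the sphere equations |x - p_i|^2 = d_i^2 against
    the first receiver and solving in the least-squares sense."""
    p1, d1 = receivers[0], distances[0]
    A = 2.0 * (receivers[1:] - p1)  # rows: 2 * (p_i - p1)
    b = (np.sum(receivers[1:] ** 2, axis=1) - distances[1:] ** 2) - (
        np.sum(p1 ** 2) - d1 ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Four receivers at varying heights in a 10 m x 10 m space (hypothetical layout).
receivers = np.array([[0.0, 0.0, 3.0], [10.0, 0.0, 2.5],
                      [0.0, 10.0, 3.0], [10.0, 10.0, 2.0]])
true_pos = np.array([4.0, 6.0, 1.5])
distances = np.linalg.norm(receivers - true_pos, axis=1)
print(multilaterate(receivers, distances))  # ~[4.0, 6.0, 1.5]
```

With exact ranges the linear system recovers the position; with noisy ranges the least-squares solution spreads the error across receivers.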

The sensor signal processing module 20 receives and processes the signals transmitted from each sensor 10. For example, it can calculate the position coordinates of each sensor 10 from the sensor's signal.

The system can also identify and track unregistered participants who do not carry a registered sensor. To this end, a plurality of cameras 30 may be provided to capture such participants, along with a vision processing module 40 that processes the image data captured by the cameras 30. The cameras 30 may be installed at a plurality of predetermined positions.

The cameras 30 shoot continuously and transmit the captured images to the vision processing module 40. The vision information transmitted from a camera 30 is used as data for identifying the position of the corresponding participant, and also as data for identifying which participant was captured. For example, the vision information may include one or more of the contour of a participant's face, a mark, and the color and pattern of the participant's clothes. The face contour of a photographed participant can be matched to identify the individual, and the participant's position can then be calculated.
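One plausible form of this vision step is sketched below, assuming OpenCV is used: faces are detected in a frame and each detection is projected to floor coordinates through a homography H assumed to have been calibrated beforehand (for example, with cv2.findHomography on known reference points). This is an illustrative sketch, not the patent's pipeline.

```python
import cv2
import numpy as np

def locate_participants(frame: np.ndarray, H: np.ndarray) -> list:
    """Detect faces in a camera frame and project each detection center
    onto floor-plane coordinates via the 3x3 homography H."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    positions = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        center = np.array([[[x + w / 2.0, y + h / 2.0]]], dtype=np.float32)
        floor_xy = cv2.perspectiveTransform(center, H)[0, 0]
        positions.append((float(floor_xy[0]), float(floor_xy[1])))
    return positions

# Usage (hypothetical): positions = locate_participants(frame, H)
```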

The sensor signal processing module 20 processes the signals transmitted from the sensors 10 and forwards the results to the real-time coordinate calculation server 50, which uses them to generate real-time position coordinate data. Likewise, the vision processing module 40 processes the images transmitted from the cameras 30 and forwards the results to the real-time coordinate calculation server 50, which uses them to generate real-time position coordinate data.

The position coordinate data may include the ID of each individual participant, a time stamp, and spatial coordinates. Treating the experience space as a three-dimensional space with x, y, and z axes, the spatial coordinates may be the x, y, and z coordinates of the point where the individual participant is located at that moment. The position coordinate data thus specifies the real-time position of each participant, as in the record sketched below.
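A minimal sketch of such a position record, assuming Python; the field names are hypothetical and simply mirror the fields named above.

```python
from dataclasses import dataclass

@dataclass
class PositionRecord:
    participant_id: str  # unique ID assigned when the sensor/tag is registered
    timestamp: float     # acquisition time, e.g. Unix epoch seconds
    x: float             # spatial coordinates within the experience space
    y: float
    z: float
```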

Meanwhile, behavior event determination information for the plurality of participants in the experience space is detected. A behavior event is a predetermined action or motion, and the interactive content can output a predetermined image or sound effect in response to a participant's behavior event.

A behavior identification sensor 60 for detecting behavior event determination information is provided, and the behavior information processing module 70 receives and processes its signals. For example, the behavior identification sensor 60 can output information such as its coordinates, detection direction, and detection radius. The data generated by the sensor signal processing module 20 and the vision processing module 40 described above can also be used to determine behavior events. The behavior event determination information may include the position information of the behavior identification sensor 60, a sensing value, a time stamp, and a detection radius.

The behavior information processing module 70 can determine whether a behavior event has occurred based on the behavior event determination information. For example, a threshold may be set for each type of behavior identification sensor 60, and a behavior event can be deemed to have occurred whenever the sensing value exceeds that threshold.
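A minimal sketch of that thresholding step, assuming Python; the sensor types and threshold values are hypothetical placeholders rather than values from the patent.

```python
from dataclasses import dataclass

@dataclass
class BehaviorEventInfo:
    sensor_id: str
    sensor_xyz: tuple      # position of the behavior identification sensor
    sensing_value: float
    timestamp: float
    detection_radius: float

# Hypothetical per-sensor-type thresholds.
THRESHOLDS = {"touch": 0.5, "motion": 1.2, "proximity": 0.8}

def behavior_event_occurred(info: BehaviorEventInfo, sensor_type: str) -> bool:
    """A behavior event is deemed to have occurred when the sensing value
    exceeds the threshold configured for this sensor type."""
    return info.sensing_value > THRESHOLDS[sensor_type]
```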

Information about behavior events generated by the sensor signal processing module 20, the vision processing module 40, and the behavior information processing module 70 may be transmitted to the behavior information integration server 80.

The behavior information integration server 80 combines the real-time position coordinate data transmitted from the real-time coordinate calculation server 50 with the behavior event information received from the behavior information processing module 70 to generate the behavior information of each individual participant. Data on each participant's position in the experience space and on the predetermined behavior events each participant performs can thus be generated in real time.

The behavior information integration server 80 may include means for correcting errors caused by transmission delays in the position coordinate data and the behavior event information. For example, the server retains past coordinate data for a predetermined period of time; when position coordinate data is combined with behavior event information, the combining error is reduced by comparing the time stamp of the position coordinate data with the time stamp of the behavior event determination information. If there is a transmission delay, the time-difference error can therefore be reduced by falling back on past position coordinate data, as in the sketch below.
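A minimal sketch of that time-stamp matching, assuming Python; the retention window and all names are hypothetical.

```python
from collections import defaultdict

class BehaviorIntegrator:
    """Buffer recent position samples per participant and join each behavior
    event to the sample whose time stamp is closest, so that a delayed event
    can still be matched against past coordinates."""

    def __init__(self, retention_s: float = 5.0):
        self.retention_s = retention_s
        self.history = defaultdict(list)  # participant_id -> [(t, (x, y, z))]

    def add_position(self, pid: str, t: float, xyz: tuple) -> None:
        samples = self.history[pid]
        samples.append((t, xyz))
        # Drop samples older than the retention window.
        cutoff = t - self.retention_s
        self.history[pid] = [s for s in samples if s[0] >= cutoff]

    def join_event(self, pid: str, event_t: float):
        """Return the buffered (timestamp, coordinates) sample closest in
        time to the event, or None if nothing is buffered."""
        samples = self.history.get(pid)
        if not samples:
            return None
        return min(samples, key=lambda s: abs(s[0] - event_t))

integ = BehaviorIntegrator()
integ.add_position("p1", 10.0, (4.0, 6.0, 1.5))
integ.add_position("p1", 10.5, (4.2, 6.1, 1.5))
print(integ.join_event("p1", 10.4))  # -> (10.5, (4.2, 6.1, 1.5))
```

Keeping a short buffer trades a little memory for robustness to delayed events, matching the server's described retention of past coordinate data.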

The behavior information integration server 80 may include a content interface for transmitting the behavior information to an apparatus that produces the interactive content.

While the present invention has been particularly shown and described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover all changes and modifications within the scope of the invention.

10: Sensor
20: Sensor signal processing module
30: Camera
40: Vision processing module
50: Real-time coordinate calculation server
60: Behavior identification sensor
70: behavior information processing module
80: Behavior information integration server

Claims (8)

1. A method for processing behavior information for crowd-participating interactive content, the method comprising:
generating position coordinate data indicating a position of an individual participant in an experience space, using position identification data for identifying positions of a plurality of participants located in the experience space;
detecting behavior event determination information of the plurality of participants in the experience space;
determining whether a behavior event of the individual participant has occurred, based on the behavior event determination information; and
generating behavior information of the individual participant by combining the position coordinate data and the information about the behavior event.
2. The method of claim 1, wherein the position identification data is generated using information received from a smart device or a tag carried by the participant.
3. The method of claim 2, wherein the position coordinate data includes an ID of the individual participant, a time stamp, and spatial coordinates.
4. The method of claim 3, wherein the position coordinate data is maintained for a predetermined period of time.
5. The method of claim 1, wherein the position identification data is generated using vision information received from a camera capturing a participant located in the experience space.
6. The method of claim 5, wherein the vision information includes at least one of a face contour of the participant, a mark, and a color and pattern of the participant's clothes.
7. The method of claim 1, wherein the behavior event determination information is sensed by a sensor installed in the experience space and includes position information of the sensor, a sensing value, a time stamp, and a detection radius.
8. The method of claim 7, wherein the position coordinate data includes an ID of the individual participant, a time stamp, and spatial coordinates; the position coordinate data is maintained for a predetermined period of time; and when the position coordinate data and the information about the behavior event are combined, a combining error is reduced using the time stamp of the position coordinate data and the time stamp of the behavior event determination information.
KR1020160023244A 2016-02-26 2016-02-26 Massive behavior information processing method for crowd participating interactive contents KR20170100852A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160023244A KR20170100852A (en) 2016-02-26 2016-02-26 Massive behavior information processing method for crowd participating interactive contents

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160023244A KR20170100852A (en) 2016-02-26 2016-02-26 Massive behavior information processing method for crowd participating interactive contents

Publications (1)

Publication Number Publication Date
KR20170100852A (en) 2017-09-05

Family

ID=59924660

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160023244A KR20170100852A (en) 2016-02-26 2016-02-26 Massive behavior information processing method for crowd participating interactive contents

Country Status (1)

Country Link
KR (1) KR20170100852A (en)


Legal Events

Date Code Title Description
J201 Request for trial against refusal decision
J121 Written withdrawal of request for trial