KR20150137320A - analysis system and method for response of audience - Google Patents

analysis system and method for response of audience

Info

Publication number
KR20150137320A
Authority
KR
South Korea
Prior art keywords
response
motion
audience
acoustic
calculating
Prior art date
Application number
KR1020140064989A
Other languages
Korean (ko)
Other versions
KR101591402B1 (en)
Inventor
최이권
김유화
이승권
양근화
Original Assignee
모젼스랩(주)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 모젼스랩(주)
Priority to KR1020140064989A
Priority to PCT/KR2014/012516
Publication of KR20150137320A
Application granted
Publication of KR101591402B1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a system and method for analyzing the response of an audience, which analyze the concentration and response of an audience viewing image contents or a performance. The system of the present invention comprises: an image collecting unit which continuously obtains images of the content viewing audience; a sound collecting unit which continuously obtains sound of the content viewing space; a calculating unit which calculates the audience response from the image data obtained by the image collecting unit and the sound data obtained by the sound collecting unit; and a storage unit which accumulates and stores, in a time-series manner, the audience response calculated by the calculating unit. The calculating unit comprises: a motion response calculating unit which calculates a motion response by comparing the continuously obtained image data; a gaze response calculating unit which separately extracts the heads and bodies of the audience from the continuously obtained image data and calculates a gaze response from how the head direction changes with respect to the body; and a sound response calculating unit which calculates a sound response by comparing the continuously obtained sound data. Because the present invention calculates the audience response from the audience's motion, gaze changes, and sound changes, it provides a system and method for analyzing audience response with a high degree of accuracy.

Description

BACKGROUND OF THE INVENTION

Field of the Invention

[0001] The present invention relates to an audience response analysis system and method for analyzing the concentration and response of audiences viewing image contents or performances, and is an invention relating to results of the 'Smart type interactive playground' development project (2012.06.01 ~ 2015.05.31).

Recently, it has become very important for producers of visual contents and performance contents (hereinafter referred to as "contents") to grasp the audience's reaction to the contents they produce and screen (perform). Such audience reactions are used as reference material for publicity strategies, directions of profit generation, and future contents production.

Conventionally, in order to grasp the audience reaction to contents, the satisfaction, immersion, and interest of the audience have been surveyed through questionnaires after viewing.

However, such a questionnaire method has limitations in grasping the objective reaction of the audience, and as time passes after viewing, the remembered response becomes distorted, so the accurate reaction cannot be grasped.

In order to solve this problem, Korean Patent Registration No. 10-1337833 discloses a method of capturing images of a viewer in real time during content viewing, extracting the face portion from the captured images, extracting a brightness histogram from the difference image, calculating face motion based on the amount of brightness change, and calculating the viewer's degree of immersion from the amount of face motion.

Although this conventional technique proposed a theoretical direction for audience response analysis methods that calculate the reaction of the audience from photographed images, it has the following problems in actual application.

That is, since the conventional art calculates the viewer's motion based on pixel brightness changes in the captured image, pixel brightness changes caused by changes in screen brightness within the dark screening space are recognized as viewer movement alongside those caused by actual viewer motion, so an accurate audience response cannot be obtained.

In addition, because the related art extracts only the facial part of the viewer and tracks its motion in the captured images, it is advantageous for grasping the reaction of each individual audience member but unsuitable for grasping the response of the audience as a whole. Concretely, a viewer's reaction and immersion are expressed not only by movement of the face but also by movements of the whole body, such as the hands and feet; since the related art ignores these, the accuracy of the calculated audience response deteriorates.

Moreover, the conventional technique derives the viewer's reaction only from the viewer's movement. The reaction of an actual viewer, however, is expressed not only by movement but also by changes of gaze and by vocal expressions (laughter, screams, exclamations, and the like), so an accurate audience response cannot be derived from movement alone.

(001) Korean Patent No. 10-1337833
(002) Korean Patent Publication No. 10-2014-0042504
(003) Korean Patent Publication No. 10-2014-0042505

SUMMARY OF THE INVENTION

The present invention has been made to solve the above-mentioned problems of the related art, and it is an object of the present invention to provide an audience response analysis system and method that comprehensively reflect changes in the audience's movement, gaze, and sound.

According to an aspect of the present invention, there is provided an audience response analysis system including: an image collecting unit for continuously acquiring images of a content viewing audience; an acoustic collecting unit for continuously acquiring sound of the content viewing space; an operation unit for calculating an audience response from the image data acquired by the image collecting unit and the sound data acquired by the acoustic collecting unit; and a storage unit for accumulating and storing, in a time-series manner, the audience response calculated by the operation unit. The operation unit includes: a motion response calculation unit for calculating a motion response from a comparison of the continuously acquired image data; a gaze response calculation unit for separately extracting the body and head of each audience member from the continuously acquired image data and calculating a gaze response from changes of head direction with respect to the body; and an acoustic response calculation unit for calculating an acoustic response from a comparison of the continuously acquired sound data. The audience response is calculated from the motion response, the gaze response, and the acoustic response.

At this time, the motion response (Mf) may be calculated by the equation

Mf = (Qm - Cm)² + Cth

where Mf is the frame-by-frame motion response, Qm is the motion variation amount calculated from the comparison of the captured image data, Cm is a motion threshold, and Cth may be a correction value.

Further, the gaze response (Gf) may be calculated by the equation

Gf = (Qg - Cg)² + Cth

where Gf is the frame-by-frame gaze response, Qg is the gaze variation amount according to changes of direction of the viewer's body and head in the captured image data, Cg is a gaze threshold, and Cth may be a correction value.

Also, the acoustic response (Sf) may be calculated by the equation

Sf = (Qs - Cs)² + Cth

where Sf is the frame-by-frame acoustic response, Qs is the acoustic variation amount calculated from a comparison of the collected sound data, Cs is an acoustic threshold, and Cth may be a correction value.

In this case, the acoustic variation amount (Qs) may be calculated from sound data from which the content sound has been removed.

Also, the operation unit may calculate a motion response, a gaze response, and an acoustic response for each scene by calculating the motion responses, gaze responses, and acoustic responses of the frames included in each scene.

Further, for the entire contents, the operation unit may calculate a motion response by the equation

Mp = Σi wi·[(Qm,i - Cm)² + Cth],

a gaze response by the equation

Gp = Σi wi·[(Qg,i - Cg)² + Cth],

and an acoustic response by the equation

Sp = Σi wi·[(Qs,i - Cs)² + Cth],

where Mp is the motion response for the entire contents, Gp is the gaze response for the entire contents, Sp is the acoustic response for the entire contents, wi is a weight for each scene, and, for each scene i, Qm,i is the motion variation amount, Cm is the motion threshold, Qg,i is the gaze variation amount, Cg is the gaze threshold, Qs,i is the acoustic variation amount, Cs is the acoustic threshold, and Cth may be a correction value.

According to another aspect of the present invention, there is provided a method of analyzing the reaction of an audience viewing contents, including the steps of: (A) collecting image data and sound data of the viewing space during content viewing through an image collecting unit and an acoustic collecting unit; (B) calculating a motion response from the image data; (C) calculating a gaze response from the image data; (D) calculating an acoustic response from the sound data; and (E) calculating an audience response (FEI) per frame by summing the motion response, the gaze response, and the acoustic response.

In this case, the step (B) may include: (B1) calculating a motion variation amount of the image data; and (B2) calculating a frame-by-frame motion response from the motion variation amount, wherein the motion response (Mf) may be calculated by the equation

Mf = (Qm - Cm)² + Cth

where Mf is the frame-by-frame motion response, Qm is the motion variation amount calculated from the comparison of the captured image data, Cm is a motion threshold, and Cth may be a correction value.

The step (C) may include: (C1) calculating a gaze variation amount; and (C2) calculating a frame-by-frame gaze response from the gaze variation amount, wherein the gaze response (Gf) may be calculated by the equation

Gf = (Qg - Cg)² + Cth

where Gf is the frame-by-frame gaze response, Qg is the gaze variation amount according to changes of direction of the viewer's body and head in the captured image data, Cg is a gaze threshold, and Cth may be a correction value.

Further, the gaze variation amount (Qg) may be calculated by dividing the viewing direction of the viewer into six directions (left, right, and front, each upward or downward) and counting the number of changes between the divided directions.

The step (D) may include: (D1) filtering out the sound sources included in the content presentation from the collected sound data; (D2) calculating a variation amount of the sound data; and (D3) calculating an acoustic response from the variation amount of the sound data, wherein the acoustic response (Sf) may be calculated by the equation

Sf = (Qs - Cs)² + Cth

where Sf is the frame-by-frame acoustic response, Qs is the acoustic variation amount calculated from a comparison of the collected sound data, Cs is an acoustic threshold, and Cth may be a correction value.

Further, the method may include (F) calculating a scene-specific audience response (SEI) by calculating, for each scene, the average value of the per-frame audience responses (FEI).

The method may further include (G) calculating an audience response (PEI) for the entire contents through the scene-specific audience response (SEI) calculated in the step (F), wherein, for the entire contents, a motion response is calculated by the equation

Mp = Σi wi·[(Qm,i - Cm)² + Cth],

a gaze response by the equation

Gp = Σi wi·[(Qg,i - Cg)² + Cth],

and an acoustic response by the equation

Sp = Σi wi·[(Qs,i - Cs)² + Cth],

and the PEI is calculated from the sum of the motion response, the gaze response, and the acoustic response, where Mp is the motion response for the entire contents, Gp is the gaze response for the entire contents, Sp is the acoustic response for the entire contents, wi is a weight for each scene, and, for each scene i, Qm,i is the motion variation amount, Cm is the motion threshold, Qg,i is the gaze variation amount, Cg is the gaze threshold, Qs,i is the acoustic variation amount, Cs is the acoustic threshold, and Cth may be a correction value.

The audience response analysis system and method according to the present invention as described above can be expected to provide the following effects.

That is, in analyzing the reaction of the audience, the present invention calculates the audience response based on the audience's movement, the audience's gaze changes, and changes of sound (voice), and therefore has the advantage of providing a highly accurate audience response analysis.

The present invention also has the advantage of providing an audience response analysis system and method capable of calculating the audience response in units of frames, scenes, and the entire contents by cumulatively calculating the degree of change in the audience response.

In addition, the audience response analysis according to the present invention can display the audience response to the user graphically and can distinguish a static immersion state from a dynamic immersion state, so that a more accurate audience response can be grasped.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing an audience response analysis method according to the prior art.
FIG. 2 is a block diagram showing the configuration of a specific embodiment of an audience response analysis system according to the present invention.
FIG. 3 is a graph showing the formula applied to the audience response analysis according to the present invention.
FIG. 4 is a flowchart showing a specific embodiment of an audience response analysis method according to the present invention.
FIG. 5 is an exemplary view showing an example of audience response analysis according to the present invention.
FIG. 6 is an exemplary view showing another example of audience response analysis according to the present invention.
FIG. 7 is an exemplary view showing another example of audience response analysis according to the present invention.

Hereinafter, an audience response analysis system and method according to a specific embodiment of the present invention will be described with reference to the accompanying drawings.

The advantages and features of the present invention, and the manner of achieving them, will become apparent with reference to the embodiments described below in detail together with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the invention is defined only by the scope of the claims.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

In the following description of the present invention, detailed descriptions of known functions and configurations are omitted when they may obscure the subject matter of the present invention. The terms used herein are defined in consideration of their functions in the present invention and may vary depending on the intention or custom of the user or operator; therefore, their definitions should be based on the contents throughout this specification.

Each block of the accompanying block diagrams and each combination of steps of the flowcharts may be performed by computer program instructions (an execution engine). These instructions may be loaded into a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, so that the instructions executed through the processor of the computer or other programmable data processing apparatus create means for performing the functions described in each block of the block diagrams or each step of the flowcharts.

These computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, so that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture containing instruction means for performing the functions described in each block of the block diagrams or each step of the flowcharts.

The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps are performed on the computer or other programmable apparatus to produce a computer-implemented process, and the instructions that execute on the computer or other programmable apparatus provide steps for performing the functions described in each block of the block diagrams and each step of the flowcharts.

Also, each block or step may represent a module, segment, or portion of code that includes one or more executable instructions for executing the specified logical functions. In some alternative embodiments, the functions noted in the blocks or steps may occur out of order: two blocks or steps shown in succession may in fact be executed substantially concurrently, or in the reverse order, depending on the functionality involved.

FIG. 2 is a block diagram showing a configuration of a specific embodiment of an audience response analysis system according to the present invention, and FIG. 3 is a graph showing a formula applied to an audience response analysis according to the present invention.

As shown in FIG. 2, the audience response analysis system according to the present invention includes an image collecting unit 100, an acoustic collecting unit 200, an operation unit 300, and a storage unit 400.

The image collecting unit 100 collects image data, where the image data refers to continuously captured images of the audience enjoying the contents (screening/performance), as shown in FIG. 2.

The acoustic collecting unit 200 may be a microphone module installed alongside the image collecting unit 100, preferably a microphone module installed to cover the entire screening (performance) space, or a plurality of microphone modules distributed so as to collect sound evenly across the space.

The operation unit 300 calculates the audience response from the image data and sound data collected by the image collecting unit 100 and the acoustic collecting unit 200. To calculate the audience response, the operation unit includes a motion response calculation unit 310 for calculating a motion response, a gaze response calculation unit 320 for calculating a gaze response, and an acoustic response calculation unit 330 for calculating an acoustic response.

The motion response calculation unit 310 calculates a motion variation amount by comparing the image data, and calculates the motion response from the motion variation amount.

Specifically, the motion response is calculated from the following Equation (1):

Mf = (Qm - Cm)² + Cth  ... (1)

where Mf is the frame-by-frame motion response, Qm is the motion variation amount calculated from the comparison of the captured image data, Cm is the motion threshold, and Cth is a correction value.
The motion variation amount is calculated by comparing the image captured in the previous frame with the image of the current frame to determine the degree of change between the images; the degree of change of objects in the captured images is determined without depending on changes in pixel brightness.

The technique of comparing captured images to determine the degree of change (movement) of objects has been widely commercialized, for example in the field of surveillance cameras, and is therefore not described in detail herein.

The numerical expression of the motion variation amount may take various forms as long as a unified criterion is applied: for example, it may be expressed as the percentage of the entire image that has changed, or as the number of objects (audience members) in which motion is detected.

The threshold value (Cm) is set to the motion variation amount observed in a state where the audience's degree of immersion is lowest.
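For illustration, the following is a minimal Python sketch of how such a frame-comparison motion variation amount (Qm) could be computed with OpenCV. The blur and threshold parameters and the two output conventions (percentage of changed pixels, count of moving objects) are assumptions of this sketch, not values prescribed by the present invention.

```python
import cv2
import numpy as np

def motion_variation(prev_frame, curr_frame, diff_thresh=25, min_area=500):
    """Estimate the motion variation amount (Qm) between consecutive frames.

    Works on object-level change rather than raw pixel brightness: blurring
    and thresholding suppress gradual screen-brightness changes, and only
    sufficiently large changed regions (moving audience members) are counted.
    """
    prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    curr_gray = cv2.GaussianBlur(cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving_objects = [c for c in contours if cv2.contourArea(c) >= min_area]
    # Two of the numeric conventions mentioned in the text:
    percent_changed = 100.0 * np.count_nonzero(mask) / mask.size  # % of whole image
    return percent_changed, len(moving_objects)                  # Qm as %, or object count
```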

The gaze response calculation unit 320 separately extracts the body and the head of each viewer from the continuously acquired image data, and calculates the gaze response from changes of the head direction with respect to the body.

Specifically, the gaze response is calculated from the following Equation (2):

Gf = (Qg - Cg)² + Cth  ... (2)

where Gf is the frame-by-frame gaze response, Qg is the gaze variation amount according to changes of direction of the viewer's body and head in the captured image data, Cg is the gaze threshold, and Cth is a correction value.

In consideration of the efficiency of data processing and the reliability of the measured data, the gaze direction may be divided into a specific number of regions, and the gaze variation amount is calculated on the basis of the number of movements between the divided regions.

The gaze threshold is likewise a set value representing the degree of change of the gaze direction when an ordinary audience has the lowest degree of immersion.
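As an illustration of the six-direction quantization described above, the sketch below assumes that per-frame head-pose estimates (yaw relative to the body, and pitch) are already available from the body/head extraction; the ±15 degree band separating 'front' from 'left' and 'right' is an assumed value, not one given herein.

```python
def gaze_region(yaw_deg, pitch_deg, front_band=15.0):
    """Quantize a head direction (relative to the body) into one of six
    regions: {left, front, right} x {up, down}. The +/-15 degree band
    for 'front' is an assumed value, not one given in the patent."""
    if yaw_deg < -front_band:
        horiz = "left"
    elif yaw_deg > front_band:
        horiz = "right"
    else:
        horiz = "front"
    vert = "up" if pitch_deg >= 0.0 else "down"
    return (horiz, vert)

def gaze_variation(head_poses):
    """Gaze variation amount (Qg): the number of transitions between the six
    quantized regions over a sequence of (yaw, pitch) head-pose estimates."""
    regions = [gaze_region(yaw, pitch) for yaw, pitch in head_poses]
    return sum(1 for a, b in zip(regions, regions[1:]) if a != b)
```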

Meanwhile, the acoustic response calculation unit 330 calculates the acoustic response from the following Equation (3):

Sf = (Qs - Cs)² + Cth  ... (3)

where Sf is the frame-by-frame acoustic response, Qs is the acoustic variation amount calculated from a comparison of the collected sound data, Cs is the acoustic threshold, and Cth is a correction value.

At this time, the acoustic variation amount represents the change of sound at the time corresponding to each frame. If the sound collected by the acoustic collecting unit were compared directly, the result would be affected by the effect sounds included in the contents.

Accordingly, in order to reflect only the sound changes produced by the audience, it is preferable to perform the calculation on sound data from which the sound of the content screening (performance) has been removed.

A technology for removing a specific sound source from sound data containing it has already been commercialized and applied, for example in the field of separating accompaniment and vocals from a sound source file, and a detailed description thereof is therefore omitted herein.

The acoustic threshold is set in the same manner as the thresholds described above.
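A minimal sketch of the per-frame acoustic variation amount (Qs): here the content-sound removal is approximated by subtracting a time-aligned copy of the content soundtrack from the microphone signal, a crude stand-in for the commercial source-separation techniques referred to above; alignment and gain matching are assumed to have been done already.

```python
import numpy as np

def acoustic_variation(mic_signal, content_signal, samples_per_frame):
    """Acoustic variation amount (Qs) per video frame.

    Assumes float waveforms sampled at the same rate, already time-aligned
    and gain-matched; subtracting the known content soundtrack is a crude
    stand-in for proper source separation.
    """
    n = min(len(mic_signal), len(content_signal))
    audience = mic_signal[:n] - content_signal[:n]   # remove content sound
    n_frames = len(audience) // samples_per_frame
    rms = np.array([
        np.sqrt(np.mean(audience[i * samples_per_frame:(i + 1) * samples_per_frame] ** 2))
        for i in range(n_frames)
    ])
    return np.abs(np.diff(rms))  # change in audience sound level per frame step
```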

On the other hand, in the calculation of each response, the threshold and the correction value (Cth) are denoted by the same symbols, but they are set by the operator to different values according to the characteristics of each response.

The formula commonly applied in the calculation of each response of the present invention, Y = (Q - C)² + Cth, is plotted as the graph shown in FIG. 3.

As the absolute value of Q - C increases, the response is calculated to be higher. A Q - C value growing in the negative direction represents a static immersion state (when the audience concentrates seriously on the contents), while a Q - C value growing in the positive direction represents a dynamic immersion state (when the reaction to the contents increases through joy, fear, and the like).
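The common formula and the sign-based distinction between the two immersion states can be written out directly; the following is a transcription of Y = (Q - C)² + Cth and of the interpretation above, with function names chosen for this sketch.

```python
def response(q, c, c_th):
    """Per-frame response from the common formula Y = (Q - C)^2 + Cth."""
    return (q - c) ** 2 + c_th

def immersion_state(q, c):
    """The sign of Q - C distinguishes the two immersion states described
    in the text: negative means static immersion (quiet concentration),
    positive means dynamic immersion (laughter, fright, and the like)."""
    if q < c:
        return "static"
    if q > c:
        return "dynamic"
    return "neutral"
```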

On the other hand, the operation unit calculates the audience response from the motion response, the gaze response, and the acoustic response calculated as described above.

The audience response is basically calculated from the sum of the motion response, the gaze response, and the acoustic response, and a correction value may be added or subtracted as necessary.

Meanwhile, the audience response may be calculated for each of a frame unit (FEI), a scene unit (SEI), and the entire contents (PEI).

In this case, the audience response per frame (FEI) is calculated from the sum of the motion response, the gaze response, and the acoustic response as described above, and the audience response per scene (SEI) is calculated from the average value of the FEIs of the frames included in the scene.

The audience response for the entire contents (PEI) is calculated from the audience responses calculated in scene units. Specifically, the motion response, the gaze response, and the acoustic response for the entire contents are calculated by the following Equations (4), (5), and (6), and the PEI is calculated from their sum. Of course, if necessary, weights may also be applied to each of the responses.

Mp = Σi wi·[(Qm,i - Cm)² + Cth]  ... (4)

Gp = Σi wi·[(Qg,i - Cg)² + Cth]  ... (5)

Sp = Σi wi·[(Qs,i - Cs)² + Cth]  ... (6)

In Equations (4) to (6), Mp is the motion response for the entire contents, Gp is the gaze response for the entire contents, Sp is the acoustic response for the entire contents, wi is the weight for each scene, and, for each scene i, Qm,i is the motion variation amount, Cm is the motion threshold, Qg,i is the gaze variation amount, Cg is the gaze threshold, Qs,i is the acoustic variation amount, Cs is the acoustic threshold, and Cth is a correction value.
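A sketch of the three aggregation levels follows, under the reconstruction of Equations (4) to (6) used above (a per-scene weighted sum of each channel response); the function names and data layout are illustrative.

```python
def fei(motion, gaze, acoustic):
    """Frame audience response: sum of the three per-frame responses."""
    return motion + gaze + acoustic

def sei(frame_feis):
    """Scene audience response: average FEI over the frames of one scene."""
    return sum(frame_feis) / len(frame_feis)

def pei(scene_channel_responses, weights):
    """Whole-content audience response: per-scene channel responses
    (Mi, Gi, Si) combined with the scene weights wi, then summed."""
    total_m = sum(w * m for w, (m, _, _) in zip(weights, scene_channel_responses))
    total_g = sum(w * g for w, (_, g, _) in zip(weights, scene_channel_responses))
    total_s = sum(w * s for w, (_, _, s) in zip(weights, scene_channel_responses))
    return total_m + total_g + total_s
```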

The storage unit accumulates the motion response, the gaze response, the acoustic response, and the audience response calculated from them, for each frame and each scene, as calculated by the operation unit.

From the stored data, various analysis results of audience response to the contents can be derived.

Hereinafter, the method for analyzing the audience response according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 4 is a flowchart illustrating a specific embodiment of the audience response analysis method according to the present invention, and FIGS. 5 to 7 are exemplary views showing examples of audience response analysis according to the present invention.

As shown in FIG. 4, the audience response analysis method according to the present invention begins with collecting image data and sound data at the start of the viewing of the contents (S100).

The collected image data and sound data are used to calculate the motion response, the gaze response, and the acoustic response, respectively, on a frame-by-frame basis.

The motion response, the gaze response, and the acoustic response may be calculated simultaneously by separate processes, sequentially, or concurrently with the content presentation (performance), as necessary.

First, in order to calculate the motion response, the operation unit calculates a motion variation amount of the image data (S210).

Next, the frame-by-frame motion response is calculated from the motion variation amount (S212).

At this time, the motion response is calculated by Equation (1), as described above.

Then, in order to calculate the gaze response, the operation unit calculates the gaze variation amount (S220). In this case, the gaze variation amount is preferably measured based on the degree of change between regions obtained by dividing the viewing direction of the viewer into a specific number of regions; in the present invention, the viewing direction is divided into six directions (left, right, and front, each upward or downward), and the gaze variation amount is calculated based on the number of changes between the divided directions.

Next, the frame-by-frame gaze response is calculated from the gaze variation amount (S222). At this time, the gaze response is calculated from Equation (2), as described above.

Meanwhile, in order to calculate the acoustic response, the operation unit filters out the sound sources included in the content presentation from the collected sound data (S230). As described above, the content sound source is removed so that the acoustic response is calculated based only on the sound produced by the audience.

Next, a change amount of the filtered sound data is calculated (S232).

Then, the acoustic response is calculated using Equation (3) (S234).

When the motion response, the gaze response, and the acoustic response have been calculated for each frame as described above, the audience response per frame (FEI) is calculated as their sum (S300).

Next, the operation unit performs steps S100 through S300 over each scene, groups the frame-by-frame FEIs by scene, and calculates their average value to obtain the audience response index per scene (SEI) (S400, S500).

After the scene-based audience responses (SEI) have been calculated over the entire contents, the audience response for the whole contents (PEI) is calculated through Equations (4) to (6) (S600, S700).
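Tying the steps together, the sketch below assumes the helper functions from the earlier sketches (motion_variation, gaze_variation, response, sei) and approximates S600 to S700 by a weighted sum of scene responses, collapsing Equations (4) to (6) into one step; the inputs and names are illustrative.

```python
def analyze_audience_response(frame_pairs, head_pose_windows, qs_per_frame,
                              scene_bounds, weights, cm, cg, cs, cth):
    """Steps S100-S700 end to end, using the sketches defined earlier.

    frame_pairs: consecutive (previous, current) image pairs (S100);
    head_pose_windows: per frame, a short window of (yaw, pitch) poses;
    qs_per_frame: acoustic variation per frame (S230-S232);
    scene_bounds: (start, end) frame indices of each scene.
    """
    feis = []
    for i, (prev, curr) in enumerate(frame_pairs):
        qm, _ = motion_variation(prev, curr)             # S210
        m = response(qm, cm, cth)                        # S212, Equation (1)
        qg = gaze_variation(head_pose_windows[i])        # S220
        g = response(qg, cg, cth)                        # S222, Equation (2)
        s = response(qs_per_frame[i], cs, cth)           # S234, Equation (3)
        feis.append(m + g + s)                           # S300, frame FEI
    seis = [sei(feis[a:b]) for a, b in scene_bounds]     # S400-S500, scene SEI
    # S600-S700: whole-content response, here simplified to a weighted
    # sum of scene responses rather than the full Equations (4) to (6).
    return sum(w * x for w, x in zip(weights, seis)), feis, seis
```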

FIG. 5 shows an example of the audience response per frame, per scene, and for the entire contents according to the present invention. As shown in the figure, the degree of motion of the audience can be displayed graphically on the analysis screen, and the audience responses per frame, per scene, and for the entire contents can be output.

Meanwhile, FIG. 6 shows an example of receiving various conditions (period, date, and the like) for the data stored in the storage unit and outputting statistics of the data calculated according to those conditions.

In addition, FIG. 7 shows an example in which the audience response per scene and the response for the entire contents are displayed in a graph.

It will be apparent to those skilled in the art that the present invention is not limited to the disclosed embodiments and that many modifications and variations are possible within the scope of the appended claims.

The present invention relates to an audience response analysis system and method for analyzing the concentration and reaction of audiences of image contents or performances. In analyzing the reaction of an audience, the present invention calculates the audience response based on the audience's movement, gaze changes, and sound (voice), and can thereby provide an accurate audience response analysis system and method.

100: image collecting unit  200: acoustic collecting unit
300: operation unit  310: motion response calculation unit
320: gaze response calculation unit  330: acoustic response calculation unit
400: storage unit

Claims (14)

1. An audience response analysis system comprising:
an image collecting unit for continuously acquiring images of a content viewing audience;
an acoustic collecting unit for continuously acquiring sound of a content viewing space;
an operation unit for calculating an audience response from the image data acquired by the image collecting unit and the sound data acquired by the acoustic collecting unit; and
a storage unit for accumulating and storing, in a time-series manner, the audience response calculated by the operation unit,
wherein the operation unit includes:
a motion response calculation unit for calculating a motion response from a comparison of the continuously acquired image data;
a gaze response calculation unit for extracting the body and head of each audience member from the continuously acquired image data and calculating a gaze response from changes of head direction with respect to the body; and
an acoustic response calculation unit for calculating an acoustic response from a comparison of the continuously acquired sound data,
and wherein the audience response is calculated from the motion response, the gaze response, and the acoustic response.
2. The system according to claim 1, wherein the motion response (Mf) is calculated by the equation

Mf = (Qm - Cm)² + Cth

where Mf is the frame-by-frame motion response, Qm is the motion variation amount calculated from the comparison of the captured image data, Cm is a motion threshold, and Cth is a correction value.
3. The system according to claim 2, wherein the gaze response (Gf) is calculated by the equation

Gf = (Qg - Cg)² + Cth

where Gf is the frame-by-frame gaze response, Qg is the gaze variation amount according to changes of direction of the viewer's body and head in the captured image data, Cg is a gaze threshold, and Cth is a correction value.
4. The system according to claim 3, wherein the acoustic response (Sf) is calculated by the equation

Sf = (Qs - Cs)² + Cth

where Sf is the frame-by-frame acoustic response, Qs is the acoustic variation amount calculated from a comparison of the collected sound data, Cs is an acoustic threshold, and Cth is a correction value.
5. The system according to claim 4, wherein the acoustic variation amount (Qs) is calculated from sound data from which the content sound has been removed.
6. The system according to claim 5, wherein the operation unit calculates a motion response, a gaze response, and an acoustic response for each scene by calculating the motion responses, gaze responses, and acoustic responses of the frames included in each scene.
7. The system according to claim 6, wherein, for the entire contents, the operation unit calculates a motion response by the equation

Mp = Σi wi·[(Qm,i - Cm)² + Cth],

a gaze response by the equation

Gp = Σi wi·[(Qg,i - Cg)² + Cth],

and an acoustic response by the equation

Sp = Σi wi·[(Qs,i - Cs)² + Cth],

where Mp is the motion response for the entire contents, Gp is the gaze response for the entire contents, Sp is the acoustic response for the entire contents, wi is the weight for each scene, and, for each scene i, Qm,i is the motion variation amount, Cm is the motion threshold, Qg,i is the gaze variation amount, Cg is the gaze threshold, Qs,i is the acoustic variation amount, Cs is the acoustic threshold, and Cth is a correction value.
8. A method of analyzing the reaction of an audience viewing contents, comprising the steps of:
(A) collecting image data and sound data of the viewing space during content viewing through an image collecting unit and an acoustic collecting unit;
(B) calculating a motion response from the image data;
(C) calculating a gaze response from the image data;
(D) calculating an acoustic response from the sound data; and
(E) calculating an audience response (FEI) per frame by summing the motion response, the gaze response, and the acoustic response.
9. The method of claim 8, wherein the step (B) includes:
(B1) calculating a motion variation amount of the image data; and
(B2) calculating a frame-by-frame motion response from the motion variation amount,
wherein the motion response (Mf) is calculated by the equation

Mf = (Qm - Cm)² + Cth

where Mf is the frame-by-frame motion response, Qm is the motion variation amount calculated from the comparison of the captured image data, Cm is a motion threshold, and Cth is a correction value.
10. The method of claim 8, wherein the step (C) includes:
(C1) calculating a gaze variation amount; and
(C2) calculating a frame-by-frame gaze response from the gaze variation amount,
wherein the gaze response (Gf) is calculated by the equation

Gf = (Qg - Cg)² + Cth

where Gf is the frame-by-frame gaze response, Qg is the gaze variation amount according to changes of direction of the viewer's body and head in the captured image data, Cg is a gaze threshold, and Cth is a correction value.
11. The method of claim 10, wherein the gaze variation amount (Qg) is calculated by dividing the viewing direction of the viewer into six directions (left, right, and front, each upward or downward) and counting the number of changes between the divided directions.
12. The method of claim 8, wherein the step (D) includes:
(D1) filtering out the sound sources included in the content presentation from the collected sound data;
(D2) calculating a variation amount of the sound data; and
(D3) calculating an acoustic response from the variation amount of the sound data,
wherein the acoustic response (Sf) is calculated by the equation

Sf = (Qs - Cs)² + Cth

where Sf is the frame-by-frame acoustic response, Qs is the acoustic variation amount calculated from a comparison of the collected sound data, Cs is an acoustic threshold, and Cth is a correction value.
13. The method according to any one of claims 8 to 12, further comprising:
(F) calculating an audience response per scene (SEI) by calculating, for each scene unit, the average value of the audience responses (FEI) per frame.
14. The method of claim 13, further comprising:
(G) calculating an audience response (PEI) for the entire contents through the scene-specific audience response (SEI) calculated in the step (F),
wherein, for the entire contents, a motion response is calculated by the equation

Mp = Σi wi·[(Qm,i - Cm)² + Cth],

a gaze response by the equation

Gp = Σi wi·[(Qg,i - Cg)² + Cth],

and an acoustic response by the equation

Sp = Σi wi·[(Qs,i - Cs)² + Cth],

and the PEI is calculated from the sum of the motion response, the gaze response, and the acoustic response,
where Mp is the motion response for the entire contents, Gp is the gaze response for the entire contents, Sp is the acoustic response for the entire contents, wi is the weight for each scene, and, for each scene i, Qm,i is the motion variation amount, Cm is the motion threshold, Qg,i is the gaze variation amount, Cg is the gaze threshold, Qs,i is the acoustic variation amount, Cs is the acoustic threshold, and Cth is a correction value.
KR1020140064989A 2014-05-29 2014-05-29 analysis system and method for response of audience KR101591402B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020140064989A KR101591402B1 (en) 2014-05-29 2014-05-29 analysis system and method for response of audience
PCT/KR2014/012516 WO2015182841A1 (en) 2014-05-29 2014-12-18 System and method for analyzing audience reaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140064989A KR101591402B1 (en) 2014-05-29 2014-05-29 analysis system and method for response of audience

Publications (2)

Publication Number Publication Date
KR20150137320A (en) 2015-12-09
KR101591402B1 KR101591402B1 (en) 2016-02-03

Family

ID=54699143

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140064989A KR101591402B1 (en) 2014-05-29 2014-05-29 analysis system and method for response of audience

Country Status (2)

Country Link
KR (1) KR101591402B1 (en)
WO (1) WO2015182841A1 (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010026871A (en) * 2008-07-22 2010-02-04 Nikon Corp Information processor and information processing system
JP5609160B2 (en) * 2010-02-26 2014-10-22 ソニー株式会社 Information processing system, content composition apparatus and method, and recording medium
JP2013016903A (en) * 2011-06-30 2013-01-24 Toshiba Corp Information processor and information processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009187441A (en) * 2008-02-08 2009-08-20 Toyohashi Univ Of Technology Moving image recommendation system based on visual line track information
KR20090121016A (en) * 2008-05-21 2009-11-25 박영민 Viewer response measurement method and system
KR101337833B1 (en) 2012-09-28 2013-12-06 경희대학교 산학협력단 Method for estimating response of audience concerning content
KR20140042504A (en) 2012-09-28 2014-04-07 경희대학교 산학협력단 Method for estimating response of audience group concerning content
KR20140042505A (en) 2012-09-28 2014-04-07 경희대학교 산학협력단 Method for estimating attention level of audience group concerning content

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101668387B1 (en) 2016-05-16 2016-10-21 정문영 Apparatus for analyzing behavior of movie viewers and method thereof
KR102179426B1 (en) * 2019-06-27 2020-11-16 김재신 System for Operating World Art Olympic
CN112150313A (en) * 2019-06-27 2020-12-29 金在信 Relay broadcasting operation system for world artistic energy olympic competition
KR102184396B1 (en) * 2019-10-14 2020-11-30 김재신 System for operating World Art Olympic and method thereof

Also Published As

Publication number Publication date
WO2015182841A1 (en) 2015-12-03
KR101591402B1 (en) 2016-02-03

Similar Documents

Publication Publication Date Title
US8636361B2 (en) Learning-based visual attention prediction system and method thereof
US20190236779A1 (en) Diagnostic imaging assistance apparatus and system, and diagnostic imaging assistance method
JP2016146547A5 (en)
US10600189B1 (en) Optical flow techniques for event cameras
CN110837750B (en) Face quality evaluation method and device
CN107767358B (en) Method and device for determining ambiguity of object in image
JP2011130204A (en) Video information processing method, and video information processing apparatus
US20200394418A1 (en) Image processing method, an image processing apparatus, and a surveillance system
KR101591402B1 (en) analysis system and method for response of audience
CN112492297B (en) Video processing method and related equipment
CN110826522A (en) Method and system for monitoring abnormal human behavior, storage medium and monitoring equipment
CN113691721B (en) Method, device, computer equipment and medium for synthesizing time-lapse photographic video
CN110874572B (en) Information detection method and device and storage medium
KR101220223B1 (en) Method and apparatus for visual discomfort metric of stereoscopic video, recordable medium which program for executing method is recorded
CN111448589B (en) Device, system and method for detecting body movement of a patient
JP7210890B2 (en) Behavior recognition device, behavior recognition method, its program, and computer-readable recording medium recording the program
US10904429B2 (en) Image sensor
WO2012054048A1 (en) Apparatus and method for evaluating an object
TWI478099B (en) Learning-based visual attention prediction system and mathod thereof
CN115209121B (en) Full-range simulation system and method with intelligent integration function
US10755088B2 (en) Augmented reality predictions using machine learning
CN110555394A (en) Fall risk assessment method based on human body shape characteristics
CN115048954A (en) Retina-imitating target detection method and device, storage medium and terminal
CN115116136A (en) Abnormal behavior detection method, device and medium
CN115376041A (en) Three-dimensional panoramic video motion sickness degree prediction method based on content perception

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
LAPS Lapse due to unpaid annual fee