KR20150137320A - analysis system and method for response of audience - Google Patents
analysis system and method for response of audience
- Publication number
- KR20150137320A (Application KR1020140064989A)
- Authority
- KR
- South Korea
- Prior art keywords
- response
- motion
- audience
- acoustic
- calculating
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
Description
The present invention relates to an audience response analysis system and method for analyzing the concentration and reaction of the audience of visual contents or performances, and is an invention relating to results produced under the development project 'Smart type interactive playground' (2012.06.01 ~ 2015.05.31).
Recently, it has become very important for producers of visual contents and performance contents (hereinafter referred to as "contents") to grasp the audience's reaction to the contents produced and displayed (performed). This audience reaction is used as a publicity strategy for the contents, a direction for profit generation, and reference material for future content production.
Conventionally, in order to grasp the audience's reaction to contents, the satisfaction, immersion, and interest of the audience have been surveyed through questionnaires after viewing.
However, such a questionnaire method has limitations in grasping the objective reaction of the audience, and as time passes after viewing, the remembered response becomes distorted and an accurate reaction cannot be grasped.
In order to solve this problem, Korean Patent Registration No. 10-1337833 discloses a method that captures images of a viewer in real time during content viewing, extracts the face portion from the captured images, extracts a brightness histogram for the difference image, calculates the amount of face motion from the brightness change, and calculates the viewer's degree of immersion from that motion amount.
Although this prior art proposed a theoretical direction for audience response analysis methods that calculate the reaction of the audience from captured images, it has the following problems in actual application.
In other words, since the prior art calculates the viewer's motion from pixel brightness changes in the captured images, brightness changes of pixels caused by changes in screen brightness in the dark screening space are recognized as viewer movement alongside those caused by actual viewer motion, so an accurate audience response cannot be obtained.
In addition, because the prior art extracts only the facial part of the viewer and captures its motion, it is advantageous for grasping the reaction of each individual audience member but is not suitable for grasping the response of the whole audience. Concretely, the reaction and immersion of a viewer are expressed not only by the movement of the face but also by the movement of the whole body, such as the hands and feet, so the accuracy of the audience response obtained by the prior art deteriorates.
Furthermore, the prior art derives the viewer's reaction only from movement. However, the reaction of an actual viewer is expressed not only by movement but also by changes of gaze and by vocal expressions (laughter, screams, exclamations, etc.), so an accurate audience response cannot be derived from movement alone.
SUMMARY OF THE INVENTION The present invention has been made to solve the above-mentioned problems of the related art, and it is an object of the present invention to provide an audience response analysis system and method that calculate the audience response by comprehensively reflecting the changes of the audience's movement, gaze, and sound.
According to an aspect of the present invention, there is provided an audience response analysis system including: an image collecting unit for continuously acquiring images of the audience of a content; an acoustic collecting unit for continuously acquiring the sound of the content viewing space; an operation unit for calculating the audience response from the image data acquired by the image collecting unit and the sound data acquired by the acoustic collecting unit; and a storage unit for accumulating and storing the audience response calculated by the operation unit in a time-series manner, wherein the operation unit comprises: a motion response calculation unit for calculating a motion response from the comparison of the continuously acquired image data; a gaze response calculation unit for extracting the body and head of the audience from the successively acquired image data and calculating a gaze response from the change of head direction with respect to the body; and an acoustic response calculation unit for calculating an acoustic response from the comparison of the continuously acquired sound data. The audience response is calculated from the motion response, the gaze response, and the acoustic response.
At this time, the motion response may be calculated by the following Equation 1:

Mf = (Qm - C)^2 + Cth

where Mf is the frame-by-frame motion response, Qm is the amount of motion variation calculated from the comparison of the captured image data, Cth is a motion threshold, and C may be a correction value.

The gaze response may be calculated by the following Equation 2:

Gf = (Qg - C)^2 + Cth

where Gf is the frame-by-frame gaze response, Qg is the amount of gaze change according to the change of direction of the viewer's body and head in the captured image data, Cth is a gaze threshold, and C may be a correction value.

Also, the acoustic response may be calculated by the following Equation 3:

Af = (Qa - C)^2 + Cth

where Af is the frame-by-frame acoustic response, Qa is the amount of sound change calculated from a comparison of the collected sound data, Cth is an acoustic threshold, and C may be a correction value.

The sound variation Qa may be calculated from sound data from which the content sound has been removed. Also, the operation unit may calculate a motion response, gaze response, and acoustic response for each scene from the corresponding responses of the frames included in that scene.

For the entire content, the operation unit may calculate a motion response by Equation 4, a gaze response by Equation 5, and an acoustic response by Equation 6:

Mp = Σ_i w_i [ (Qm,i - C)^2 + Cth ]    (Equation 4)
Gp = Σ_i w_i [ (Qg,i - C)^2 + Cth ]    (Equation 5)
Ap = Σ_i w_i [ (Qa,i - C)^2 + Cth ]    (Equation 6)

where Mp is the motion response for the entire content, Gp is the gaze response for the entire content, Ap is the acoustic response for the entire content, w_i is a weight for each scene i, and, for each scene, Qm,i is the motion variation, Qg,i the gaze variation, Qa,i the sound variation, Cth the respective threshold, and C a correction value.

According to another aspect of the present invention, there is provided a method of analyzing the reaction of an audience viewing a content, comprising the steps of: (A) collecting image data and sound data of the viewing space during content viewing through an image collecting unit and an acoustic collecting unit; (B) calculating a motion response from the image data; (C) calculating a gaze response from the image data; (D) calculating an acoustic response from the sound data; and (E) calculating a per-frame audience response (FEI) by summing the motion response, the gaze response, and the acoustic response.
In this case, the step (B) may include: (B1) calculating the motion variation amount of the image data; and (B2) calculating a frame-by-frame motion response from the motion variation amount through Equation 1:

Mf = (Qm - C)^2 + Cth

where Mf is the frame-by-frame motion response, Qm is the amount of motion variation calculated from the comparison of the captured image data, Cth is a motion threshold, and C may be a correction value.

The step (C) may include: (C1) calculating a gaze variation amount; and (C2) calculating a frame-by-frame gaze response from the gaze variation amount through Equation 2:

Gf = (Qg - C)^2 + Cth

where Gf is the frame-by-frame gaze response, Qg is the amount of gaze change according to the change of direction of the viewer's body and head in the captured image data, Cth is a gaze threshold, and C may be a correction value.

The gaze variation amount (Qg) may be calculated by dividing the viewing direction of the viewer into six directions (left/right/front and up/down) and counting the number of changes between the divided directions.

The step (D) may include: (D1) filtering out the sound sources included in the content presentation from the collected sound data; (D2) calculating the variation of the filtered sound data; and (D3) calculating an acoustic response from that variation through Equation 3:

Af = (Qa - C)^2 + Cth

where Af is the frame-by-frame acoustic response, Qa is the amount of sound change calculated from a comparison of the collected sound data, Cth is an acoustic threshold, and C may be a correction value.

Further, the method may include (F) calculating a scene-specific audience response (SEI) by averaging the per-frame audience response (FEI) over the frames of each scene, and (G) calculating an audience response (PEI) for the entire content through the scene-specific audience responses (SEI) calculated in step (F). In step (G), for the entire content, a motion response is calculated by Equation 4, a gaze response by Equation 5, and an acoustic response by Equation 6, and the PEI is calculated from the sum of the motion response, gaze response, and acoustic response for the entire content:

Mp = Σ_i w_i [ (Qm,i - C)^2 + Cth ]    (Equation 4)
Gp = Σ_i w_i [ (Qg,i - C)^2 + Cth ]    (Equation 5)
Ap = Σ_i w_i [ (Qa,i - C)^2 + Cth ]    (Equation 6)

where Mp is the motion response for the entire content, Gp is the gaze response for the entire content, Ap is the acoustic response for the entire content, w_i is a weight for each scene, and, for each scene, Qm,i is the motion variation, Qg,i the gaze variation, Qa,i the sound variation, Cth the respective threshold, and C a correction value.

The following effects can be expected from the audience response analysis system and method according to the present invention as discussed above.
That is, in analyzing the reaction of the audience, the present invention has the advantage of calculating the audience response by comprehensively reflecting the movement of the audience, the change of the audience's gaze, and the change of the sound (voice) generated by the audience.
The present invention also has the advantage of providing an audience response analysis system and method capable of calculating the audience response in units of frames, scenes, and the entire content by cumulatively calculating the degree of change in audience response.
In addition, the audience response analysis according to the present invention graphically displays the audience response to the user and can distinguish the static immersion state from the dynamic immersion state, thereby grasping a more accurate audience response.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing an audience response analysis method according to the prior art.
2 is a block diagram showing a configuration of a specific embodiment of an audience reaction analysis system according to the present invention.
FIG. 3 is a graph showing a formula applied to the audience reaction analysis according to the present invention.
FIG. 4 is a flowchart showing a specific embodiment of an audience response analysis method according to the present invention.
5 is an exemplary view showing an audience response analysis example according to the present invention.
FIG. 6 is an exemplary diagram showing another example of audience response analysis according to the present invention.
7 is an exemplary diagram showing another example of audience response analysis according to the present invention.
Hereinafter, an audience response analysis system and method according to a specific embodiment of the present invention will be described with reference to the accompanying drawings.
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the invention is defined only by the scope of the claims.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS In the following description of the present invention, detailed descriptions of known functions and configurations incorporated herein will be omitted when they may obscure the subject matter of the present invention. The terms used herein are defined in consideration of their function in the present invention and may vary depending on the intention or custom of the user or operator; their definitions should therefore be based on the contents of this entire specification.
Each block of the accompanying block diagrams and each combination of steps of the flowcharts may be performed by computer program instructions (an execution engine). These instructions may be loaded onto the processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, so that the instructions executed through the processor create means for performing the functions described in each block of the block diagrams or each step of the flowcharts.
These computer program instructions may also be stored in a computer-usable or computer-readable memory capable of directing a computer or other programmable data processing apparatus to implement functionality in a particular manner, so that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture containing instruction means for performing the functions described in each block of the block diagrams or each step of the flowcharts.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps is performed on the computer or other programmable apparatus to produce a computer-executed process; the instructions that run on the computer or other programmable apparatus may thus provide steps for executing the functions described in each block of the block diagrams and each step of the flowcharts.
Also, each block or step may represent a module, segment, or portion of code that includes one or more executable instructions for executing the specified logical functions, and in some alternative embodiments the functions noted in the blocks or steps may occur out of order.
That is, two blocks or steps shown in succession may in fact be executed substantially concurrently, or the blocks or steps may sometimes be performed in the reverse order, depending on the functionality involved.
FIG. 2 is a block diagram showing a configuration of a specific embodiment of an audience response analysis system according to the present invention, and FIG. 3 is a graph showing a formula applied to an audience response analysis according to the present invention.
As shown in FIG. 2, the audience response analysis system according to the present invention includes an image collecting unit 100, an acoustic collecting unit 200, an operation unit 300, and a storage unit 400.
The image collecting unit 100 continuously acquires images of the audience viewing the content.
The acoustic collecting unit 200 continuously acquires the sound of the content viewing space.
The operation unit 300 calculates the audience response from the image data acquired by the image collecting unit and the sound data acquired by the acoustic collecting unit, and to this end includes a motion response calculation unit 310, a gaze response calculation unit 320, and an acoustic response calculation unit 330; the storage unit 400 accumulates the calculated audience response in a time-series manner.
The motion response calculation unit 310 calculates a motion response from the comparison of the continuously acquired image data.
Specifically, the motion response is calculated from the following Equation 1:

Mf = (Qm - C)^2 + Cth

where Mf is the frame-by-frame motion response, Qm is the amount of motion variation calculated from the comparison of the captured image data, Cth is a motion threshold, and C is a correction value.
The motion variation amount is calculated by comparing the image captured in the previous frame with the image of the current frame to determine the degree of change of the image; the degree of change of the objects in the captured image is determined without depending on changes in pixel brightness.
The technique of comparing the captured images to determine the degree of change (movement) of the object has been widely commercialized and widely applied in the fields of surveillance cameras and the like, and is not described in detail herein.
In addition, the amount of motion change can be expressed numerically by various methods as long as a unified criterion is applied. For example, it may be expressed as the percentage of the entire image that changed, or as the number of objects (audience members) in which motion is detected.
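As an illustration of the first of these two conventions, the motion variation can be expressed as the fraction of the image that changed between consecutive frames. The sketch below is a deliberately naive pixel-difference proxy (the patent's system compares detected objects rather than raw pixel values, so function names and the `diff_threshold` parameter are illustrative assumptions, not the patent's method):

```python
def motion_variation(prev_frame, curr_frame, diff_threshold=30):
    """Return the fraction (0..1) of pixels that changed by more than
    diff_threshold between two equally sized grayscale frames.

    Naive stand-in for the patent's motion variation Qm; the actual
    system determines object-level change, not raw pixel change.
    """
    if len(prev_frame) != len(curr_frame):
        raise ValueError("frames must have the same size")
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame) if abs(p - c) > diff_threshold
    )
    return changed / len(prev_frame)
```

Multiplying the returned fraction by 100 gives the "% format" mentioned above.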
The motion threshold Cth is a set value corresponding to the motion variation observed when the audience's degree of immersion is lowest.
The gaze response calculation unit 320 extracts the body and head of the audience from the successively acquired image data and calculates a gaze response from the change of head direction with respect to the body.
Specifically, the gaze response is calculated from the following Equation 2:

Gf = (Qg - C)^2 + Cth

where Gf is the frame-by-frame gaze response, Qg is the amount of gaze change according to the change of direction of the viewer's body and head in the captured image data, Cth is a gaze threshold, and C is a correction value.
In consideration of the efficiency of data processing and the reliability of the measured data, the viewing direction may be divided into a specific number of areas, and the gaze variation calculated on the basis of the number of times the gaze moves between the divided areas.
The gaze threshold is likewise a set value indicating the degree of change of the gaze direction when a normal audience has the lowest degree of immersion.
Meanwhile, the acoustic response calculation unit 330 calculates an acoustic response from the comparison of the continuously collected sound data, using Equation 3 in the same form as above.
At this time, the sound variation represents the amount of change of the sound at the time corresponding to each frame; if the sound collected by the acoustic collecting unit is compared directly, the result is affected by the effect sounds included in the content.
Accordingly, in order to reflect only the sound changes produced by the audience, it is preferable to calculate based on sound data from which the sound of the content screening (performance) has been removed.
In this case, the technology for removing a specific sound source from sound data is already commercialized and applied in the field of separating accompaniment and vocals from a sound source file, and a detailed description thereof is omitted here.
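To make the idea concrete, the simplest possible removal scheme subtracts a time-aligned copy of the content soundtrack from the recorded room audio. This sketch is only illustrative (the function name and `gain` parameter are assumptions; the commercialized techniques the text refers to use proper source separation, not sample-wise subtraction):

```python
def remove_content_sound(recorded, content, gain=1.0):
    """Subtract a time-aligned content soundtrack from the recorded room
    audio, leaving (approximately) only the audience-generated sound.

    Naive sample-wise sketch; real systems use source-separation methods.
    """
    n = min(len(recorded), len(content))  # compare only the overlapping part
    return [recorded[i] - gain * content[i] for i in range(n)]
```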
The threshold value is as described above.
On the other hand, in the calculation of each response, the variation Q, the threshold Cth, and the correction value C are denoted by the same variables, but they are set to different values by the operator depending on the characteristics of each response. The formula commonly applied to each response of the present invention, Y = (Q - C)^2 + Cth, is plotted as the graph shown in FIG. 3.
As the absolute value of Q - C increases, the response is calculated to be high. When Q - C grows in the negative direction, it represents a static immersion state (when the audience concentrates seriously on the content); when Q - C becomes positive, it represents a dynamic immersion state (when the reaction to the content increases through joy, fear, and the like).
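The common formula and the sign-based immersion classification above can be sketched directly (the function names and the "neutral" label for Q = C are assumptions for illustration):

```python
def response(q, c, c_th):
    """Per-frame response Y = (Q - C)^2 + Cth, the patent's common formula."""
    return (q - c) ** 2 + c_th

def immersion_state(q, c):
    """Classify immersion by the sign of Q - C: negative means static
    immersion (quiet concentration), positive means dynamic immersion
    (heightened reaction such as laughter or fear)."""
    d = q - c
    if d < 0:
        return "static"
    if d > 0:
        return "dynamic"
    return "neutral"
```

Note that `response` grows with |Q - C| on both sides, matching the parabola of FIG. 3, while `immersion_state` recovers which side of the minimum the frame fell on.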
On the other hand, the operation unit calculates the audience response from the motion response, the gaze response, and the acoustic response calculated as described above.
The audience response is basically calculated from the sum of the motion response, the gaze response, and the acoustic response, and can be adjusted by adding or subtracting a correction value as necessary.
Meanwhile, the audience response may be calculated per frame (FEI), per scene (SEI), and for the entire content (PEI).
In this case, the per-frame audience response (FEI) is calculated from the sum of the motion response, the gaze response, and the acoustic response as described above, and the scene-specific audience response (SEI) is calculated from the average of the FEI values of the frames included in the scene.
The audience response for the entire content (PEI) is calculated from the scene-level responses. Specifically, the motion response, gaze response, and acoustic response for the entire content are calculated by the following Equations 4, 5, and 6, and the PEI is calculated from their sum. Of course, a weight may also be applied to each of the responses as necessary:

Mp = Σ_i w_i [ (Qm,i - C)^2 + Cth ]    (Equation 4)
Gp = Σ_i w_i [ (Qg,i - C)^2 + Cth ]    (Equation 5)
Ap = Σ_i w_i [ (Qa,i - C)^2 + Cth ]    (Equation 6)
In the above Equations 4 to 6, Mp is the motion response for the entire content, Gp is the gaze response for the entire content, Ap is the acoustic response for the entire content, w_i is a weight for each scene, and, for each scene, Qm,i is the motion variation, Qg,i the gaze variation, Qa,i the sound variation, Cth the respective threshold, and C a correction value.
The storage unit accumulates the motion response, gaze response, acoustic response, and the audience response calculated from them for each frame and scene computed by the operation unit.
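The three aggregation levels (frame, scene, entire content) can be sketched as follows. This is a simplified stand-in under stated assumptions: function names are illustrative, and `content_pei` collapses the separate weighted sums of Equations 4 to 6 into one weighted sum over per-scene totals rather than computing Mp, Gp, and Ap individually:

```python
def frame_fei(mf, gf, af):
    # FEI: per-frame audience response = sum of the three frame responses.
    return mf + gf + af

def scene_sei(frame_feis):
    # SEI: average of the per-frame FEI values within one scene.
    return sum(frame_feis) / len(frame_feis)

def content_pei(scene_scores, weights=None):
    # PEI sketch: weighted sum over scenes (weights w_i default to 1.0),
    # mirroring the per-scene weighting of Equations 4 to 6.
    if weights is None:
        weights = [1.0] * len(scene_scores)
    return sum(w * s for w, s in zip(weights, scene_scores))
```

For example, a content with two scenes scoring 1 and 2, each weighted 0.5, yields a PEI of 1.5.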
From the stored data, various analysis results of audience response to the contents can be derived.
Hereinafter, the method for analyzing the audience response according to the present invention will be described in detail with reference to the accompanying drawings.
FIG. 4 is a flowchart illustrating the audience response analysis method according to the present invention, and FIGS. 5 to 7 are exemplary diagrams showing audience response analysis examples according to the present invention.
As shown in FIG. 4, the audience response analysis method according to the present invention begins with collecting image data and sound data at the start of the viewing of the contents (S100).
The collected image data and sound data are used to calculate the motion response, the gaze response, and the acoustic response, respectively, on a frame-by-frame basis.
The motion response, the gaze response, and the acoustic response may be calculated simultaneously by separate processes, sequentially, or concurrently with the content presentation (performance), as needed.
First, in order to calculate the motion response, the operation unit calculates a motion variation amount of the image data (S210).
Next, a motion response per frame is calculated through the amount of motion change (S212).
At this time, the motion response is calculated by Equation 1, as described above.
Then, in order to calculate the gaze response, the operation unit calculates the gaze variation amount (S220). The gaze variation is preferably measured based on the degree of change between regions obtained by dividing the viewing direction of the audience into a specific number of regions; in the present invention, the viewing direction is divided into six directions (left/right/front and up/down), and the variation is calculated based on the number of changes between the divided directions.
Next, a gaze response for each frame is calculated from the gaze variation amount (S222). At this time, the gaze response is calculated from Equation 2, as described above.
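Counting changes between quantized gaze directions can be sketched as below. The sector labels are an assumption (the source names left/right/front and up/down as part of its six directions but the full set of labels is not spelled out), so treat them as placeholders:

```python
# Hypothetical labels for the six quantized gaze directions.
SECTORS = ("left", "right", "front", "up", "down", "back")

def gaze_variation(directions):
    """Count transitions between quantized gaze directions observed over
    a window of frames, per the six-direction scheme described above."""
    for d in directions:
        if d not in SECTORS:
            raise ValueError("unknown direction: " + d)
    # A change is counted whenever two consecutive samples differ.
    return sum(1 for a, b in zip(directions, directions[1:]) if a != b)
```

The count returned here plays the role of the gaze variation Qg fed into Equation 2.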
Meanwhile, in order to calculate the acoustic response, the operation unit filters out the sound sources included in the content presentation from the collected sound data (S230). As described above, the content sound source is removed so that the acoustic response reflects only the sound generated by the audience.
Next, a change amount of the filtered sound data is calculated (S232).
Then, an acoustic response is calculated using Equation 3 (S234).
When the motion response, gaze response, and acoustic response have been calculated for each frame as described above, the per-frame audience response (FEI) is calculated from their sum (S300).
Next, the operation unit calculates the scene-specific audience response (SEI) by averaging the per-frame audience response (FEI) over the frames included in each scene.
After the scene-specific audience response (SEI) has been calculated over the entire content, the audience response (PEI) for the entire content is calculated through Equations 4 to 6 (S600, S700).
FIG. 5 shows an example of the audience response per frame, per scene, and for the entire content according to the present invention. As shown in the figure, the degree of motion of the audience can be displayed graphically on the analysis screen, and the per-frame, per-scene, and whole-content audience responses can be output.
Meanwhile, FIG. 6 shows an example of receiving various conditions (period, date, etc.) from the data stored in the storage unit and outputting statistical data of the data calculated according to the conditions.
In addition, FIG. 7 shows an example in which the audience response level for each scene and the reactivity level for the entire content are displayed in a graph.
It is self-evident to those skilled in the art that the invention is not limited to the disclosed embodiments and that many modifications and variations are possible within the scope of the appended claims.
The present invention relates to an audience response analysis system and method for analyzing the concentration and reaction of the audience of visual contents or performances. In analyzing the reaction of an audience, the present invention comprehensively reflects the movement of the audience, the change of the audience's gaze, and the sound (voice) generated by the audience, and can therefore provide an accurate audience response analysis.
100: image collecting unit 200: acoustic collecting unit
300: operation unit 310: motion response calculation unit
320: gaze response calculation unit 330: acoustic response calculation unit
400: storage unit
Claims (14)
An image collecting unit for continuously acquiring an image of a viewer of a content;
An acoustic collector for continuously acquiring sound of a content viewing space;
An operation unit for calculating a viewer response from the image data acquired by the image acquisition unit and the sound data acquired by the sound acquisition unit; And
And a storage unit for accumulating and storing the audience response calculated in the operation unit in a time-series manner,
The operation unit,
A motion response calculator for calculating a motion response from the comparison of the continuously acquired image data;
An eye gaze response calculation unit for extracting a body and a head of an audience from the image data successively acquired and calculating a gaze response from a change in head direction with respect to the body; And
And an acoustic response calculation unit for calculating an acoustic response based on the comparison of the continuously acquired sound data, wherein the operation unit calculates the audience response from the motion response, the gaze response, and the acoustic response.
The motion response (Mf) is calculated by the following equation:

Mf = (Qm - C)^2 + Cth

where Mf is the frame-by-frame motion response, Qm is the amount of motion variation calculated from the comparison of the captured image data, Cth is a motion threshold, and C is a correction value.
The gaze response (Gf) is calculated by the following equation:

Gf = (Qg - C)^2 + Cth

where Gf is the frame-by-frame gaze response, Qg is the amount of gaze change according to the change of direction of the viewer's body and head in the captured image data, Cth is a gaze threshold, and C is a correction value.
The acoustic response (Af) is calculated by the following equation:

Af = (Qa - C)^2 + Cth

where Af is the frame-by-frame acoustic response, Qa is the amount of sound change calculated from a comparison of the collected sound data, Cth is an acoustic threshold, and C is a correction value.
The sound variation (Qa) is calculated from sound data from which the content sound has been removed.
The operation unit,
Calculates the motion response, gaze response, and acoustic response of the frames included in each scene to calculate a motion response, gaze response, and acoustic response for each scene.
Wherein the operation unit, for the entire content, calculates a motion response, a gaze response, and an acoustic response by the following equations, and calculates the audience response from their sum:

Mp = Σ_i w_i [ (Qm,i - C)^2 + Cth ]
Gp = Σ_i w_i [ (Qg,i - C)^2 + Cth ]
Ap = Σ_i w_i [ (Qa,i - C)^2 + Cth ]

where Mp is the motion response for the entire content, Gp is the gaze response for the entire content, Ap is the acoustic response for the entire content, w_i is a weight for each scene, and, for each scene, Qm,i is the motion variation, Qg,i the gaze variation, Qa,i the sound variation, Cth the respective threshold, and C a correction value.
(A) collecting image data and sound data of a viewing space during a content viewing through an image collecting unit and an acoustic collecting unit;
(B) calculating a motion response from the image data;
(C) calculating a gaze response from the image data;
(D) calculating an acoustic response from the acoustic data; And
(E) calculating a per-frame audience response (FEI) by summing the motion response, the gaze response, and the acoustic response.
The step (B)
(B1) calculating a motion variation amount of the image data;
(B2) calculating a frame-by-frame motion response through the amount of motion change;
The motion response (Mf) is calculated by the following equation:

Mf = (Qm - C)^2 + Cth

where Mf is the frame-by-frame motion response, Qm is the amount of motion variation calculated from the comparison of the captured image data, Cth is a motion threshold, and C is a correction value.
The step (C)
(C1) calculating a gaze variation amount;
(C2) calculating a frame-by-frame gaze response from the gaze variation amount,
The gaze response (Gf) is calculated by the following equation:

Gf = (Qg - C)^2 + Cth

where Gf is the frame-by-frame gaze response, Qg is the amount of gaze change according to the change of direction of the viewer's body and head in the captured image data, Cth is a gaze threshold, and C is a correction value.
The gaze variation amount (Qg) is calculated by dividing the viewing direction of the viewer into six directions (left/right/front and up/down) and counting the number of changes between the divided directions.
The step (D)
(D1) filtering and removing the sound source included in the content presentation from the collected sound data;
(D2) calculating a change amount of the sound data; And
(D3) calculating an acoustic response from the variation of the acoustic data;
The acoustic response (Af) is calculated by the following equation:

Af = (Qa - C)^2 + Cth

where Af is the frame-by-frame acoustic response, Qa is the amount of sound change calculated from a comparison of the collected sound data, Cth is an acoustic threshold, and C is a correction value.
(F) calculating a per-scene audience response (SEI) by averaging the per-frame audience response (FEI) over each scene unit; and
(G) calculating an audience response (PEI) for the entire content from the per-scene audience responses (SEI) calculated in step (F),
wherein the PEI is calculated for the entire content by:
an equation calculating a motion response for the entire content;
an equation calculating a gaze response for the entire content;
an equation calculating an acoustic response for the entire content; and
summing the motion response, the gaze response, and the acoustic response for the entire content,
where a weight is assigned to each scene, and each equation uses that scene's variation amount (motion, gaze, or acoustic), the corresponding threshold, and a correction value.
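The frame-to-scene-to-content aggregation of steps (E)–(G) can be sketched as follows. The sum (E) and per-scene average (F) follow the claim text directly; the weighted combination in `content_pei` is an assumed reading of the per-scene weights named in step (G), and all names are illustrative:

```python
def frame_fei(motion_r, gaze_r, acoustic_r):
    """(E) Per-frame audience response: sum of the three responses."""
    return motion_r + gaze_r + acoustic_r


def scene_sei(frame_feis):
    """(F) Per-scene audience response: average of per-frame FEI."""
    return sum(frame_feis) / len(frame_feis)


def content_pei(scene_seis, weights=None):
    """(G) Whole-content audience response: weighted combination of
    per-scene SEI. Equal weights are assumed when none are given."""
    if weights is None:
        weights = [1.0] * len(scene_seis)
    return sum(w * s for w, s in zip(weights, scene_seis)) / sum(weights)
```

For example, two scenes with SEI values 3 and 1 and equal weights give a PEI of 2.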
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140064989A KR101591402B1 (en) | 2014-05-29 | 2014-05-29 | analysis system and method for response of audience |
PCT/KR2014/012516 WO2015182841A1 (en) | 2014-05-29 | 2014-12-18 | System and method for analyzing audience reaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140064989A KR101591402B1 (en) | 2014-05-29 | 2014-05-29 | analysis system and method for response of audience |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20150137320A true KR20150137320A (en) | 2015-12-09 |
KR101591402B1 KR101591402B1 (en) | 2016-02-03 |
Family
ID=54699143
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140064989A KR101591402B1 (en) | 2014-05-29 | 2014-05-29 | analysis system and method for response of audience |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101591402B1 (en) |
WO (1) | WO2015182841A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009187441A (en) * | 2008-02-08 | 2009-08-20 | Toyohashi Univ Of Technology | Moving image recommendation system based on visual line track information |
KR20090121016A (en) * | 2008-05-21 | 2009-11-25 | 박영민 | Viewer response measurement method and system |
KR101337833B1 (en) | 2012-09-28 | 2013-12-06 | 경희대학교 산학협력단 | Method for estimating response of audience concerning content |
KR20140042504A (en) | 2012-09-28 | 2014-04-07 | 경희대학교 산학협력단 | Method for estimating response of audience group concerning content |
KR20140042505A (en) | 2012-09-28 | 2014-04-07 | 경희대학교 산학협력단 | Method for estimating attention level of audience group concerning content |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010026871A (en) * | 2008-07-22 | 2010-02-04 | Nikon Corp | Information processor and information processing system |
JP5609160B2 (en) * | 2010-02-26 | 2014-10-22 | ソニー株式会社 | Information processing system, content composition apparatus and method, and recording medium |
JP2013016903A (en) * | 2011-06-30 | 2013-01-24 | Toshiba Corp | Information processor and information processing method |
2014
- 2014-05-29 KR KR1020140064989A patent/KR101591402B1/en not_active IP Right Cessation
- 2014-12-18 WO PCT/KR2014/012516 patent/WO2015182841A1/en active Application Filing
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101668387B1 (en) | 2016-05-16 | 2016-10-21 | 정문영 | Apparatus for analyzing behavior of movie viewers and method thereof |
KR102179426B1 (en) * | 2019-06-27 | 2020-11-16 | 김재신 | System for Operating World Art Olympic |
CN112150313A (en) * | 2019-06-27 | 2020-12-29 | 金在信 | Relay broadcasting operation system for world artistic energy olympic competition |
KR102184396B1 (en) * | 2019-10-14 | 2020-11-30 | 김재신 | System for operating World Art Olympic and method thereof |
Also Published As
Publication number | Publication date |
---|---|
WO2015182841A1 (en) | 2015-12-03 |
KR101591402B1 (en) | 2016-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8636361B2 (en) | Learning-based visual attention prediction system and method thereof | |
US20190236779A1 (en) | Diagnostic imaging assistance apparatus and system, and diagnostic imaging assistance method | |
JP2016146547A5 (en) | ||
US10600189B1 (en) | Optical flow techniques for event cameras | |
CN110837750B (en) | Face quality evaluation method and device | |
CN107767358B (en) | Method and device for determining ambiguity of object in image | |
JP2011130204A (en) | Video information processing method, and video information processing apparatus | |
US20200394418A1 (en) | Image processing method, an image processing apparatus, and a surveillance system | |
KR101591402B1 (en) | analysis system and method for response of audience | |
CN112492297B (en) | Video processing method and related equipment | |
CN110826522A (en) | Method and system for monitoring abnormal human behavior, storage medium and monitoring equipment | |
CN113691721B (en) | Method, device, computer equipment and medium for synthesizing time-lapse photographic video | |
CN110874572B (en) | Information detection method and device and storage medium | |
KR101220223B1 (en) | Method and apparatus for visual discomfort metric of stereoscopic video, recordable medium which program for executing method is recorded | |
CN111448589B (en) | Device, system and method for detecting body movement of a patient | |
JP7210890B2 (en) | Behavior recognition device, behavior recognition method, its program, and computer-readable recording medium recording the program | |
US10904429B2 (en) | Image sensor | |
WO2012054048A1 (en) | Apparatus and method for evaluating an object | |
TWI478099B (en) | Learning-based visual attention prediction system and method thereof | |
CN115209121B (en) | Full-range simulation system and method with intelligent integration function | |
US10755088B2 (en) | Augmented reality predictions using machine learning | |
CN110555394A (en) | Fall risk assessment method based on human body shape characteristics | |
CN115048954A (en) | Retina-imitating target detection method and device, storage medium and terminal | |
CN115116136A (en) | Abnormal behavior detection method, device and medium | |
CN115376041A (en) | Three-dimensional panoramic video motion sickness degree prediction method based on content perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
LAPS | Lapse due to unpaid annual fee |