KR101838792B1 - Method and apparatus for sharing user's feeling about contents - Google Patents

Method and apparatus for sharing user's feeling about contents

Info

Publication number
KR101838792B1
KR101838792B1 (application KR1020160019897A)
Authority
KR
South Korea
Prior art keywords
emotion
image
user
information
image content
Prior art date
Application number
KR1020160019897A
Other languages
Korean (ko)
Other versions
KR20170098380A (en)
Inventor
박정민
김준성
임정연
김건오
Original Assignee
한국산업기술대학교산학협력단
김준성
임정연
김건오
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국산업기술대학교산학협력단, 김준성, 임정연, 김건오
Priority to KR1020160019897A
Publication of KR20170098380A
Application granted
Publication of KR101838792B1

Links

Images

Classifications

    • H04N5/23219
    • G06K9/00268
    • G06K9/00302
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method and apparatus for sharing a user's feelings about content are disclosed. A method for analyzing image content based on emotion recognition includes acquiring a captured image of a user viewing the image content, recognizing an emotion of the user by analyzing the image, and storing emotion information for the recognized emotion in association with the image content.

Description

METHOD AND APPARATUS FOR SHARING USER'S FEELING ABOUT CONTENTS

The following description relates to a technique for sharing a user's feelings about content; specifically, a technique for recognizing and storing a user's feelings about content and providing them to other users.

Conventional video players provide not only the content itself but also additional information related to the content. For example, specialized services such as personal broadcasting, sports relays, and caption uploading may be provided as additional information. A user's reaction to content helps in analyzing the user's preference for that content. For example, a content creator can produce better content based on users' reactions.

According to an embodiment of the present invention, a method of analyzing image content based on emotion recognition comprises: acquiring a captured image of a user viewing the image content; recognizing an emotion of the user by analyzing the image; and storing emotion information for the recognized emotion in association with the image content.

The recognizing step may include recognizing the face region of the user in the image, detecting feature points of the face in the face region, and determining the emotion of the user based on the amount of change of the feature points over time.

The storing step may store at least one of identification information of the image content, time information at which the emotion was recognized, image information of the image content output when the emotion was recognized, and type information of the recognized emotion.

According to an embodiment of the present invention, a method of reproducing image content comprises receiving emotion information for the image content from a server and reproducing the image content based on the emotion information; when the image content is reproduced, the type of emotion and the time period in which the emotion appears may be displayed.

The emotion information may be based on emotion information of another user who watched the image content.

FIG. 1 is a diagram illustrating the general configuration of a system for sharing emotion information about image content according to an exemplary embodiment of the present invention.
FIG. 2 is a flowchart illustrating an image content analysis method based on emotion recognition according to an exemplary embodiment of the present invention.
FIG. 3 is a flowchart illustrating an image content playback method according to an exemplary embodiment of the present invention.
FIG. 4A is a diagram illustrating the DirectShow interface of an image reproducing apparatus according to an embodiment.
FIG. 4B is a diagram illustrating a network interface according to an embodiment.
FIG. 5 is a diagram illustrating a system operation process according to an embodiment.
FIG. 6 is a diagram showing an example of emotion detection according to an embodiment.
FIG. 7 is a diagram illustrating operations between various kinds of image reproducing apparatuses and servers according to an embodiment.
FIG. 8A is a diagram illustrating an example of connecting to a server from a web page according to an embodiment.
FIG. 8B is a view showing an example of a user's emotion record according to an embodiment.
FIG. 8C is a diagram illustrating an example of a user's emotion statistics data according to an exemplary embodiment.

The specific structural or functional descriptions below are provided for purposes of illustration only and are not to be construed as limiting the scope of the present disclosure. Various modifications and variations may be made by those skilled in the art to which the present invention pertains. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment; such references are not necessarily all referring to the same embodiment.

Although the terms first or second may be used to distinguish various components, the components should not be construed as being limited by these terms. The terminology used in the description is by way of example only and is not intended to be limiting. Singular expressions include plural expressions unless the context clearly dictates otherwise.

In this specification, the terms "comprises" or "having" and the like indicate the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals regardless of the figure, and duplicate descriptions thereof are omitted.

FIG. 1 is a diagram illustrating the general configuration of a system for sharing emotion information about image content according to an exemplary embodiment of the present invention.

The system for sharing emotion information about image content according to an exemplary embodiment collects audience information, derives emotion information, stores the emotion information, and reproduces the image content based on the derived emotion information. Here, the image content is content that includes image information, such as a movie, a broadcast, an advertisement, or a game. The system operates through communication between the server 110 and a client, where the client may include the audience information collecting apparatus 120 and the image reproducing apparatus 130. The emotion information may be generated by the audience information collecting apparatus 120, or generated by the server 110 using audience information received from the audience information collecting apparatus 120. The emotion information describes the emotions felt by a user viewing the image content; such emotions may include, for example, joy, amusement, sadness, and surprise.

When the emotion information is generated by the audience information collecting apparatus 120, the apparatus recognizes the emotion of the user watching specific image content while that content is being reproduced, and collects the user's emotion information. For example, the audience information collecting apparatus 120 acquires an image of the user through a camera, detects feature points of the face in the acquired image, and then recognizes the user's emotion based on the amount of change in the feature point positions over time. The collected emotion information may be transmitted to the server 110 and stored in association with the image content. Thereafter, when the same image content is reproduced by another user through the image reproducing apparatus 130, the server 110 may transmit the image content and the associated emotion information to that user's image reproducing apparatus 130. Having received the image content and the emotion information, the image reproducing apparatus can display in advance, for each time period of the image content, the emotion the user is expected to feel. Through this process, users who view the image content can share the emotions they feel while watching it.

When the emotion information is generated by the server 110, the server 110 may build a database of emotion information; the database may include the emotion information of a plurality of users who watched the image content, user information (e.g., gender, age, nationality), rating information on the image content, and the like. When viewing of the image content ends, the audience information collected by the audience information collecting apparatus 120 is transmitted to the server 110, and the server 110 can generate the emotion information based on the collected audience information. The emotion information may then be provided through the image reproducing apparatus 130 or a web page.

Emotion information about image content helps in understanding users' detailed preferences for the content. For example, a filmmaker can produce a better movie by receiving feedback on users' responses to the movie, and an advertisement creator can produce an advertisement that better motivates purchasers based on users' reactions.

FIG. 2 is a flowchart illustrating an image content analysis method based on emotion recognition according to an exemplary embodiment of the present invention.

According to one embodiment, when the emotion information is generated by the audience information collecting apparatus 120, the apparatus may continuously or periodically photograph the user viewing the image content. The audience information collecting apparatus 120 can recognize the emotion of the user by analyzing the photographed image. For example, it can recognize the user's face region in the image and detect the feature points of the face within that region. The feature points of the face are points located in major facial regions such as the nose, eyes, eyebrows, and mouth. The audience information collecting apparatus 120 can determine the user's emotion based on the amount of change of the feature points over time. For example, it may determine whether the user's emotional state is happy, sad, angry, or neutral based on the positional change of the feature points corresponding to the mouth region.
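
As a concrete illustration of determining an emotion from feature-point changes, the sketch below classifies a frame by comparing landmark positions against a neutral reference frame. It is a minimal sketch only: the landmark indices, thresholds, and decision rules are hypothetical assumptions for illustration, not the patented method, and a real system would take its points from a face tracker (such as the ASM-based recognition described later).

```cpp
#include <string>
#include <vector>

struct Point { float x, y; };

// Hypothetical landmark layout: 0 = nose tip, 1-2 = left/right mouth corner,
// 3-4 = inner eyebrow points. A real tracker supplies many more points.
using Landmarks = std::vector<Point>;

// Classify an emotion from the displacement between a neutral reference frame
// and the current frame (all thresholds are illustrative assumptions).
std::string classifyEmotion(const Landmarks& ref, const Landmarks& cur) {
    // Measure each point relative to the nose tip so head translation cancels.
    auto delta = [&](int i, bool vertical) {
        float r = vertical ? ref[i].y - ref[0].y : ref[i].x - ref[0].x;
        float c = vertical ? cur[i].y - cur[0].y : cur[i].x - cur[0].x;
        return c - r;
    };
    float mouthWiden = delta(2, false) - delta(1, false);        // corners apart
    float mouthRaise = -(delta(1, true) + delta(2, true)) / 2;   // corners up
    float browRaise  = -(delta(3, true) + delta(4, true)) / 2;   // brows up

    if (mouthRaise > 2.0f && mouthWiden > 3.0f)  return "happiness";
    if (browRaise > 4.0f)                        return "surprise";
    if (mouthRaise < -2.0f && browRaise < -1.0f) return "sadness";
    if (browRaise < -3.0f)                       return "anger";
    return "neutral";
}
```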

The audience information collecting apparatus 120 can capture the screen of the image content the user is watching, in connection with the user's emotion information. For example, the audience information collecting apparatus 120 may include the title of the captured image content and the user ID in the path of the capture file, and use the reproduction time of the image content as the file name.

The captured screen may take the form of a BMP file, and the audience information collecting apparatus 120 may initialize a bitmap header in order to create the bitmap data. In addition, the audience information collecting apparatus 120 may add a Sample Grabber filter, a filter capable of capturing the screen, and use it to store the screen at which the current emotion occurred in a buffer. The audience information collecting apparatus 120 can then generate a bitmap file by calling the CreateFile and WriteFile functions with the buffer holding the current screen and the bitmap header as arguments.
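
The bitmap generation step can be sketched as follows. This is a minimal illustration under stated assumptions: the frame is assumed to be 24-bit RGB pixel data already delivered to a buffer (as by the Sample Grabber filter), with each row padded to a 4-byte boundary; CreateFile and WriteFile are the Win32 functions named above.

```cpp
#include <windows.h>
#include <vector>

// Write a 24-bit BMP from a raw pixel buffer, in the spirit of the
// CreateFile/WriteFile flow described above. Error handling is minimal, and
// each pixel row is assumed to be padded to a 4-byte boundary already.
bool WriteBmp(const wchar_t* path, const std::vector<BYTE>& pixels,
              LONG width, LONG height) {
    BITMAPINFOHEADER bih = {};           // initialize the bitmap header
    bih.biSize = sizeof(bih);
    bih.biWidth = width;
    bih.biHeight = height;
    bih.biPlanes = 1;
    bih.biBitCount = 24;
    bih.biSizeImage = static_cast<DWORD>(pixels.size());

    BITMAPFILEHEADER bfh = {};
    bfh.bfType = 0x4D42;                 // 'BM'
    bfh.bfOffBits = sizeof(bfh) + sizeof(bih);
    bfh.bfSize = bfh.bfOffBits + bih.biSizeImage;

    HANDLE file = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return false;

    DWORD written = 0;
    WriteFile(file, &bfh, sizeof(bfh), &written, nullptr);
    WriteFile(file, &bih, sizeof(bih), &written, nullptr);
    WriteFile(file, pixels.data(), bih.biSizeImage, &written, nullptr);
    CloseHandle(file);
    return true;
}
```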

The emotion information generated by the audience information collecting apparatus 120 and information on the image content viewed by the user may be transmitted to the server 110.

According to another embodiment, when the emotion information is generated by the server 110, the audience information collecting apparatus 120 may collect information on the user viewing the image content. Specifically, the audience information collecting apparatus 120 can photograph the user's facial expressions and the like.

According to an exemplary embodiment, the audience information collecting apparatus 120 may transmit the collected user information to the server 110, through which the server 110 obtains the captured image. According to one embodiment, while the user views the image content, the audience information collecting apparatus 120 may continuously or periodically photograph the user and transmit the photographed images to the server 110.

According to one embodiment, the server 110 may analyze the image and recognize the emotion of the user viewing the image content. For example, the server 110 can recognize the user's face region in the image and detect the feature points of the face within that region. The feature points of the face are points located in major facial regions such as the nose, eyes, eyebrows, and mouth. The server 110 can determine the user's emotion based on the amount of change of the feature points over time.

According to another embodiment, the server 110 can capture the screen of the image content the user is watching, in connection with the user's emotion information. The user's emotion recognized through the server 110 can be stored in a log file in a log format. Thereafter, when a change in the log is detected while the image is being reproduced, the server 110 reads the information of the detected emotion and can store the current reproduction time together with the detected emotion. At this time, the server 110 can capture a screen of the image content at that reproduction time.

According to one embodiment, the server 110 may store emotion information for the recognized emotion in association with the image content. Specifically, the server 110 may store at least one of the identification information of the image content, the time information at which the emotion was recognized, the image information of the image content output when the emotion was recognized, and the type information of the recognized emotion.
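
One possible shape for such a stored record is sketched below; the field names and the pipe-separated log format are assumptions for illustration only.

```cpp
#include <fstream>
#include <string>

// One emotion record as described above: the content identifier, the playback
// time at which the emotion was recognized, the captured frame, and the type.
struct EmotionRecord {
    std::string contentId;   // identification information of the image content
    double      playbackSec; // time at which the emotion was recognized
    std::string captureFile; // path of the captured screen image
    std::string emotionType; // e.g. "happiness", "surprise", ...
};

// Append a record to a log file as one pipe-separated line (assumed format).
void appendRecord(const std::string& logPath, const EmotionRecord& r) {
    std::ofstream log(logPath, std::ios::app);
    log << r.contentId << '|' << r.playbackSec << '|'
        << r.captureFile << '|' << r.emotionType << '\n';
}
```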

According to another embodiment, the audience information collecting apparatus 120 may also perform the functions of the image reproducing apparatus 130. That is, the audience information collecting apparatus 120 may display the emotion information of other users while collecting the emotion information of the user who is currently watching.

FIG. 3 is a flowchart illustrating an image content playback method according to an exemplary embodiment of the present invention.

The image reproducing apparatus 130 can receive emotion information on the image content from the server 110. Alternatively, the image reproducing apparatus 130 may receive the image content itself from the server 110.

The image reproducing apparatus 130 can reproduce the image content based on the received emotion information. For example, when reproducing the image content, the image reproducing apparatus 130 may display the type of emotion and the time period in which the emotion appears in the form of a bar. The emotion information may be based on the emotions of other users who watched the same image content.

For example, through the emotion bar displayed on the playback bar of the web page or the image reproducing apparatus 130, the user can easily grasp which emotions appear in which playback periods of the image content.
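
How recognized emotion intervals might be laid out on such a bar can be sketched as follows; the interval structure and the pixel mapping are illustrative assumptions rather than the patent's implementation.

```cpp
#include <string>
#include <vector>

// One contiguous period in which a given emotion appears.
struct EmotionInterval {
    double      startSec, endSec;
    std::string emotionType;   // rendered as a distinct pattern or color
};

// A segment of the on-screen emotion bar, in pixels.
struct BarSegment { int x0, x1; std::string emotionType; };

// Map intervals onto a bar of barWidth pixels for content of totalSec length.
std::vector<BarSegment> layoutEmotionBar(const std::vector<EmotionInterval>& iv,
                                         double totalSec, int barWidth) {
    std::vector<BarSegment> out;
    for (const auto& e : iv) {
        out.push_back({static_cast<int>(e.startSec / totalSec * barWidth),
                       static_cast<int>(e.endSec   / totalSec * barWidth),
                       e.emotionType});
    }
    return out;
}
```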

According to one embodiment, the interfaces used in the present invention largely comprise two interfaces: the first is the interface of the image reproducing apparatus 130 and the second is the network interface. These two interfaces are described below.

FIG. 4A is a diagram illustrating the interface of an image reproducing apparatus according to an embodiment.

The interface of the image reproducing apparatus 130 may expose functions of the apparatus such as play, pause, stop, subtitle insertion, full screen, and screen capture. The UI module and the emotion interval module operate by calling functions in this interface. In addition, the interface provides appropriate information to each module, so that each module can display the appropriate screen according to that information.

FIG. 4B is a diagram illustrating a network interface according to an embodiment.

The network interface may hold the information for communicating with the server 110, in which the emotion information and login information are stored. The login module, the emotion information send/receive module, and the movie evaluation module call functions in the network interface, providing the data appropriate to each function as arguments. The interface receiving the arguments communicates with the corresponding function in the server 110 and transmits or receives the corresponding information.
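
A possible shape of this network interface is sketched below. The method names and signatures are hypothetical, introduced only to show how the login, emotion send/receive, and movie evaluation modules could share one communication facade.

```cpp
#include <string>
#include <vector>

// Same interval shape as in the earlier emotion-bar sketch.
struct EmotionInterval {
    double      startSec, endSec;
    std::string emotionType;
};

// Hypothetical facade through which the client modules talk to the server.
class NetworkInterface {
public:
    virtual ~NetworkInterface() = default;

    // Called by the login module with the user's credentials.
    virtual bool login(const std::string& userId,
                       const std::string& password) = 0;

    // Called by the emotion information send/receive module.
    virtual void sendEmotion(const std::string& contentId, double playbackSec,
                             const std::string& emotionType) = 0;
    virtual std::vector<EmotionInterval>
    fetchEmotionIntervals(const std::string& contentId) = 0;

    // Called by the movie evaluation module.
    virtual void sendRating(const std::string& contentId, int rating) = 0;
};
```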

FIG. 5 is a diagram illustrating a system operation process according to an embodiment.

The image reproducing apparatus 130 can communicate with the server 110 using the login information entered through the network interface. The image reproducing apparatus 130 may then receive from the server 110 the emotion information corresponding to the user currently watching. The received emotion information may be displayed in connection with the image content currently being reproduced.

For example, the display may show other users' emotion information for the currently playing image content through a first status bar; this can be referred to as the accumulated emotion interval. In addition, the display may show the current user's emotion information for the currently playing image content through a second status bar; this may be referred to as the emotion interval of the current user.

FIG. 6 is a diagram showing an example of emotion detection according to an embodiment.

For example, when the emotion information is generated by the server 110, the server 110 can recognize the face using an ASM (Active Shape Model) algorithm, which finds an object in an image using the object's shape and texture information. The server 110 can locate feature points of the face recognized through the camera, such as the positions of the eyebrows and the size and height of the eyes and mouth, and measure their changes in real time. Based on these changes, the server 110 can detect four kinds of emotions: happiness, anger, surprise, and sadness.

Since the position of the face changes when the user moves his or her head or changes posture while viewing the image content, the server 110 can set a new reference point in place of the existing one in order to measure the changes of the eyebrows, mouth, and nose reliably. The server 110 can also discriminate the surprised emotion from changes in the mouth and the positions of the eyebrows. Since the eyebrow positions change when the eyes blink, the server 110 may exclude blinking from emotion recognition to increase accuracy.
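
The reference-point reset and blink exclusion might look like the following sketch; the landmark indices, the blink test, and the reset tolerance are all illustrative assumptions rather than the patent's method.

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Assumed landmark indices: 0 = nose tip, 1 = upper eyelid, 2 = lower eyelid,
// 3 = left eye corner, 4 = right eye corner.
struct FrameLandmarks { std::vector<Pt> pts; };

// Crude blink test: when the eye opening is small relative to the eye width,
// skip the frame so eyebrow motion caused by blinking is not read as emotion.
bool isBlinking(const FrameLandmarks& f) {
    float opening = std::fabs(f.pts[2].y - f.pts[1].y);
    float width   = std::fabs(f.pts[4].x - f.pts[3].x);
    return width > 0.0f && opening / width < 0.15f; // threshold is an assumption
}

// When the head moves beyond a tolerance, adopt the current frame as the new
// reference so later deltas are measured from the new head position.
void maybeResetReference(FrameLandmarks& ref, const FrameLandmarks& cur) {
    float dx = cur.pts[0].x - ref.pts[0].x;
    float dy = cur.pts[0].y - ref.pts[0].y;
    if (std::sqrt(dx * dx + dy * dy) > 20.0f)       // pixels; an assumption
        ref = cur;
}
```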

FIG. 7 is a diagram illustrating operations between various kinds of image reproducing apparatuses and servers according to an embodiment.

According to one embodiment, when reproducing specific image content, the image reproducing apparatus 131 may transmit the name of the content to the server 110 and ask whether emotion information for it exists. When neither the emotion information nor the image content is known, the server 110 newly inserts the information of the corresponding image content into the movie information table of the database and sends the image reproducing apparatus 131 a message indicating that no emotion information exists. When the emotion information exists, the server 110 may search for the sections corresponding to the representative emotion information among the various emotions and transmit them to the image reproducing apparatus 131.
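
One way to realize this exchange is sketched below; the reply structure and the in-memory movie information table are assumptions made for illustration.

```cpp
#include <map>
#include <string>
#include <vector>

struct EmotionSection { double startSec, endSec; std::string emotionType; };

struct EmotionQueryReply {
    bool exists;                          // false => "no emotion info" reply
    std::vector<EmotionSection> sections; // representative sections, if any
};

// Server-side handler: the player sends a content name; the server replies
// with representative emotion sections, or registers the title as new.
class EmotionServer {
public:
    EmotionQueryReply query(const std::string& contentName) {
        auto it = movies_.find(contentName);
        if (it == movies_.end()) {
            movies_[contentName] = {};    // insert into the movie info table
            return EmotionQueryReply{false, {}};
        }
        return EmotionQueryReply{true, it->second};
    }
private:
    std::map<std::string, std::vector<EmotionSection>> movies_;
};
```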

According to another embodiment, the user may request emotion information about the image content from the server 110 through the web page 132. The server 110 may then output the emotion information about the image content to the web page.

FIG. 8A is a diagram illustrating an example of connecting to a server from a web page according to an embodiment. The web page can display not only emotion information but also various other information, such as reviews and ratings given after watching the movie.

FIG. 8B is a view showing an example of a user's emotion record according to an embodiment. Through the web page, the user can check the emotion information for the movies he or she has watched, and can retrieve or delete the screen captures associated with those emotions.

FIG. 8C is a diagram illustrating an example of a user's emotion statistics data according to an exemplary embodiment. The image reproducing apparatus 130 or the web page can display, as statistics, the emotion information of other users corresponding to a specific section of the image. When a section is selected, how users of each age group felt during it can be displayed through a graph or the like.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed exemplary embodiments. For example, the described techniques may be performed in a different order than described, and/or components of the described systems, structures, devices, or circuits may be combined in a different form than described or replaced by other components or equivalents, while still achieving suitable results.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (5)

A method of reproducing image content based on emotion recognition, the method comprising:
acquiring a photographed image of a user viewing the image content;
analyzing the image and recognizing the emotion of the user viewing the image content;
storing emotion information for the recognized emotion in association with the image content;
receiving emotion information about the image content from a server; and
reproducing the image content based on emotion information of other users who watched the image content,
wherein, when the image content is reproduced, the types of the other users' emotions and the time periods in which those emotions appear are displayed in the form of a first emotion bar using a pattern or color according to the emotion type, the time periods in which the emotions of the user watching the image content appear are displayed in the form of a second emotion bar using a pattern or color according to that user's emotion type, and the first emotion bar and the second emotion bar are arranged adjacent to each other.
The method according to claim 1,
wherein the recognizing comprises:
recognizing a face region of the user in the image;
detecting feature points of the face in the face region; and
determining the emotion of the user based on the amount of change of the feature points over time.
The method according to claim 1,
wherein the storing comprises storing at least one of identification information of the image content, time information at which the emotion was recognized, image information of the image content output when the emotion was recognized, and type information of the recognized emotion.

(Two claims deleted)
KR1020160019897A 2016-02-19 2016-02-19 Method and apparatus for sharing user's feeling about contents KR101838792B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160019897A KR101838792B1 (en) 2016-02-19 2016-02-19 Method and apparatus for sharing user's feeling about contents

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160019897A KR101838792B1 (en) 2016-02-19 2016-02-19 Method and apparatus for sharing user's feeling about contents

Publications (2)

Publication Number Publication Date
KR20170098380A KR20170098380A (en) 2017-08-30
KR101838792B1 (en) 2018-03-15

Family

ID=59760415

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160019897A KR101838792B1 (en) 2016-02-19 2016-02-19 Method and apparatus for sharing user's feeling about contents

Country Status (1)

Country Link
KR (1) KR101838792B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11847827B2 (en) 2020-07-09 2023-12-19 Samsung Electronics Co., Ltd. Device and method for generating summary video

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102664589B1 (en) * 2018-09-11 2024-05-10 현대자동차주식회사 Emotion classifying apparatus, and controlling method of the emotion classifying apparatus
KR102555524B1 (en) * 2021-06-29 2023-07-21 주식회사 케이에스앤픽 Server for providing video contents platform and method thereof
KR102427964B1 (en) * 2021-12-16 2022-08-02 조현경 Interactive Responsive Web Drama Playback system


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008234431A (en) * 2007-03-22 2008-10-02 Toshiba Corp Comment accumulation device, comment creation browsing device, comment browsing system, and program
JP2016024577A (en) * 2014-07-18 2016-02-08 株式会社Nttドコモ User behavior recording device, user behavior recording method, and program


Also Published As

Publication number Publication date
KR20170098380A (en) 2017-08-30

Similar Documents

Publication Publication Date Title
US7889073B2 (en) Laugh detector and system and method for tracking an emotional response to a media presentation
US8726304B2 (en) Time varying evaluation of multimedia content
US7953254B2 (en) Method and apparatus for generating meta data of content
JP6282769B2 (en) Engagement value processing system and engagement value processing device
KR101838792B1 (en) Method and apparatus for sharing user's feeling about contents
JP5391144B2 (en) Facial expression change degree measuring device, program thereof, and program interest degree measuring device
KR101895846B1 (en) Facilitating television based interaction with social networking tools
US20120072936A1 (en) Automatic Customized Advertisement Generation System
KR101618590B1 (en) Method and system for providing immersive effects
CN110868554B (en) Method, device and equipment for changing faces in real time in live broadcast and storage medium
US20140223474A1 (en) Interactive media systems
WO2018097177A1 (en) Engagement measurement system
KR20140043070A (en) Devices, systems, methods, and media for detecting, indexing, and comparing video signals from a video display in a background scene using a camera-enabled device
CN106851395B (en) Video playing method and player
JP6236875B2 (en) Content providing program, content providing method, and content providing apparatus
KR101947079B1 (en) Psychological reaction inference system and method of a watching viewer on broadcasting contents
JP7206741B2 (en) HEALTH CONDITION DETERMINATION SYSTEM, HEALTH CONDITION DETERMINATION DEVICE, SERVER, HEALTH CONDITION DETERMINATION METHOD, AND PROGRAM
JP2011239158A (en) User reaction estimation apparatus, user reaction estimation method and user reaction estimation program
CN113762156B (en) Video data processing method, device and storage medium
CN105992065B (en) Video on demand social interaction method and system
JP2011254232A (en) Information processing device, information processing method, and program
TR201921941A2 (en) A system that allows the audience to be presented with a mood-related scenario.
TW201826086A (en) Engagement measurement system
Uribe et al. New accessibility services in HbbTV based on a deep learning approach for media content analysis
JP2013114732A (en) Recorder and recording method

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant