CN110719505B - Shared media content providing method and system based on emotion - Google Patents


Info

Publication number
CN110719505B
Authority
CN
China
Prior art keywords
shared media
related information
experience
mode
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910916179.8A
Other languages
Chinese (zh)
Other versions
CN110719505A
Inventor
刘庭芳
向小岩
曹健
谭姝
张海蒂
刘飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center and Samsung Electronics Co Ltd
Priority to CN201910916179.8A
Publication of CN110719505A
Application granted
Publication of CN110719505B
Legal status: Active

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
              • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
                • H04N 21/251: Learning process for intelligent management, e.g. learning user preferences for recommending movies
                • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
                  • H04N 21/25866: Management of end-user data
                    • H04N 21/25891: Management of end-user data being end-user preferences
            • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
                  • H04N 21/44213: Monitoring of end-user related data
                    • H04N 21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
              • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
                • H04N 21/4508: Management of client data or end-user data
                  • H04N 21/4532: Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences

Abstract

In an embodiment of the invention, emotion-related information of each experiencer of a shared medium is acquired in real time, the emotion-related information classification level of each experiencer is determined from the acquired information, a comprehensive calculation is performed over the determined classification levels and their corresponding weights to determine a corresponding shared media content providing mode, and the providing mode is switched accordingly. Embodiments of the present invention can thus provide adaptive shared media content based on the emotions of all experiencers of the shared media content.

Description

Shared media content providing method and system based on emotion
Technical Field
The invention relates to the field of computer technology, and in particular to a method and system for providing shared media content based on emotion.
Background
During a shared media experience, some of the users experiencing the shared media may undergo a series of emotional and physiological changes. If these changes are ignored, those users may feel excluded, or may have negative experiences such as finding that the shared media content does not suit their preferences and being unable to engage with it. In extreme cases, the emotional and physiological discomfort of those users may worsen, which affects the experience of the entire user group sharing the media.
Among the prior art, the patent application with publication number US2018124464A1 discloses a method for converting media content based on emotion: media content is selected and switched according to characteristics of an individual user, such as emotional reaction, historical preference and narrative preference, together with the preferences of the user's friends in a social network circle for the media content, and a path graph containing the switched nodes is generated for replay or for sending to the social media network for sharing.
The above application addresses the problem that an individual user may develop inappropriate or negative emotions when watching media content: it tracks the individual user's emotion changes in real time, combines them with the user's historical preferences and the preset media content preferences of friends in the social network circle to select and switch media content in advance, and can share the switched node graph to the social network. However, it also has disadvantages. Switching media content based on a single user's emotion cannot promptly affect the other users experiencing the content together, so users experiencing the media at the same time may see different content; the media content cannot be converted in real time according to the emotions of all users, and the emotions of the other experiencers cannot be perceived in real time. This hinders communication among the users sharing the media content and reduces the sense of participation in the common experience.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method for providing shared media content based on emotion, capable of providing adaptive shared media content according to the emotions of all experiencers sharing the media content.
Embodiments of the present invention further provide a system for providing shared media content based on emotion, capable of providing adaptive shared media content according to the emotions of all experiencers sharing the media content.
The embodiment of the invention is realized as follows:
A method for providing shared media content based on emotion, comprising:
obtaining emotion-related information of each experiencer of the shared media;
determining the emotion-related information classification level of each experiencer according to the acquired emotion-related information, and performing a comprehensive calculation over the determined classification levels and the corresponding weights to determine the corresponding shared media content providing mode;
and switching the shared media content providing mode according to the determined providing mode.
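The three steps above can be illustrated with a minimal code sketch. This is not the patented implementation; the signal values, level cut-offs, weight values and mode names are all hypothetical.

```python
# Hypothetical end-to-end sketch of the three claimed steps. The signal
# values, level cut-offs, weights and mode names are illustrative
# assumptions, not taken from the patent.

# Step 1: emotion-related information per experiencer, here already
# reduced to one numeric signal in [0, 1] per person.
emotion_info = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

def classify(signal):
    """Map a raw emotion signal in [0, 1] to a classification level 1..3."""
    if signal < 0.5:
        return 1   # e.g. bored or disengaged
    if signal < 0.8:
        return 2   # e.g. neutral
    return 3       # e.g. excited or highly engaged

# Step 2: comprehensive calculation: classification level times weight,
# summed over all experiencers.
weights = {"alice": 0.5, "bob": 0.3, "carol": 0.2}
score = sum(classify(s) * weights[name] for name, s in emotion_info.items())

def providing_mode(value):
    """Map the combined value to a shared media content providing mode."""
    if value < 1.5:
        return "calming_content"
    if value < 2.5:
        return "default_content"
    return "intense_content"

# Step 3: the system would now switch to this providing mode.
print(providing_mode(score))
```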
The emotion-related information of each experiencer of the shared media is obtained by tracking each experiencer's sign-change information in real time.
Before obtaining the emotion-related information of each experiencer of the shared media, the method further comprises:
setting the experience mode of the shared media, the modes including a participatory shared media experience mode, a guardian-type shared media experience mode and a follow-type shared media experience mode;
when the participatory shared media experience mode is set, each experiencer of the shared media is set as a tracked person;
when the guardian-type shared media experience mode is set, each experiencer of the shared media is set as either a guardian-type tracked person or a guardian-type tracker;
when the follow-type shared media experience mode is set, each experiencer of the shared media is set as either a follow-type tracked person or a follow-type tracker.
When the participatory shared media experience mode is set, the corresponding shared media content providing mode is determined as follows:
setting a corresponding weight for each tracked person, the weight being set according to the historical emotion-related information of each tracked person or being preset;
and calculating the product of each tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all tracked persons, and determining the corresponding shared media content providing mode from the sum.
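The weighted summation for the participatory mode might look like the following sketch. The level scale and the derivation of weights from historical emotion-related information are illustrative assumptions; the patent does not fix a formula for the weights.

```python
# Sketch of the participatory-mode calculation. Weights are derived here
# from each tracked person's mean historical level, normalised to sum to
# one; this is an illustrative choice, not the patent's prescription.
levels = {"u1": 3, "u2": 1, "u3": 2}            # current classification level
history = {"u1": [2, 3, 3], "u2": [1, 1, 2], "u3": [2, 2, 2]}

# Weight each tracked person by their mean historical level.
means = {u: sum(h) / len(h) for u, h in history.items()}
total = sum(means.values())
weights = {u: m / total for u, m in means.items()}

# Comprehensive value = sum of (level * weight) over all tracked persons.
combined = sum(levels[u] * weights[u] for u in levels)
```

The combined value would then be looked up against thresholds (not specified in the patent) to pick the providing mode.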
When the guardian-type shared media experience mode is set, the corresponding shared media content providing mode is determined as follows:
setting a corresponding weight for each guardian-type tracked person, the weight being preset according to the historical emotion-related information of each guardian-type tracked person;
calculating the product of each guardian-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all guardian-type tracked persons, and determining the corresponding shared media content providing mode;
and feeding back the corresponding shared media content providing mode to the guardian-type tracker.
When the follow-type shared media experience mode is set, determining the shared media content providing mode to be switched comprises:
setting a corresponding weight for each follow-type tracked person, the weight being set according to the historical emotion-related information of each follow-type tracked person or being preset;
calculating the product of each follow-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all follow-type tracked persons and the shared media main line information in the corresponding shared media content providing mode;
and, under that shared media main line information, determining different shared media content for each follow-type tracker according to that tracker's own emotion-related information.
A system for providing shared media content based on emotion, comprising an experiencer feedback tracking module, a media experience judging module and a media experience interaction module, wherein
the experiencer feedback tracking module is used for acquiring emotion-related information of each experiencer of the shared media;
the media experience judging module is used for determining the emotion-related information classification level of each experiencer according to the acquired emotion-related information, and performing a comprehensive calculation over the determined classification levels and the corresponding weights to determine the corresponding shared media content providing mode;
and the media experience interaction module is used for switching the shared media content providing mode according to the determined providing mode.
The system further comprises an experience mode setting module, used for setting the experience mode of the shared media (including the participatory, guardian-type and follow-type shared media experience modes) and for instructing the experiencer feedback tracking module and the media experience interaction module to process according to the set experience mode;
when the participatory shared media experience mode is set, each experiencer of the shared media is set as a tracked person;
when the guardian-type shared media experience mode is set, each experiencer of the shared media is set as either a guardian-type tracked person or a guardian-type tracker;
when the follow-type shared media experience mode is set, each experiencer of the shared media is set as either a follow-type tracked person or a follow-type tracker.
The media experience judging module is further configured to determine, when the participatory shared media experience mode is set, the corresponding shared media content providing mode as follows: setting a corresponding weight for each tracked person, the weight being set according to the historical emotion-related information of each tracked person or being preset; and calculating the product of each tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all tracked persons and the corresponding shared media content providing mode.
The media experience judging module is further configured to determine, when the guardian-type shared media experience mode is set, the corresponding shared media content providing mode as follows: setting a corresponding weight for each guardian-type tracked person, the weight being preset according to the historical emotion-related information of each guardian-type tracked person; and calculating the product of each guardian-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all guardian-type tracked persons and the corresponding shared media content providing mode.
The system further comprises an experiencer feedback sharing module, used for feeding back the corresponding shared media content providing mode to the guardian-type tracker.
The media experience judging module is further configured to, when the follow-type shared media experience mode is set, determine the shared media content providing mode to be switched as follows: setting a corresponding weight for each follow-type tracked person, the weight being set according to the historical emotion-related information of each follow-type tracked person or being preset; calculating the product of each follow-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all follow-type tracked persons and the shared media main line information in the corresponding shared media content providing mode; and, under that shared media main line information, determining different shared media content for each follow-type tracker according to that tracker's own emotion-related information.
As can be seen from the above, embodiments of the present invention acquire emotion-related information of each experiencer of a shared medium in real time, determine each experiencer's emotion-related information classification level from the acquired information, perform a comprehensive calculation over the determined classification levels and the corresponding weights, determine the corresponding shared media content providing mode, and switch accordingly. Embodiments of the present invention can thus provide adaptive shared media content based on the emotions of all experiencers of the shared media content.
Drawings
FIG. 1 is a flow chart of a method for providing shared media content based on emotion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system for providing shared media content based on emotion according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an embodiment of a method for providing shared media content based on emotion according to the present invention;
FIG. 4 is a schematic structural diagram of an experiencer accessing a shared media system according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for providing shared media content in the participatory shared media experience mode according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for providing shared media content in the guardian-type shared media experience mode according to an embodiment of the present invention;
FIG. 7 is a flowchart of a method for providing shared media content in the follow-type shared media experience mode according to an embodiment of the present invention;
FIG. 8 is a flowchart of a method for providing feedback to a follow-type experiencer in the follow-type shared media experience mode according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples.
As can be seen from the background art, when shared media content is provided to multiple experiencers at the same time, the prior art cannot switch the shared content based on every person's emotion; it can only switch the media content of an individual user, so the other experiencers are not affected, and different experiencers may end up being served the shared media content in different providing modes. To solve this problem, the embodiment of the invention acquires emotion-related information of each experiencer of the shared media in real time, determines the emotion-related information classification level of each experiencer from the acquired information, performs a comprehensive calculation over the determined classification levels and the corresponding weights, determines the corresponding shared media content providing mode, and performs the switch.
Thus, embodiments of the present invention may provide adaptive shared media content based on the emotions of all experiencers of the shared media content.
Specifically, the embodiment of the present invention obtains the emotion-related information of each experiencer of the shared media by tracking each experiencer's sign-change information. The sign-change information includes, but is not limited to, head movement, breathing frequency, line-of-sight change, pupil size, pulse frequency, or the sweat-gland condition of the hands; it can represent the experiencer's emotion-related information, which includes, but is not limited to, concentration, boredom, excitement, fright or depression. The emotion-related information classification level of each experiencer is then determined, a comprehensive calculation is performed over the determined classification levels and the corresponding weights to determine the corresponding shared media content providing mode, and the switch is performed. The shared media providing mode includes, but is not limited to, changing the plot, pictures, background music or/and tactile experience. In this way the experiencers' sense of participation is improved, emotions can be shared in real time, and the shared media content providing mode can be adjusted according to the experiencers' emotion changes, so that the shared media content better matches the preferences and receptivity of the multiple experiencers sharing the media and achieves a better experience effect.
Fig. 1 is a flowchart of a method for providing shared media content based on emotion according to an embodiment of the present invention, comprising the following specific steps:
Step 101, obtaining emotion-related information of each experiencer of the shared media;
Step 102, determining the emotion-related information classification level of each experiencer according to the acquired emotion-related information, and performing a comprehensive calculation over the determined classification levels and the corresponding weights to determine the corresponding shared media content providing mode;
Step 103, switching the shared media content providing mode according to the determined providing mode.
In the method, the emotion-related information of each experiencer of the shared media is obtained by tracking each experiencer's sign-change information in real time.
Before step 101, the method further comprises: setting the experience mode of the shared media, the modes including the participatory, guardian-type and follow-type shared media experience modes;
when the participatory shared media experience mode is set, each experiencer of the shared media is set as a tracked person;
when the guardian-type shared media experience mode is set, each experiencer of the shared media is set as either a guardian-type tracked person or a guardian-type tracker;
when the follow-type shared media experience mode is set, each experiencer of the shared media is set as either a follow-type tracked person or a follow-type tracker.
In this method, when the participatory shared media experience mode is set, the corresponding shared media content providing mode is determined as follows:
setting a corresponding weight for each tracked person, the weight being set according to the historical emotion-related information of each tracked person or being preset;
and calculating the product of each tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all tracked persons, and determining the corresponding shared media content providing mode.
Here, after the corresponding shared media content providing mode is obtained, the method further comprises: feeding back the obtained providing mode to the tracked persons.
In the method, when the guardian-type shared media experience mode is set, the corresponding shared media content providing mode is determined as follows: setting a corresponding weight for each guardian-type tracked person, the weight being preset according to the historical emotion-related information of each guardian-type tracked person; calculating the product of each guardian-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all guardian-type tracked persons, and determining the corresponding shared media content providing mode; and feeding back the corresponding shared media content providing mode to the guardian-type tracker.
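A sketch of the guardian-type path described above, under assumed names and values (the patent specifies neither the cut-off values nor the feedback format):

```python
# Sketch of the guardian-type mode: tracked persons (e.g. children) are
# monitored, and the resulting providing mode is fed back to the tracker
# (e.g. a parent). All names, weights and thresholds are hypothetical.
tracked_levels = {"child_a": 3, "child_b": 2}   # classification levels
weights = {"child_a": 0.6, "child_b": 0.4}      # preset from history

combined = sum(tracked_levels[u] * weights[u] for u in tracked_levels)

def decide_mode(value):
    """Map the combined value to a providing mode (illustrative cut-off)."""
    return "soothing" if value >= 2.5 else "unchanged"

# Feedback delivered to the guardian-type tracker for awareness.
feedback_to_tracker = {"proposed_mode": decide_mode(combined),
                       "combined_level": combined}
```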
In the method, when the follow-type shared media experience mode is set, determining the shared media content providing mode to be switched comprises:
setting a corresponding weight for each follow-type tracked person, the weight being set according to the historical emotion-related information of each follow-type tracked person or being preset;
calculating the product of each follow-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all follow-type tracked persons and the shared media main line information in the corresponding shared media content providing mode;
and, under that shared media main line information, determining different shared media content for each follow-type tracker according to that tracker's own emotion-related information.
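The follow-type path can be sketched as follows; the branch names, cut-off value and the way a follower's level softens the content are illustrative assumptions:

```python
# Sketch of the follow-type mode: the followed (tracked) persons jointly
# determine the shared media "main line"; each follower then receives
# content under that main line adjusted to their own emotion level.
followed_levels = {"lead1": 3, "lead2": 3}
weights = {"lead1": 0.5, "lead2": 0.5}

main_line_value = sum(followed_levels[u] * weights[u] for u in followed_levels)
main_line = "action_branch" if main_line_value >= 2.5 else "calm_branch"

def follower_content(branch, follower_level):
    """Pick content under the main line based on one follower's level."""
    intensity = "full" if follower_level >= 2 else "softened"
    return f"{branch}:{intensity}"

# A follower with a low emotion level gets a softened cut of the branch.
print(follower_content(main_line, follower_level=1))
```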
Fig. 2 is a schematic structural diagram of a system for providing shared media content based on emotion according to an embodiment of the present invention, comprising an experiencer feedback tracking module, a media experience judging module and a media experience interaction module, wherein
the experiencer feedback tracking module is used for acquiring emotion-related information of each experiencer of the shared media;
the media experience judging module is used for determining the emotion-related information classification level of each experiencer according to the acquired emotion-related information, and performing a comprehensive calculation over the determined classification levels and the corresponding weights to determine the corresponding shared media content providing mode;
and the media experience interaction module is used for switching the shared media content providing mode according to the determined providing mode.
The system further comprises an experience mode setting module, used for setting the experience mode of the shared media (including the participatory, guardian-type and follow-type shared media experience modes) and for instructing the experiencer feedback tracking module and the media experience interaction module to process according to the set experience mode;
when the participatory shared media experience mode is set, each experiencer of the shared media is set as a tracked person;
when the guardian-type shared media experience mode is set, each experiencer of the shared media is set as either a guardian-type tracked person or a guardian-type tracker;
when the follow-type shared media experience mode is set, each experiencer of the shared media is set as either a follow-type tracked person or a follow-type tracker.
In this system, the media experience judging module is further configured to determine, when the participatory shared media experience mode is set, the corresponding shared media content providing mode as follows: setting a corresponding weight for each tracked person, the weight being set according to the historical emotion-related information of each tracked person or being preset; and calculating the product of each tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all tracked persons and the corresponding shared media content providing mode.
In this system, the media experience judging module is further configured to determine, when the guardian-type shared media experience mode is set, the corresponding shared media content providing mode as follows: setting a corresponding weight for each guardian-type tracked person, the weight being preset according to the historical emotion-related information of each guardian-type tracked person; and calculating the product of each guardian-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all guardian-type tracked persons and the corresponding shared media content providing mode.
The system further comprises an experiencer feedback sharing module, used for feeding back the corresponding shared media content providing mode to the guardian-type tracker.
In this system, the media experience judging module is further configured to, when the follow-type shared media experience mode is set, determine the shared media content providing mode to be switched as follows: setting a corresponding weight for each follow-type tracked person, the weight being set according to the historical emotion-related information of each follow-type tracked person or being preset; calculating the product of each follow-type tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all follow-type tracked persons and the shared media main line information in the corresponding shared media content providing mode; and, under that shared media main line information, determining different shared media content for each follow-type tracker according to that tracker's own emotion-related information.
Fig. 3 is a flowchart of an embodiment of a method for providing shared media content based on emotion according to the present invention, which includes the following steps:
step 301, selecting an experience mode of the shared media from among a participatory shared media experience mode, a guardianship-type shared media experience mode, and a follow-up shared media experience mode;
step 302, setting corresponding preset values in the experiencer mode setting unit for the selected shared media experience mode;
in this step, when the participatory shared media experience mode is adopted, each experiencer is set as a tracked person, and a corresponding weight is set for each tracked person; when the guardianship-type shared media experience mode is adopted, guardian trackers and monitored tracked persons are set, a corresponding weight is set for each monitored tracked person, and an emotion-related information feedback threshold is set for each monitored tracked person; when the follow-up shared media experience mode is adopted, follower trackers and following tracked persons are set (there may be a plurality of follower trackers), a corresponding weight is set for each following tracked person, and a form of switching the shared media providing manner according to emotion-related information feedback is set for each follower tracker;
step 303, the experiencer feedback tracking module monitors the sign change information of each tracked person in real time to obtain each tracked person's emotion-related information;
in this step, the sign change information includes head movement, breathing frequency, line-of-sight changes, pupil size, pulse frequency, or/and hand sweat gland activity, and the like;
step 304, the media experience judgment module classifies the emotion-related information of each tracked person obtained in step 303 into emotion classification levels, and stores each level in the history file set of the corresponding tracked person;
step 305, the media experience judgment module judges the media experience manner for each experiencer of the shared media according to the shared media experience mode selected in step 301 and the emotion classification levels obtained in step 304;
step 306, for the participatory shared media experience mode, the media experience judgment module judges the corresponding shared media providing manner according to each tracked person's emotion classification level and the set corresponding weight, and all experiencers experience the same media content under this providing manner;
step 307, for the guardianship-type shared media experience mode, the media experience judgment module judges the corresponding shared media providing manner according to each monitored tracked person's emotion classification level and set weight, and all experiencers experience the same media content under this providing manner; meanwhile, the experiencer feedback sharing module provides feedback to each guardian tracker according to the emotion-related information feedback threshold set in step 302; the feedback form may be preset and includes, but is not limited to, a text prompt, a sound prompt, or a vibration prompt;
step 308, for the follow-up shared media experience mode, the media experience judgment module judges the shared media main line information of the corresponding shared media providing manner according to each following tracked person's emotion classification level and set weight, and takes this as the first media experience switching judgment; on the basis of the first judgment, it then judges according to the form of switching the shared media providing manner corresponding to each follower tracker, and obtains the final shared media providing manner (the second media experience switching judgment). Under the resulting providing manner, all experiencers share the same shared media main line (the same media narrative or media body), but different follower trackers may experience different media content;
in this step, the shared media providing manner obtained by the first judgment mainly, but not exclusively, changes the plot, picture, background music, or olfactory or/and tactile experience; the shared media providing manner obtained by the second judgment is determined according to the emotion classification level generated by each follower tracker after the first determination of the shared media, together with the form of switching the shared media providing manner corresponding to that follower tracker set in step 302, where this form includes, but is not limited to: switching the way the main plot of the shared media is narrated, changing the background music, changing the picture, changing the tactile experience, or/and changing the voice narration.
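The two-stage switching judgment described above can be sketched as follows. This is an illustrative sketch only; the threshold, the main-line names, and the presentation forms are assumptions, not values from the embodiments:

```python
# Hypothetical sketch of the two-stage switching judgment for the
# follow-up experience mode. The threshold, main-line names and
# presentation forms are illustrative assumptions.

def first_judgment(levels, weights):
    """Weighted sum over the following tracked persons selects the
    shared media main line (the same narrative for everyone)."""
    score = sum(l * w for l, w in zip(levels, weights))
    return "tense_plot" if score >= 8 else "calm_plot"

def second_judgment(main_line, follower_level, follower_form):
    """Per-follower variation under the shared main line, according to
    that follower tracker's own emotion level and preset switching form."""
    if follower_form == "background_music":
        variant = "soothing_music" if follower_level >= 4 else "default_music"
    elif follower_form == "voice_narration":
        variant = "slow_narration" if follower_level >= 4 else "default_narration"
    else:
        variant = "default"
    return (main_line, variant)

main = first_judgment([3, 5], [1, 1])  # weighted sum 8 selects "tense_plot"
print(second_judgment(main, 4, "background_music"))
# ('tense_plot', 'soothing_music')
```

Note that the main line is shared by all experiencers, while the second judgment varies only the presentation for each follower tracker.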
Fig. 4 is a schematic structural diagram of experiencers accessing the shared media system according to an embodiment of the present invention. As shown in the figure, there may be multiple experiencers of the shared media (called trackers or tracked persons depending on the experience mode). Their emotion-related information and sign changes are obtained in various ways, including but not limited to real-time tracking by a smart watch, smart glasses, wearable gloves, wearable socks, or wearable shoes; the information is judged through emotion and sign-change ratings and corresponding artificial intelligence algorithms, and the shared media content providing manner is switched through feedback devices such as, but not limited to, a wearable watch, wearable socks, wearable shoes, smart glasses, or smart earphones, so as to provide different shared media content experiences.
Fig. 5 is a flowchart of a method for providing shared media content in an experience mode of a participating shared media according to an embodiment of the present invention, which includes the following specific steps:
step 501, the experiencer feedback tracking module obtains the emotion-related information of each experiencer;
step 502, the media experience judgment module classifies each experiencer's emotion-related information to obtain an emotion classification level;
step 503, the media experience judgment module judges the shared media providing manner according to each experiencer's emotion classification level and the set corresponding weight;
step 504, judging whether the emotion related information is preset to be shared, if so, executing step 505; if not, ending;
step 505, sharing the emotion related information to other experiencers;
step 506, judging whether a threshold value for switching the providing mode of the shared media is reached, if so, executing step 507; if not, ending;
step 507, switching the shared media content providing manner, for example by changing the plot, picture, background music, or tactile experience;
step 508, sending the shared media content under the switched providing manner to each experiencer.
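The flow of steps 501 to 508 can be sketched as follows. This is an illustrative sketch only; the threshold value and the providing-manner names are assumptions, not values from the embodiments:

```python
# Simplified sketch of the participatory-mode flow (steps 501-508).
# The threshold and mode names are illustrative assumptions.

SWITCH_THRESHOLD = 10  # assumed threshold for switching the providing manner

def participatory_flow(levels, weights, share_preset, shared_log):
    # steps 501-502: per-experiencer emotion levels are given as input
    # step 503: judge the providing manner from the weighted emotion levels
    score = sum(l * w for l, w in zip(levels, weights))
    # steps 504-505: optionally share emotion info with the other experiencers
    if share_preset:
        shared_log.append(levels)
    # steps 506-507: switch the providing manner when the threshold is reached
    if score >= SWITCH_THRESHOLD:
        mode = "changed_plot"  # e.g. change plot / picture / background music
    else:
        mode = "unchanged"
    # step 508: the (possibly switched) content goes to every experiencer
    return mode

log = []
print(participatory_flow([4, 4, 4], [1, 1, 1], True, log))  # changed_plot
print(log)  # [[4, 4, 4]]
```

Every experiencer receives the same result of this flow, which matches the participatory mode's requirement that all experiencers share identical media content.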
Fig. 6 is a flowchart of a method for providing shared media content in the guardianship-type shared media experience mode according to an embodiment of the present invention. As shown in the figure, after the experiencer feedback tracking module captures the sign changes of each tracked person preset in this mode, the emotion and sign changes are rated according to the ratings preset for that tracked person in the experiencer mode setting module; the manner of media experience switching (including but not limited to changing the plot, picture, background music, or tactile experience) is then judged by combining each tracked person's emotion and sign-change rating with the corresponding weight, and the changed media experience content is sent to each experiencer. Meanwhile, whether a tracked person's emotion or sign change is shared with the other experiencers is judged according to that person's preset in the experiencer mode setting module. In the guardianship-type shared media experience mode, every experiencer experiences the same media experience content.
Fig. 7 is a flowchart of a method for providing shared media content in the follow-up shared media experience mode according to an embodiment of the present invention. As shown in the figure, after the experiencer feedback tracking module captures the sign changes of each tracked person preset in this mode, the emotion and sign changes are rated according to the ratings preset for that tracked person in the experiencer mode setting module; the manner of media experience switching (including but not limited to changing the plot, picture, background music, or tactile experience) is then judged by combining each tracked person's emotion and sign-change rating, and the changed media experience content is sent to each experiencer.
Fig. 8 is a flowchart of the follower-type experiencer feedback method in the follow-up shared media experience mode according to an embodiment of the present invention. As shown in the figure, after the experiencer feedback tracking module captures the sign changes of each follower tracker in this mode, the emotion and sign changes are rated according to the ratings preset for that tracker in the experiencer mode setting module; the manner of media experience switching (including but not limited to switching the narration of the shared media's main plot, changing the background music, changing the picture, changing the tactile experience, or changing the voice narration) is then judged by combining each tracker's emotion and sign-change rating with that tracker's preset media switching form. In the follow-up shared media experience mode, all experiencers share the same media main line (the same media narrative or media body), but different trackers may experience different media content: depending on their emotion and sign changes, different trackers may experience different narrations of the media content, different background music, different haptic effects, partially different pictures, or different voice-overs, while always keeping the same media narrative or media body.
As can be seen from the above, the embodiments of the present invention dynamically track the sign changes of specific experiencers (including, but not limited to, head movement, breathing frequency, line-of-sight changes, pupil size, pulse frequency, and hand sweat gland activity), perceive the experiencers' emotions (including, but not limited to, concentration, boredom, excitement, fright, and depression) from those sign changes, and, according to the sign changes and a preset sharing manner, share the emotion and sign changes and switch the media experience (including, but not limited to, changing the plot, picture, background music, or tactile experience), so that the shared media content better matches the characteristics of the user group, better serves all users in the group, and achieves a better shared media experience.
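One hedged illustration of how tracked sign changes might be mapped to an emotion classification level is sketched below; the feature names and cut-off values are invented for the example and are not taken from the embodiments:

```python
# Illustrative mapping from tracked sign changes to a coarse emotion
# classification level. Feature names and cut-offs are assumptions
# for the example only, not values from the patent.

def emotion_level(pulse_hz, breathing_hz, pupil_mm):
    """Return a coarse emotion classification level from 1 (calm)
    to 5 (highly aroused), based on simple illustrative cut-offs."""
    level = 1
    if pulse_hz > 1.5:       # pulse above ~90 beats per minute
        level += 2
    if breathing_hz > 0.33:  # breathing above ~20 breaths per minute
        level += 1
    if pupil_mm > 5.0:       # dilated pupils
        level += 1
    return level

print(emotion_level(pulse_hz=1.7, breathing_hz=0.4, pupil_mm=5.5))   # 5
print(emotion_level(pulse_hz=1.0, breathing_hz=0.25, pupil_mm=3.0))  # 1
```

In the described system, such a level would feed the weighted-sum judgment and be stored in the tracked person's history file set; a production classifier would likely use a trained model rather than fixed cut-offs.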
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A method for providing shared media content based on emotion, comprising:
obtaining emotion related information of each experiencer sharing a medium;
determining the emotion related information classification level of each experiencer according to the acquired emotion related information of each experiencer, and performing comprehensive calculation according to the determined emotion related information classification level of each experiencer and the corresponding weight to determine the corresponding shared media content providing mode;
switching the providing modes of the shared media contents according to the determined providing modes of the shared media contents;
before the obtaining the emotion-related information of each experiencer of the shared media, the method further comprises the following steps:
setting experience modes of shared media, including an experience mode of a participatory shared media, an experience mode of a guardianship shared media and an experience mode of a following shared media;
when set to the participatory shared media experience mode, each experiencer of the shared media is set as a tracked person;
when set to the guardianship-type shared media experience mode, each experiencer of the shared media is set as a guardian tracker or a monitored tracked person;
when set to the follow-up shared media experience mode, each experiencer of the shared media is set as a follower tracker or a following tracked person.
2. The method of claim 1, wherein the emotion-related information of each experiencer of the shared media is obtained by tracking, in real time, the sign change information of each experiencer of the shared media.
3. The method of claim 1, wherein, when set to the participatory shared media experience mode, the corresponding shared media content providing manner is determined by:
setting a corresponding weight for each tracked person, wherein the weight is set according to the historical emotion-related information of each tracked person or is preset;
and calculating the product between each tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all tracked persons, and determining therefrom the corresponding shared media content providing manner.
4. The method of claim 1, wherein, when set to the guardianship-type shared media experience mode, the corresponding shared media content providing manner is determined by:
setting a corresponding weight for each monitored tracked person, wherein the weight is preset according to the historical emotion-related information of each monitored tracked person;
calculating the product between each monitored tracked person's emotion-related information classification level and the corresponding weight, summing the products to obtain the emotion-related information of all monitored tracked persons, and determining therefrom the corresponding shared media content providing manner;
and feeding the corresponding shared media content providing manner back to the guardian tracker.
5. The method of claim 1, wherein, when set to the follow-up shared media experience mode, determining the shared media content providing manner to switch to comprises:
setting a corresponding weight for each following tracked person, wherein the weight is preset according to the historical emotion-related information of each following tracked person;
calculating the product between each following tracked person's emotion-related information classification level and the corresponding weight, and summing the products to obtain the emotion-related information of all following tracked persons and the shared media main line information of the corresponding shared media content providing manner;
and, under that shared media main line information, determining different shared media contents for each follower tracker according to the follower tracker's emotion-related information.
6. A shared media content providing system based on emotion, comprising: an experiencer feedback tracking module, a media experience judging module and a media experience interaction module, wherein,
the system comprises an experiencer feedback tracking module, a shared media tracking module and a feedback processing module, wherein the experiencer feedback tracking module is used for acquiring emotion related information of each experiencer of the shared media;
the media experience judging module is used for determining the emotion related information classification level of each experiencer according to the obtained emotion related information of each experiencer, and performing comprehensive calculation to determine the corresponding shared media content providing mode according to the determined emotion related information classification level of each experiencer and the corresponding weight;
the media experience interaction module is configured to switch the shared media content providing manner according to the determined shared media content providing manner; the system further comprises: an experiencer mode setting module, configured to set the experience mode of the shared media, including the participatory shared media experience mode, the guardianship-type shared media experience mode and the follow-up shared media experience mode, and to instruct the experiencer feedback tracking module and the media experience interaction module to process according to the set experience mode;
when set to the participatory shared media experience mode, each experiencer of the shared media is set as a tracked person;
when set to the guardianship-type shared media experience mode, each experiencer of the shared media is set as a guardian tracker or a monitored tracked person;
when set to the follow-up shared media experience mode, each experiencer of the shared media is set as a follower tracker or a following tracked person.
7. The system of claim 6, wherein the media experience judgment module is further configured to determine, when set to the participatory shared media experience mode, the corresponding shared media content providing manner by: setting a corresponding weight for each tracked person, wherein the weight is set according to the historical emotion-related information of each tracked person or is preset; and calculating the product between each tracked person's emotion-related information classification level and the corresponding weight, and summing the products to obtain the emotion-related information of all tracked persons and the corresponding shared media content providing manner.
8. The system of claim 6, wherein the media experience judgment module is further configured to determine, when set to the guardianship-type shared media experience mode, the corresponding shared media content providing manner by: setting a corresponding weight for each monitored tracked person, wherein the weight is preset according to the historical emotion-related information of each monitored tracked person; and calculating the product between each monitored tracked person's emotion-related information classification level and the corresponding weight, and summing the products to obtain the emotion-related information of all monitored tracked persons and the corresponding shared media content providing manner;
the system further comprises an experiencer feedback sharing module, configured to feed the corresponding shared media content providing manner back to the guardian tracker.
9. The system of claim 6, wherein the media experience judgment module is further configured to determine, when set to the follow-up shared media experience mode, the shared media content providing manner to switch to by: setting a corresponding weight for each following tracked person, wherein the weight is preset according to the historical emotion-related information of each following tracked person; calculating the product between each following tracked person's emotion-related information classification level and the corresponding weight, and summing the products to obtain the emotion-related information of all following tracked persons and the shared media main line information of the corresponding shared media content providing manner; and, under that shared media main line information, determining different shared media contents for each follower tracker according to the follower tracker's emotion-related information.
CN201910916179.8A 2019-09-26 2019-09-26 Shared media content providing method and system based on emotion Active CN110719505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910916179.8A CN110719505B (en) 2019-09-26 2019-09-26 Shared media content providing method and system based on emotion


Publications (2)

Publication Number Publication Date
CN110719505A CN110719505A (en) 2020-01-21
CN110719505B true CN110719505B (en) 2022-02-25


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010118753A (en) * 2008-11-11 2010-05-27 Sharp Corp Playback apparatus, control program of playback apparatus, and recording medium with control program of playback apparatus recorded thereon
CN103826160A (en) * 2014-01-09 2014-05-28 广州三星通信技术研究有限公司 Method and device for obtaining video information, and method and device for playing video
CN107609334A (en) * 2017-09-20 2018-01-19 北京理工大学 A kind of audience response analysis system and method
CN109240786A (en) * 2018-09-04 2019-01-18 广东小天才科技有限公司 A kind of subject replacement method and electronic equipment
WO2019067783A1 (en) * 2017-09-29 2019-04-04 Chappell Arvel A Production and control of cinematic content responsive to user emotional state

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012019381A (en) * 2010-07-08 2012-01-26 Sony Corp Image processor and image processing method
CN105345818B (en) * 2015-11-04 2018-02-09 深圳好未来智能科技有限公司 Band is in a bad mood and the 3D video interactives robot of expression module
US10440434B2 (en) * 2016-10-28 2019-10-08 International Business Machines Corporation Experience-directed dynamic steganographic content switching
CN107105320B (en) * 2017-03-07 2019-08-06 上海媒智科技有限公司 A kind of Online Video temperature prediction technique and system based on user emotion
CN109271599A (en) * 2018-08-13 2019-01-25 百度在线网络技术(北京)有限公司 Data sharing method, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and implementation of a face acquisition server in an emotion video surveillance system; Cai Longjian; China Master's Theses Full-text Database (Electronic Journal); 2017-02-15 (No. 2); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant