CN108574875A - Facilitating television based interaction with social networking tools - Google Patents

Facilitating television based interaction with social networking tools

Info

Publication number
CN108574875A
Authority
CN
China
Prior art keywords
user
entertainment-related system
entertainment
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710769645.5A
Other languages
Chinese (zh)
Inventor
Wenlong Li
Y. Du
J. Li
X. Tong
Y. Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/CN2011/001544 (WO2013037078A1)
Application filed by Intel Corp filed Critical Intel Corp
Publication of CN108574875A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Image Analysis (AREA)

Abstract

The title of the present application is "Facilitating television based interaction with social networking tools." Video analysis may be used to determine who is watching television and their level of interest in the current programming. A list of favorite programs may be compiled for each of multiple viewers of programming on the same television receiver.

Description

Facilitating television based interaction with social networking tools
Technical field
This relates generally to televisions and to interaction with social networking tools.
Background
Social networking tools have become essential to the lives of many people. They allow their users to keep track of their friends and serve as a source of additional contact with existing and new friends.
One advantage of social networking is the ability to identify friends with similar interests. However, determining what those interests are typically requires a lot of user input. For example, a user may maintain a Facebook page indicating fields of interest. The amount of information provided may be limited by the amount of time it takes and by the amount of effort involved in fully spelling out all of a user's interests, likes, and dislikes.
Description of the drawings
FIG. 1 is a schematic depiction of one embodiment of the present invention;
FIG. 2 is a flow chart for one embodiment of the present invention; and
FIG. 3 is a flow chart for another embodiment of the present invention.
Detailed description
According to some embodiments, information about a user's television experience may be automatically communicated to a social networking tool as a way of increasing social interaction. Some embodiments can determine not only whether the user is online, but also whether the user is actually near the user's television display. In some embodiments, whether the user likes or dislikes the programming currently being shown can be determined from the user's facial expressions. Moreover, in some embodiments, lists of the favorite programs of various television viewers can be compiled in an automated way. That information may then be uploaded to a social networking tool or otherwise used for social interaction.
Referring to FIG. 1, in one embodiment a television display 18 may be equipped with a television camera 16. In some embodiments, the camera may be mounted on, or integrated with, the television display 18, although the camera may of course be entirely separate from the television display. Advantageously, however, the camera 16 is installed in such a way that it can capture images of those people who are actually watching the television, including their facial expressions. The television 18 may receive a video source, which may be a television broadcast, streamed Internet content, a digital movie from a storage device such as a DVD player, or an interactive game played over the Internet or using a digital media player.
The output from the camera 16 may be connected to a processor-based system 10. The processor-based system 10 may be any kind of computer, including, to name just a few examples, a laptop computer, a desktop computer, an entertainment device, or a cellular telephone. The processor-based system 10 may include a video interface 22 that receives video from the camera 16 and converts it into the correct format for use by the processor 12. The video interface may provide video to a user status module 24.
According to one embodiment, the user status module determines whether the user is actually online and, in some embodiments, whether the user is actually watching television. Online presence may be determined, for example, from the inputs and outputs detected by a network interface controller. Whether the user is actually watching the program may be determined, for example, by video analysis of the feed from the camera 16 to detect whether a user is present in front of the video screen.
In some embodiments, the user status module can detect a number of television viewers. Each of those viewers may be identified through automated facial analysis. For example, in a setup mode, each viewer may be prompted to cooperate in the capture of a user picture. The system can then compare the facial images of the viewers of a television program with those pre-recorded video clips, or with still photographs captured during the setup mode, in order to identify the currently active viewers.
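As an illustration only, the setup-mode enrollment and later identification described above could be prototyped with off-the-shelf face recognition components. The sketch below is a hedged example, not the method claimed here: it assumes the opencv-contrib-python package (for cv2.face.LBPHFaceRecognizer_create) and the Haar cascade files shipped with OpenCV, and the function names and match threshold are illustrative assumptions.

```python
# Minimal sketch: enroll viewers during a setup mode, then identify who is
# currently in front of the camera. Assumes opencv-contrib-python is installed.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def crop_faces(gray):
    """Return 64x64 grayscale face crops found in a frame."""
    boxes = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], (64, 64)) for (x, y, w, h) in boxes]

def enroll(setup_images, labels):
    """setup_images: grayscale photos captured in setup mode, one per viewer."""
    faces, ids = [], []
    for img, label in zip(setup_images, labels):
        for face in crop_faces(img):
            faces.append(face)
            ids.append(label)
    recognizer.train(faces, np.array(ids))

def identify_active_viewers(frame, threshold=80.0):
    """Return the enrolled viewer ids visible in the current camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    present = set()
    for face in crop_faces(gray):
        viewer_id, distance = recognizer.predict(face)
        if distance < threshold:          # lower distance = better match
            present.add(viewer_id)
    return present
```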
Thus, in some embodiments, the user status module 24 not only indicates whether any viewers are watching television, but also identifies which of multiple viewers are actually watching the television display 18.
The user status module may be coupled to a user interest detection module 26, which also receives the video feed from the video interface 22. The user interest detection module 26 may use a video facial expression analysis tool to analyze the user's facial expressions and determine whether the user is interested or uninterested in the program. Likewise, facial expression analysis may be used to determine whether the user likes or dislikes the program. Information from the user interest detection module can be combined with the results from the user status module to provide information to a social networking interface 28. In some embodiments, moment-by-moment video facial analysis of the user's likes and dislikes can be communicated to a social networking tool. As used herein, a "social networking tool" is an electronic communication, such as a website, that helps people interact with existing friends or colleagues and/or helps them find new friends or colleagues through apparent common interests. Moreover, as part of a social networking tool, an e-mail, tweet, text message, or other communication may be provided to indicate the user's current activity and satisfaction.
In some embodiments, a video clip from the television program can be captured and communicated to the processor 12 for distribution on the social networking interface 28, together with an indication of the user's viewing status and the user's current degree of interest.
A storage 14 may store the captured video and may also store programs 30 and 50 for implementing embodiments of the present invention.
In particular, in some embodiments of the invention, the sequences depicted in FIGS. 2 and 3 may be implemented in hardware, software, and/or firmware. In software or firmware embodiments, the sequences may be implemented by computer-executable instructions stored on a non-transitory storage medium such as a semiconductor, magnetic, or optical storage device.
Referring to FIG. 2, in one embodiment a sequence may begin by receiving the feed from the camera 16 (shown in FIG. 1), as indicated in block 32. An initial stage (block 34) may involve logging in, through a password login or a facial login of a user interface, in which the user submits to video facial analysis using the camera 16 and the user status module 24. Once the user has identified himself or herself through the login and/or facial recognition, the user may select a video program for viewing, as indicated in block 36. The program may be identified using a variety of tools, including capturing information; capturing video, audio, or metadata from an electronic program guide; manual editing; using input from the user's friends (through a social networking tool) or from Internet or database image or text searches; or using any other tool. The presence of the user may then be determined by video facial detection, as indicated in block 38. That is, the user may be identified by analyzing the camera feed 16 to determine that the user is not only active on his or her processor-based system 10, but is actually in front of an active television program and watching it.
Next, facial expression analysis may be used to determine the degree of user interest, as indicated in block 40. Many well-known video facial analysis techniques may be used to determine whether the user is interested or uninterested in, or likes or dislikes, a particular sequence in the video. Real-time information may thus be provided indicating whether the user's degree of interest or disinterest, or of liking or disliking, has changed. For example, this information may be related in time to the content currently being watched, while a captured video clip from that content is provided together with an indication of the user's level of interest.
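As a concrete illustration of relating interest to the content timeline, the hedged sketch below keeps a per-user log of timestamped interest scores and reports the moments at which the level of interest has changed enough to be worth posting; the class name and the change threshold are illustrative assumptions rather than anything specified above.

```python
import time

class InterestTracker:
    """Log per-frame interest scores and report when a user's level of
    interest changes enough to be worth posting (illustrative sketch)."""

    def __init__(self, change_threshold=0.3):
        self.change_threshold = change_threshold
        self.history = []          # list of (timestamp, score in [0, 1])
        self.last_reported = None

    def update(self, score, timestamp=None):
        timestamp = timestamp if timestamp is not None else time.time()
        self.history.append((timestamp, score))
        if (self.last_reported is None
                or abs(score - self.last_reported) >= self.change_threshold):
            self.last_reported = score
            # Caller would capture a clip around `timestamp` and attach `score`.
            return {"timestamp": timestamp, "interest": score, "changed": True}
        return None
```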
Video facial analysis may be performed locally or remotely. Remote video analysis may be accomplished, for example, by sending the video over a network connection to a remote server.
The information inferred from the facial expression analysis may then be communicated to friends using a social networking tool, as indicated in block 42. In some embodiments, the distribution of social networking messages can be screened or filtered so that it reaches only those users who are friends, friends who like the same television program, friends who are actually online, friends who are actually watching television, or some combination of these categories, as indicated in block 42. For example, if friends like the same television program, they can be connected.
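The screening of message distribution in block 42 amounts to a predicate over friend attributes. The sketch below is a hedged illustration of that filtering; the dictionary keys used for friend attributes are assumptions introduced for the example.

```python
def eligible_recipients(friends, program, require_online=False,
                        require_watching=False, require_same_program=False):
    """Filter a friend list down to those who should receive the posting.
    `friends` is assumed to be a list of dicts with the illustrative keys
    'is_friend', 'online', 'watching_tv', and 'favorite_programs'."""
    selected = []
    for f in friends:
        if not f.get("is_friend", False):
            continue
        if require_online and not f.get("online", False):
            continue
        if require_watching and not f.get("watching_tv", False):
            continue
        if require_same_program and program not in f.get("favorite_programs", ()):
            continue
        selected.append(f)
    return selected
```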
The social networking tool interaction provides a vehicle for information about the user that can promote engagement with new friends and create a resource for interaction with existing friends. In addition, the information can be used for demographic data collection by content providers and advertisers. In particular, a content provider or advertiser can obtain very detailed information about what users like at specific times during a given program or advertisement.
In one exemplary embodiment, six main steps may be used for facial attribute detection. First, face detection may be run to locate the face rectangle region in a given digital image or video frame. Then, a facial landmark detector may be run to find six point landmarks, such as eye corners and mouth corners, in each detected face rectangle. Next, the face rectangle image may be aligned according to the facial landmark points and normalized to a predefined standard size, such as 64x64 (that is, 64 pixels wide by 64 pixels high). Then, local features may be extracted from pre-selected local regions of the normalized face image, including local binary patterns, histograms, or histograms of oriented gradients. Each local region is then fed to a weak classifier based on a multilayer perceptron for prediction. The outputs of the weak classifiers from each local region are aggregated as the final detection score. The score may lie in the range 0-1; the larger the score, the higher the confidence of the facial attribute detection. Face detection may conform to the standard Viola-Jones boosting cascade framework. A Viola-Jones detector can be found in the public OpenCV software package. The facial landmarks include six facial points, comprising the eye corners of the right and left eyes and the mouth corners. The eye corners and mouth corners may also be detected using Viola-Jones-based classifiers. In addition, geometric constraints may be imposed on the six facial points to reflect their geometric relationships.
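The first steps of this pipeline, locating the face rectangle and then landmark regions, can be approximated with the Viola-Jones cascade classifiers distributed with OpenCV, as the text notes. The sketch below detects a face rectangle and eye regions only; full six-point landmark detection and the geometric constraints would require additional models, and the cascade file names are simply those shipped with the OpenCV package.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(frame):
    """Return (face_rect, eye_rects) for the first face found, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi)   # rects relative to the face ROI
    return (x, y, w, h), [tuple(e) for e in eyes]
```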
All detected faces may be converted to grayscale, aligned, and normalized to a predefined standard size, such as 64x64. Alignment may be carried out by first computing the rotation angle between the line joining the eye corners and the horizontal. The image is then rotated by that angle so that the eye-corner line is parallel to the horizontal. Next, the distance w between the two eye centers is computed, as is the eye-to-mouth distance h. A 2w x 2h rectangle is then cropped from the facial region so that the left eye center is at (0.5w, 0.5h), the right eye center at (1.5w, 0.5h), and the mouth center at (w, 1.5h). The cropped rectangle is finally scaled to the standard size. To mitigate illumination differences between images, the scaled image may be histogram equalized.
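The alignment just described can be written down directly from the stated geometry: rotate so the eye-corner line is horizontal, crop a 2w x 2h rectangle that places the eyes at (0.5w, 0.5h) and (1.5w, 0.5h) and the mouth at (w, 1.5h), scale to the standard size, and histogram-equalize. The sketch below is a minimal version of that procedure and assumes the eye and mouth centers have already been located.

```python
import math
import cv2
import numpy as np

def align_face(gray, left_eye, right_eye, mouth, size=64):
    """Align and normalize a grayscale face given (x, y) landmark centers."""
    # Rotate the whole image so the eye line becomes horizontal.
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))
    center = ((left_eye[0] + right_eye[0]) / 2.0, (left_eye[1] + right_eye[1]) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(gray, rot, (gray.shape[1], gray.shape[0]))

    # Landmark positions after rotation.
    def transform(p):
        return rot @ np.array([p[0], p[1], 1.0])
    le, re, mo = transform(left_eye), transform(right_eye), transform(mouth)

    w = float(np.linalg.norm(re - le))             # inter-eye distance
    eye_mid = (le + re) / 2.0
    h = float(abs(mo[1] - eye_mid[1]))             # eye-to-mouth distance

    # Crop 2w x 2h so the eyes sit at (0.5w, 0.5h)/(1.5w, 0.5h), mouth at (w, 1.5h).
    x0, y0 = int(round(le[0] - 0.5 * w)), int(round(le[1] - 0.5 * h))
    crop = rotated[max(y0, 0):y0 + int(round(2 * h)),
                   max(x0, 0):x0 + int(round(2 * w))]

    face = cv2.resize(crop, (size, size))
    return cv2.equalizeHist(face)                  # reduce illumination differences
```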
Local features may be extracted over local regions of the aligned and normalized face. These local features may be local binary pattern histograms or histograms of oriented gradients. The extracted local features may differ for different facial attributes. For example, in smile detection local binary patterns work better than other techniques, while in gender/age detection histograms of oriented gradients work slightly better.
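The 59-dimensional local binary pattern histogram used later as the MLP input corresponds to the standard "uniform" LBP encoding with 8 neighbors: each of the 58 uniform patterns gets its own histogram bin and all non-uniform patterns share one catch-all bin. A hedged numpy sketch of that feature extraction:

```python
import numpy as np

def _uniform_lbp_table():
    """Map each 8-bit LBP code to one of 59 bins (58 uniform + 1 catch-all)."""
    table, next_bin = {}, 0
    for code in range(256):
        bits = [(code >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        table[code] = next_bin if transitions <= 2 else 58
        if transitions <= 2:
            next_bin += 1
    return np.array([table[c] for c in range(256)])

_LBP_BINS = _uniform_lbp_table()

def lbp_histogram(region):
    """59-bin uniform LBP histogram of a grayscale local region (2-D array)."""
    r = region.astype(np.int32)
    center = r[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # 8 neighbors, starting at the top-left and going clockwise.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = r[1 + dy:r.shape[0] - 1 + dy, 1 + dx:r.shape[1] - 1 + dx]
        codes |= ((neighbor >= center).astype(np.int32) << bit)
    hist = np.bincount(_LBP_BINS[codes].ravel(), minlength=59)
    return hist / max(hist.sum(), 1)               # normalized histogram
```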
A local region is defined by a four-tuple (x, y, w, h), where (x, y) is the upper-left corner point and (w, h) are the width and height of the local region's rectangle. Boosting algorithms may be used to select, from a training data set, the regions that are discriminative for facial attribute detection.
For each selected local region, a classifier may be trained to perform weak classification. The base classifier may be a multilayer perceptron rather than a support vector machine. The multilayer perceptron (MLP) is advantageous in some embodiments because it can provide performance similar to prior-art algorithms based on support vector machines. Moreover, because the MLP stores only the network weights as its model, whereas a support vector machine (SVM) stores sparse training samples, the model size of the MLP is much smaller than that of the SVM. Prediction with the MLP is also relatively fast because it involves only vector product operations, and the MLP directly gives a probability-like score output (which can be used as a prediction confidence).
The MLP may include an input layer, an output layer, and one hidden layer. Suppose there are d nodes in the input layer (where d is the size of the local feature, 59 for a local binary pattern histogram) and 2 nodes in the output layer for smile detection, the 2 nodes indicating the predicted probabilities of smiling and not smiling; the number of nodes in the hidden layer is a tuning parameter determined by the training procedure.
All nodes (known as neurons) in the MLP may be similar. Each takes the output values of several nodes in the previous layer as input and passes its response to the neurons in the next layer. The values retrieved from the previous layer are summed using the trained weights of each node, plus a bias term, and the sum is transformed by an activation function f.
The activation function f is typically a sigmoid-type function, such as f(x) = e^(-xa)/(1 + e^(-xa)). The output of the function lies in the range 0 to 1. At each node, the computation is a vector product between the input vector from the previous layer and the weight factors: y = f(w·x), where w is the weight vector and x is the input vector. The computation can therefore easily be accelerated by single-instruction, multiple-data (SIMD) instructions or other accelerators.
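A hedged numpy sketch of the forward pass just described, with one hidden layer and the activation f(x) = e^(-xa)/(1 + e^(-xa)) given above; the layer sizes follow the smile-detection example (59 inputs, 2 outputs), while the hidden width, the slope a, and the placeholder weights are assumptions to be replaced by training.

```python
import numpy as np

def f(x, a=1.0):
    """Activation from the text: f(x) = e^(-xa) / (1 + e^(-xa)), output in (0, 1)."""
    return np.exp(-x * a) / (1.0 + np.exp(-x * a))

class SmileMLP:
    """One-hidden-layer perceptron: 59 LBP-histogram inputs -> 2 outputs
    (probability-like scores for 'smiling' and 'not smiling'). The weights
    here are random placeholders; in practice they come from training."""

    def __init__(self, d_in=59, d_hidden=16, d_out=2, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(d_hidden, d_in))
        self.b1 = np.zeros(d_hidden)
        self.w2 = rng.normal(scale=0.1, size=(d_out, d_hidden))
        self.b2 = np.zeros(d_out)

    def predict(self, x):
        """x: 59-dimensional local feature vector. Returns the two output scores."""
        hidden = f(self.w1 @ x + self.b1)   # weighted sum + bias, then activation
        return f(self.w2 @ hidden + self.b2)
```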
The MLP is used as the weak classifier for each local region. Each selected region is associated with one MLP classifier. The final classification is based on the following simple aggregation rule. For a given test sample x, for each selected local region k, the local feature x_k at that region is extracted. The weak MLP classifier C_k(x_k) is then used to make a prediction. The final output is the aggregation of the weak classifier results C_k(x_k) over all selected regions.
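The aggregation can be illustrated as below; the text does not spell out the exact rule, so the sketch assumes the final detection score is simply the mean of the weak classifiers' positive-class outputs over the selected regions.

```python
def aggregate_score(face_image, regions, classifiers, extract_feature):
    """Aggregate weak-classifier outputs over selected local regions.

    regions:         list of (x, y, w, h) rectangles selected by boosting
    classifiers:     one trained weak classifier per region, with .predict()
    extract_feature: function mapping a local region image to its feature x_k
    Assumes the mean of the positive-class scores as the aggregation rule.
    """
    scores = []
    for (x, y, w, h), clf in zip(regions, classifiers):
        patch = face_image[y:y + h, x:x + w]
        x_k = extract_feature(patch)          # e.g. the 59-bin LBP histogram
        scores.append(clf.predict(x_k)[0])    # score for the positive class
    return sum(scores) / len(scores)          # final detection score in [0, 1]
```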
Referring next to FIG. 3, the camera feed is received in block 32. In block 52, face detection combined with recognition may be used to identify the list of persons present. That is, the camera 16 may be used to record everyone who is watching the content (such as a television program). Video content analysis may then be used to identify the viewers who are actually watching, as depicted in the video stream. Again, in one embodiment, faces may be recorded with identifiers during a setup phase.
Video expression analysis may then be used to determine which of the users watching the program actually like the given program at a given moment in time, as indicated in block 54. Over time, a favorite-program list may be developed for each video-identified viewer, as indicated in block 56. Then, in block 58, program recommendations based on the computer-detected facial expressions of the users may be pushed to friends through social networking tools (including, for example, a website, tweet, text message, or e-mail).
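A hedged sketch of the bookkeeping in blocks 54-58: accumulate, for each identified viewer, which programs they appeared to like, and hand recommendations to whatever social networking channel the caller supplies. The data structures, the "like" threshold, and the function names are illustrative assumptions.

```python
from collections import defaultdict

class FavoriteProgramLists:
    """Build per-viewer favorite-program lists from expression analysis results."""

    def __init__(self, like_threshold=0.6):
        self.like_threshold = like_threshold
        # viewer id -> program title -> list of like scores observed over time
        self.scores = defaultdict(lambda: defaultdict(list))

    def record(self, viewer_id, program, like_score):
        """Call once per analysis interval for each identified viewer."""
        self.scores[viewer_id][program].append(like_score)

    def favorites(self, viewer_id, top_n=10):
        """Programs whose average like score exceeds the threshold."""
        averages = {p: sum(s) / len(s) for p, s in self.scores[viewer_id].items()}
        liked = [(p, a) for p, a in averages.items() if a >= self.like_threshold]
        return [p for p, _ in sorted(liked, key=lambda pa: pa[1], reverse=True)][:top_n]

def push_recommendations(lists, viewer_id, friends, send):
    """Send this viewer's favorites to friends via a caller-supplied channel
    (e-mail, tweet, text message, ...), mirroring block 58."""
    for friend in friends:
        send(friend, lists.favorites(viewer_id))
```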
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase "one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in other suitable forms than the particular embodiment illustrated, and all such forms may be encompassed within the claims of the present application.
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.

Claims (21)

1. An entertainment-related system to be used in association with a television, a camera device, and an Internet-accessible source of streaming video, the streaming video being related at least in part to an interactive game playable via the Internet, the streaming video to be displayed via the television, the entertainment-related system comprising:
a processor;
a storage device to store program instructions to be executed at least in part by the processor, the program instructions, when executed at least in part by the processor, enabling the entertainment-related system to perform operations comprising:
performing facial recognition related processing based at least in part upon: (1) facial image information associated at least in part with a facial image of a user present at the television captured by the camera device; and (2) previously stored facial image data, the facial recognition related processing to be used at least in part in association with a user identity determination and with a user login;
performing facial expression analysis based at least in part upon the facial image information, the facial expression analysis to be used at least in part to determine at least one facial expression of the user present at the television;
capturing video clip data comprising at least a portion of the streaming video displayed via the television for social network distribution; and
providing current viewing-related activity information of the user present at the television for the social network distribution;
wherein:
the entertainment-related system is capable of being associated with multiple possible users;
the multiple possible users are capable of being associated with previously stored corresponding facial data; and
the user identity determination is to identify, based at least in part upon the facial image information and the previously stored corresponding facial data, which of the multiple possible users is the user present at the television.
2. The entertainment-related system of claim 1, wherein:
during setup of the entertainment-related system, the corresponding facial data is to be stored in the storage device;
the camera device is separate from the television and is to be coupled to an interface of the entertainment-related system; and/or
the entertainment-related system is also to permit the user to log in based upon a password.
3. The entertainment-related system of claim 1, wherein:
the entertainment-related system is capable of providing, via the social network distribution, user like information, user dislike information, user interest information, and/or user disinterest information; and/or
the entertainment-related system further comprises the camera device.
4. The entertainment-related system of claim 1, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by the entertainment-related system.
5. The entertainment-related system of claim 1, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by a remote server.
6. A method implemented at least in part using an entertainment-related system, the entertainment-related system to be used in association with a television, a camera device, and an Internet-accessible source of streaming video, the streaming video being related at least in part to an interactive game playable via the Internet, the streaming video to be displayed via the television, the method comprising:
performing facial recognition related processing based at least in part upon: (1) facial image information associated at least in part with a facial image of a user present at the television captured by the camera device; and (2) previously stored facial image data, the facial recognition related processing to be used at least in part in association with a user identity determination and with a user login;
performing facial expression analysis based at least in part upon the facial image information, the facial expression analysis to be used at least in part to determine at least one facial expression of the user present at the television;
capturing video clip data comprising at least a portion of the streaming video displayed via the television for social network distribution; and
providing current viewing-related activity information of the user present at the television for the social network distribution;
wherein:
the entertainment-related system is capable of being associated with multiple possible users;
the multiple possible users are capable of being associated with previously stored corresponding facial data; and
the user identity determination is to identify, based at least in part upon the facial image information and the previously stored corresponding facial data, which of the multiple possible users is the user present at the television.
7. The method of claim 6, wherein:
during setup of the entertainment-related system, the corresponding facial data is to be stored in a storage device of the entertainment-related system;
the camera device is separate from the television and is to be coupled to an interface of the entertainment-related system; and/or
the entertainment-related system is also to permit the user to log in based upon a password.
8. The method of claim 6, wherein:
the entertainment-related system is capable of providing, via the social network distribution, user like information, user dislike information, user interest information, and/or user disinterest information; and/or
the entertainment-related system further comprises the camera device.
9. The method of claim 6, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by the entertainment-related system.
10. The method of claim 6, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by a remote server.
11. An entertainment-related system to be used in association with a television, a camera device, and an Internet-accessible source of streaming video, the streaming video being related at least in part to an interactive game playable via the Internet, the streaming video to be displayed via the television, the entertainment-related system comprising:
means for performing facial recognition related processing based at least in part upon: (1) facial image information associated at least in part with a facial image of a user present at the television captured by the camera device; and (2) previously stored facial image data, the facial recognition related processing to be used at least in part in association with a user identity determination and with a user login;
means for performing facial expression analysis based at least in part upon the facial image information, the facial expression analysis to be used at least in part to determine at least one facial expression of the user present at the television;
means for capturing video clip data comprising at least a portion of the streaming video displayed via the television for social network distribution; and
means for providing current viewing-related activity information of the user present at the television for the social network distribution;
wherein:
the entertainment-related system is capable of being associated with multiple possible users;
the multiple possible users are capable of being associated with previously stored corresponding facial data; and
the user identity determination is to identify, based at least in part upon the facial image information and the previously stored corresponding facial data, which of the multiple possible users is the user present at the television.
12. The entertainment-related system of claim 11, wherein:
during setup of the entertainment-related system, the corresponding facial data is to be stored in a storage device of the entertainment-related system;
the camera device is separate from the television and is to be coupled to an interface of the entertainment-related system; and/or
the entertainment-related system is also to permit the user to log in based upon a password.
13. The entertainment-related system of claim 11, wherein:
the entertainment-related system is capable of providing, via the social network distribution, user like information, user dislike information, user interest information, and/or user disinterest information; and/or
the entertainment-related system further comprises the camera device.
14. The entertainment-related system of claim 11, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by the entertainment-related system.
15. The entertainment-related system of claim 11, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by a remote server.
16. An entertainment-related system to be used in association with a television, a camera device, and an Internet-accessible source of streaming video, the streaming video being related to an interactive game playable via the Internet, the streaming video to be displayed via the television, the entertainment-related system comprising:
a processor;
a storage device to store program instructions to be executed at least in part by the processor, the program instructions, when executed at least in part by the processor, enabling the entertainment-related system to perform operations comprising:
performing facial recognition related processing based upon: (1) facial image information associated with a facial image of a user present at the television captured by the camera device; and (2) previously stored facial image data, the facial recognition related processing to be used in association with a user identity determination and with a user login;
performing facial expression analysis based upon the facial image information, the facial expression analysis to be used to determine at least one facial expression of the user present at the television;
capturing video clip data comprising at least a portion of the streaming video displayed via the television for social network distribution; and
providing current viewing-related activity information of the user present at the television for the social network distribution;
wherein:
the entertainment-related system is capable of being associated with multiple possible users;
the multiple possible users are capable of being associated with previously stored corresponding facial data; and
the user identity determination is to identify, based upon the facial image information and the previously stored corresponding facial data, which of the multiple possible users is the user present at the television.
17. The entertainment-related system of claim 16, wherein:
during setup of the entertainment-related system, the corresponding facial data is to be stored in the storage device;
the camera device is separate from the television and is to be coupled to an interface of the entertainment-related system; and/or
the entertainment-related system is also to permit the user to log in based upon a password.
18. The entertainment-related system of claim 16, wherein:
the entertainment-related system is capable of providing, via the social network distribution, user like information, user dislike information, user interest information, and/or user disinterest information; and/or
the entertainment-related system further comprises the camera device.
19. The entertainment-related system of claim 16, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by the entertainment-related system.
20. The entertainment-related system of claim 16, wherein:
the facial recognition related processing and the facial expression analysis are to be performed at least in part by a remote server.
21. At least one computer-readable medium storing instructions that, when executed by an entertainment-related system, enable the entertainment-related system to perform the method of any one of claims 6 to 10.
CN201710769645.5A 2011-09-12 2012-06-15 Facilitating television based interaction with social networking tools Pending CN108574875A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/CN2011/001544 WO2013037078A1 (en) 2011-09-12 2011-09-12 Facilitating television based interaction with social networking tools
CNPCT/CN2011/001544 2011-09-12
CN201280047610.6A CN103842992A (en) 2011-09-12 2012-06-15 Facilitating television based interaction with social networking tools

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201280047610.6A Division CN103842992A (en) 2011-09-12 2012-06-15 Facilitating television based interaction with social networking tools

Publications (1)

Publication Number Publication Date
CN108574875A true CN108574875A (en) 2018-09-25

Family

ID=50804808

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201280047610.6A Pending CN103842992A (en) 2011-09-12 2012-06-15 Facilitating television based interaction with social networking tools
CN201710769645.5A Pending CN108574875A (en) 2011-09-12 2012-06-15 Facilitating television based interaction with social networking tools

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201280047610.6A Pending CN103842992A (en) 2011-09-12 2012-06-15 Facilitating television based interaction with social networking tools

Country Status (1)

Country Link
CN (2) CN103842992A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3285222A1 (en) 2011-09-12 2018-02-21 INTEL Corporation Facilitating television based interaction with social networking tools
CN111601168B (en) * 2020-05-21 2021-07-16 广州欢网科技有限责任公司 Television program market performance analysis method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030066071A1 (en) * 2001-10-03 2003-04-03 Koninklijke Philips Electronics N.V. Program recommendation method and system utilizing a viewing history of commercials
CN1725822A (en) * 2004-07-22 2006-01-25 上海乐金广电电子有限公司 Device for set-up user liked program and its using method
CN201226568Y (en) * 2008-03-25 2009-04-22 赵力 Intelligent television terminal with monitoring and information retrieval
US20090326970A1 (en) * 2008-06-30 2009-12-31 Microsoft Corporation Awarding users for discoveries of content based on future popularity in a social network
US20100057546A1 (en) * 2008-08-30 2010-03-04 Yahoo! Inc. System and method for online advertising using user social information

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020072952A1 (en) * 2000-12-07 2002-06-13 International Business Machines Corporation Visual and audible consumer reaction collection
US20030118974A1 (en) * 2001-12-21 2003-06-26 Pere Obrador Video indexing based on viewers' behavior and emotion feedback
US20070168543A1 (en) * 2004-06-07 2007-07-19 Jason Krikorian Capturing and Sharing Media Content
US20070092220A1 (en) * 2005-10-20 2007-04-26 Funai Electric Co., Ltd. System for reproducing video
US20080004951A1 (en) * 2006-06-29 2008-01-03 Microsoft Corporation Web-based targeted advertising in a brick-and-mortar retail establishment using online customer information
CN101286201A (en) * 2008-05-30 2008-10-15 北京中星微电子有限公司 Information automatic asking control method and device
WO2011041088A1 (en) * 2009-09-29 2011-04-07 General Instrument Corporation Digital rights management protection for content identified using a social tv service
CN101958892A (en) * 2010-09-16 2011-01-26 汉王科技股份有限公司 Electronic data protection method, device and system based on face recognition
CN102098567A (en) * 2010-11-30 2011-06-15 深圳创维-Rgb电子有限公司 Interactive television system and control method thereof

Also Published As

Publication number Publication date
CN103842992A (en) 2014-06-04

Similar Documents

Publication Publication Date Title
US10939165B2 (en) Facilitating television based interaction with social networking tools
US11936720B2 (en) Sharing digital media assets for presentation within an online social network
US8804999B2 (en) Video recommendation system and method thereof
US9116924B2 (en) System and method for image selection using multivariate time series analysis
US20190289359A1 (en) Intelligent video interaction method
JP5934653B2 (en) Image classification device, image classification method, program, recording medium, integrated circuit, model creation device
CN104813674B (en) System and method for optimizing video
US8706675B1 (en) Video content claiming classifier
US20150293928A1 (en) Systems and Methods for Generating Personalized Video Playlists
US11729478B2 (en) System and method for algorithmic editing of video content
Kannan et al. What do you wish to see? A summarization system for movies based on user preferences
CA3021193A1 (en) System, method, and device for analyzing media asset data
CN110879944A (en) Anchor recommendation method, storage medium, equipment and system based on face similarity
WO2021212089A1 (en) Systems and methods for processing and presenting media data to allow virtual engagement in events
US8270731B2 (en) Image classification using range information
Husa et al. HOST-ATS: automatic thumbnail selection with dashboard-controlled ML pipeline and dynamic user survey
CN108574875A Facilitating television based interaction with social networking tools
CN116261009A (en) Video detection method, device, equipment and medium for intelligently converting video audience
US20210074044A1 (en) Method, server, and recording medium for creating composite image
US20190095468A1 (en) Method and system for identifying an individual in a digital image displayed on a screen
KR20230010928A (en) method of providing suggestion of video contents by use of viewer identification by face recognition and candidate extraction by genetic algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180925