CN104969563A - Smart device having user interface based on human emotions or inclinations, and user interface method - Google Patents
- Publication number
- CN104969563A CN104969563A CN201480006469.4A CN201480006469A CN104969563A CN 104969563 A CN104969563 A CN 104969563A CN 201480006469 A CN201480006469 A CN 201480006469A CN 104969563 A CN104969563 A CN 104969563A
- Authority
- CN
- China
- Prior art keywords
- user
- inclination
- smart device
- emotion
- user interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42201—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4668—Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Social Psychology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Neurosurgery (AREA)
- Computer Networks & Wireless Communication (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a smart device having a user interface based on human emotions or inclinations, and to a user interface method. In light of the user's emotion or inclination, the device provides content to the user, changes channels, runs application programs, and presents advertisements.
Description
Technical field
The present invention relates to a user interface device and a user interface method, and more specifically, to a smart device having a user interface based on human emotions or inclinations, and to a user interface method.
Background art
Since it first came out in the late nineteenth century, television has developed steadily in picture display technology, design, and other respects, and firmly established itself as the most popular information delivery medium of the twentieth century. However, conventional TV has the shortcoming that viewers can only passively receive information transmitted one-way from the broadcaster.
To solve this one-way communication problem, the smart TV came out. A smart TV is a multi-functional TV that combines Internet connectivity with a television, allows various application programs (applications) to be installed, and offers functions such as web surfing, video on demand, social networking services (hereinafter SNS), and games.
The most distinctive feature of a smart TV is that the user and the TV can exchange information in both directions. This is the biggest difference from the conventional TVs described above, which transmit information only one way.
As smart TVs have risen, technologies for two-way information exchange between the user and the TV have been developed; a representative example is voice recognition. That is, a user who wants to control the TV to change the channel currently uses a remote control or the like, but user-friendly interfaces that also include voice are gradually being provided.
However, there is the following problem: technologies such as voice recognition alone cannot provide the user with the content or the like required by the user's inclination or emotion.
Summary of the invention
Technical problem to be solved by the invention
The present invention was developed to solve the problems of the prior art described above. An object of the present invention is to provide a smart device and a user interface method that respond to the user's inclination or the current emotional state of the user using the smart device, and provide suitable content and the like to the user.
Technical solution to the problem
The present invention obtains facial image information of the user through a camera or the like, generates information such as the user's emotion or inclination from the obtained image, and outputs the required picture or audio according to the generated information.
Effects of the invention
The present invention has the effect of being able to provide the user with content and the like suited to the user's inclination or the user's current emotional state.
Brief description of the drawings
Fig. 1 is a block diagram showing an embodiment of a smart device according to the present invention capable of recognizing the user's emotion/inclination;
Fig. 2 is a flowchart showing an embodiment of an interface method based on the user's emotion using the smart device of the present invention;
Fig. 3 is a flowchart showing an embodiment of an interface method based on the user's inclination using the smart device of the present invention.
Preferred embodiment
The smart device of the present invention comprises: a camera unit that obtains facial image information of the user; a recognition unit that uses the image obtained by the camera unit to generate information on at least one of the user's emotion or inclination; a control unit that controls the camera unit and the recognition unit and, based on the information transmitted from the recognition unit, selects which picture or audio to output; and an output unit that outputs the picture or audio according to the selection of the control unit.
The user interface method of the smart device of the present invention comprises the steps of: recognizing at least one of the user's emotional state or the user's inclination using the facial image information obtained by a camera unit located in the smart device; and performing, according to the result of the recognition, any one of providing content, changing the channel, running an application program, or presenting an advertisement.
Embodiment
Fig. 1 is a block diagram showing an embodiment of a smart device according to the present invention capable of recognizing the user's emotion/inclination.
Referring to Fig. 1, a smart device (10) capable of recognizing the user's emotion/inclination comprises a camera unit (11), a recognition unit (12), a control unit (13), and an output unit (14).
In the present invention, the smart device (10) is an electronic device that plays content or runs application programs under the user's control, and includes smart TVs, smartphones, smart advertising display devices, notebook computers, and robots capable of play/education.
The camera unit (11) formed in the smart device (10) of the present invention obtains the facial image information of the current user in real time. Preferably, the camera unit (11) is installed on the front of the smart device (10) so that it can easily obtain the facial image information of the user using the smart device (10).
The facial image information of the user obtained by the camera unit (11) is transmitted to the recognition unit (12). The recognition unit (12) analyzes the facial image of the user obtained by the camera unit (11) and derives the user's current emotion or inclination. For example, human emotions are generally classified into five kinds: expressionless, happy, sad, angry, and surprised. Further, human inclinations are classified into introverted people, extroverted people, and people with a mixture of the two inclinations.
The recognition unit (12) analyzes the facial image information of the user transmitted from the camera unit (11) and judges which emotional state the user is currently in. That is, from the user's expression transmitted by the camera unit (11), the recognition unit (12) judges whether the user is currently in an expressionless, happy, sad, angry, or surprised state.
In addition, from the changes in the user's face across the expressionless, happy, sad, angry, and surprised states, the recognition unit (12) judges whether the user is an introverted person, an extroverted person, or a person with a mixture of the two inclinations.
As described above, the user's emotion/inclination information judged in real time by the recognition unit (12) is transmitted to the control unit (13). Based on the information transmitted from the recognition unit, the control unit (13) selects which content, application program (application software), or advertisement to present to the user.
For example, when the user's current state is recognized as happy, the control unit (13) controls the output unit (14) to output content corresponding to a comedy program or an entertainment work. When the user's current state is recognized as angry or surprised, the control unit (13) controls the output unit (14) to output mood-stabilizing music, a TV drama, an advertisement, or the like. Further, when the user's current state is recognized as sad, it controls the output unit (14) to output a moving film, a game application, an advertisement, or the like.
Further, when the user is recognized as an introverted person, the control unit (13) controls the output unit (14) to output content such as moving films or music and/or advertisements suited to the user's inclination (for example, advertisements for classical performances). When the user is recognized as an extroverted person, the control unit (13) controls the output unit (14) to output content such as sports and/or advertisements suited to the user's inclination (for example, sports advertisements).
The output unit (14) outputs the picture and/or audio under the control of the control unit (13).
Fig. 2 is a flowchart showing an embodiment of an interface method based on the user's emotion using the smart device of the present invention.
When a user appears in front of the smart device (10) in order to use it, the smart device recognizes the user's appearance (S21) and displays a question asking whether to turn on the smart device (10) (S22), for example, "Shall I turn on the TV?". When the user, in response, orders the smart device (10) to turn on using voice, a remote control, or the like, the smart device (10) starts running (S23).
When the smart device (10) is running, the recognition unit (12) analyzes the facial image information of the user transmitted from the camera unit (11) and judges the user's current emotional state (S24). According to the user's emotional state, the smart device (10) provides the user with a list of content judged likely to help, or provides a channel list or application list (S25). At this time, together with the list, advertisements for services or goods judged to help the user's emotional state may also be provided.
Alternatively, instead of providing a list to the user, the smart device (10) may directly select the required content, channel, or application program and output it to the user.
When the user selects any one of the content/channels/application programs recommended by the smart device (10) (S26), the smart device (10) plays the recommended content, sets the channel, or runs the application program (S27).
Fig. 3 is a flowchart showing an embodiment of an interface method based on the user's inclination using the smart device of the present invention.
When the smart device (10) is running and presenting content to the user (S31), the smart device (10) obtains information on the changes in the user's expression in response to the presented content (S32).
Based on the information on the user's expression changes, the smart device (10) judges the user's inclination (S33). For example, the user's inclination may be judged to be extroverted, introverted, or a mixture of the two.
The judged user information is then stored in the memory unit of the smart device (10) (S34). Using the stored inclination information of the user and/or the user's current emotional state information obtained in real time, the smart device (10) recommends content/channels/application programs suited to the user (S35). At this time, advertisements for corresponding services or goods may be provided based on the user's inclination and/or emotional state information.
When the user chooses any one of the recommended content/channels/application programs (S36), the chosen content is output, the channel is set, or the application program is run (S37).
Industrial applicability
The present invention can be used in the interfaces of electronic devices.
Claims (5)
1. A smart device having a user interface based on human emotions or inclinations, characterized in that it comprises:
a camera unit that obtains facial image information of a user;
a recognition unit that uses the image obtained by the camera unit to generate information on at least one of the user's emotion or inclination;
a control unit that controls the camera unit and the recognition unit and, based on the information transmitted from the recognition unit, selects which picture or audio to output; and
an output unit that outputs the picture or audio according to the selection of the control unit.
2. The smart device having a user interface based on human emotions or inclinations according to claim 1, characterized in that
the picture or audio is output by executing any one of the content, channels, or application programs that can be provided by the smart device.
3. The smart device having a user interface based on human emotions or inclinations according to claim 2, characterized in that
the information on the user's emotion is any one of expressionless, happy, sad, angry, or surprised.
4. The smart device having a user interface based on human emotions or inclinations according to claim 2, characterized in that
the information on the user's inclination is any one of an extroverted inclination, an introverted inclination, or a mixed inclination.
5. A user interface method of a smart device, characterized by comprising the steps of:
recognizing at least one of the user's emotional state or the user's inclination using facial image information obtained by a camera unit located in the smart device; and
performing, according to the result of the recognition, any one of providing content, changing a channel, running an application program, or presenting an advertisement.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20130009568 | 2013-01-29 | ||
KR10-2013-0009568 | 2013-01-29 | ||
KR1020130012809A KR20140096935A (en) | 2013-01-29 | 2013-02-05 | Smart device having a user interface based on human emotion or tendency and user interface method |
KR10-2013-0012809 | 2013-02-05 | ||
PCT/KR2014/000792 WO2014119900A1 (en) | 2013-01-29 | 2014-01-28 | Smart device having user interface based on human emotions or inclinations, and user interface method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104969563A | 2015-10-07 |
Family
ID=51744680
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201480006469.4A Pending CN104969563A (en) | 2013-01-29 | 2014-01-28 | Smart device having user interface based on human emotions or inclinations, and user interface method |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20140096935A (en) |
CN (1) | CN104969563A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274211A (en) * | 2017-05-25 | 2017-10-20 | 深圳天瞳科技有限公司 | A kind of advertisement play back device and method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1395798A (en) * | 2000-11-22 | 2003-02-05 | 皇家菲利浦电子有限公司 | Method and apparatus for generating recommendations based on current mood of user |
KR20050025552A (en) * | 2004-01-17 | 2005-03-14 | 주식회사 헬스피아 | Digital cellular phone |
US20110134026A1 (en) * | 2009-12-04 | 2011-06-09 | Lg Electronics Inc. | Image display apparatus and method for operating the same |
CN102129644A (en) * | 2011-03-08 | 2011-07-20 | 北京理工大学 | Intelligent advertising system having functions of audience characteristic perception and counting |
CN102427553A (en) * | 2011-09-23 | 2012-04-25 | Tcl集团股份有限公司 | Method and system for playing television programs, television set and server |
CN102654913A (en) * | 2011-03-04 | 2012-09-05 | 谢韬 | Method for selectively delivering advertisement |
CN102799265A (en) * | 2012-06-26 | 2012-11-28 | 宇龙计算机通信科技(深圳)有限公司 | Advertisement playing method, intelligent advertisement terminal, server and system |
- 2013-02-05: KR KR1020130012809A patent/KR20140096935A/en, active, Application Filing
- 2014-01-28: CN CN201480006469.4A patent/CN104969563A/en, active, Pending
Also Published As
Publication number | Publication date |
---|---|
KR20140096935A (en) | 2014-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI656523B (en) | Voice control device, system and control method | |
US10158829B2 (en) | Information processing apparatus, information processing method, program, and server | |
CN115145529B (en) | Voice control device method and electronic device | |
US9898850B2 (en) | Support and complement device, support and complement method, and recording medium for specifying character motion or animation | |
KR102071579B1 (en) | Method for providing services using screen mirroring and apparatus thereof | |
CN106804076B (en) | A kind of lighting system of smart home | |
KR102546599B1 (en) | Display apparatus, server and the control method thereof | |
CN111372109B (en) | Intelligent television and information interaction method | |
CN113591523B (en) | Display device and experience value updating method | |
US11120290B2 (en) | Display apparatus and operating method of the same | |
CN113950687A (en) | Media presentation device control based on trained network model | |
CN110213504B (en) | Video processing method, information sending method and related equipment | |
US9108110B2 (en) | Information processing apparatus, information processing method, and program to allow conversation between a plurality of appliances and a user | |
KR20160130288A (en) | A lock screen method and mobile terminal | |
CN109614470A (en) | Answer processing method, device, terminal and the readable storage medium storing program for executing of information | |
KR102011868B1 (en) | Apparatus for providing singing service | |
CN112839254A (en) | Display apparatus and content display method | |
US20150135070A1 (en) | Display apparatus, server apparatus and user interface screen providing method thereof | |
CN104969563A (en) | Smart device having user interface based on human emotions or inclinations, and user interface method | |
WO2014119900A1 (en) | Smart device having user interface based on human emotions or inclinations, and user interface method | |
CN110178376A (en) | Display device | |
CN101959034A (en) | Method for switching theme mode in digital television and digital television system | |
CN101562711A (en) | Method for realizing interesting human-machine interaction function of digital TV set | |
CN117631909A (en) | Service recommendation method and electronic equipment | |
CN113497884B (en) | Dual-system camera switching control method and display equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C06 | Publication | |
 | PB01 | Publication | |
 | C10 | Entry into substantive examination | |
 | SE01 | Entry into force of request for substantive examination | |
 | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20151007 |