CN109669535A - Audio controlling method and system - Google Patents

Audio controlling method and system

Info

Publication number
CN109669535A
CN109669535A
Authority
CN
China
Prior art keywords
user
emotional
information
emotional status
improving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811399930.3A
Other languages
Chinese (zh)
Inventor
桂小乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Priority to CN201811399930.3A priority Critical patent/CN109669535A/en
Publication of CN109669535A publication Critical patent/CN109669535A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose an audio control method and system. The control method includes: determining that a user has entered a preset area; judging the emotional state of the user; and taking countermeasures to improve the user's mood according to the emotional state. The method thereby adjusts the user's emotional state in a timely manner, increases the intelligence of the speaker, and improves the user experience.

Description

Audio controlling method and system
Technical field
The invention belongs to the technical field of intelligent devices, and in particular relates to an audio control method and system.
Background technique
At this stage, with the development of artificial intelligence technology, intelligent players (such as smart speakers) have been widely adopted in family life, for example in early education for children, companionship for the elderly, Internet of Things control, and companionship for people living alone.
Although these intelligent players offer good voice dialogue, home-appliance control, and playback functions, they cannot make accurate response actions matched to the emotional state of the user. For example, in real life, if an elderly person's agitation is not calmed in time, it may cause a relapse of illness or even sudden death; if a young person's agitation is not calmed in time, he or she may commit aggressive acts.
Therefore, an intelligent player capable of adjusting the emotional state of the user is urgently needed.
Summary of the invention
In view of this, embodiments of the invention provide an audio control method and system to address the technical problem that current intelligent players cannot adjust the emotional state of the user.
An embodiment of the invention provides an audio control method comprising:
determining that a user has entered a preset area;
judging the emotional state of the user;
taking countermeasures to improve the user's mood according to the emotional state.
Further, determining that the user has entered the preset area specifically includes:
using infrared sensing technology to monitor that the user has entered an infrared coverage area, thereby determining that the user has entered the preset area.
Further, judging the emotional state of the user specifically includes:
obtaining user information of the user;
receiving emotional parameters of the user;
judging the emotional state of the user according to the user information and the emotional parameters.
Further, taking countermeasures to improve the user's mood according to the emotional state specifically includes:
playing audio that improves the user's mood according to the emotional state; and/or
making movements that improve the user's mood according to the emotional state.
Further, the user information includes the user's age-group information and facial expression information, and obtaining the user information specifically includes:
capturing the whole body of the user with a visual sensor to obtain the age-group information of the user;
capturing the face of the user with a visual sensor to obtain the facial expression information of the user.
Further, receiving the emotional parameters of the user specifically includes:
receiving, from a smart wearable device of the user, emotional parameters comprising at least one of blood pressure, heart rate, body temperature, and skin resistance.
An embodiment of the invention also provides an audio control system including a speaker configured to determine that a user has entered a preset area, to judge the emotional state of the user, and to take countermeasures to improve the user's mood according to the emotional state.
Further, the system also includes a smart wearable device in communication connection with the speaker;
the smart wearable device is worn on the user's body and is configured to obtain emotional parameters of the user comprising at least one of blood pressure, heart rate, body temperature, and skin resistance;
the speaker is configured to obtain user information of the user, to receive the emotional parameters, and to judge the emotional state of the user according to the user information and the emotional parameters.
Further, the speaker is configured to play audio that improves the user's mood according to the emotional state; and/or to make movements that improve the user's mood according to the emotional state.
Further, the speaker is provided with a visual sensor, and the user information includes the user's age-group information and facial expression information;
the speaker is configured to capture the whole body of the user with the visual sensor to obtain the user's age-group information, and to capture the face of the user with the visual sensor to obtain the user's facial expression information.
The audio control method and system provided by embodiments of the invention thus determine that a user has entered a preset area, judge the user's emotional state, and take countermeasures to improve the user's mood according to that state, thereby adjusting the user's emotional state in a timely manner, increasing the intelligence of the speaker, and improving the user experience.
Detailed description of the invention
To explain the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an audio control method provided by a first embodiment of the invention;
Fig. 2 is a flowchart of an audio control method provided by a second embodiment of the invention;
Fig. 3 is a flowchart of an audio control method provided by a third embodiment of the invention;
Fig. 4 is a flowchart of an audio control method provided by a fourth embodiment of the invention;
Fig. 5 is a flowchart of an audio control method provided by a fifth embodiment of the invention;
Fig. 6 is a flowchart of an audio control method provided by a sixth embodiment of the invention;
Fig. 7 is a structural schematic diagram of an audio control system provided by a seventh embodiment of the invention.
Specific embodiment
The embodiments of the invention are described in detail below with reference to the drawings and examples, so that the process by which the invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented.
Certain terms are used in the specification and claims to refer to particular components. Those skilled in the art should understand that hardware manufacturers may refer to the same component by different names. This specification and the claims distinguish components by difference in function rather than difference in name. "Comprising", as used throughout the specification and claims, is an open term and should therefore be interpreted as "including but not limited to". "Substantially" means within an acceptable error range within which a person skilled in the art can solve the technical problem and basically achieve the technical effect. Furthermore, "coupled" or "electrically connected" herein includes any means of direct or indirect electrical coupling. Therefore, if a first device is described as coupled to a second device, the first device may be directly electrically coupled to the second device, or indirectly electrically coupled to the second device through other devices or coupling means. The description that follows sets out preferred embodiments for implementing the invention; it is intended to illustrate the general principles of the invention and not to limit its scope. The scope of protection of the invention is defined by the appended claims.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
Referring to Fig. 1, which is a flowchart of an audio control method provided by a first embodiment of the invention, the method comprises:
Step S100: determining that a user has entered a preset area;
Step S200: judging the emotional state of the user;
Step S300: taking countermeasures to improve the user's mood according to the emotional state.
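The three steps form a simple sense-judge-act loop. The following is a minimal Python sketch of that loop; all class and function names, the single heart-rate parameter, and the thresholds are illustrative assumptions for the sketch, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    label: str        # e.g. "calm" or "agitated" (assumed labels)
    intensity: float  # 0.0 .. 1.0

class EmotionAwareSpeaker:
    """Illustrative sketch of the S100 -> S200 -> S300 control loop."""

    def user_in_preset_area(self, ir_triggered: bool) -> bool:
        # S100: the patent uses infrared sensing; here we just accept the flag.
        return ir_triggered

    def judge_emotional_state(self, heart_rate: float) -> EmotionalState:
        # S200: stand-in rule; the patent combines user info with several
        # wearable parameters, not heart rate alone.
        if heart_rate > 100:
            return EmotionalState("agitated", min(1.0, (heart_rate - 100) / 60))
        return EmotionalState("calm", 0.0)

    def respond(self, state: EmotionalState) -> str:
        # S300: pick a countermeasure (audio and/or movement) for the state.
        return "play soothing audio" if state.label == "agitated" else "no action"

    def control_step(self, ir_triggered: bool, heart_rate: float) -> str:
        if not self.user_in_preset_area(ir_triggered):
            return "idle"
        return self.respond(self.judge_emotional_state(heart_rate))
```

For example, `EmotionAwareSpeaker().control_step(True, 120)` walks all three steps and returns a soothing-audio countermeasure, while a user outside the preset area leaves the speaker idle.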
Step S100 determines that the user has entered the preset area. The preset area here includes the coverage area of the speaker and can be set by the user. Specifically, referring to Fig. 2, which is a flowchart of an audio control method provided by a second embodiment of the invention, step S100 specifically comprises:
Step S110 enters infrared ray overlay area using infrared induction technical monitoring user to determine that it is default that user enters Region.
Here, the speaker is provided with an infrared sensor that continuously emits infrared rays, thereby forming an infrared coverage area, i.e. the preset area. When the user approaches the speaker and enters the infrared coverage area, the infrared sensor senses the user, and it is thereby determined that the user has entered the preset area. It should be pointed out that all technical solutions of the invention concern what happens after the user enters the preset area; the case where the user has not entered the preset area is not discussed.
In addition, the preset area can be configured by adjusting the transmit power of the infrared sensor, the number of infrared sensors enabled, and the like, thereby adjusting the infrared coverage area.
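The relation between transmit power, enabled sensors, and the resulting preset area can be sketched with a toy geometric model. The square-root power law, the constant `k`, and the disk-union model are assumptions for illustration only; the patent does not specify how coverage scales with power:

```python
import math
from typing import List, Tuple

def coverage_radius(tx_power_mw: float, k: float = 0.5) -> float:
    """Toy model: detection radius grows with the square root of transmit
    power. Both k and the law itself are illustrative assumptions."""
    return k * math.sqrt(tx_power_mw)

def user_detected(user_xy: Tuple[float, float],
                  sensors: List[Tuple[float, float, float]]) -> bool:
    """sensors: (x, y, tx_power_mw) for each *enabled* infrared sensor.
    The preset area is the union of the enabled sensors' coverage disks,
    so enabling more sensors or raising power enlarges the area."""
    ux, uy = user_xy
    return any(math.hypot(ux - sx, uy - sy) <= coverage_radius(p)
               for sx, sy, p in sensors)
```

Under this model, a user 6 m from a single 100 mW sensor (radius 5) is outside the preset area, but enabling a second sensor nearer the user brings them inside it.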
Furthermore, it should be pointed out that the second embodiment gives, by way of example only, a method of determining by infrared monitoring that the user has entered the preset area; this does not limit the invention. Step S100 of the first embodiment only requires that the user be determined to have entered the preset area, and methods conceivable from the second embodiment likewise fall within the protection of the invention.
Following step S100, after it has been determined that the user has entered the preset area, step S200 judges the emotional state of the user; that is, when the user approaches the speaker, the speaker judges the user's emotional state.
Specifically, referring to Fig. 3, which is a flowchart of an audio control method provided by a third embodiment of the invention, step S200 comprises:
Step S210 obtains the user information of the user;
Step S220 receives the emotional parameters of the user;
Step S230 judges the emotional status of the user according to the user information and the emotional parameters.
Specifically, in step S210, the speaker is provided with a visual sensor. The visual sensor here includes, but is not limited to, a camera, through which the user information of the user can be obtained. Methods of obtaining the user information through the camera include, but are not limited to, taking photos and analyzing them, or recording and analyzing video of the user approaching the speaker.
Specifically, the user information includes the user's age-group information and facial expression information. Referring to Fig. 4, which is a flowchart of an audio control method provided by a fourth embodiment of the invention, the method of obtaining the user information in step S210 specifically comprises:
Step S211 obtains the age segment information of the user by the entirety that visual sensor grabs the user;
Step S212 obtains the facial expression information of the user by the face that visual sensor grabs the user.
In step S211, as described in the third embodiment, the speaker is provided with a visual sensor such as a camera. Once it is determined that the user has entered the preset area, the whole body of the user is captured by the visual sensor to obtain the user's age-group information; that is, a photo of the user's entire body is taken by the camera and analyzed to obtain the corresponding age-group information. Age groups here include the elderly, young adults, children, and so on.
In step S212, likewise, once it is determined that the user has entered the preset area, the face of the user is captured by the visual sensor to obtain the user's facial expression information; that is, a photo of the user's face is taken by the camera and analyzed to obtain the corresponding facial expression information. Facial expressions here include joy, anger, sorrow, happiness, and so on.
In addition, the user's gender information can be obtained by capturing the user's face with the visual sensor: a face photo of the user is taken by the camera and analyzed to obtain the corresponding gender information, male or female.
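Steps S211/S212 can be sketched as one extraction call that feeds camera frames to classifiers. The patent does not specify a recognition model, so the classifiers are injected as assumed callables, and the `UserInfo` structure and its label vocabularies are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserInfo:
    age_group: str          # e.g. "elderly", "young adult", "child"
    facial_expression: str  # e.g. "joy", "anger", "sorrow", "happiness"
    gender: str             # "male" or "female"

def extract_user_info(full_body_frame: bytes,
                      face_frame: bytes,
                      age_model: Callable[[bytes], str],
                      expression_model: Callable[[bytes], str],
                      gender_model: Callable[[bytes], str]) -> UserInfo:
    """S211 (whole-body -> age group) and S212 (face -> expression) combined,
    plus the optional face -> gender analysis; the models are assumed
    interfaces, since the text names no concrete recognition method."""
    return UserInfo(age_group=age_model(full_body_frame),
                    facial_expression=expression_model(face_frame),
                    gender=gender_model(face_frame))
```

Injecting the classifiers keeps the control flow of the patent's steps separate from whatever vision model a real implementation would use.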
In step S220, receiving the emotional parameters of the user means receiving emotional parameters about the user sent by other devices. The other devices here may be a smart device worn by the user, or other emotional-parameter detection devices in communication connection with the speaker.
Referring to Fig. 5, which is a flowchart of an audio control method provided by a fifth embodiment of the invention, receiving the emotional parameters of the user in step S220 specifically comprises:
Step S221: receiving, from the smart wearable device of the user, emotional parameters comprising at least one of blood pressure, heart rate, body temperature, and skin resistance.
Specifically, a smart wearable device, such as a smart bracelet, is in communication connection with the speaker. The smart wearable device is worn on the user's body, detects parameters of the user such as blood pressure, heart rate, body temperature, and skin resistance, and sends the detected parameters to the speaker for reception. It should be emphasized here that the speaker needs to receive at least one of these parameters; that is, the emotional parameters include at least one of blood pressure, heart rate, body temperature, and skin resistance.
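The "at least one of" requirement on the received parameters can be enforced when parsing the wearable's message. The JSON transport and the field names below are assumptions; the patent does not specify a message format:

```python
import json
from typing import Dict

# The four parameters named in the text; field names are assumed.
ALLOWED = {"blood_pressure", "heart_rate", "body_temperature", "skin_resistance"}

def parse_emotional_parameters(payload: str) -> Dict[str, float]:
    """Parse a wearable's message and keep only the recognized emotional
    parameters. Raises ValueError unless at least one of the four is
    present, mirroring the 'at least one of' requirement."""
    data = json.loads(payload)
    params = {k: float(v) for k, v in data.items() if k in ALLOWED}
    if not params:
        raise ValueError("need at least one of: " + ", ".join(sorted(ALLOWED)))
    return params
```

Unrecognized fields (such as a step count the bracelet also reports) are dropped rather than treated as emotional parameters.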
Following step S220, in step S230, after the user information and the emotional parameters have been obtained, the emotional state of the user is derived from them. Emotional states here include happy, excited, frightened and sad, angry and agitated, and so on.
Specifically, Table 1 below gives one criterion for judging the emotional state of the user; this is only an example and does not limit the invention.
Table 1
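Since the contents of Table 1 are not reproduced in this text, the following sketch shows only the *shape* of such a judgment rule: combining a facial expression with a wearable parameter to label the state. Every threshold and label here is an assumption for illustration:

```python
def judge_emotional_state(expression: str, heart_rate: float,
                          resting_hr: float = 70.0) -> str:
    """Illustrative stand-in for the patent's Table 1 (not reproduced in
    the text). Thresholds, the resting-heart-rate baseline, and the state
    labels are assumptions for this sketch only."""
    elevated = heart_rate > resting_hr * 1.3  # assumed "agitated" threshold
    if expression == "anger" and elevated:
        return "angry and agitated"
    if expression in ("joy", "happiness") and elevated:
        return "excited"
    if expression == "sorrow":
        return "sad"
    return "calm"
```

A real Table 1 would presumably cover all four wearable parameters and more expression/state combinations; the point of the sketch is that S230 fuses the camera-derived user information with the wearable-derived parameters.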
Following step S200, after the emotional state of the user has been judged, step S300 takes countermeasures to improve the user's mood according to the emotional state. Specifically, the speaker can take targeted countermeasures according to the user's emotional state, and these countermeasures serve to improve the user's mood.
Here, referring to Fig. 6, which is a flowchart of an audio control method provided by a sixth embodiment of the invention, taking countermeasures to improve the user's mood according to the emotional state in step S300 specifically comprises:
Step S310 plays the audio for improving the user emotion according to the emotional status;And/or
Step S320 makes the movement for improving the user emotion according to the emotional status.
It should be emphasized here that playing audio that improves the user's mood and making movements that improve the user's mood can be carried out individually or simultaneously.
Specifically, for step S310, Table 2 below shows, for different age groups, the audio played to improve the user's mood in different emotional states; this is only an example and does not limit the invention.
Table 2
In step S320, the speaker can be provided with a movement device that moves the speaker in whole or in part. After judging the emotional state, the speaker drives itself, in whole or in part, to move through the movement device. Movements here include, but are not limited to, rotation, advancing, retreating, and swinging of a part.
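Steps S310/S320 together amount to a lookup from (age group, emotional state) to a countermeasure. The contents of Table 2 are not reproduced in this text, so every entry in the table below is an invented illustration of its shape, not the patent's actual mapping:

```python
from typing import Dict, Optional, Tuple

# Illustrative stand-in for the patent's Table 2 (not reproduced in the
# text): (age_group, emotional_state) -> (audio, optional movement).
RESPONSES: Dict[Tuple[str, str], Tuple[str, Optional[str]]] = {
    ("elderly", "angry and agitated"):     ("slow classical piece", None),
    ("elderly", "excited"):                ("calm nature sounds", None),
    ("young adult", "angry and agitated"): ("soft ambient track", "gentle swing"),
    ("child", "sad"):                      ("cheerful children's song", "rotate"),
}

def choose_countermeasure(age_group: str, state: str) -> Tuple[str, Optional[str]]:
    """S310/S320: audio and movement may be used separately or together;
    a None movement means audio-only. Entries and the fallback are assumptions."""
    return RESPONSES.get((age_group, state), ("neutral background music", None))
```

Keeping the mapping as data makes the "and/or" of S310/S320 explicit: each table entry decides per case whether a movement accompanies the audio.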
Thus, by playing audio that improves the user's mood and/or making movements that improve the user's mood, the user's emotional state is improved; for example, a sad user is calmed, or an excited user is no longer agitated. The speaker thereby adjusts the user's emotional state in a timely manner, increases its intelligence, and improves the user experience.
Referring to Fig. 7, which is a structural schematic diagram of an audio control system provided by a seventh embodiment of the invention, the audio control system includes a speaker 10. The speaker 10 is configured to determine that a user has entered a preset area, to judge the emotional state of the user, and to take countermeasures to improve the user's mood according to the emotional state.
Further, the audio control system also includes a smart wearable device 20 in communication connection with the speaker 10. The smart wearable device 20 is worn on the user's body and is configured to obtain emotional parameters of the user comprising at least one of blood pressure, heart rate, body temperature, and skin resistance.
Further, the speaker 10 is configured to obtain user information of the user, to receive the emotional parameters, and to judge the emotional state of the user according to the user information and the emotional parameters.
Further, the speaker 10 is configured to play audio that improves the user's mood according to the emotional state; and/or to make movements that improve the user's mood according to the emotional state.
Further, the speaker 10 is provided with a visual sensor 110, which includes, but is not limited to, a camera located on the housing of the speaker 10. The user information includes the user's age-group information and facial expression information. The speaker 10 is configured to capture the whole body of the user with the visual sensor 110 to obtain the user's age-group information, and to capture the face of the user with the visual sensor 110 to obtain the user's facial expression information.
In addition, it should be pointed out that the first to sixth embodiments are method embodiments of the audio control method of the invention, and the seventh embodiment is a structural embodiment of the audio control system of the invention; where any of the seven embodiments is described unclearly, the others may be consulted.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a necessary general-purpose hardware platform or, of course, by hardware. Based on this understanding, the above technical solutions, or the part thereof that contributes to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the embodiments or of certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the invention and are not limiting. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not depart from the spirit and scope of the technical solutions of the various embodiments of the invention.

Claims (10)

1. An audio control method, characterized by comprising:
determining that a user has entered a preset area;
judging the emotional state of the user;
taking countermeasures to improve the user's mood according to the emotional state.
2. The audio control method according to claim 1, characterized in that determining that the user has entered the preset area specifically comprises:
using infrared sensing technology to monitor that the user has entered an infrared coverage area, thereby determining that the user has entered the preset area.
3. The audio control method according to claim 1, characterized in that judging the emotional state of the user specifically comprises:
obtaining user information of the user;
receiving emotional parameters of the user;
judging the emotional state of the user according to the user information and the emotional parameters.
4. The audio control method according to claim 1, characterized in that taking countermeasures to improve the user's mood according to the emotional state specifically comprises:
playing audio that improves the user's mood according to the emotional state; and/or
making movements that improve the user's mood according to the emotional state.
5. The audio control method according to claim 3, characterized in that the user information comprises age-group information of the user and facial expression information of the user, and obtaining the user information of the user specifically comprises:
capturing the whole body of the user with a visual sensor to obtain the age-group information of the user;
capturing the face of the user with a visual sensor to obtain the facial expression information of the user.
6. The audio control method according to claim 3, characterized in that receiving the emotional parameters of the user specifically comprises:
receiving, from a smart wearable device of the user, emotional parameters comprising at least one of blood pressure, heart rate, body temperature, and skin resistance.
7. An audio control system, characterized by comprising a speaker configured to determine that a user has entered a preset area, to judge the emotional state of the user, and to take countermeasures to improve the user's mood according to the emotional state.
8. The audio control system according to claim 7, characterized by further comprising a smart wearable device in communication connection with the speaker;
the smart wearable device is worn on the user's body and is configured to obtain emotional parameters of the user comprising at least one of blood pressure, heart rate, body temperature, and skin resistance;
the speaker is configured to obtain user information of the user, to receive the emotional parameters, and to judge the emotional state of the user according to the user information and the emotional parameters.
9. The audio control system according to claim 7, characterized in that the speaker is configured to play audio that improves the user's mood according to the emotional state; and/or to make movements that improve the user's mood according to the emotional state.
10. The audio control system according to claim 8, characterized in that the speaker is provided with a visual sensor, and the user information comprises age-group information of the user and facial expression information of the user;
the speaker is configured to capture the whole body of the user with the visual sensor to obtain the age-group information of the user, and to capture the face of the user with the visual sensor to obtain the facial expression information of the user.
CN201811399930.3A 2018-11-22 2018-11-22 Audio controlling method and system Pending CN109669535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811399930.3A CN109669535A (en) 2018-11-22 2018-11-22 Audio controlling method and system


Publications (1)

Publication Number Publication Date
CN109669535A true CN109669535A (en) 2019-04-23

Family

ID=66142123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811399930.3A Pending CN109669535A (en) 2018-11-22 2018-11-22 Audio controlling method and system

Country Status (1)

Country Link
CN (1) CN109669535A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111123851A (en) * 2019-11-11 2020-05-08 珠海格力电器股份有限公司 Method, device and system for controlling electric equipment according to user emotion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425247A (en) * 2013-06-04 2013-12-04 深圳市中兴移动通信有限公司 User reaction based control terminal and information processing method thereof
CN105082150A (en) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot man-machine interaction method based on user mood and intension recognition
CN108170401A (en) * 2018-02-26 2018-06-15 域通全球成都科技有限责任公司 A kind of audio frequency broadcast system based on recognition of face
CN108305640A (en) * 2017-01-13 2018-07-20 深圳大森智能科技有限公司 Intelligent robot active service method and device
CN108604246A (en) * 2016-12-29 2018-09-28 华为技术有限公司 A kind of method and device adjusting user emotion
CN108846049A (en) * 2018-05-30 2018-11-20 郑州易通众联电子科技有限公司 Stereo set control method and stereo set control device



Similar Documents

Publication Publication Date Title
JP6815486B2 (en) Mobile and wearable video capture and feedback platform for the treatment of mental illness
US11222632B2 (en) System and method for intelligent initiation of a man-machine dialogue based on multi-modal sensory inputs
US11468894B2 (en) System and method for personalizing dialogue based on user's appearances
JP4481682B2 (en) Information processing apparatus and control method thereof
US8898344B2 (en) Utilizing semantic analysis to determine how to measure affective response
Tinwell et al. Perception of psychopathy and the Uncanny Valley in virtual characters
JP7424285B2 (en) Information processing system, information processing method, and recording medium
US20150058327A1 (en) Responding to apprehension towards an experience with an explanation indicative of similarity to a prior experience
KR20170085422A (en) Apparatus and method for operating personal agent
WO2019067783A1 (en) Production and control of cinematic content responsive to user emotional state
CA2673644A1 (en) Situated simulation for training, education, and therapy
CN112379780B (en) Multi-mode emotion interaction method, intelligent device, system, electronic device and medium
US20150004576A1 (en) Apparatus and method for personalized sensory media play based on the inferred relationship between sensory effects and user's emotional responses
KR100580617B1 (en) Object growth control system and method
CN109669535A (en) Audio controlling method and system
CN111063346A (en) Cross-media star emotion accompany interaction system based on machine learning
CN116935480B (en) Emotion recognition method and device
Churamani et al. Affect-driven modelling of robot personality for collaborative human-robot interactions
EP3956748B1 (en) Headset signals to determine emotional states
CN116503841A (en) Mental health intelligent emotion recognition method
US11935140B2 (en) Initiating communication between first and second users
JP7414735B2 (en) Method for controlling multiple robot effectors
Dobre et al. Direct gaze triggers higher frequency of gaze change: An automatic analysis of dyads in unstructured conversation
JP2022031617A (en) Advice system and advice method
Asteriadis et al. Does your profile say it all? Using demographics to predict expressive head movement during gameplay

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20190423