CN106803423A - Human-machine interaction voice control method, device and vehicle based on user emotional state - Google Patents
Human-machine interaction voice control method, device and vehicle based on user emotional state
- Publication number
- CN106803423A CN106803423A CN201611229157.7A CN201611229157A CN106803423A CN 106803423 A CN106803423 A CN 106803423A CN 201611229157 A CN201611229157 A CN 201611229157A CN 106803423 A CN106803423 A CN 106803423A
- Authority
- CN
- China
- Prior art keywords
- user
- setting
- emotional state
- state
- setting user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/227—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of the speaker; Human-factor methodology
Abstract
The present invention discloses a human-machine interaction voice control method, device, and vehicle based on the user's emotional state. The method includes: monitoring the expression, voice, or actions of a designated user; determining the designated user's current emotional state according to that expression, voice, or actions; determining the voice control mode of the vehicle according to the designated user's current emotional state; and carrying out vehicle human-machine interaction according to the determined voice control mode. The disclosed method, device, and vehicle can infer the user's current emotion from the user's driving behavior, speech rate and intonation, facial expression, and the like; the intelligent system can then interact with the user according to the user's current emotional state, for example by playing suitable music or adjusting the navigation voice, so as to regulate the user's mood and make the user's driving safer.
Description
Technical field
The present invention relates to the field of artificial intelligence, and more specifically to the fields of intelligent vehicle control and human-machine interaction; in particular, it relates to a human-machine interaction voice control method, device, and vehicle based on the user's emotional state.
Background technology
With the rapid development of society, automobiles have become increasingly common in daily life. Although the concept of autonomous driving was proposed long ago, it has not yet been popularized; at present, the driver's control still plays the decisive role while a vehicle is in motion. However, a human driver may be affected by various moods while driving, and some of these moods can seriously impair driving safety.
Therefore, it is necessary to provide a method, or a vehicle, capable of analyzing the driver's mood.
Summary of the invention
A technical problem solved by the present invention is how to provide, for the vehicle environment, a method, an intelligent control system, and a vehicle capable of carrying out human-machine interaction based on the user's emotion, which can regulate the user's mood and perform vehicle human-machine interaction control based on the user's emotional state, thereby ensuring that the user drives the vehicle safely.
The present invention provides a human-machine interaction voice control method based on the user's emotional state, including: monitoring the expression, voice, or actions of a designated user; determining the designated user's current emotional state according to that expression, voice, or actions; determining the voice control mode of the vehicle according to the designated user's current emotional state; and carrying out vehicle human-machine interaction according to the determined voice control mode.
Further, before determining the designated user's current emotional state according to the user's expression, voice, or actions, the method includes: collecting the emotional state data of multiple users to form a general user emotional state database, the database recording the relation between users' expressions, voices, or actions and their emotional states; and, based on the general user emotional state database, analyzing the emotional states of multiple users with big-data methods and calibrating, from the users' actions, the determining factors of each emotional state, the user emotional states including states of cheerfulness, anger, sadness, distress, and excitement.
Further, determining the designated user's current emotional state according to the user's expression, voice, or actions includes: assessing and determining the designated user's current emotional state according to the general user emotional state database and the monitored expression, voice, or actions of the designated user.
Further, the method also includes determining the control mode of the vehicle according to the designated user's current emotional state.
Further, determining the designated user's current emotional state according to the user's expression, voice, or actions also includes: obtaining changes in the speech rate, intonation, and volume of the designated user's voice; and determining the designated user's current emotional state according to the averages of the speech rate, intonation, and volume of the user's speech.
Further, the language content of the designated user's speech within a set time period is analyzed to judge whether specific words or sentences appear in the user's language; the designated user's emotional state is determined according to the specific words or sentences appearing in the user's current expression.
Further, the designated user's actions within a set time period are analyzed; the user's current emotional state is judged by comparing the actions at the current time with the user's everyday actions.
Further, the designated user's driving behavior within a set time period is analyzed; the user's current emotional state is judged by comparing the driving behavior at the current time with the user's habitual driving behavior.
Further, whether the speech rate, intonation, or volume of the designated user's voice within a set time period exceeds a set threshold is analyzed; if the user's speech rate rises above the set speech-rate threshold, or the intonation rises above the set intonation threshold, or the volume rises above the set volume threshold, the designated user is judged to be in a relatively excited state.
Further, determining the voice control mode of the vehicle according to the designated user's current emotional state includes: if the designated user is currently in a state of anger, selecting a gentle, comforting sound database for voice control so as to soothe the user's mood.
Further, a preset human-machine interaction sound effect is selected according to the designated user's emotional state and used to interact with the user.
Further, after the designated user's emotional state is determined, preset music is selected according to that emotional state to soothe the user's mood and ensure driving safety.
Further, if the vehicle is currently in a driving navigation state, a preset navigation voice can be selected and played according to the designated user's emotional state to soothe the user's mood and ensure driving safety.
The present invention also provides a human-machine interaction voice control device based on the user's emotional state, including: a monitoring module for monitoring the expression, voice, or actions of a designated user; an emotional state analysis module, connected with the monitoring module, for determining the designated user's current emotional state according to that expression, voice, or actions; a processing module, connected with the emotional state analysis module, for determining the voice control mode of the vehicle according to the designated user's current emotional state; and a performing module, connected with the processing module, for carrying out vehicle human-machine interaction according to the determined voice control mode.
Further, the device also includes an emotional state database module for collecting the emotional state data of multiple users to form a general user emotional state database, the database recording the relation between users' expressions, voices, or actions and their emotional states; based on the general user emotional state database, the emotional states of multiple users are analyzed with big-data methods, and the determining factors of each emotional state are calibrated from the users' actions, the user emotional states including states of cheerfulness, anger, sadness, distress, and excitement.
Further, the emotional state analysis module is also used to assess and determine the designated user's emotional state according to the general user emotional state database and the monitored expression, voice, or actions of the designated user.
Further, the processing module is also used to determine the control mode of the vehicle according to the designated user's current emotional state.
Further, the emotional state analysis module is also used to: obtain changes in the speech rate, intonation, and volume of the designated user's voice; and determine the designated user's current emotional state according to the averages of the speech rate, intonation, and volume of the user's speech.
Further, the language content of the designated user's speech within a set time period is analyzed to judge whether specific words or sentences appear in the user's language; the designated user's emotional state is determined according to the specific words or sentences appearing in the user's current expression.
Further, the designated user's actions within a set time period are analyzed; the user's current emotional state is judged by comparing the actions at the current time with the user's everyday actions.
Further, the designated user's driving behavior within a set time period is analyzed; the user's current emotional state is judged by comparing the driving behavior at the current time with the user's habitual driving behavior.
Further, whether the speech rate, intonation, or volume of the designated user's voice within a set time period exceeds a set threshold is analyzed; if the user's speech rate rises above the set speech-rate threshold, or the intonation rises above the set intonation threshold, or the volume rises above the set volume threshold, the designated user is judged to be in a relatively excited state.
Further, the processing module is also used to select a gentle, comforting sound database for voice control if the designated user is currently in a state of anger, so as to soothe the user's mood.
Further, the performing module is also used to select a preset human-machine interaction sound effect according to the designated user's emotional state and interact with the user.
Further, the performing module is also used, after the designated user's emotional state is determined, to select preset music according to that emotional state to soothe the user's mood and ensure driving safety.
Further, the performing module is also used, if the vehicle is currently in a driving navigation state, to select and play a preset navigation voice according to the designated user's emotional state to soothe the user's mood and ensure driving safety.
The present invention also provides a vehicle including any of the above human-machine interaction voice control devices based on the user's emotional state.
The method, device, and vehicle provided by the present invention can infer the user's current emotion, such as relative worry, excitement, anger, or sadness, from the user's driving behavior, speech rate and intonation, facial expression, and the like; the intelligent system can then, according to the user's current emotional state, play suitable music or adjust the navigation voice, among other things, to interact with the user, regulating the user's mood and making the user's driving safer.
Brief description of the drawings
Fig. 1 shows a flowchart of the human-machine interaction voice control method based on the user's emotional state according to one embodiment of the invention.
Fig. 2 shows a structural block diagram of a human-machine interaction voice control device based on the user's emotional state according to one embodiment of the invention.
Fig. 3 shows a structural block diagram of a vehicle according to one embodiment of the invention.
Detailed description of the embodiments
The present invention is described more fully below with reference to the accompanying drawings, which illustrate exemplary embodiments of the invention.
Fig. 1 shows a flowchart of the human-machine interaction voice control method based on the user's emotional state according to one embodiment of the invention. Referring to Fig. 1, the method includes:
Step 101: monitor the expression, voice, or actions of the designated user.
In one embodiment, the expression, voice, or actions of the designated user can be monitored or detected through a combination of multiple sensors. For example, the user's expressions and actions inside the vehicle can be monitored by the vehicle's built-in fatigue-driving camera, and the designated user's speech can be detected by the vehicle's built-in microphone.
Step 102: determine the designated user's current emotional state according to the user's expression, voice, or actions.
In one embodiment, the emotional state data of multiple users can be collected to form a general user emotional state database, where each emotional state is determined from the user's expression, voice, or actions. Based on this database, the emotional states of multiple users are analyzed with big-data methods, and the determining factors of the cheerful, angry, sad, distressed, and excited emotional states are calibrated; the designated user's emotional state is then assessed according to the general user emotional state database and the monitored expression, voice, or actions of the designated user.
In one embodiment, the user's moods can be broadly summarized as happiness, anger, sorrow, and joy. In real life, states of joy or happiness may have a relatively small influence on the user's driving, but states of anger or sorrow can strongly affect the user's driving behavior. For example, a road-raging driver poses a great potential safety hazard both to the driving user and to other road users; real life frequently sees cases of furious drivers cutting off other vehicles, colliding, injuring people while parking, and causing various other traffic accidents.
In one embodiment, the influence of moods such as anger or sorrow on the user's driving state can be rated higher than that of happy or joyful emotional states; the system then focuses on monitoring the designated user's anger or sorrow and takes corresponding control measures in the voice interaction to regulate the user's mood.
In one embodiment, in the in-vehicle environment, voice interaction can be an important means by which a person interacts with the vehicle's intelligent system. In the course of human-machine interaction, the intelligent system can monitor the designated user's voice through the microphone and obtain changes in the speech rate, intonation, and volume of the user's voice; the designated user's current emotional state is determined according to the averages of the speech rate, intonation, and volume of the user's speech.
In one specific embodiment, whether the speech rate, intonation, or volume of the designated user's voice within a set time period exceeds a set threshold can be analyzed; if the user's speech rate rises above the set speech-rate threshold, or the intonation rises above the set intonation threshold, or the volume rises above the set volume threshold, the designated user is judged to be in a relatively excited state.
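The threshold check described above can be sketched as follows; the feature units and the concrete threshold values are illustrative assumptions, since the patent does not specify numbers:

```python
def is_excited(speech_rate, pitch, volume,
               rate_threshold=5.0,     # syllables/sec, assumed value
               pitch_threshold=250.0,  # Hz, assumed value
               volume_threshold=70.0): # dB, assumed value
    """Judge the user as 'relatively excited' if any monitored speech
    feature within the set time period exceeds its set threshold."""
    return (speech_rate > rate_threshold
            or pitch > pitch_threshold
            or volume > volume_threshold)
```

Note that a single feature crossing its threshold is enough: a user speaking noticeably faster than usual is flagged even if pitch and volume remain normal.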
In one embodiment, whether specific words or sentences appear in the designated user's language within a set time period is analyzed, and the user's emotional state is determined according to the specific words or sentences in the user's current expression. For example, it can be analyzed whether specific words such as abusive language or happy expressions appear in the user's language within the set time period. If sentences of swearing or cursing frequently appear in the user's language, it may indicate that the user is in a relatively excited or angry state; if expressions of happiness, laughter, or the like appear, it can indicate that the user is currently in a relatively happy state.
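This keyword-spotting step can be sketched as a simple word-list lookup; the word lists below are invented stand-ins for the abusive and happy expressions the text mentions:

```python
ANGRY_WORDS = {"damn", "idiot"}       # stand-ins for abusive language
HAPPY_WORDS = {"great", "wonderful"}  # stand-ins for happy expressions

def emotion_from_words(utterance):
    """Return 'angry', 'happy', or 'neutral' depending on whether the
    utterance within the set time period contains monitored keywords."""
    words = set(utterance.lower().split())
    if words & ANGRY_WORDS:
        return "angry"
    if words & HAPPY_WORDS:
        return "happy"
    return "neutral"
```

A production system would of course use larger lexicons and handle phrases, but the dispatch shape stays the same.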
In one embodiment, changes in the speech rate, intonation, and volume of specific or commonly used words and phrases can also be analyzed to determine the user's current emotional state. For example, in vehicle human-machine interaction there may be some commonly used phrases, such as wake words; the intelligent system can analyze the changes in voice, intonation, and volume when the user expresses wake words such as "Hello, Xiao Zhi" or "Hi, Xiao Zhi", so as to analyze the current user's emotional state, for example whether the user shows happiness, anger, grief, or joy when uttering these everyday phrases.
In one embodiment, the designated user's actions within a set time period are analyzed, and the user's current emotional state is judged by comparing the actions at the current time with the user's everyday actions.
In one embodiment, the designated user's driving behavior within a set time period is analyzed, and the user's current emotional state is judged by comparing the driving behavior at the current time with the user's habitual driving behavior. For example, if analysis of the user's driving behavior shows the user violently stomping on the throttle, slamming the steering wheel, cutting off other vehicles, and so on, it can be determined that the user is currently rather angry.
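As a sketch, this comparison of observed driving events against violent behaviors could look like the following; the event names are hypothetical labels for the behaviors listed above:

```python
VIOLENT_EVENTS = {"stomp_throttle", "slam_steering_wheel", "cut_off_other_car"}

def mood_from_driving_events(events):
    """Flag the user as angry if any violent driving event was observed
    within the analysis window; otherwise report a normal state."""
    return "angry" if VIOLENT_EVENTS & set(events) else "normal"
```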
Step 103: determine the voice control mode of the vehicle according to the designated user's current emotional state.
Step 104: carry out vehicle human-machine interaction according to the determined voice control mode.
In one embodiment, if the designated user is currently in a state of anger, a gentle, comforting sound database is selected for voice control to soothe the user's mood; a preset human-machine interaction sound effect is selected according to the user's emotional state and used to interact with the user.
In one embodiment, particularly in the vehicle scenario, interacting with the vehicle by voice is an important function for the user. A voice interaction database keyed to user emotional states can be built from statistics on everyday machine-human interactions; when the database is built, the influence factors of happiness, anger, grief, and joy can be analyzed and set scientifically. The vehicle's human-machine interaction mode is then determined according to the user's current mood as derived from this analysis, so as to encourage, soothe, or prompt the current driver, allowing the driver to calm down and preventing unhealthy emotions from impairing driving safety.
For example, if the user is currently in an angry state, the voice interaction database for the angry state is selected; using this database can soothe the driver so that the driver's mood stabilizes, preventing anger from impairing driving.
For example, if the user is currently in a sad state, the voice interaction database for the sad state is selected; using this database can comfort the driver and prevent sadness from impairing driving.
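These two examples amount to a dispatch from mood to interaction database; a minimal sketch, with placeholder database names:

```python
VOICE_DB_BY_MOOD = {
    "angry": "soothing_db",    # calms the driver so the mood stabilizes
    "sad": "comforting_db",    # comforts the driver
}

def select_voice_db(mood, default="neutral_db"):
    """Pick the voice interaction database matching the user's current
    emotional state, falling back to a default database."""
    return VOICE_DB_BY_MOOD.get(mood, default)
```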
In one embodiment, after the user's mood is determined, the intelligent system can select preset music according to the user's emotional state to soothe the user's mood and ensure driving safety. For example, if the user is currently in a cheerful emotional state, some brisk, pleasant music can be played; if the user is in an angry state, some calming light music can be played.
In one embodiment, after the user's mood is determined, if the vehicle is currently in a driving navigation state, the intelligent system can select and play a preset navigation voice according to the designated user's emotional state to soothe the user's mood and ensure driving safety.
For example, if the current user's emotional state is relatively sad, then for a male driver the vehicle's intelligent system can select a gentle female voice for human-machine interaction, while for a female driver the vehicle can select a warm, resonant male voice.
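The voice choice in this example can be sketched as a small rule; the voice names are assumptions for illustration:

```python
def select_tts_voice(mood, driver_gender):
    """For a sad mood, pick a gentle female voice for a male driver and a
    warm male voice for a female driver, as in the example above;
    otherwise keep the default voice."""
    if mood == "sad":
        return "gentle_female" if driver_gender == "male" else "warm_male"
    return "default"
```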
As an example, when facial expression recognition is carried out, the information in an existing face recognition database can be combined with a specific camera position in the vehicle to perform face recognition. The first step is face detection: determining the position within the vehicle camera's view and locating the face. The second step is key-point detection: at the determined face location, accurately finding the key points of the facial contour, such as the eyes, ears, and nose, performing face recognition, and thereby identifying the designated user. The third step is face recognition based on large-scale data, determining the designated user's information. The fourth step is finding the designated user's facial expression database and identifying the user's current emotional state based on that facial expression information database.
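The four steps above can be sketched as a pipeline of injected callables; each callable stands in for a real detector or recognizer, which the patent does not specify:

```python
def emotion_from_frame(frame, detect_face, detect_landmarks,
                       identify_user, classify_expression):
    """Run the four recognition steps in order on one camera frame."""
    face = detect_face(frame)           # step 1: locate the face in the frame
    if face is None:
        return None                     # no face visible in this frame
    landmarks = detect_landmarks(face)  # step 2: key points (eyes, nose, ...)
    user_id = identify_user(landmarks)  # step 3: large-scale face recognition
    return classify_expression(user_id, landmarks)  # step 4: expression lookup
```

In practice steps 1-3 might be backed by an off-the-shelf face recognition library; the point here is only the ordering of the stages.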
In one embodiment, the intelligent system determines the control mode of the vehicle according to the designated user's current emotional state. For example, this can include raising the warning levels of the ADAS system, such as automatically increasing the safe following distance, automatically controlling the vehicle to keep a longer safe distance from vehicles in other lanes, and shortening the preset user reaction time in the vehicle's active safety system, while also automatically tightening the seat belt and alerting the user through steering-wheel vibration to remind the user to drive safely, thereby improving the safety of the system.
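The ADAS adjustments listed above can be sketched as a settings transform; the numeric factors are illustrative assumptions, not values from the patent:

```python
def adjust_adas_for_mood(settings, mood):
    """Return a tightened copy of the ADAS settings when the driver is
    agitated: longer following distance, shorter assumed reaction time,
    seat belt pre-tensioning, and a steering-wheel vibration alert."""
    s = dict(settings)  # leave the caller's settings untouched
    if mood in ("angry", "excited"):
        s["following_distance_m"] = s["following_distance_m"] * 1.5
        s["reaction_time_s"] = s["reaction_time_s"] * 0.8
        s["tighten_seatbelt"] = True
        s["steering_wheel_alert"] = True
    return s
```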
In one embodiment, after the user's emotional state is determined, the control mode of the vehicle can be selected based on the user's current emotional state. For example, if the user is currently in an angry state, the vehicle can automatically tighten the seat belt and issue a warning through the vehicle's sound system, soothing the user and reminding the user to drive safely. If the user is in an angry state and even engages in cutting off other vehicles or attempting to ram another vehicle, the vehicle's intelligent operating system can temporarily take over control of the vehicle and bring it to a stop at the roadside, preventing the driver from hurting others or himself out of anger. Of course, even when the vehicle automatically chooses to pull over, it can still comprehensively analyze the current situation to determine how to stop, preventing danger arising from the automated maneuver.
In one embodiment, the designated user's actions can include set driving behaviors: the designated user's driving behavior can be extracted and compared with related data in a driving behavior assessment database, contrasting the user's current behavior with the user's normal driving behavior. Specifically, it can be analyzed whether the civility index, skill index, vehicle-care index, smoothness index, and energy-saving index of the user's current driving behavior deviate substantially from the designated user's averages; if they do, it may indicate a problem with the user's current emotional state while driving.
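A sketch of this deviation check, assuming each index is a number and a relative tolerance (the patent does not give a concrete deviation criterion):

```python
def mood_may_be_abnormal(current, baseline, tolerance=0.3):
    """Compare the driver's current behavior indices (civility, skill,
    vehicle-care, smoothness, energy-saving) against the personal baseline;
    report True if any index deviates from its average by more than the
    set relative tolerance."""
    for index, average in baseline.items():
        if average and abs(current.get(index, average) - average) / abs(average) > tolerance:
            return True  # substantial deviation: emotional state may be abnormal
    return False
```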
In particular, the driving behavior to be analyzed can include: the force with which the accelerator and brake pedals are pressed; and whether behaviors exist such as hitting the steering wheel, leaning on the horn, arbitrarily using high beams, turning without signaling, changing lanes frequently, forcibly merging, fighting for lanes, running red lights, or driving without a seat belt. These current driving behaviors of the user are compared with the user's everyday driving behavior to comprehensively analyze and assess the designated user's emotional state.
Generally speaking, when a person is in a relatively angry state, he may suddenly slap the horn, and his actions may be relatively rough or abnormal. If the vehicle's intelligent system monitors violent behavior such as slapping or leaning on the horn, it judges that the user may be excited, angry, or otherwise in an unhealthy emotional state; the intelligent system then needs to adjust the vehicle's voice interaction mode and interact with the user to soothe the driver.
In one embodiment, if the user exhibits driving behaviors such as hitting the steering wheel, leaning on the horn, arbitrarily using high beams, turning without signaling, changing lanes frequently, forcibly merging, fighting for lanes, running red lights, or driving without a seat belt, the vehicle can also automatically prompt the driver by voice, correcting the driver's bad habits and educating the user to form good driving habits.
As an example, in normal driving a person turns the steering wheel relatively gently and presses the accelerator or brake relatively moderately; if within a set time period there suddenly appear behaviors such as hitting the steering wheel, flooring the throttle, or slamming the brakes, it may indicate that the user is currently in an abnormal state. The user's current emotional state can then be judged from the user's current actions, and according to that emotional state the vehicle changes its voice interaction mode or plays music to soothe the user.
The human-machine interaction voice control method based on the user's emotional state provided by the embodiments of the present invention can infer the user's current emotion, such as relative worry, excitement, anger, or sadness, from the user's driving behavior (for example, its difference from the user's habitual driving behavior), speech rate and intonation, and even facial expression; the intelligent system can then, according to the user's current emotional state, play suitable music or adjust the navigation voice, among other things, to interact with the user, regulating the user's mood and making the user's driving safer.
Fig. 2 shows a structural block diagram of a human-machine interaction voice control device based on the user's emotional state according to one embodiment of the invention. Referring to Fig. 2, the device 200 includes: a monitoring module 201 for monitoring the expression, voice, or actions of the designated user; an emotional state analysis module 202, connected with the monitoring module 201, for determining the designated user's current emotional state according to that expression, voice, or actions; a processing module 203, connected with the emotional state analysis module 202, for determining the voice control mode of the vehicle according to the designated user's current emotional state; and a performing module 204, connected with the processing module 203, for carrying out vehicle human-machine interaction according to the determined voice control mode.
In one embodiment, the device also includes an emotional state database module 205 for collecting the emotional state data of multiple users to form a general user emotional state database, the database recording the relation between users' expressions, voices, or actions and their emotional states; based on the general user emotional state database, the emotional states of multiple users are analyzed with big-data methods, and the determining factors of each user emotional state are calibrated from the users' actions, the user emotional states including states of cheerfulness, anger, sadness, distress, and excitement.
In one embodiment, the emotional state analysis module is also used to assess and determine the designated user's emotional state according to the general user emotional state database and the monitored expression, voice, or actions of the designated user.
In one embodiment, the emotional state analysis module is also used to: obtain changes in the speech rate, intonation, and volume of the designated user's voice; and determine the designated user's current emotional state according to the averages of the speech rate, intonation, and volume of the user's speech.
In one embodiment, the emotional state analysis module is further configured to: analyze the linguistic information contained in the designated user's voice within a set time period, and judge whether specific words or sentences appear in the designated user's language expression; and determine the emotional state of the designated user according to the specific words or sentences appearing in the designated user's current expression.
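Such a specific-word rule might be realized as simple keyword spotting (the keyword lists below are invented examples; the patent does not enumerate the actual words or sentences):

```python
# Hypothetical keyword lists mapping spoken phrases to emotional states.
KEYWORDS = {
    "angry": {"ridiculous", "get out of the way", "unbelievable"},
    "cheerful": {"great", "wonderful", "love this song"},
}

def emotion_from_utterance(utterance):
    """Return the first emotion whose keywords appear in the utterance."""
    text = utterance.lower()
    for emotion, phrases in KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return emotion
    return None  # no specific words found; fall back to other cues

print(emotion_from_utterance("This traffic is ridiculous!"))  # → angry
```

In practice the utterance would come from the in-vehicle speech recognizer, and a `None` result would defer to the expression- or action-based analysis described in the other embodiments.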
In one embodiment, the emotional state analysis module is further configured to: analyze the actions of the designated user within a set time period; and judge the current emotional state of the designated user by comparing the user's current actions with the user's everyday actions.
In one embodiment, the emotional state analysis module is further configured to: analyze the driving behavior of the designated user within a set time period; and judge the current emotional state of the designated user by comparing the current driving behavior with the user's routine driving behavior.
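One plausible way to compare routine and current driving behavior is a deviation score over a few driving metrics (the metric names and the decision threshold are assumptions for illustration only; the patent leaves the concrete measure open):

```python
def behavior_deviation(routine, current):
    """Sum of relative deviations of current driving metrics from routine ones.

    `routine` and `current` map metric names (hypothetical: hard brakes,
    horn uses, mean speed) to values; zero-valued baselines are skipped.
    """
    return sum(
        abs(current[k] - routine[k]) / routine[k]
        for k in routine
        if routine[k]
    )

routine = {"hard_brakes_per_10km": 0.5, "horn_per_trip": 1.0, "mean_speed": 50.0}
current = {"hard_brakes_per_10km": 2.0, "horn_per_trip": 4.0, "mean_speed": 65.0}

# A large deviation from the user's routine may indicate agitation or anger.
agitated = behavior_deviation(routine, current) > 2.0
print(agitated)  # → True
```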
In one embodiment, the emotional state analysis module is further configured to: analyze whether the speech rate, intonation and volume of the designated user's voice within the set time period exceed set thresholds; if the designated user's speech rate rises above the set speech-rate threshold, and/or the intonation rises above the set intonation threshold, and/or the volume rises above the set volume threshold, the designated user is judged to be in a relatively excited state.
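The and/or threshold test above amounts to a simple predicate (the numeric thresholds here are invented; the patent only requires that set thresholds exist):

```python
# Hypothetical thresholds; the patent does not specify numeric values.
RATE_THRESHOLD = 5.0      # syllables per second
PITCH_THRESHOLD = 240.0   # Hz
VOLUME_THRESHOLD = 68.0   # dB

def is_excited(rate, pitch, volume):
    """Relatively excited if any voice feature exceeds its set threshold."""
    return (rate > RATE_THRESHOLD
            or pitch > PITCH_THRESHOLD
            or volume > VOLUME_THRESHOLD)

print(is_excited(5.5, 220.0, 60.0))  # → True (speech rate exceeded)
print(is_excited(4.0, 200.0, 60.0))  # → False
```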
In one embodiment, the processing module is further configured to, if the designated user is currently in an angry state, select a gentle, soothing audio database for voice control so as to calm the designated user's mood.
In one embodiment, the execution module is further configured to select a set human-machine interaction sound effect for interacting with the designated user according to the designated user's emotional state.
In one embodiment, the execution module is further configured to, after the emotional state of the designated user is determined, select set music according to that emotional state to soothe the designated user's mood and thereby help ensure driving safety.
In one embodiment, the execution module is further configured to, when the vehicle is in a driving navigation state, select and play a set navigation voice according to the designated user's emotional state to soothe the designated user's mood and thereby help ensure driving safety.
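Taken together, these embodiments describe a mapping from the detected emotional state to a voice-control profile, which might be sketched as follows (the profile names and the default profile are assumptions, not part of the patent):

```python
# Hypothetical mapping from detected emotion to a voice-control profile.
VOICE_PROFILES = {
    "angry":    {"audio_db": "gentle_soothing", "music": "calm", "nav_voice": "soft"},
    "sad":      {"audio_db": "warm_comfort", "music": "uplifting", "nav_voice": "warm"},
    "cheerful": {"audio_db": "standard", "music": "user_playlist", "nav_voice": "standard"},
}
DEFAULT_PROFILE = {"audio_db": "standard", "music": "user_playlist", "nav_voice": "standard"}

def voice_control_mode(emotion, navigating=False):
    """Pick the voice-control profile for the detected emotion."""
    profile = VOICE_PROFILES.get(emotion, DEFAULT_PROFILE).copy()
    if not navigating:
        profile.pop("nav_voice")  # navigation voice only applies while navigating
    return profile

print(voice_control_mode("angry", navigating=True)["nav_voice"])  # → soft
```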
The present invention further provides a vehicle, which includes the human-machine interaction voice control device based on a user's emotional state as described above.
In one embodiment, the vehicle of the invention can connect in real time to a server located in the cloud and call the server's data in a timely manner to analyze the user's emotional state.
Fig. 3 shows a structural block diagram of the vehicle of one embodiment of the invention. As shown in Fig. 3, the vehicle can include: a central control module, an instrument panel 310, a driving recorder 311, a HUD (Head Up Display) 312, an intelligent in-vehicle infotainment system 313 and an intelligent driving module 313.
The instrument panel 310 has a 12.3-inch LCD display and can use a TI J6 CPU; its operating system can be based on the QNX embedded system. The instrument panel can display the vehicle state, maps, vehicle navigation information and music playback, where the vehicle state information includes speed, rotation speed, battery level, tire pressure, parking state, gear and the like. The HUD 312 can display GPS navigation information, navigation route information, time information and the like.
In one embodiment, the intelligent driving module 313 can be used for processing operations related to intelligent driving, and can include an Advanced Driver Assistance System (ADAS), an active safety system, an Attention Assist System (AAS), a Fatigue Warning System (FWS), an Acoustic Vehicle Alerting System (AVAS) and the like. The vehicle can perform intelligent driving in combination with the ADAS system and the like; the intelligent driving may be fully unmanned driving, or advanced driver-assistance functions, such as assisted lane merging and lane-departure control, performed while the driver retains driving control.
The control device can be composed of multiple modules and can mainly include: a mainboard 301; a SATA (Serial Advanced Technology Attachment) module 302, connected to storage devices such as an SSD 303 and usable for storing data; an AM (Amplitude Modulation)/FM (Frequency Modulation) module 304, providing a radio function for the vehicle; a power amplifier module 305 for sound processing; a WIFI (Wireless Fidelity)/Bluetooth module 306, providing WIFI/Bluetooth services for the vehicle; an LTE (Long Term Evolution) communication module 307, providing the vehicle with communication functions via telecom operators; a power module 308, which supplies power for the control device; and a Switch interconnection module 309, which can connect multiple sensors as an expandable interface. For example, if a night-vision sensor or a PM2.5 sensor needs to be added, it can be connected to the mainboard of the control device through the Switch interconnection module 309, so that the processor of the control device performs the data processing and transfers the data to the central control display.
In one embodiment, the vehicle further includes sensors such as surround-view cameras, ADAS cameras, night-vision cameras, millimeter-wave radar, ultrasonic radar and ESR radar. With the above intelligent-driving hardware mounted when the vehicle is manufactured, the vehicle can later be upgraded via OTA to improve automatic-driving functions using that hardware.
The description of the invention is given for the purposes of example and description, and is not exhaustive or intended to limit the invention to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were selected and described to better illustrate the principles of the invention and its practical application, and to enable those of ordinary skill in the art to understand the invention and design various embodiments, with various modifications, suited to particular uses.
Claims (11)
1. A human-machine interaction voice control method based on a user's emotional state, characterized by including:
monitoring the expression, voice or action of a designated user;
determining the current emotional state of the designated user according to the expression, voice or action of the designated user;
determining the voice control mode of the vehicle according to the current emotional state of the designated user;
carrying out vehicle human-machine interaction according to the determined voice control mode.
2. The method according to claim 1, characterized in that:
before determining the current emotional state of the designated user according to the expression, voice or action of the designated user, the method includes:
collecting statistics on the emotional state data of multiple users to form a general user emotional state database, the emotional state database including the relations between users' expressions, voices or actions and emotional states;
analyzing the emotional states of multiple users in a big-data manner according to the general user emotional state database, and calibrating the confidence level of a user's emotional state according to the user's actions, the user emotional states including a cheerful state, an angry state, a sad state, a distressed state and an excited state;
and/or
determining the current emotional state of the designated user according to the expression, voice or action of the designated user includes:
evaluating and determining the current emotional state of the designated user according to the general user emotional state database and the monitored expression, voice or action of the designated user;
and/or
determining the control mode of the vehicle according to the current emotional state of the designated user.
3. The method according to claim 1 or 2, characterized in that determining the current emotional state of the designated user according to the expression, voice or action of the designated user further includes:
obtaining the changes in speech rate, intonation and volume of the designated user's voice, and determining the current emotional state of the designated user according to the averages of the speech rate, intonation and volume of the designated user's speech;
and/or
analyzing the linguistic information contained in the designated user's voice within a set time period, and judging whether specific words or sentences appear in the designated user's language expression;
determining the emotional state of the designated user according to the specific words or sentences appearing in the designated user's current expression;
and/or
analyzing the actions of the designated user within a set time period;
judging the current emotional state of the designated user according to the user's everyday actions and current actions;
and/or
analyzing the driving behavior of the designated user within a set time period;
judging the current emotional state of the designated user according to the user's routine driving behavior and current driving behavior.
4. The method according to claim 3, characterized by further including:
analyzing whether the speech rate, intonation and volume of the designated user's voice within a set time period exceed set thresholds;
if the designated user's speech rate rises above the set speech-rate threshold, and/or the intonation rises above the set intonation threshold, and/or the volume rises above the set volume threshold, judging that the designated user is in a relatively excited state.
5. The method according to claim 1, characterized in that determining the voice control mode of the vehicle according to the current emotional state of the designated user includes:
if the designated user is currently in an angry state, selecting a gentle, soothing audio database for voice control to calm the designated user's mood;
and/or
selecting a set human-machine interaction sound effect for interacting with the designated user according to the designated user's emotional state;
and/or
after the emotional state of the designated user is determined, selecting set music according to the designated user's emotional state to soothe the designated user's mood so as to ensure driving safety;
and/or
if the vehicle is currently in a driving navigation state, selecting and playing a set navigation voice according to the designated user's emotional state to soothe the designated user's mood so as to ensure driving safety.
6. A human-machine interaction voice control device based on a user's emotional state, characterized by including:
a monitoring module for monitoring the expression, voice or action of a designated user;
an emotional state analysis module, connected with the monitoring module, for determining the current emotional state of the designated user according to the expression, voice or action of the designated user;
a processing module, connected with the emotional state analysis module, for determining the voice control mode of the vehicle according to the current emotional state of the designated user;
an execution module, connected with the processing module, for carrying out vehicle human-machine interaction according to the determined voice control mode.
7. The device according to claim 6, characterized in that:
the device further includes an emotional state database module for collecting statistics on the emotional state data of multiple users to form a general user emotional state database, the emotional state database including the relations between users' expressions, voices or actions and emotional states; according to the general user emotional state database, the emotional states of multiple users are analyzed in a big-data manner, and the confidence level of a user's emotional state is calibrated according to the user's actions, the user emotional states including a cheerful state, an angry state, a sad state, a distressed state and an excited state;
and/or
the emotional state analysis module is further configured to evaluate and determine the emotional state of the designated user according to the general user emotional state database and the monitored expression, voice or action of the designated user;
and/or
the processing module is further configured to determine the control mode of the vehicle according to the current emotional state of the designated user.
8. The device according to claim 6 or 7, characterized in that the emotional state analysis module is further configured to:
obtain the changes in speech rate, intonation and volume of the designated user's voice, and determine the current emotional state of the designated user according to the averages of the speech rate, intonation and volume of the designated user's speech;
and/or
analyze the linguistic information contained in the designated user's voice within a set time period, judge whether specific words or sentences appear in the designated user's language expression, and determine the emotional state of the designated user according to the specific words or sentences appearing in the designated user's current expression;
and/or
analyze the actions of the designated user within a set time period, and judge the current emotional state of the designated user according to the user's everyday actions and current actions;
and/or
analyze the driving behavior of the designated user within a set time period, and judge the current emotional state of the designated user according to the user's routine driving behavior and current driving behavior.
9. The device according to claim 8, characterized in that the emotional state analysis module is further configured to:
analyze whether the speech rate, intonation and volume of the designated user's voice within a set time period exceed set thresholds;
if the designated user's speech rate rises above the set speech-rate threshold, and/or the intonation rises above the set intonation threshold, and/or the volume rises above the set volume threshold, judge that the designated user is in a relatively excited state.
10. The device according to claim 6, characterized in that:
the processing module is further configured to, if the designated user is currently in an angry state, select a gentle, soothing audio database for voice control to calm the designated user's mood;
and/or
the execution module is further configured to select a set human-machine interaction sound effect for interacting with the designated user according to the designated user's emotional state;
and/or
the execution module is further configured to, after the emotional state of the designated user is determined, select set music according to the designated user's emotional state to soothe the designated user's mood so as to ensure driving safety;
and/or
the execution module is further configured to, if the vehicle is currently in a driving navigation state, select and play a set navigation voice according to the designated user's emotional state to soothe the designated user's mood so as to ensure driving safety.
11. A vehicle, characterized by including the human-machine interaction voice control device based on a user's emotional state according to any one of claims 6-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611229157.7A CN106803423B (en) | 2016-12-27 | 2016-12-27 | Man-machine interaction voice control method and device based on user emotion state and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106803423A true CN106803423A (en) | 2017-06-06 |
CN106803423B CN106803423B (en) | 2020-09-04 |
Family
ID=58985118
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611229157.7A Active CN106803423B (en) | 2016-12-27 | 2016-12-27 | Man-machine interaction voice control method and device based on user emotion state and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106803423B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005239117A (en) * | 2004-01-26 | 2005-09-08 | Nissan Motor Co Ltd | Driver feeling guiding device |
US20080231703A1 (en) * | 2007-03-23 | 2008-09-25 | Denso Corporation | Field watch apparatus |
US20080269958A1 (en) * | 2007-04-26 | 2008-10-30 | Ford Global Technologies, Llc | Emotive advisory system and method |
CN102874259A (en) * | 2012-06-15 | 2013-01-16 | 浙江吉利汽车研究院有限公司杭州分公司 | Automobile driver emotion monitoring and automobile control system |
CN105700682A (en) * | 2016-01-08 | 2016-06-22 | 北京乐驾科技有限公司 | Intelligent gender and emotion recognition detection system and method based on vision and voice |
CN106114516A (en) * | 2016-08-31 | 2016-11-16 | 合肥工业大学 | The angry driver behavior modeling of a kind of drive automatically people's characteristic and tampering devic |
CN206049658U (en) * | 2016-08-31 | 2017-03-29 | 合肥工业大学 | Angry driver behavior modeling and tampering devic based on drive automatically people's characteristic |
- 2016-12-27: CN201611229157.7A filed; granted as CN106803423B (en); status: Active
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107230384A (en) * | 2017-06-21 | 2017-10-03 | 深圳市盛路物联通讯技术有限公司 | Based on the stopping guide system and method for expecting parking duration and Weather information |
CN107230384B (en) * | 2017-06-21 | 2020-09-25 | 深圳市盛路物联通讯技术有限公司 | Parking guidance system and method based on expected parking duration and weather information |
US10381005B2 (en) | 2017-11-28 | 2019-08-13 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for determining user frustration when using voice control |
CN108010512A (en) * | 2017-12-05 | 2018-05-08 | 广东小天才科技有限公司 | The acquisition methods and recording terminal of a kind of audio |
CN108010512B (en) * | 2017-12-05 | 2021-04-30 | 广东小天才科技有限公司 | Sound effect acquisition method and recording terminal |
CN108664123A (en) * | 2017-12-15 | 2018-10-16 | 蔚来汽车有限公司 | People's car mutual method, apparatus, vehicle intelligent controller and system |
WO2019114718A1 (en) * | 2017-12-15 | 2019-06-20 | 蔚来汽车有限公司 | Human-vehicle interaction method, device, and vehicle-mounted intelligent controller and system |
CN108052016A (en) * | 2017-12-29 | 2018-05-18 | 南京工程学院 | A kind of interactive intelligent mirror |
CN110164427A (en) * | 2018-02-13 | 2019-08-23 | 阿里巴巴集团控股有限公司 | Voice interactive method, device, equipment and storage medium |
CN108710821A (en) * | 2018-03-30 | 2018-10-26 | 斑马网络技术有限公司 | Vehicle user state recognition system and its recognition methods |
CN108682419A (en) * | 2018-03-30 | 2018-10-19 | 京东方科技集团股份有限公司 | Sound control method and equipment, computer readable storage medium and equipment |
WO2019205642A1 (en) * | 2018-04-24 | 2019-10-31 | 京东方科技集团股份有限公司 | Emotion recognition-based soothing method, apparatus and system, computer device, and computer-readable storage medium |
CN108549720A (en) * | 2018-04-24 | 2018-09-18 | 京东方科技集团股份有限公司 | It is a kind of that method, apparatus and equipment, storage medium are pacified based on Emotion identification |
US11498573B2 (en) | 2018-04-24 | 2022-11-15 | Beijing Boe Technology Development Co., Ltd. | Pacification method, apparatus, and system based on emotion recognition, computer device and computer readable storage medium |
CN108896061A (en) * | 2018-05-11 | 2018-11-27 | 京东方科技集团股份有限公司 | A kind of man-machine interaction method and onboard navigation system based on onboard navigation system |
CN109190459A (en) * | 2018-07-20 | 2019-01-11 | 上海博泰悦臻电子设备制造有限公司 | A kind of car owner's Emotion identification and adjusting method, storage medium and onboard system |
CN108984229A (en) * | 2018-07-24 | 2018-12-11 | 广东小天才科技有限公司 | A kind of the starting control method and private tutor's equipment of application program |
CN108984229B (en) * | 2018-07-24 | 2021-11-26 | 广东小天才科技有限公司 | Application program starting control method and family education equipment |
CN110825216A (en) * | 2018-08-10 | 2020-02-21 | 北京魔门塔科技有限公司 | Method and system for man-machine interaction of driver during driving |
CN109243438A (en) * | 2018-08-24 | 2019-01-18 | 上海擎感智能科技有限公司 | A kind of car owner's emotion adjustment method, system and storage medium |
CN109243438B (en) * | 2018-08-24 | 2023-09-26 | 上海擎感智能科技有限公司 | Method, system and storage medium for regulating emotion of vehicle owner |
CN109532653A (en) * | 2018-10-11 | 2019-03-29 | 百度在线网络技术(北京)有限公司 | Method, apparatus, computer equipment and the storage medium linked up with front vehicle |
US10889240B2 (en) | 2018-10-11 | 2021-01-12 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, computer device and storage medium for communicating with a rear vehicle |
CN109346079A (en) * | 2018-12-04 | 2019-02-15 | 北京羽扇智信息科技有限公司 | Voice interactive method and device based on Application on Voiceprint Recognition |
CN109616109A (en) * | 2018-12-04 | 2019-04-12 | 北京蓦然认知科技有限公司 | A kind of voice awakening method, apparatus and system |
CN109599094A (en) * | 2018-12-17 | 2019-04-09 | 海南大学 | The method of sound beauty and emotion modification |
CN109669661A (en) * | 2018-12-20 | 2019-04-23 | 广东小天才科技有限公司 | A kind of control method and electronic equipment of dictation progress |
CN113287117A (en) * | 2019-01-04 | 2021-08-20 | 塞伦妮经营公司 | Interactive system and method |
CN109712646A (en) * | 2019-02-20 | 2019-05-03 | 百度在线网络技术(北京)有限公司 | Voice broadcast method, device and terminal |
CN110085225A (en) * | 2019-04-24 | 2019-08-02 | 北京百度网讯科技有限公司 | Voice interactive method, device, intelligent robot and computer readable storage medium |
CN110085225B (en) * | 2019-04-24 | 2024-01-02 | 北京百度网讯科技有限公司 | Voice interaction method and device, intelligent robot and computer readable storage medium |
CN111976732A (en) * | 2019-05-23 | 2020-11-24 | 上海博泰悦臻网络技术服务有限公司 | Vehicle control method and system based on vehicle owner emotion and vehicle-mounted terminal |
CN112009395A (en) * | 2019-05-28 | 2020-12-01 | 北京车和家信息技术有限公司 | Interaction control method, vehicle-mounted terminal and vehicle |
CN110334669A (en) * | 2019-07-10 | 2019-10-15 | 深圳市华腾物联科技有限公司 | A kind of method and apparatus of morphological feature identification |
CN110334669B (en) * | 2019-07-10 | 2021-06-08 | 深圳市华腾物联科技有限公司 | Morphological feature recognition method and equipment |
CN110215683A (en) * | 2019-07-11 | 2019-09-10 | 龙马智芯(珠海横琴)科技有限公司 | A kind of electronic game system of role playing game |
CN110641476A (en) * | 2019-08-16 | 2020-01-03 | 广汽蔚来新能源汽车科技有限公司 | Interaction method and device based on vehicle-mounted robot, controller and storage medium |
CN110534091A (en) * | 2019-08-16 | 2019-12-03 | 广州威尔森信息科技有限公司 | A kind of people-car interaction method identified based on microserver and intelligent sound |
CN112562661A (en) * | 2019-09-25 | 2021-03-26 | 上海汽车集团股份有限公司 | Vehicle-mounted man-machine interaction system and motor vehicle |
CN110534135A (en) * | 2019-10-18 | 2019-12-03 | 四川大学华西医院 | A method of emotional characteristics are assessed with heart rate response based on language guidance |
CN110689906A (en) * | 2019-11-05 | 2020-01-14 | 江苏网进科技股份有限公司 | Law enforcement detection method and system based on voice processing technology |
CN113221611B (en) * | 2020-02-05 | 2024-03-15 | 丰田自动车株式会社 | Emotion estimation device, method, program, and vehicle |
CN113221611A (en) * | 2020-02-05 | 2021-08-06 | 丰田自动车株式会社 | Emotion estimation device, method, program, and vehicle |
CN111329498A (en) * | 2020-03-09 | 2020-06-26 | 郑州大学 | Multi-modal driver emotion auxiliary adjusting method |
CN111402925B (en) * | 2020-03-12 | 2023-10-10 | 阿波罗智联(北京)科技有限公司 | Voice adjustment method, device, electronic equipment, vehicle-mounted system and readable medium |
CN111402925A (en) * | 2020-03-12 | 2020-07-10 | 北京百度网讯科技有限公司 | Voice adjusting method and device, electronic equipment, vehicle-mounted system and readable medium |
CN111666444A (en) * | 2020-06-02 | 2020-09-15 | 中国科学院计算技术研究所 | Audio push method and system based on artificial intelligence, and related method and equipment |
CN111605556A (en) * | 2020-06-05 | 2020-09-01 | 吉林大学 | Road rage prevention recognition and control system |
EP3831636A3 (en) * | 2020-06-09 | 2021-09-01 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for regulating user emotion, device, and readable storage medium |
CN112035034A (en) * | 2020-08-27 | 2020-12-04 | 芜湖盟博科技有限公司 | Vehicle-mounted robot interaction method |
CN112185422A (en) * | 2020-09-14 | 2021-01-05 | 五邑大学 | Prompt message generation method and voice robot thereof |
CN113012717A (en) * | 2021-02-22 | 2021-06-22 | 上海埃阿智能科技有限公司 | Emotional feedback information recommendation system and method based on voice recognition |
CN113658580A (en) * | 2021-06-24 | 2021-11-16 | 大众问问(北京)信息科技有限公司 | Voice prompt method and device, computer equipment and storage medium |
CN113780062A (en) * | 2021-07-26 | 2021-12-10 | 岚图汽车科技有限公司 | Vehicle-mounted intelligent interaction method based on emotion recognition, storage medium and chip |
WO2024002303A1 (en) * | 2021-11-01 | 2024-01-04 | 华人运通(江苏)技术有限公司 | Robotic arm control method and apparatus for vehicle-mounted screen, device, and vehicle |
CN114049677A (en) * | 2021-12-06 | 2022-02-15 | 中南大学 | Vehicle ADAS control method and system based on emotion index of driver |
CN114049677B (en) * | 2021-12-06 | 2023-08-25 | 中南大学 | Vehicle ADAS control method and system based on driver emotion index |
Also Published As
Publication number | Publication date |
---|---
CN106803423B (en) | 2020-09-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106803423A (en) | Man-machine interaction sound control method, device and vehicle based on user emotion state | |
EP3675121B1 (en) | Computer-implemented interaction with a user | |
US10192171B2 (en) | Method and system using machine learning to determine an automotive driver's emotional state | |
CN108369767B (en) | Session adjustment system and method based on user cognitive state and/or contextual state | |
US11003414B2 (en) | Acoustic control system, apparatus and method | |
CN112277955B (en) | Driving assistance method, device, equipment and storage medium | |
US20200310528A1 (en) | Vehicle system for providing driver feedback in response to an occupant's emotion | |
KR100860952B1 (en) | System and method for driver performance improvement | |
US20160236690A1 (en) | Adaptive interactive voice system | |
KR20030059193A (en) | Method of response synthesis in a driver assistance system | |
JPWO2006011310A1 (en) | Voice identification device, voice identification method, and program | |
Kashevnik et al. | Multimodal corpus design for audio-visual speech recognition in vehicle cabin | |
CN112735440A (en) | Vehicle-mounted intelligent robot interaction method, robot and vehicle | |
CN112071309B (en) | Network appointment vehicle safety monitoring device and system | |
JP6075577B2 (en) | Driving assistance device | |
CN110876047A (en) | Vehicle exterior projection method, device, equipment and storage medium | |
CN112215097A (en) | Method for monitoring driving state of vehicle, vehicle and computer readable storage medium | |
CN111329498A (en) | Multi-modal driver emotion auxiliary adjusting method | |
JP2018031918A (en) | Interactive control device for vehicle | |
WO2017189203A1 (en) | System and method for identifying and responding to passenger interest in autonomous vehicle events | |
CN113771859A (en) | Intelligent driving intervention method, device and equipment and computer readable storage medium | |
Jones et al. | Using paralinguistic cues in speech to recognise emotions in older car drivers | |
JP7235554B2 (en) | AGENT DEVICE, CONTROL METHOD OF AGENT DEVICE, AND PROGRAM | |
CN111816199A (en) | Environmental sound control method and system for intelligent cabin of automobile | |
CN115346527A (en) | Voice control method, device, system, vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||