CN106650633A - Driver emotion recognition method and device - Google Patents
- Publication number
- CN106650633A CN106650633A CN201611070710.7A CN201611070710A CN106650633A CN 106650633 A CN106650633 A CN 106650633A CN 201611070710 A CN201611070710 A CN 201611070710A CN 106650633 A CN106650633 A CN 106650633A
- Authority
- CN
- China
- Prior art keywords
- driver
- data
- emotion
- mood
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Psychiatry (AREA)
- Hospice & Palliative Care (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Child & Adolescent Psychology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses a driver emotion recognition method and device. The method comprises the following steps: collecting at least two of the image data of a driver, the voice data of the driver, and the travelling data of the driven vehicle; recognizing the emotion of the driver separately from each kind of collected data, so as to obtain an emotion recognition result under each data type; and, on the basis of the emotion recognition results obtained under each data type, determining the emotion of the driver according to a preset emotion decision policy. With the disclosed method and device, emotion recognition can be carried out separately on at least two of the image data, the voice data and the vehicle travelling data, and the emotional state of the driver is then judged comprehensively from the multiple recognition results obtained. This manner of recognizing the emotional state is not swayed by any single road condition, vehicle condition, facial feature or voice feature, so the recognized emotion accords better with the driver's actual emotional state, improving both the accuracy and the environmental adaptability of emotion recognition.
Description
Technical field
The present invention relates to the technical field of intelligent automobile interaction, and in particular to a driver emotion recognition method and device.
Background technology
With the globalized popularization of the automobile and the beginning of vehicle intelligence, people increasingly demand a good automobile experience: they want the automobile to understand them better and to customize service content according to their personality and state; they want the automobile to know who they are and to understand their emotions and needs; and they want the automobile to offer services proactively when such services are needed. Emotion recognition and identity recognition for vehicle occupants will therefore play a very important role, allowing the car to better understand people and provide more humane and accurate services.
In car driving, driving safety is paramount, yet most traffic accidents are caused by human factors, and the mood of the occupants is a major cause of such man-made accidents. While driving, long-distance journeys easily make the driver fatigued and drowsy, and traffic jams, bad road conditions and other vehicles can provoke bad moods such as anger in the occupants. It is therefore desirable to recognize the mood of the occupants in order to prevent traffic accidents that might otherwise occur.
At present, occupant emotion is generally recognized through a single modality. However, a single emotion recognition method cannot accurately identify the occupants' mood and identity: the data available to a single method are limited and its judgment mechanism is simple, so it suffers from low accuracy, large error, and susceptibility to external factors.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a driver emotion recognition method and device that at least solve the above problems.
According to one aspect of the present invention, there is provided a driver emotion recognition method, comprising:
collecting at least two kinds of data among the image data of a driver, the voice data of the driver, and the travelling data of the driven vehicle;
recognizing the emotion of the driver separately from each kind of collected data, to obtain an emotion recognition result under each data type;
based on the emotion recognition results obtained under each data type, determining the emotion of the driver according to a preset emotion decision policy.
Alternatively, in the method for the invention, the Emotion identification result includes:The type of emotion that identifies and identify
The confidence level of the type of emotion.
Alternatively, in the method for the invention, the Emotion identification result based under the every kind of data type for obtaining is pressed
According to the mood decision plan of setting, the mood of driver is determined, including:
When the type of emotion at least two Emotion identification results is identical and confidence level is respectively greater than equal to the correspondence of setting
During the first mood confidence threshold value of data type, using the type of emotion at least two Emotion identifications result as final
The mood of the driver for identifying;
When the confidence level of the type of emotion that there is an Emotion identification result in each Emotion identification result is more than or equal to setting
Corresponding data type the second mood confidence threshold value when, using the type of emotion in the Emotion identification result as final identification
The mood of the driver for going out;
Wherein, the first mood confidence threshold value under same data type is less than the second mood confidence threshold value.
Alternatively, in the method for the invention, after the mood for determining driver, also include:According to default
The confidence level of type of emotion and the corresponding relation of type of emotion rank, the mood level of the mood of the driver for finally being identified
Not.
Optionally, in the method of the invention, recognizing the emotion of the driver from the voice data specifically comprises: extracting the voiceprint features in the voice data and recognizing the semantics in the voice data, and recognizing the emotion of the driver according to the voiceprint features and the semantics.
Alternatively, the method for the invention also includes:
When view data or speech data is collected, according to described image data or speech data, identify and drive
The identity of the person of sailing;When view data and speech data is collected, respectively according to described image data and speech data, to driving
The identity of member is identified, and obtains two identification results under two kinds of data types, and is known based on two identity for obtaining
Other result, according to the judging identity strategy of setting, determines the identity of driver.
Alternatively, in the method for the invention, the identification result includes:The user that identifies and identify the use
The confidence level at family;
It is described based on obtain two identification results, according to the judging identity strategy of setting, determine driver's
Identity, including:
When the user identified in two identification results is identical and confidence level is respectively greater than equal to the corresponding number of setting
According to type the first identity confidence threshold value when, using the user that identifies jointly as final user identity identification result;
The confidence level of the user identified in having an identification result in two identification results is more than or equal to
During the second identity confidence threshold value of the corresponding data type of setting, the second confidence level identity is more than or equal to the confidence level of user
The corresponding user of threshold value, as final user identity identification result;
Wherein, the first identity confidence threshold value under same data type is less than the second identity confidence threshold value.
Alternatively, the method for the invention also includes:
Using the identity of the driver for obtaining, match and the driving in each user behavior custom model for pre-building
The corresponding user behavior of member is accustomed to model, and the emotional information of driver is input in the user behavior of matching custom model,
To carry out anticipation to the state of driver and/or behavior, and according to anticipation result, actively provide the clothes matched with anticipation result
Business.
Optionally, in the method of the invention, proactively providing a service that matches the anticipation result specifically comprises: determining the service that matches the anticipation result, asking the user whether the service is needed, and providing the service to the user when the user confirms the need.
Alternatively, in the method for the invention, the service matched with anticipation result for providing a user with, including:Content
Service and/or equipment state control service;The equipment state control service includes:Control the intelligent sound equipment and/or
The equipment being connected with the intelligent sound equipment is to dbjective state.
According to another aspect of the present invention, there is provided a driver emotion recognition device, comprising:
an information collection module, configured to collect at least two kinds of data among the image data of a driver, the voice data of the driver, and the travelling data of the driven vehicle;
an emotion recognition module, configured to recognize the emotion of the driver separately from each kind of collected data, to obtain an emotion recognition result under each data type;
an emotion determination module, configured to determine the emotion of the driver based on the emotion recognition results obtained under each data type and according to a preset emotion decision policy.
Optionally, in the device of the invention, the emotion recognition result comprises: the recognized emotion type and the confidence of the recognized emotion type.
Optionally, in the device of the invention, the emotion determination module is specifically configured to: when the emotion types in at least two emotion recognition results are identical and their confidences are each greater than or equal to the first emotion confidence threshold set for the corresponding data type, take the emotion type in those at least two results as the finally recognized emotion of the driver; when the confidence of the emotion type in one of the emotion recognition results is greater than or equal to the second emotion confidence threshold set for the corresponding data type, take the emotion type in that result as the finally recognized emotion of the driver; wherein, for the same data type, the first emotion confidence threshold is lower than the second emotion confidence threshold.
Optionally, in the device of the invention, the emotion determination module is further configured to determine, after the emotion of the driver is determined, the emotion level of the finally recognized emotion of the driver according to a preset correspondence between emotion-type confidences and emotion-type levels.
Optionally, in the device of the invention, the emotion recognition module is specifically configured to, when recognizing the emotion of the driver from the collected voice data, extract the voiceprint features in the voice data and recognize the semantics in the voice data, and recognize the emotion of the driver according to the voiceprint features and the semantics.
Optionally, the device of the invention further comprises:
an identity recognition module, configured to recognize the identity of the driver from the image data or the voice data when the information collection module collects image data or voice data, and, when the information collection module collects both image data and voice data, to recognize the identity of the driver separately from the image data and the voice data, obtaining two identity recognition results under the two data types;
an identity determination module, configured to take the result directly as the recognized identity of the driver when the identity recognition module obtains an identity recognition result under only one data type, and, when the identity recognition module obtains two identity recognition results under two data types, to determine the identity of the driver from the two obtained identity recognition results according to a preset identity decision policy.
Optionally, in the device of the invention, the identity recognition result comprises: the recognized user and the confidence of the recognized user;
the identity determination module is specifically configured to: when the users recognized in the two identity recognition results are identical and their confidences are each greater than or equal to the first identity confidence threshold set for the corresponding data type, take the commonly recognized user as the final user identity recognition result; when the confidence of the user recognized in one of the two identity recognition results is greater than or equal to the second identity confidence threshold set for the corresponding data type, take the user whose confidence is greater than or equal to that second identity confidence threshold as the final user identity recognition result; wherein, for the same data type, the first identity confidence threshold is lower than the second identity confidence threshold.
Optionally, the device of the invention further comprises:
a service recommendation module, configured to use the obtained identity of the driver to match the user behaviour habit model corresponding to the driver among pre-built user behaviour habit models, to input the driver's emotion information into the matched model to anticipate the driver's state and/or behaviour, and to proactively provide, according to the anticipation result, a service that matches the anticipation result.
Optionally, in the device of the invention, the service recommendation module is specifically configured to determine the service that matches the anticipation result, ask the user whether the service is needed, and provide the service to the user when the user confirms the need.
Optionally, in the device of the invention, the service provided to the user that matches the anticipation result comprises: a content service and/or a device state control service; the device state control service comprises: controlling the intelligent voice device and/or a device connected to the intelligent voice device into a target state.
The present invention has the following beneficial effects:
First, the present invention can perform emotion recognition separately on at least two of the image data, voice data and vehicle travelling data, and comprehensively judge the driver's emotional state from the multiple emotion recognition results obtained. This manner of recognizing the emotional state is not swayed by any single road condition, vehicle condition, facial feature or voice feature; the recognized emotion accords better with the driver's actual emotional state, improving the accuracy and environmental adaptability of emotion recognition.
Secondly, the present invention can also recognize the identity of the driver from the image data and the voice data separately, and comprehensively judge the driver's identity from the two identity recognition results obtained, improving the accuracy and environmental adaptability of identity recognition.
Thirdly, the present invention can make proactive service recommendations according to the recognized emotion and identity of the driver, improving the user experience.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and practised according to the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may become more apparent, specific embodiments of the present invention are set forth below.
Description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The accompanying drawings serve only to illustrate the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, identical parts are denoted by the same reference numerals. In the drawings:
Fig. 1 is a flow chart of a driver emotion recognition method provided by the first embodiment of the invention;
Fig. 2 is a structural block diagram of a driver emotion recognition device provided by the third embodiment of the invention.
Specific embodiments
Exemplary embodiments of the disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be thoroughly understood and its scope fully conveyed to those skilled in the art.
The embodiments of the present invention provide a driver emotion recognition method and device. The invention recognizes the emotion of the driver from at least two of the voice data, the image data and the travelling data of the driven vehicle, and makes a comprehensive judgment from the multiple recognition results, so that the recognized emotion of the driver accords better with the actual emotional state of the driver and is not swayed by any single road condition, vehicle condition, facial feature or voice feature, enabling the mood of persons in the car to be recognized more accurately. The emotion recognition scheme proposed by the present invention therefore has higher environmental adaptability and can suit driver emotion recognition in more environments. The specific implementation process of the present invention is described in detail below through several specific embodiments.
In the first embodiment of the present invention, there is provided a driver emotion recognition method which, as shown in Fig. 1, comprises the following steps:
Step S101: collecting at least two kinds of data among the image data of a driver, the voice data of the driver, and the travelling data of the driven vehicle;
Step S102: recognizing the emotion of the driver separately from each kind of collected data, to obtain an emotion recognition result under each data type;
Step S103: based on the emotion recognition results obtained under each data type, determining the emotion of the driver according to a preset emotion decision policy.
Based on the above principle, several concrete and preferred embodiments are given below to refine and optimize the function of the method of the present invention, so that the implementation of the present solution is more convenient and accurate. It should be noted that, where no conflict arises, the following features may be combined with one another.
In the embodiments of the present invention, the emotion recognition result comprises: the recognized emotion type and the confidence of the recognized emotion type, where the emotion types include, but are not limited to: happy, sad, angry, bored, tired, excited and normal.
Further, in the embodiments of the present invention, the travelling data of the driven vehicle are collected by sensors laid inside and outside the car; these sensors may include a combination of at least one of the following: an acceleration sensor, a speed sensor, an infrared sensor, an angular-rate sensor, a laser ranging sensor, an ultrasonic sensor, and so on. From the information obtained by the sensors laid on the vehicle, the current attitude of the vehicle, the current road condition, the current traffic information, the driving duration, the vehicle driving trace and the like can be obtained, so that the emotional state of the driver can be judged from the above vehicle travelling data.
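As an illustration of condensing raw sensor readings into judgeable travelling data, the sketch below derives a few coarse statistics from speed and acceleration samples. The feature names and the 3.0 m/s² harsh-event cut-off are illustrative assumptions, not values from the patent.

```python
from statistics import mean, pstdev

def travelling_features(speeds_kmh, accels_ms2, duration_min):
    """Condense raw speed/acceleration samples and driving duration
    into coarse travelling features for later emotion judgment."""
    return {
        "mean_speed": mean(speeds_kmh),
        "speed_jitter": pstdev(speeds_kmh),  # erratic speed keeping
        "harsh_events": sum(1 for a in accels_ms2 if abs(a) > 3.0),  # hard brake/accel
        "long_drive": duration_min > 120,    # fatigue-prone duration
    }

f = travelling_features([60, 62, 58, 90, 40], [0.2, -4.1, 3.5, 0.1], 150)
print(f["harsh_events"], f["long_drive"])  # 2 True
```

Features like these would then be fed to the classifier described next.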
In one particular embodiment of the present invention, recognizing the emotion of the driver from the travelling data of the driven vehicle comprises: extracting relevant travelling features from the relevant travelling data, classifying the relevant travelling features in a preset classifier, recognizing from the classification result of the classifier the driver emotion corresponding to the relevant travelling features, and finally giving the corresponding recognition confidence. Specifically, in the embodiments of the present invention, training driving information is collected during a preset period of car travel, and training travelling features are extracted from the training driving information; different driver moods labelled for the different training travelling features are obtained, and the different driver moods labelled on the different training travelling features are learned and trained on the basis of a preset classification algorithm, forming the preset classifier.
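The patent does not fix a classification algorithm, so as one minimal realisation of the "learn, train, form a preset classifier" step, the sketch below trains a nearest-centroid classifier over labelled travelling feature vectors and reports a distance-based confidence; the toy features and moods are assumptions.

```python
import math
from collections import defaultdict

def train_classifier(labelled):
    """labelled: list of (travelling_feature_vector, driver_mood) pairs.
    The 'preset classifier' here is simply one centroid per mood."""
    buckets = defaultdict(list)
    for vec, mood in labelled:
        buckets[mood].append(vec)
    return {
        mood: [sum(col) / len(col) for col in zip(*vecs)]
        for mood, vecs in buckets.items()
    }

def classify(centroids, vec):
    """Return (mood, confidence): nearest centroid, with confidence
    shrinking as the distance to that centroid grows."""
    def dist(mood):
        return math.dist(centroids[mood], vec)
    mood = min(centroids, key=dist)
    return mood, 1.0 / (1.0 + dist(mood))

# Toy features: (speed_jitter, harsh_events_per_10min)
data = [((2.0, 0.0), "normal"), ((3.0, 1.0), "normal"),
        ((12.0, 6.0), "angry"), ((14.0, 8.0), "angry")]
clf = train_classifier(data)
mood, conf = classify(clf, (13.0, 7.0))
print(mood)  # angry
```

A production system would use a stronger learned model; the point is only the shape of the train-then-classify-with-confidence flow described above.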
Further, in the embodiments of the present invention, the image data of the driver are collected by an image collection device such as a camera. In one particular embodiment of the present invention, recognizing the emotion of the driver from the collected image data comprises: first performing offline training for faces, in which a face detector is trained on a face database while landmark points are calibrated on the faces, a landmark-point fitter is trained on the face landmark points, and an emotion classifier is trained on the relation between face landmark points and emotion. During the online operation on faces (when emotion recognition needs to be carried out from the image data), a face is detected in the image data by the face detector, the landmark points are then fitted onto the face by the landmark-point fitter, the emotion classifier judges the mood of the current driver from the face landmark points, and the corresponding classification confidence is finally given. In this embodiment of the invention, the confidence of the image-based emotion recognition is the matching degree with which the emotion classifier matches the user's facial expression, obtained from the face landmark points in the acquired face image, against the facial expression models of the user obtained by prior emotion training under different emotion types. When the matching degree (i.e. the confidence) reaches a certain threshold, the emotion type of the user is judged to be recognized; for example, if the matching result exceeding 90% (confidence) gives the detection result "pleased", the user is considered to be "pleased".
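The landmark-to-expression matching described above can be sketched as template matching over landmark coordinates. The templates (three mouth landmarks only), the similarity formula and the 0.9 threshold are illustrative assumptions, not the trained models of the patent.

```python
import math

# Per-emotion expression templates: normalised (x, y) landmark points
# (here just 3 mouth landmarks: left corner, centre, right corner).
TEMPLATES = {
    "pleased": [(0.30, 0.70), (0.50, 0.78), (0.70, 0.70)],  # corners lifted
    "normal":  [(0.30, 0.75), (0.50, 0.75), (0.70, 0.75)],  # flat mouth
}

def match_expression(landmarks, threshold=0.9):
    """Return (emotion, confidence): best-matching template, with the
    emotion reported only if the matching degree reaches the threshold."""
    best, best_conf = None, 0.0
    for emotion, tpl in TEMPLATES.items():
        err = sum(math.dist(p, q) for p, q in zip(landmarks, tpl))
        conf = 1.0 / (1.0 + err)  # matching degree in (0, 1]
        if conf > best_conf:
            best, best_conf = emotion, conf
    return (best if best_conf >= threshold else None), best_conf

emotion, conf = match_expression([(0.30, 0.70), (0.50, 0.78), (0.70, 0.70)])
print(emotion)  # pleased
```

In the full pipeline, the landmarks passed in would come from the landmark-point fitter applied to the detected face.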
Further, in the embodiments of the present invention, the voice data of the driver are collected by an audio collection device such as a microphone. In one particular embodiment of the present invention, recognizing the emotion of the driver from the collected voice data comprises: first performing offline training for voice, in which a human-voice detector is trained on a speech database, a speech feature-vector extraction model is trained at the same time to extract feature vectors from voice, and an emotion classifier is trained on a calibrated training set of speech feature vectors and emotions. During the online operation on voice (when emotion recognition needs to be carried out from the voice data), voice data are detected in the input sound stream by the human-voice detector, speech feature vectors are extracted from the voice data, and the emotion of the current user is finally discriminated from the speech feature vectors by the emotion classifier, which also gives the recognition confidence. Optionally, in the embodiments of the present invention, the semantics in the voice data are also recognized; when emotion recognition is carried out on the speech feature vectors, a comprehensive judgment can be made in combination with the semantic recognition result, yielding the final recognition result based on the voice data. In this embodiment, the confidence of the voice-based emotion recognition is the matching degree with which the emotion classifier matches the speech feature vectors extracted from the obtained voice data against the speech vector models of the user previously trained under different emotion types; when the matching degree exceeds a set threshold, the emotion of the user is determined. For example, if the matching result exceeding 80% (confidence) gives the detection result "pleased", the user is considered to be "pleased".
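The optional combination of the acoustic result with the semantic recognition result can be sketched as follows. The cue-word lists and the ±0.1 confidence adjustment are illustrative assumptions; a real system would use a trained semantic model rather than keyword spotting.

```python
# Illustrative cue phrases per emotion (hypothetical, for the sketch only).
SEMANTIC_CUES = {
    "angry":   {"cut me off", "idiot", "unbelievable"},
    "pleased": {"great", "lovely", "wonderful"},
}

def combine_voice_result(acoustic_emotion, acoustic_conf, transcript):
    """Fuse the feature-vector match with the recognised semantics:
    agreeing semantics raise the confidence, disagreeing lower it."""
    text = transcript.lower()
    for emotion, cues in SEMANTIC_CUES.items():
        if any(cue in text for cue in cues):
            if emotion == acoustic_emotion:
                return emotion, min(1.0, acoustic_conf + 0.1)
            return acoustic_emotion, max(0.0, acoustic_conf - 0.1)
    return acoustic_emotion, acoustic_conf  # semantics were neutral

emotion, conf = combine_voice_result("angry", 0.75, "That idiot cut me off!")
print(emotion, round(conf, 2))  # angry 0.85
```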
Further, in the embodiments of the present invention, in order to judge the driver's mood from the recognition results obtained under different data types, emotion confidence thresholds are set in advance per data type. Specifically, a first emotion confidence threshold corresponding to the image data type, a first emotion confidence threshold corresponding to the voice data type, and a first emotion confidence threshold corresponding to the travelling data type are set. The first emotion confidence thresholds under the different data types may be identical or different, and the specific values can be set flexibly as required.
Accordingly, in the embodiments of the present invention, determining the emotion of the driver based on the emotion recognition results obtained under each data type and according to the preset emotion decision policy specifically comprises: detecting whether the emotion types in at least two emotion recognition results are identical and their confidences each greater than or equal to the first emotion confidence threshold set for the corresponding data type, and, where this is the case, taking the emotion type in those at least two results as the finally recognized emotion of the driver.
Optionally, in the embodiments of the present invention, the first emotion confidence threshold under each data type can be set at two grades: an emotion confidence threshold P1 at the first grade and an emotion confidence threshold P2 at the second grade, where, for the same data type, P2 is greater than P1. The first-grade threshold P1 is used for judging recognition results under all three data types, while the second-grade threshold P2 is used for judging recognition results under two data types. Specifically, when three emotion recognition results are obtained under the three data types, if the emotion types of the three results are identical and the corresponding confidences are all greater than the corresponding first-grade threshold P1, the emotion type recognized in the three results is taken as the finally recognized emotion of the driver. When at least two emotion recognition results are obtained under at least two data types, if the emotion types of two results are identical and the corresponding confidences are all greater than the corresponding second-grade threshold P2, the emotion type recognized in those two results is directly taken as the finally recognized emotion of the driver.
Further, considering that in some cases the recognition confidence based on a certain data type is very high and thus highly credible, the recognition result corresponding to that high-confidence data type can be used directly as the final recognition result. In a specific implementation, it can be detected whether the confidence of the emotion type in one of the emotion recognition results is greater than or equal to the second emotion confidence threshold set for the corresponding data type and, where this is the case, the emotion type in that result is taken as the finally recognized emotion of the driver. Here, for the same data type, the first emotion confidence threshold is lower than the second emotion confidence threshold.
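The decision policy built up above — three-way agreement over the first-grade thresholds P1, two-way agreement over the second-grade thresholds P2, or one very confident result over the higher single-modality threshold — can be sketched as a single function. The threshold dictionaries below reuse the example percentages from this section; the function and parameter names are assumptions.

```python
def decide_emotion(results, p1, p2, single):
    """results: {data_type: (emotion, confidence)}.
    p1 / p2 / single: {data_type: threshold}, with p1 <= p2 < single
    for each data type (the first threshold is below the second)."""
    # All three modalities agree above their first-grade thresholds P1.
    if len(results) == 3:
        types = {t for t, _ in results.values()}
        if len(types) == 1 and all(c >= p1[d] for d, (_, c) in results.items()):
            return types.pop()
    # Any two modalities agree above their second-grade thresholds P2.
    items = list(results.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (d1, (t1, c1)), (d2, (t2, c2)) = items[i], items[j]
            if t1 == t2 and c1 >= p2[d1] and c2 >= p2[d2]:
                return t1
    # One very confident modality above its (higher) single-result threshold.
    for d, (t, c) in results.items():
        if c >= single[d]:
            return t
    return None  # no confident decision

p1 = {"voice": 0.70, "image": 0.80, "travel": 0.60}
p2 = {"voice": 0.80, "image": 0.85, "travel": 0.75}
single = {"voice": 0.95, "image": 0.98, "travel": 0.90}
print(decide_emotion({"voice": ("excited", 0.82), "image": ("excited", 0.86)},
                     p1, p2, single))  # excited
```

The worked examples that follow in the text correspond to the first two branches of this function.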
The emotion decision process set forth above is explained below through a specific example:
In this example, the emotion recognition confidence threshold based on voice data is set at 70%, the emotion recognition confidence threshold based on image data at 80%, and the emotion recognition confidence threshold based on travelling data at 60%. Then, when the emotion type recognized from the voice data is "excited" with a confidence above 70%, the emotion type recognized from the image data is "excited" with a confidence above 80%, and the emotion type recognized from the travelling data is "excited" with a confidence above 60%, the user is judged to be "excited".
In this example, a further threshold setting can also be applied; for instance, the emotion recognition confidence threshold for speech data is set to 80%, the threshold for image data to 85%, and the threshold for driving data to 75%. Then: if the emotion type recognized from the speech data is "excited" with a confidence above 80% and the emotion type recognized from the image data is "excited" with a confidence above 85%, it is directly judged that the user's mood is "excited". Alternatively, if the emotion type recognized from the speech data is "excited" with a confidence above 80% and the emotion recognition result based on the driving data is "excited" with a confidence above 75%, it is directly judged that the user's mood is "excited". Alternatively, if the emotion type recognized from the image data is "excited" with a confidence above 85% and the emotion recognition result based on the driving data is "excited" with a confidence above 75%, it is directly judged that the user's mood is "excited".
In this example, a still further threshold setting can be applied; for instance, the emotion recognition confidence threshold for speech data is set to 95%, the threshold for image data to 98%, and the threshold for driving data to 90%. Then: if the emotion type recognized from the speech data is "excited" with a confidence above 95%, it is directly judged that the user's mood is "excited". Alternatively, if the emotion recognition result based on the driving data is "excited" with a confidence above 90%, it is directly judged that the user's mood is "excited". Alternatively, if the emotion type recognized from the image data is "excited" with a confidence above 98%, it is directly judged that the user's mood is "excited".
It should be pointed out that, when judging against the confidence thresholds, if the judged results conflict, the current recognition results are discarded and judgement continues on newly collected real-time data. For example, if the emotion recognition result based on the speech data is "excited" with a confidence of 95% while the emotion recognition result based on the image data is "happy" with a confidence of 98%, the current recognition results must be discarded and the judgement repeated.
Further, in an embodiment of the present invention, after the mood of the driver has been determined, the method also includes: determining the emotion level of the finally recognized mood according to a preset correspondence between the confidence of an emotion type and its emotion level. Specifically, in this embodiment, a correspondence between the confidence of a recognized emotion type and the emotion level can be established in advance; when multiple emotion recognition results are obtained, the recognition confidences are matched against this correspondence to grade the current emotion type (for example: excited, very excited, and so on).
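The correspondence between confidence and emotion level can be illustrated with a simple lookup; the level names and confidence boundaries below are hypothetical, since the patent leaves the concrete correspondence to be preset:

```python
# Hypothetical preset correspondence: per emotion type, a descending list of
# (confidence_floor, level_label) pairs. Values are illustrative only.
LEVEL_TABLE = {
    "excited": [(0.95, "very excited"), (0.80, "excited"), (0.0, "slightly excited")],
}

def emotion_level(emotion, confidence):
    """Grade a recognized emotion by matching its confidence to the table."""
    for floor, label in LEVEL_TABLE[emotion]:
        if confidence >= floor:
            return label
```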
In summary, the embodiment of the present invention performs a comprehensive recognition judgement of the driver's mood from at least two types of data, which improves the stability and accuracy of the emotion recognition and improves the user experience.
In a second embodiment of the present invention, a driver emotion recognition method is provided which, still as shown in Fig. 1, includes the following steps:
Step S101: collecting at least two of the image data of the driver, the speech data of the driver, and the running data of the driven vehicle;
Step S102: recognizing the mood of the driver separately from each kind of collected data, to obtain an emotion recognition result for each data type;
Step S103: determining the mood of the driver on the basis of the emotion recognition result obtained for each data type, according to a preset mood decision strategy.
In this embodiment, the emotion recognition and decision processes are identical to those of the first embodiment and are not described again here.
In this embodiment, identity recognition of the driver is also performed alongside the emotion recognition, specifically as follows:
when image data or speech data is collected, the identity of the driver is recognized from that image data or speech data;
when both image data and speech data are collected, the identity of the driver is recognized separately from the image data and the speech data, so as to obtain two identity recognition results under two data types, and the identity of the driver is then determined on the basis of the two identity recognition results according to a preset identity decision strategy.
In this embodiment, an identity recognition result includes the recognized user and the confidence with which that user is recognized.
Further, in this embodiment, the image data of the driver is collected by an image acquisition device such as a camera. In a specific embodiment of the present invention, recognizing the identity of the driver from the collected image data includes: first performing offline training for faces, in which a face detector is trained on a face database, landmark points are calibrated on faces, a landmark fitter is trained from those landmark points, and an identity classifier is trained from the relation between face landmark points and identities. During online operation, the face detector detects a face in the image data, the landmark fitter fits landmark points onto the face, and the identity classifier judges the identity of the current driver from the face landmark points and finally outputs a corresponding classification confidence. In this embodiment, the confidence of the image-based identity recognition is the matching degree with which the identity classifier matches the face landmark points of the acquired face image against the face landmark points of the known identities from the earlier training. When the matching degree (i.e. the confidence) reaches a certain threshold, the user identity is judged to be recognized; for example, if the detection result matches user A with a matching degree (confidence) above 85%, it is concluded that "this user is user A".
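The landmark-matching step can be sketched as follows; cosine similarity over a flattened landmark vector is an illustrative stand-in for the trained identity classifier, and the 85% threshold follows the example above:

```python
import math

def match_identity(landmarks, enrolled, threshold=0.85):
    """landmarks: flat list of floats fitted from the current face image.
    enrolled: dict mapping known user -> reference landmark vector.
    Returns (user, matching_degree), or (None, best_score) below threshold."""
    best_user, best_score = None, 0.0
    for user, ref in enrolled.items():
        num = sum(a * b for a, b in zip(landmarks, ref))
        den = math.hypot(*landmarks) * math.hypot(*ref)
        score = num / den if den else 0.0  # cosine similarity as matching degree
        if score > best_score:
            best_user, best_score = user, score
    if best_score >= threshold:
        return best_user, best_score
    return None, best_score  # matching degree too low: identity not recognized
```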
Further, in this embodiment, the voice data of the driver is collected by an audio acquisition device such as a microphone. In a specific embodiment of the present invention, recognizing the identity of the driver from the collected speech data includes: first performing offline training for voice, in which a human-voice detector is trained on a speech database, a speech feature vector extraction model is trained to extract feature vectors from voice, and an identity classifier is trained on a calibrated training set of speech feature vectors and identities. During online operation, the human-voice detector detects voice data in the input sound stream, speech feature vectors are extracted from the voice data, and the identity classifier discriminates the identity of the current user from the speech feature vectors and outputs the recognition confidence. In this embodiment, the confidence of the voice-based identity recognition is the matching degree with which the identity classifier matches the speech feature vectors of the acquired speech data against the speech vector models of the known users from the earlier training. When the matching degree exceeds a preset threshold, the identity of the user is determined; for example, if the matching result matches user A with a matching degree (confidence) above 85%, it is concluded that "this user is user A".
Further, in this embodiment, when both image data and speech data are collected, identity confidence thresholds are set in advance per data type so that the identity of the driver can be judged from the recognition results obtained under the two data types. Specifically, a first identity confidence threshold corresponding to the image data type and a first identity confidence threshold corresponding to the speech data type are set. The first identity confidence thresholds of the different data types may be identical or different, and the specific values can be set flexibly according to demand.
Accordingly, in this embodiment, determining the identity of the driver on the basis of the two identity recognition results according to the preset identity decision strategy includes:
when the users recognized in the two identity recognition results are identical and each confidence is greater than or equal to the first identity confidence threshold set for the corresponding data type, taking the commonly recognized user as the final user identity recognition result;
when the confidence of the user recognized in either one of the two identity recognition results is greater than or equal to the second identity confidence threshold set for the corresponding data type, taking the user whose confidence meets that second identity confidence threshold as the final user identity recognition result;
wherein, for any given data type, the first identity confidence threshold is lower than the second identity confidence threshold.
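The identity decision strategy described above can be sketched in the same two-tier form as the mood decision; the threshold values and the function `decide_identity` are illustrative assumptions:

```python
# Per-data-type identity thresholds (illustrative, following the examples
# below): first (agreement) threshold and second (single-result) threshold.
FIRST_ID = {"speech": 0.85, "image": 0.90}
SECOND_ID = {"speech": 0.95, "image": 0.98}

def decide_identity(results, first_thr=FIRST_ID, second_thr=SECOND_ID):
    """results: dict mapping data type -> (user, confidence)."""
    # A single result clearing its second (higher) threshold decides alone.
    strong = {u for d, (u, c) in results.items() if c >= second_thr[d]}
    if len(strong) == 1:
        return strong.pop()
    # Otherwise both results must agree and clear their first thresholds.
    users = {u for u, _ in results.values()}
    if len(users) == 1 and all(
        c >= first_thr[d] for d, (_, c) in results.items()
    ):
        return users.pop()
    return None  # identity not determined
```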
The identity decision process set forth above is explained below through a specific example.
In this example, the identity recognition confidence threshold for speech data is set to 85% and the threshold for image data to 90%. Then: if the identity recognized from the speech data is user A with an identity recognition confidence above 85%, and the identity recognized from the image data is user A with an identity recognition confidence above 90%, it is judged that this user is user A.
In this example, a further threshold setting can also be applied; for instance, the identity recognition confidence threshold for speech data is set to 95% and the threshold for image data to 98%. Then: if the identity recognized from the speech data is user A with an identity recognition confidence above 95%, it is directly judged that the driver is user A; alternatively, if the identity recognized from the image data is user A with an identity recognition confidence above 98%, it is directly judged that the driver is user A.
In summary, either the confidences of both the voice result and the image result exceed the first threshold, in which case the user is taken to be user A; or at least one of the voice and image confidences exceeds the second threshold, in which case the user is likewise taken to be user A, the second threshold being greater than the first threshold.
In a preferred embodiment of the present invention, after the identity of the driver has been determined, the method further includes: sending the recognized identity and the emotion information to a big-data recommendation engine. The big-data recommendation engine uses the obtained identity of the driver to match, among user behaviour habit models established in advance, the user behaviour habit model corresponding to that driver, inputs the emotion information of the driver into the matched user behaviour habit model so as to anticipate the state and/or behaviour of the driver, and, according to the anticipation result, actively provides a service matching the anticipation result.
Actively providing a service matching the anticipation result specifically includes: determining the service that matches the anticipation result, issuing to the user an inquiry as to whether the service is needed, and providing the service to the user when it is determined that the user needs it.
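The ask-before-acting flow can be sketched as follows; `offer_service`, `ask_user` and `perform` are placeholders for the voice-interaction prompt and the device control, whose concrete interfaces the patent does not specify:

```python
def offer_service(service, ask_user, perform):
    """Inquire whether the user wants the anticipated service; act only on
    an affirmative reply. ask_user: str -> str (the user's spoken reply,
    already transcribed); perform: str -> None (executes the service)."""
    reply = ask_user(f"Would you like me to {service}?")
    # Toy affirmative check; a real system would apply semantic recognition.
    if any(w in reply.lower() for w in ("yes", "ok", "sure", "go ahead")):
        perform(service)
        return True
    return False
```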
In this embodiment, the service provided to the user that matches the anticipation result includes a content service and/or a device state control service; the device state control service includes controlling the intelligent voice device and/or a device connected to the intelligent voice device into a target state.
The process of actively providing a service is illustrated below through several concrete application cases.
Case one: if the emotion recognition result based on the speech data is "excited, 77%", the emotion recognition result based on the image data is "excited, 90%", and the emotion recognition result based on the vehicle running data is "excited, 65%", it can be judged that this user's mood is "excited". If at the same time the identity recognition result based on the speech data is "Mrs Li, 88%" and the identity recognition result based on the image data is "Mrs Li, 95%", it can be judged that this user is Mrs Li. The overall mood and identity recognition result is therefore "Mrs Li's mood is excited".
The recognition result "Mrs Li's mood is excited" is sent to the big-data recommendation engine, which triggers a voice interaction and sends a voice broadcast file to the intelligent in-vehicle control unit, which automatically initiates the voice interaction:
In-vehicle control unit: "I have just learned a new joke. Would Mrs Li like to hear it?"
Mrs Li: "Alright, go ahead."
In-vehicle control unit: "Ultraman raised his hand in class, and the teacher was left in the dust."
Case two: if the emotion recognition result based on the speech data is "tired, 84%" and the emotion recognition result based on the image data is "tired, 93%", it can be judged that this user's mood is "tired". If at the same time the identity recognition result based on the speech data is "Mr Zhang, 88%" and the identity recognition result based on the image data is "Mr Zhang, 95%", it can be judged that this user is "Mr Zhang". The overall mood and identity recognition result is therefore "Mr Zhang's mood is tired".
The recognition result "Mr Zhang's mood is tired" is sent to the big-data recommendation engine, which triggers a voice interaction and sends a voice broadcast file to the intelligent in-vehicle control unit, which automatically plays music:
In-vehicle control unit: "Mr Zhang, you seem tired from driving; shall I put on an energetic song for you?"
Mr Zhang: "Alright."
The in-vehicle control unit then opens the music player, plays a light and cheerful song and, with reference to the user's music history data, recommends singers and music genres Mr Zhang may like.
Case three: if the emotion recognition result based on the image data is "angry, 80%" and the emotion recognition result based on the vehicle running data is "angry, 99%", it can be judged that this user's mood is "angry". If at the same time the identity recognition result based on the speech data is "Mr Zhou, 88%" and the identity recognition result based on the image data is "Mr Zhou, 95%", it can be judged that this user is "Mr Zhou". The overall mood and identity recognition result is therefore "Mr Zhou's mood is angry".
The recognition result "Mr Zhou's mood is angry" is sent to the big-data recommendation engine, which triggers a voice interaction and sends a voice broadcast file to the intelligent in-vehicle control unit, which automatically initiates the voice interaction to seek the user's approval for the operation:
In-vehicle control unit: "Mr Zhou, you seem somewhat irritated today; shall I cool things down by turning on the air conditioning?"
Mr Zhou: "Alright, turn it on."
After receiving the voice message, the in-vehicle control unit turns on the air conditioning.
In summary, the embodiment of the present invention proposes a brand-new method of driver emotion recognition and identity recognition, in which the identity and emotion recognition results are sent to a big-data recommendation engine, and the big-data recommendation engine recommends matching services according to those results, including but not limited to UI presentation, voice broadcast, content provision and in-vehicle device control, thereby actively providing the user with more personalised services and improving the user experience.
In a third embodiment of the present invention, a driver emotion recognition device is provided which, as shown in Fig. 2, specifically includes:
an information acquisition module 210 for collecting at least two of the image data of the driver, the speech data of the driver and the running data of the driven vehicle;
an emotion recognition module 220 for recognizing the mood of the driver separately from each kind of collected data, to obtain an emotion recognition result for each data type; and
a mood determination module 230 for determining the mood of the driver on the basis of the emotion recognition result obtained for each data type, according to a preset mood decision strategy.
On the basis of the above structural framework and implementation principle, several concrete and preferred implementations under the above constitution are given below to refine and optimise the functions of the device of the present invention, so that implementation of the present solution is more convenient and accurate. The following content is specifically involved:
In this embodiment, an emotion recognition result includes the recognized emotion type and the confidence with which that emotion type is recognized. Emotion types include, but are not limited to: happy, sad, angry, bored, tired, excited and normal.
Further, in this embodiment, the information acquisition module 210 includes device sensors arranged on the exterior of the vehicle, and an image acquisition device and an audio acquisition device arranged inside the vehicle.
Specifically, in this embodiment, the running data of the driven vehicle is collected by the device sensors arranged on the exterior of the vehicle, which may include a combination of at least one or more of the following: an acceleration sensor, a velocity sensor, an infrared sensor, an angular-rate sensor, a laser ranging sensor, an ultrasonic sensor and so on. From the information obtained by these sensors laid on the vehicle, the current attitude information of the vehicle, the current road conditions, the current traffic information, the driving duration, the vehicle driving trajectory information and the like can be obtained. Further, in this embodiment, the image data of the driver is collected by an image acquisition device such as a camera, and the voice data of the driver is collected by an audio acquisition device such as a microphone.
In a specific embodiment of the present invention, the emotion recognition module 220 recognizing the mood of the driver from the running data of the driven vehicle includes: extracting relevant driving features from the relevant running data, classifying the relevant driving features with a preset classifier, identifying the driver mood corresponding to the relevant driving features from the classification result of the classifier, and finally outputting the corresponding recognition confidence. Specifically, in this embodiment, training driving information is collected while the vehicle runs for a preset time, and training driving features are extracted from the training driving information; different driver moods annotated on the different training driving features are obtained, and these annotations are learned and trained on the basis of a preset classification algorithm, so as to form the preset classifier.
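The train-then-classify workflow for driving features can be sketched as follows; the patent does not name the classification algorithm, so a nearest-centroid classifier with an inverse-distance confidence is used purely as an illustrative stand-in:

```python
import math

def train(samples):
    """samples: list of (driving_feature_vector, mood_label).
    Returns per-mood centroids, standing in for the preset classifier."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {
        label: [sum(col) / len(col) for col in zip(*vecs)]
        for label, vecs in by_label.items()
    }

def classify(centroids, vec):
    """Classify a live driving-feature vector; return (mood, confidence)."""
    dists = {label: math.dist(vec, c) for label, c in centroids.items()}
    label = min(dists, key=dists.get)
    # Crude confidence: this class's inverse-distance share across all classes.
    inv = {l: 1.0 / (d + 1e-9) for l, d in dists.items()}
    return label, inv[label] / sum(inv.values())
```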
In a specific embodiment of the present invention, the emotion recognition module 220 recognizing the mood of the driver from the collected image data includes: first performing offline training for faces, in which a face detector is trained on a face database, landmark points are calibrated on faces, a landmark fitter is trained from those landmark points, and a mood classifier is trained from the relation between face landmark points and moods. During online operation (when emotion recognition is to be performed on image data), the face detector detects a face in the image data, the landmark fitter fits landmark points onto the face, and the mood classifier judges the mood of the current driver from the face landmark points and finally outputs a corresponding classification confidence.
In a specific embodiment of the present invention, the emotion recognition module 220 recognizing the mood of the driver from the collected speech data includes: first performing offline training for voice, in which a human-voice detector is trained on a speech database, a speech feature vector extraction model is trained to extract feature vectors from voice, and a mood classifier is trained on a calibrated training set of speech feature vectors and moods. During online operation (when emotion recognition is to be performed on speech data), the human-voice detector detects voice data in the input sound stream, speech feature vectors are extracted from the voice data, and the mood classifier discriminates the mood of the current user from the speech feature vectors and outputs the recognition confidence. Optionally, in this embodiment, the semantics of the speech data are also recognized; when emotion recognition is performed on the speech feature vectors, a comprehensive recognition judgement can be made in combination with the semantic recognition result, to obtain the final speech-based recognition result.
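One possible way to combine the acoustic emotion result with the semantic recognition result, as suggested above, is sketched below; the weighting scheme is an assumption, not something the patent specifies:

```python
def fuse(acoustic, semantic, w_acoustic=0.6):
    """Each argument is (label, confidence). When both channels agree, keep
    the agreed label with the higher confidence; on disagreement, keep the
    channel whose weighted confidence is higher (acoustic weighted 0.6 here,
    an arbitrary illustrative choice)."""
    a_label, a_conf = acoustic
    s_label, s_conf = semantic
    if a_label == s_label:
        return a_label, max(a_conf, s_conf)
    return acoustic if w_acoustic * a_conf >= (1 - w_acoustic) * s_conf else semantic
```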
Further, in this embodiment, mood confidence thresholds are set in advance per data type so that the driver's mood can be judged from the recognition results obtained under the different data types. Specifically, a first mood confidence threshold corresponding to the image data type, a first mood confidence threshold corresponding to the speech data type, and a first mood confidence threshold corresponding to the running data type are set. The first mood confidence thresholds of the different data types may be identical or different, and the specific values can be set flexibly according to demand.
Accordingly, in this embodiment, the mood determination module 230 determining the mood of the driver on the basis of the emotion recognition result obtained for each data type, according to the preset mood decision strategy, specifically includes:
when the emotion types in at least two emotion recognition results are identical and each confidence is greater than or equal to the first mood confidence threshold set for the corresponding data type, taking the emotion type shared by the at least two emotion recognition results as the finally recognized mood of the driver.
Further, in some cases the recognition confidence based on a single data type is so high that the result is highly credible on its own, in which case the recognition result of that data type can directly serve as the final recognition result: in a specific implementation, it is detected whether any one of the emotion recognition results has a confidence greater than or equal to the second mood confidence threshold set for the corresponding data type, and if so, the emotion type in that recognition result is taken as the finally recognized mood of the driver; wherein, for any given data type, the first mood confidence threshold is lower than the second mood confidence threshold.
Further, in this embodiment, the mood determination module 230 is also used for determining, after the mood of the driver has been determined, the emotion level of the finally recognized mood according to a preset correspondence between the confidence of an emotion type and its emotion level.
In a specific embodiment of the present invention, the device further includes:
an identity recognition module 240 for recognizing the identity of the driver from the image data or speech data when the information acquisition module 210 collects image data or speech data, and for recognizing the identity of the driver separately from the image data and the speech data, so as to obtain two identity recognition results under two data types, when the information acquisition module collects both image data and speech data; and
an identity determination module 250 for taking the recognition result directly as the recognized identity of the driver when the identity recognition module 240 obtains a recognition result under one data type, and for determining the identity of the driver on the basis of the two identity recognition results, according to a preset identity decision strategy, when the identity recognition module 240 obtains two identity recognition results under two data types.
In this embodiment, an identity recognition result includes the recognized user and the confidence with which that user is recognized.
In a specific embodiment of the present invention, the identity recognition module 240 recognizing the identity of the driver from the collected image data includes: first performing offline training for faces, in which a face detector is trained on a face database, landmark points are calibrated on faces, a landmark fitter is trained from those landmark points, and an identity classifier is trained from the relation between face landmark points and identities. During online operation, the face detector detects a face in the image data, the landmark fitter fits landmark points onto the face, and the identity classifier judges the identity of the current driver from the face landmark points and finally outputs a corresponding classification confidence.
In a specific embodiment of the present invention, the identity recognition module 240 recognizing the identity of the driver from the collected speech data includes: first performing offline training for voice, in which a human-voice detector is trained on a speech database, a speech feature vector extraction model is trained to extract feature vectors from voice, and an identity classifier is trained on a calibrated training set of speech feature vectors and identities. During online operation, the human-voice detector detects voice data in the input sound stream, speech feature vectors are extracted from the voice data, and the identity classifier discriminates the identity of the current user from the speech feature vectors and outputs the recognition confidence.
Further, in this embodiment, when both image data and speech data are collected, identity confidence thresholds are set in advance per data type so that the identity of the driver can be judged from the recognition results obtained under the two data types. Specifically, a first identity confidence threshold corresponding to the image data type and a first identity confidence threshold corresponding to the speech data type are set. The first identity confidence thresholds of the different data types may be identical or different, and the specific values can be set flexibly according to demand.
Accordingly, in this embodiment, the identity determination module 250 determining the identity of the driver on the basis of the two identity recognition results, according to the preset identity decision strategy, specifically includes:
when the users recognized in the two identity recognition results are identical and each confidence is greater than or equal to the first identity confidence threshold set for the corresponding data type, taking the commonly recognized user as the final user identity recognition result; and
when the confidence of the user recognized in either one of the two identity recognition results is greater than or equal to the second identity confidence threshold set for the corresponding data type, taking the user whose confidence meets that second identity confidence threshold as the final user identity recognition result, wherein, for any given data type, the first identity confidence threshold is lower than the second identity confidence threshold.
In a still further embodiment of the present invention, the device further includes:
a big-data recommendation engine module 260 for using the obtained identity of the driver to match, among user behaviour habit models established in advance, the user behaviour habit model corresponding to that driver, inputting the emotion information of the driver into the matched user behaviour habit model so as to anticipate the state and/or behaviour of the driver, and actively providing, according to the anticipation result, a service matching the anticipation result.
Specifically, the big-data recommendation engine module 260 is used for determining the service matching the anticipation result, issuing to the user an inquiry as to whether the service is needed, and providing the service to the user when it is determined that the user needs it. Whether the user needs the service can be determined by performing speech recognition on the user's voice input to obtain text information, and then performing semantic recognition on that text information.
In this embodiment, the service provided to the user that matches the anticipation result includes a content service and/or a device state control service; the device state control service includes controlling the intelligent voice device and/or a device connected to the intelligent voice device into a target state.
In summary, the device of the embodiments of the present invention can perform emotion recognition on at least two of image data, speech data, and vehicle running data, and comprehensively determine the driver's emotional state from the multiple emotion recognition results obtained. This emotional-state recognition method is not affected by any single factor such as road conditions, vehicle condition, facial features, or speech features, so the recognized emotion better matches the driver's actual emotional state, improving the accuracy and environmental adaptability of emotion recognition.
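The comprehensive decision summarized above (and detailed in claim 3) can be sketched as follows, with invented thresholds and emotion labels: agreement between at least two sources above a lower per-type threshold, or a single very confident source, decides the emotion.

```python
# Illustrative sketch of the emotion decision strategy; threshold values
# and data-type names are assumptions for the example.
E1 = {"image": 0.6, "speech": 0.6, "running": 0.5}  # first emotion thresholds
E2 = {"image": 0.9, "speech": 0.9, "running": 0.8}  # second emotion thresholds

def decide_emotion(results):
    """results: list of (data_type, emotion_type, confidence) tuples."""
    # Rule 1: at least two results agree on the emotion type and each
    # clears the first (lower) threshold for its own data type.
    for i, (ta, ea, ca) in enumerate(results):
        for tb, eb, cb in results[i + 1:]:
            if ea == eb and ca >= E1[ta] and cb >= E1[tb]:
                return ea
    # Rule 2: a single result clears the second (higher) threshold
    # for its data type.
    for t, e, c in results:
        if c >= E2[t]:
            return e
    return None  # undecided
```

This is the same two-threshold pattern as the identity decision: corroboration across modalities lowers the bar, while a lone modality must be strongly confident.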
In addition, the device of the embodiments of the present invention can also perform identity recognition of the driver from image data and speech data, respectively, and comprehensively determine the driver's identity from the two identity recognition results obtained, improving the accuracy and environmental adaptability of identity recognition.
Furthermore, the present invention can proactively recommend services according to the recognized emotion and identity of the driver, improving the user experience.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for related details, refer to the description of the method embodiments.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, which may include ROM, RAM, a magnetic disk, an optical disc, or the like.
In short, the foregoing describes only preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (20)
1. A driver emotion recognition method, characterised by comprising:
collecting at least two of image data of a driver, speech data of the driver, and running data of the vehicle being driven;
recognizing the driver's emotion separately from each kind of collected data, to obtain an emotion recognition result under each data type;
determining the driver's emotion based on the emotion recognition results obtained under each data type, according to a set emotion decision strategy.
2. The method of claim 1, characterised in that the emotion recognition result comprises: the recognized emotion type and the confidence level with which the emotion type is recognized.
3. The method of claim 2, characterised in that determining the driver's emotion based on the emotion recognition results obtained under each data type, according to the set emotion decision strategy, comprises:
when the emotion types in at least two emotion recognition results are the same and each confidence level is greater than or equal to the first emotion confidence threshold set for the corresponding data type, taking the emotion type in the at least two emotion recognition results as the finally recognized emotion of the driver;
when the confidence level of the emotion type in one of the emotion recognition results is greater than or equal to the second emotion confidence threshold set for the corresponding data type, taking the emotion type in that emotion recognition result as the finally recognized emotion of the driver;
wherein, for the same data type, the first emotion confidence threshold is less than the second emotion confidence threshold.
4. The method of claim 2, characterised by further comprising, after determining the driver's emotion: determining the emotion level of the finally recognized emotion of the driver according to a preset correspondence between emotion-type confidence levels and emotion-type levels.
5. The method of claim 1, characterised in that recognizing the driver's emotion from the speech data specifically comprises: extracting voiceprint features from the speech data and recognizing the semantics in the speech data, and recognizing the driver's emotion according to the voiceprint features and the semantics.
6. The method of any one of claims 1 to 5, characterised by further comprising:
when image data or speech data is collected, recognizing the driver's identity from the image data or the speech data; when both image data and speech data are collected, recognizing the driver's identity separately from the image data and the speech data to obtain two identity recognition results under the two data types, and determining the driver's identity based on the two identity recognition results obtained, according to a set identity decision strategy.
7. The method of claim 6, characterised in that the identity recognition result comprises: the recognized user and the confidence level with which the user is recognized;
determining the driver's identity based on the two identity recognition results obtained, according to the set identity decision strategy, comprises:
when the users identified in the two identity recognition results are the same and each confidence level is greater than or equal to the first identity confidence threshold set for the corresponding data type, taking the commonly identified user as the final user identity recognition result;
when the confidence level of the user identified in one of the two identity recognition results is greater than or equal to the second identity confidence threshold set for the corresponding data type, taking the user whose confidence level is greater than or equal to the second identity confidence threshold as the final user identity recognition result;
wherein, for the same data type, the first identity confidence threshold is less than the second identity confidence threshold.
8. The method of claim 6, characterised by further comprising:
using the obtained identity of the driver, matching a user behavior habit model corresponding to the driver among pre-built user behavior habit models, inputting the driver's emotion information into the matched user behavior habit model to predict the driver's state and/or behavior, and, according to the prediction result, proactively providing a service matching the prediction result.
9. The method of claim 8, characterised in that proactively providing the service matching the prediction result according to the prediction result specifically comprises:
determining the service matching the prediction result, sending the user a query asking whether the service is needed, and providing the service to the user when it is determined that the user needs it.
10. The method of claim 8 or 9, characterised in that the service matching the prediction result that is provided to the user comprises: a content service and/or a device-state control service; the device-state control service comprises: controlling the intelligent voice device and/or a device connected to the intelligent voice device to a target state.
11. A driver emotion recognition device, characterised by comprising:
an information collection module, configured to collect at least two of image data of a driver, speech data of the driver, and running data of the vehicle being driven;
an emotion recognition module, configured to recognize the driver's emotion separately from each kind of collected data, to obtain an emotion recognition result under each data type;
an emotion determination module, configured to determine the driver's emotion based on the emotion recognition results obtained under each data type, according to a set emotion decision strategy.
12. The device of claim 11, characterised in that the emotion recognition result comprises: the recognized emotion type and the confidence level with which the emotion type is recognized.
13. The device of claim 11, characterised in that the emotion determination module is specifically configured to: when the emotion types in at least two emotion recognition results are the same and each confidence level is greater than or equal to the first emotion confidence threshold set for the corresponding data type, take the emotion type in the at least two emotion recognition results as the finally recognized emotion of the driver; when the confidence level of the emotion type in one of the emotion recognition results is greater than or equal to the second emotion confidence threshold set for the corresponding data type, take the emotion type in that emotion recognition result as the finally recognized emotion of the driver; wherein, for the same data type, the first emotion confidence threshold is less than the second emotion confidence threshold.
14. The device of claim 12, characterised in that the emotion determination module is further configured to, after determining the driver's emotion, determine the emotion level of the finally recognized emotion of the driver according to a preset correspondence between emotion-type confidence levels and emotion-type levels.
15. The device of claim 11, characterised in that the emotion recognition module is specifically configured to, when recognizing the driver's emotion from the collected speech data, extract voiceprint features from the speech data and recognize the semantics in the speech data, and recognize the driver's emotion according to the voiceprint features and the semantics.
16. The device of any one of claims 11 to 15, characterised by further comprising:
an identity recognition module, configured to, when the information collection module collects image data or speech data, recognize the driver's identity from the image data or the speech data; and, when the information collection module collects both image data and speech data, recognize the driver's identity separately from the image data and the speech data to obtain two identity recognition results under the two data types;
an identity determination module, configured to, when the identity recognition module obtains an identity recognition result under one data type, directly take that result as the recognized identity of the driver; and, when the identity recognition module obtains two identity recognition results under the two data types, determine the driver's identity based on the two identity recognition results obtained, according to a set identity decision strategy.
17. The device of claim 16, characterised in that the identity recognition result comprises: the recognized user and the confidence level with which the user is recognized;
the identity determination module is specifically configured to: when the users identified in the two identity recognition results are the same and each confidence level is greater than or equal to the first identity confidence threshold set for the corresponding data type, take the commonly identified user as the final user identity recognition result; when the confidence level of the user identified in one of the two identity recognition results is greater than or equal to the second identity confidence threshold set for the corresponding data type, take the user whose confidence level is greater than or equal to the second identity confidence threshold as the final user identity recognition result; wherein, for the same data type, the first identity confidence threshold is less than the second identity confidence threshold.
18. The device of claim 16, characterised by further comprising:
a big-data recommendation engine module, configured to use the obtained identity of the driver to match a user behavior habit model corresponding to the driver among pre-built user behavior habit models, input the driver's emotion information into the matched user behavior habit model to predict the driver's state and/or behavior, and, according to the prediction result, proactively provide a service matching the prediction result.
19. The device of claim 18, characterised in that the big-data recommendation engine module is specifically configured to determine the service matching the prediction result, send the user a query asking whether the service is needed, and provide the service to the user when it is determined that the user needs it.
20. The device of claim 18 or 19, characterised in that the service matching the prediction result that is provided to the user comprises: a content service and/or a device-state control service; the device-state control service comprises: controlling the intelligent voice device and/or a device connected to the intelligent voice device to a target state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611070710.7A CN106650633A (en) | 2016-11-29 | 2016-11-29 | Driver emotion recognition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106650633A true CN106650633A (en) | 2017-05-10 |
Family
ID=58814438
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611070710.7A Pending CN106650633A (en) | 2016-11-29 | 2016-11-29 | Driver emotion recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106650633A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014024606A1 (en) * | 2012-08-07 | 2014-02-13 | ソニー株式会社 | Information processing device, information processing method, and information processing system |
CN105303829A (en) * | 2015-09-11 | 2016-02-03 | 深圳市乐驰互联技术有限公司 | Vehicle driver emotion recognition method and device |
CN105700682A (en) * | 2016-01-08 | 2016-06-22 | 北京乐驾科技有限公司 | Intelligent gender and emotion recognition detection system and method based on vision and voice |
CN105956059A (en) * | 2016-04-27 | 2016-09-21 | 乐视控股(北京)有限公司 | Emotion recognition-based information recommendation method and apparatus |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107458381A (en) * | 2017-07-21 | 2017-12-12 | 陕西科技大学 | A kind of motor vehicle driving approval apparatus based on artificial intelligence |
CN109389766B (en) * | 2017-08-10 | 2021-07-27 | 通用汽车环球科技运作有限责任公司 | User identification system and method for autonomous vehicle |
CN109389766A (en) * | 2017-08-10 | 2019-02-26 | 通用汽车环球科技运作有限责任公司 | User's identifying system and method for autonomous vehicle |
CN107564541B (en) * | 2017-09-04 | 2018-11-02 | 南方医科大学南方医院 | A kind of Portable baby crying sound identifier and its recognition methods |
CN107564541A (en) * | 2017-09-04 | 2018-01-09 | 南方医科大学南方医院 | A kind of Portable baby crying sound identifier and its recognition methods |
CN107729986A (en) * | 2017-09-19 | 2018-02-23 | 平安科技(深圳)有限公司 | Driving model training method, driver's recognition methods, device, equipment and medium |
CN107705808A (en) * | 2017-11-20 | 2018-02-16 | 合光正锦(盘锦)机器人技术有限公司 | A kind of Emotion identification method based on facial characteristics and phonetic feature |
CN109995823A (en) * | 2017-12-29 | 2019-07-09 | 新华网股份有限公司 | Vehicle media information method for pushing and device, storage medium and processor |
CN108427916A (en) * | 2018-02-11 | 2018-08-21 | 上海复旦通讯股份有限公司 | A kind of monitoring system and monitoring method of mood of attending a banquet for customer service |
CN108664890A (en) * | 2018-03-28 | 2018-10-16 | 上海乐愚智能科技有限公司 | A kind of contradiction coordination approach, device, robot and storage medium |
CN108682419A (en) * | 2018-03-30 | 2018-10-19 | 京东方科技集团股份有限公司 | Sound control method and equipment, computer readable storage medium and equipment |
CN108710821A (en) * | 2018-03-30 | 2018-10-26 | 斑马网络技术有限公司 | Vehicle user state recognition system and its recognition methods |
CN108694958B (en) * | 2018-04-26 | 2020-11-13 | 广州国音科技有限公司 | Security alarm method and device |
CN108694958A (en) * | 2018-04-26 | 2018-10-23 | 广州国音科技有限公司 | A kind of security alarm method and device |
CN110555128A (en) * | 2018-05-31 | 2019-12-10 | 蔚来汽车有限公司 | music recommendation playing method and vehicle-mounted infotainment system |
CN110688885A (en) * | 2018-06-19 | 2020-01-14 | 本田技研工业株式会社 | Control device and control method |
CN110688885B (en) * | 2018-06-19 | 2022-12-06 | 本田技研工业株式会社 | Control device and control method |
CN109190459A (en) * | 2018-07-20 | 2019-01-11 | 上海博泰悦臻电子设备制造有限公司 | A kind of car owner's Emotion identification and adjusting method, storage medium and onboard system |
CN109240488A (en) * | 2018-07-27 | 2019-01-18 | 重庆柚瓣家科技有限公司 | A kind of implementation method of AI scene engine of positioning |
CN109243490A (en) * | 2018-10-11 | 2019-01-18 | 平安科技(深圳)有限公司 | Driver's Emotion identification method and terminal device |
CN111090769A (en) * | 2018-10-24 | 2020-05-01 | 百度在线网络技术(北京)有限公司 | Song recommendation method, device, equipment and computer storage medium |
CN109362066A (en) * | 2018-11-01 | 2019-02-19 | 山东大学 | A kind of real-time Activity recognition system and its working method based on low-power consumption wide area network and capsule network |
CN109362066B (en) * | 2018-11-01 | 2021-06-25 | 山东大学 | Real-time behavior recognition system based on low-power-consumption wide-area Internet of things and capsule network and working method thereof |
CN109606386A (en) * | 2018-12-12 | 2019-04-12 | 北京车联天下信息技术有限公司 | Cockpit in intelligent vehicle |
CN109785861B (en) * | 2018-12-29 | 2021-07-30 | 惠州市德赛西威汽车电子股份有限公司 | Multimedia playing control method based on driving data, storage medium and terminal |
CN109785861A (en) * | 2018-12-29 | 2019-05-21 | 惠州市德赛西威汽车电子股份有限公司 | Control method for playing multimedia, storage medium and terminal based on travelling data |
CN109829409A (en) * | 2019-01-23 | 2019-05-31 | 深兰科技(上海)有限公司 | Driver's emotional state detection method and system |
CN110001652A (en) * | 2019-03-26 | 2019-07-12 | 深圳市科思创动科技有限公司 | Monitoring method, device and the terminal device of driver status |
CN110001652B (en) * | 2019-03-26 | 2020-06-23 | 深圳市科思创动科技有限公司 | Driver state monitoring method and device and terminal equipment |
CN110008879A (en) * | 2019-03-27 | 2019-07-12 | 深圳市尼欧科技有限公司 | Vehicle-mounted personalization audio-video frequency content method for pushing and device |
CN110134821A (en) * | 2019-05-06 | 2019-08-16 | 深圳市尼欧科技有限公司 | A kind of accurate method for pushing of intelligent vehicle-carried audio for driving congestion |
CN111976732A (en) * | 2019-05-23 | 2020-11-24 | 上海博泰悦臻网络技术服务有限公司 | Vehicle control method and system based on vehicle owner emotion and vehicle-mounted terminal |
CN111354377A (en) * | 2019-06-27 | 2020-06-30 | 深圳市鸿合创新信息技术有限责任公司 | Method and device for recognizing emotion through voice and electronic equipment |
CN112307816A (en) * | 2019-07-29 | 2021-02-02 | 北京地平线机器人技术研发有限公司 | In-vehicle image acquisition method and device, electronic equipment and storage medium |
CN110728206A (en) * | 2019-09-24 | 2020-01-24 | 捷开通讯(深圳)有限公司 | Fatigue driving detection method and device, computer readable storage medium and terminal |
CN112617829A (en) * | 2019-09-24 | 2021-04-09 | 宝马股份公司 | Method and device for recognizing a safety-relevant emotional state of a driver |
CN110751381A (en) * | 2019-09-30 | 2020-02-04 | 东南大学 | Road rage vehicle risk assessment and prevention and control method |
CN110807899A (en) * | 2019-11-07 | 2020-02-18 | 交控科技股份有限公司 | Driver state comprehensive monitoring method and system |
CN112785837A (en) * | 2019-11-11 | 2021-05-11 | 上海博泰悦臻电子设备制造有限公司 | Method and device for recognizing emotion of user when driving vehicle, storage medium and terminal |
CN112927721A (en) * | 2019-12-06 | 2021-06-08 | 观致汽车有限公司 | Human-vehicle interaction method, system, vehicle and computer readable storage medium |
CN111402925B (en) * | 2020-03-12 | 2023-10-10 | 阿波罗智联(北京)科技有限公司 | Voice adjustment method, device, electronic equipment, vehicle-mounted system and readable medium |
CN111402925A (en) * | 2020-03-12 | 2020-07-10 | 北京百度网讯科技有限公司 | Voice adjusting method and device, electronic equipment, vehicle-mounted system and readable medium |
CN113799717A (en) * | 2020-06-12 | 2021-12-17 | 广州汽车集团股份有限公司 | Fatigue driving relieving method and system and computer readable storage medium |
WO2021253217A1 (en) * | 2020-06-16 | 2021-12-23 | 曾浩军 | User state analysis method and related device |
CN113815625A (en) * | 2020-06-19 | 2021-12-21 | 广州汽车集团股份有限公司 | Vehicle auxiliary driving control method and device and intelligent steering wheel |
CN113815625B (en) * | 2020-06-19 | 2024-01-19 | 广州汽车集团股份有限公司 | Vehicle auxiliary driving control method and device and intelligent steering wheel |
CN112183457A (en) * | 2020-10-19 | 2021-01-05 | 上海汽车集团股份有限公司 | Control method, device and equipment for atmosphere lamp in vehicle and readable storage medium |
CN112455370A (en) * | 2020-11-24 | 2021-03-09 | 一汽奔腾轿车有限公司 | Emotion management and interaction system and method based on multidimensional data arbitration mechanism |
CN112820072A (en) * | 2020-12-28 | 2021-05-18 | 深圳壹账通智能科技有限公司 | Dangerous driving early warning method and device, computer equipment and storage medium |
CN112837552A (en) * | 2020-12-31 | 2021-05-25 | 北京梧桐车联科技有限责任公司 | Voice broadcasting method and device and computer readable storage medium |
CN114516341A (en) * | 2022-04-13 | 2022-05-20 | 北京智科车联科技有限公司 | User interaction method and system and vehicle |
CN115047824A (en) * | 2022-05-30 | 2022-09-13 | 青岛海尔科技有限公司 | Digital twin multimodal device control method, storage medium, and electronic apparatus |
WO2023239562A1 (en) * | 2022-06-06 | 2023-12-14 | Cerence Operating Company | Emotion-aware voice assistant |
CN117115788A (en) * | 2023-10-19 | 2023-11-24 | 天津所托瑞安汽车科技有限公司 | Intelligent interaction method for vehicle, back-end server and front-end equipment |
CN117115788B (en) * | 2023-10-19 | 2024-01-02 | 天津所托瑞安汽车科技有限公司 | Intelligent interaction method for vehicle, back-end server and front-end equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106650633A (en) | Driver emotion recognition method and device | |
CN108995654B (en) | Driver state identification method and system | |
CN106874597B (en) | highway overtaking behavior decision method applied to automatic driving vehicle | |
KR102562227B1 (en) | Dialogue system, Vehicle and method for controlling the vehicle | |
CN109017797B (en) | Driver emotion recognition method and vehicle-mounted control unit implementing same | |
JP5375805B2 (en) | Driving support system and driving support management center | |
US9878583B2 (en) | Vehicle alertness control system | |
CN106682090A (en) | Active interaction implementing device, active interaction implementing method and intelligent voice interaction equipment | |
US20180364727A1 (en) | Methods, Protocol and System for Customizing Self-driving Motor Vehicles | |
JP5434912B2 (en) | Driving state determination method, driving state determination system and program | |
CN105637323B (en) | Navigation server, navigation system and air navigation aid | |
Jafarnejad et al. | Towards a real-time driver identification mechanism based on driving sensing data | |
CN109145719B (en) | Driver fatigue state identification method and system | |
DE102014203724A1 (en) | Method and system for selecting navigation routes and providing advertising on the route | |
CN105303829A (en) | Vehicle driver emotion recognition method and device | |
CN108803623B (en) | Method for personalized driving of automatic driving vehicle and system for legalization of driving | |
CN110281932A (en) | Travel controlling system, vehicle, drive-control system, travel control method and storage medium | |
CN112277953A (en) | Recognizing hands-off situations through machine learning | |
CN106101168A (en) | Car-mounted terminal, cloud service equipment, onboard system and information processing method and device | |
CN108932290B (en) | Location proposal device and location proposal method | |
CN110100153A (en) | Information providing system | |
CN113320537A (en) | Vehicle control method and system | |
CN110062937A (en) | Information providing system | |
CN107257913A (en) | Method and navigation system for updating the parking lot information in navigation system | |
CN110826433B (en) | Emotion analysis data processing method, device and equipment for test driving user and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170510 |