CN107705808A - Emotion recognition method based on facial features and voice features - Google Patents

Emotion recognition method based on facial features and voice features

Info

Publication number
CN107705808A
CN107705808A (application CN201711160533.6A)
Authority
CN
China
Prior art keywords
emotion
processing unit
emotion recognition
result
recognition result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711160533.6A
Other languages
Chinese (zh)
Other versions
CN107705808B (en)
Inventor
王超 (Wang Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
He Guang Zheng Jin (panjin) Robot Technology Co Ltd
Original Assignee
He Guang Zheng Jin (panjin) Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by He Guang Zheng Jin (panjin) Robot Technology Co Ltd
Priority to CN201711160533.6A
Publication of CN107705808A
Application granted
Publication of CN107705808B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an emotion recognition method based on facial features and voice features, implemented by a camera, a microphone and an emotion processing unit, comprising: the camera collects video data of a driver and sends it to the emotion processing unit; the microphone collects voice data of the driver and sends it to the emotion processing unit; the emotion processing unit recognizes the driver's emotion from the video data and the voice data respectively, obtaining a video emotion recognition result and a voice emotion recognition result, the video emotion recognition result including a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features; the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result, obtaining a fused emotion recognition result as the final emotion recognition result. The method provided by the invention can improve the accuracy of emotion recognition of the driver.

Description

Emotion recognition method based on facial features and voice features
Technical field
The present invention relates to the field of emotion recognition, and in particular to an emotion recognition method based on facial features and voice features.
Background technology
Emotion recognition is necessary in many scenarios. In road transport, long-haul truck drivers in particular are prone to fatigued driving in transit, and in short-distance driving an angry driver can also easily cause a traffic accident. It is therefore necessary to judge the driver's emotion with emotion recognition technology, in order to determine whether the driver is still fit to drive and, when an unsuitable emotion occurs, to remind the driver to stop driving and avoid potential traffic accidents.
Current commercial facial recognition methods often simply extract and analyze texture features and geometric features, and their recognition accuracy is not high.
Summary of the invention
To solve the above problems, the invention provides an emotion recognition method based on facial features and voice features.
The emotion recognition method based on facial features and voice features provided by the invention is implemented by a camera, a microphone and an emotion processing unit, and comprises:
the camera collecting video data of a driver and sending it to the emotion processing unit;
the microphone collecting voice data of the driver and sending it to the emotion processing unit;
the emotion processing unit recognizing the driver's emotion from the video data and the voice data respectively, obtaining a video emotion recognition result and a voice emotion recognition result, the video emotion recognition result including a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features;
the emotion processing unit fusing the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result as the final emotion recognition result.
Preferably, the video emotion recognition result further includes a macro-expression recognition result.
Preferably, the micro-expression recognition method based on motion-field features is implemented as follows:
the emotion processing unit obtains an image frame of the driver's neutral expression and stores it as a reference frame image;
the emotion processing unit obtains a current frame image of the driver from the video data;
the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames;
the emotion processing unit obtains a strain map of the motion field from the motion field between the two frames;
the emotion processing unit determines the micro-expression of the current frame image from the strain map of the motion field using a preset threshold.
Preferably, the comparison of the current frame image with the reference frame image to obtain the motion field between the two frames uses a feature-based method, implemented as: after the emotion processing unit performs feature recognition and segmentation on the driver's facial image, the motion field is determined by measuring the displacement of the features.
Preferably, the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result as follows:
the emotion processing unit fuses the micro-expression recognition result and the macro-expression recognition result to obtain the video emotion recognition result, which includes a video emotion recognition result weight; the micro-expression recognition result and the macro-expression recognition result each have a preset weight, and the weight of the micro-expression recognition result is greater than that of the macro-expression recognition result; when the micro-expression recognition result and the macro-expression recognition result are consistent, the video emotion recognition result weight is a preset first weight, and when they are inconsistent, it is a preset second weight, the first weight being greater than the second weight;
the emotion processing unit then fuses the video emotion recognition result with the voice emotion recognition result, which includes its own preset weight; the fused result is the emotion recognition result.
Preferably, there are multiple microphones, the multiple microphones forming a microphone array.
Preferably, the emotions comprise three categories, namely: a neutral or happy emotion, a sad emotion, and an angry emotion.
Preferably, the emotion processing unit obtains the image frame of the driver's neutral expression as follows:
the emotion processing unit obtains the video data of the driver collected by the camera;
the emotion processing unit intercepts a predetermined number of image frames from the video data, the interception proceeding from front to back;
the emotion processing unit selects, according to a preset rule, one of the predetermined number of image frames as the image frame of the driver's neutral expression.
Some beneficial effects of the invention include:
The emotion recognition method based on facial features and voice features provided by the invention can improve the accuracy of emotion recognition of a driver, laying a foundation for reminding drivers to drive sensibly and for avoiding potential traffic accidents.
Other features and advantages of the invention will be set forth in the following description and will in part become apparent from the description, or be understood by practicing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
The technical solution of the invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
The drawings are provided for a further understanding of the invention and constitute a part of the specification; together with the embodiments of the invention they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of an emotion recognition method based on facial features and voice features in an embodiment of the present invention.
Detailed description of the embodiments
The preferred embodiments of the invention are described below with reference to the drawings. It should be understood that the preferred embodiments described here are merely intended to illustrate and explain the invention, and are not intended to limit it.
Fig. 1 is a flowchart of an emotion recognition method based on facial features and voice features in an embodiment of the present invention. As shown in Fig. 1, the method is implemented by a camera, a microphone and an emotion processing unit, and comprises:
Step S101: the camera collects video data of the driver and sends it to the emotion processing unit;
Step S102: the microphone collects voice data of the driver and sends it to the emotion processing unit;
Step S103: the emotion processing unit recognizes the driver's emotion from the video data and the voice data respectively, obtaining a video emotion recognition result and a voice emotion recognition result; the video emotion recognition result includes a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features;
Step S104: the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result as the final emotion recognition result.
The method recognizes the driver's emotion through micro-expressions, which are harder to conceal, and at the same time recognizes the driver's emotion from voice data, so the driver's emotion can be recognized more accurately.
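As a rough illustration, the S101-S104 flow can be sketched as follows. All function names, emotion labels and weight values here are illustrative stand-ins, not taken from the patent:

```python
# Hypothetical sketch of the S101-S104 pipeline: the emotion processing unit
# fuses a video-based and a speech-based emotion label. Labels, weights and
# function bodies are placeholders for illustration only.

EMOTIONS = ("neutral_or_happy", "sad", "angry")

def recognize_video_emotion(video_frames):
    # Placeholder: in the patent this combines micro- and macro-expression
    # recognition; here it returns a fixed (label, weight) pair.
    return ("angry", 0.7)

def recognize_speech_emotion(speech_samples):
    # Placeholder for the voice emotion recognizer.
    return ("angry", 0.3)

def fuse(video_result, speech_result):
    # Accumulate each source's weight per emotion label, pick the maximum.
    scores = {e: 0.0 for e in EMOTIONS}
    for label, weight in (video_result, speech_result):
        scores[label] += weight
    return max(scores, key=scores.get)

final = fuse(recognize_video_emotion([]), recognize_speech_emotion([]))
print(final)  # both placeholder sources agree on "angry"
```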
To improve the accuracy of driver emotion recognition while making full use of existing equipment, more data can be obtained through the camera. In a preferred embodiment of the invention, the video emotion recognition result therefore further includes a macro-expression recognition result. By fusing the macro-expression recognition result with the micro-expression recognition result, a more accurate recognition result can be obtained.
Classifier-based recognition either struggles to distinguish which expression a micro-expression is, failing to recognize subtle micro-expressions altogether, or requires a large amount of training data, making it difficult to apply to driver micro-expression recognition. In a preferred embodiment of the invention, the micro-expression recognition method based on motion-field features is therefore implemented as follows:
the emotion processing unit obtains an image frame of the driver's neutral expression and stores it as a reference frame image;
the emotion processing unit obtains a current frame image of the driver from the video data;
the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames;
the emotion processing unit obtains a strain map of the motion field from the motion field between the two frames;
the emotion processing unit determines the micro-expression of the current frame image from the strain map of the motion field using a preset threshold.
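A toy sketch of the motion-field and strain-map idea, under an assumed interpretation: per-landmark displacements between the neutral reference frame and the current frame form the motion field, differences between neighbouring displacements stand in for the strain map, and a preset threshold flags a micro-expression. A real system would work on a dense field over the face image; the landmark lists and the threshold value are illustrative:

```python
# Assumed interpretation of the motion-field / strain-map steps; landmark
# coordinates and the threshold are illustrative, not from the patent.

def displacement_field(ref_points, cur_points):
    # Motion field: per-landmark (dx, dy) between reference and current frame.
    return [(cx - rx, cy - ry)
            for (rx, ry), (cx, cy) in zip(ref_points, cur_points)]

def strain_map(field):
    # Crude 1-D "strain": change of displacement between neighbouring
    # landmarks, a stand-in for the spatial gradient of a dense field.
    return [abs(field[i + 1][0] - field[i][0]) +
            abs(field[i + 1][1] - field[i][1])
            for i in range(len(field) - 1)]

def detect_micro_expression(ref_points, cur_points, threshold=2.0):
    # Flag a micro-expression when the peak strain exceeds the threshold.
    field = displacement_field(ref_points, cur_points)
    return max(strain_map(field)) > threshold

ref = [(0, 0), (10, 0), (20, 0)]
cur = [(0, 0), (13, 1), (20, 0)]   # middle landmark moved -> high local strain
print(detect_micro_expression(ref, cur))
```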
Because the driving environment makes the light in the cab change over time and the illumination conditions are unstable, it is difficult to obtain the motion field between two frames with an optical-flow-based method. In a preferred embodiment of the invention, the comparison of the current frame image with the reference frame image to obtain the motion field between the two frames therefore uses a feature-based method: after the emotion processing unit performs feature recognition and segmentation on the driver's facial image, the motion field is determined by measuring the displacement of the features.
Since there are multiple emotion recognition results and their reliabilities differ, a more accurate result can be obtained by weighting them. In a preferred embodiment of the invention, the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result as follows:
the emotion processing unit fuses the micro-expression recognition result and the macro-expression recognition result to obtain the video emotion recognition result, which includes a video emotion recognition result weight; the micro-expression recognition result and the macro-expression recognition result each have a preset weight, and the weight of the micro-expression recognition result is greater than that of the macro-expression recognition result; when the micro-expression recognition result and the macro-expression recognition result are consistent, the video emotion recognition result weight is a preset first weight, and when they are inconsistent, it is a preset second weight, the first weight being greater than the second weight;
the emotion processing unit then fuses the video emotion recognition result with the voice emotion recognition result, which includes its own preset weight; the fused result is the emotion recognition result.
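The weighting rules above can be sketched as follows. The concrete weight values are assumptions; the text only requires that the micro-expression weight exceed the macro-expression weight and that the first (consistent) weight exceed the second (inconsistent) one:

```python
# Illustrative weights; only their ordering is mandated by the description.
MICRO_W, MACRO_W = 0.7, 0.3               # micro weight > macro weight
CONSISTENT_W, INCONSISTENT_W = 0.9, 0.6   # first weight > second weight
SPEECH_W = 0.4                            # preset voice-result weight

def fuse_video(micro_label, macro_label):
    # If both facial results agree, the fused video result carries the
    # higher (first) weight; otherwise the micro-expression result wins
    # (it has the larger preset weight) but the video result carries the
    # lower (second) weight.
    if micro_label == macro_label:
        return micro_label, CONSISTENT_W
    return micro_label, INCONSISTENT_W

def fuse_final(video, speech_label):
    # Weighted vote between the fused video result and the voice result.
    video_label, video_w = video
    scores = {video_label: 0.0, speech_label: 0.0}
    scores[video_label] += video_w
    scores[speech_label] += SPEECH_W
    return max(scores, key=scores.get)

video = fuse_video("angry", "sad")   # inconsistent -> ("angry", 0.6)
print(fuse_final(video, "sad"))      # 0.6 vs 0.4 -> "angry"
```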
Speech recognition is easily disturbed by ambient noise, and the voices of other passengers are an even more serious interference. Since only the driver's voice information is needed, sound from other directions must be filtered out and ambient noise suppressed as far as possible; to achieve this, a microphone array is required to receive the voice information. In a preferred embodiment of the invention, there are therefore multiple microphones, the multiple microphones forming a microphone array.
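The patent does not specify how the array filters out other directions; one standard technique such an array could use is delay-and-sum beamforming, sketched minimally here with hand-picked integer sample delays:

```python
# Minimal delay-and-sum beamforming sketch (an assumed technique, not named
# in the patent): align each channel toward the driver's direction and
# average, reinforcing on-axis speech and attenuating off-axis noise.
# Delays would normally come from the array geometry; here they are given.

def delay_and_sum(channels, delays):
    # channels: equal-length sample lists; delays[k]: samples by which
    # channel k lags channel 0. Advancing each channel by its lag
    # (j = i + d) lines the driver's wavefront up across channels.
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i + d
            acc += ch[j] if 0 <= j < n else 0.0
        out.append(acc / len(channels))
    return out

# Two channels carrying the same pulse, the second arriving one sample later.
ch0 = [0.0, 1.0, 0.0, 0.0]
ch1 = [0.0, 0.0, 1.0, 0.0]
aligned = delay_and_sum([ch0, ch1], delays=[0, 1])
print(aligned)  # pulse reinforced at index 1
```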
In voice emotion recognition, the accuracy of recognizing different emotions varies widely: the recognition accuracy for sadness and anger is far higher than the discrimination rates for other emotions, and for a driver, sadness and anger also have a greater influence on driving. In a preferred embodiment of the invention, the emotions therefore comprise three categories, namely: a neutral or happy emotion, a sad emotion, and an angry emotion.
Obtaining a neutral-expression image of a driver by photographing the driver in a calm mood is not only time-consuming but, given the large number of drivers, very difficult, since it is hard to guarantee that each driver is calm when photographed. In a preferred embodiment of the invention, the emotion processing unit therefore obtains the image frame of the driver's neutral expression as follows:
the emotion processing unit obtains the video data of the driver collected by the camera;
the emotion processing unit intercepts a predetermined number of image frames from the video data, the interception proceeding from front to back;
the emotion processing unit selects, according to a preset rule, one of the predetermined number of image frames as the image frame of the driver's neutral expression. The preset rule includes using artificial intelligence to identify, among the predetermined number of image frames, the frame closest to a neutral expression.
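A hypothetical sketch of this neutral-frame selection: intercept the first n frames and let a scoring rule (the "preset rule", here a stand-in function) pick the frame closest to a neutral expression:

```python
# Hypothetical neutral-reference selection; the scorer stands in for the
# "preset rule" (e.g. an AI model), and frames are plain labels here.

def pick_neutral_frame(frames, neutrality_score, n=5):
    # Intercept the first n frames ("from front to back"), then keep the
    # frame the scorer rates most neutral as the reference image.
    candidates = frames[:n]
    return max(candidates, key=neutrality_score)

# Toy scorer: pretend lower "expression intensity" means more neutral.
intensity = {"f1": 0.4, "f2": 0.1, "f3": 0.9, "f4": 0.5}
frames = ["f1", "f2", "f3", "f4"]
chosen = pick_neutral_frame(frames, lambda f: -intensity[f])
print(chosen)  # "f2" has the lowest intensity
```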
The emotion recognition method based on facial features and voice features provided by the invention can improve the accuracy of emotion recognition of a driver, laying a foundation for reminding drivers to drive sensibly and for avoiding potential traffic accidents.
It should be understood by those skilled in the art that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to encompass them.

Claims (8)

  1. An emotion recognition method based on facial features and voice features, implemented by a camera, a microphone and an emotion processing unit, characterized by comprising:
    the camera collecting video data of a driver and sending it to the emotion processing unit;
    the microphone collecting voice data of the driver and sending it to the emotion processing unit;
    the emotion processing unit recognizing the driver's emotion from the video data and the voice data respectively, obtaining a video emotion recognition result and a voice emotion recognition result, the video emotion recognition result including a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features;
    the emotion processing unit fusing the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result as the final emotion recognition result.
  2. The method of claim 1, characterized in that the video emotion recognition result further includes a macro-expression recognition result.
  3. The method of claim 2, characterized in that the micro-expression recognition method based on motion-field features is implemented as follows:
    the emotion processing unit obtains an image frame of the driver's neutral expression and stores it as a reference frame image;
    the emotion processing unit obtains a current frame image of the driver from the video data;
    the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames;
    the emotion processing unit obtains a strain map of the motion field from the motion field between the two frames;
    the emotion processing unit determines the micro-expression of the current frame image from the strain map of the motion field using a preset threshold.
  4. The method of claim 3, characterized in that the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames using a feature-based method, implemented as: after the emotion processing unit performs feature recognition and segmentation on the driver's facial image, the motion field is determined by measuring the displacement of the features.
  5. The method of claim 4, characterized in that the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result as follows:
    the emotion processing unit fuses the micro-expression recognition result and the macro-expression recognition result to obtain the video emotion recognition result, which includes a video emotion recognition result weight; the micro-expression recognition result and the macro-expression recognition result each have a preset weight, and the weight of the micro-expression recognition result is greater than that of the macro-expression recognition result; when the micro-expression recognition result and the macro-expression recognition result are consistent, the video emotion recognition result weight is a preset first weight, and when they are inconsistent, it is a preset second weight, the first weight being greater than the second weight;
    the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result, the voice emotion recognition result including a preset weight; the fused result is the emotion recognition result.
  6. The method of claim 1, characterized in that there are multiple microphones, the multiple microphones forming a microphone array.
  7. The method of claim 1, characterized in that the emotions comprise three categories, namely: a neutral or happy emotion, a sad emotion, and an angry emotion.
  8. The method of claim 3, characterized in that the emotion processing unit obtains the image frame of the driver's neutral expression as follows:
    the emotion processing unit obtains the video data of the driver collected by the camera;
    the emotion processing unit intercepts a predetermined number of image frames from the video data, the interception proceeding from front to back;
    the emotion processing unit selects, according to a preset rule, one of the predetermined number of image frames as the image frame of the driver's neutral expression.
CN201711160533.6A 2017-11-20 2017-11-20 Emotion recognition method based on facial features and voice features Expired - Fee Related CN107705808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711160533.6A CN107705808B (en) 2017-11-20 2017-11-20 Emotion recognition method based on facial features and voice features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711160533.6A CN107705808B (en) 2017-11-20 2017-11-20 Emotion recognition method based on facial features and voice features

Publications (2)

Publication Number Publication Date
CN107705808A (en) 2018-02-16
CN107705808B (en) 2020-12-25

Family

ID=61180438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711160533.6A Expired - Fee Related CN107705808B (en) 2017-11-20 2017-11-20 Emotion recognition method based on facial features and voice features

Country Status (1)

Country Link
CN (1) CN107705808B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determination method and device
CN108958699A (en) * 2018-07-24 2018-12-07 Oppo(重庆)智能科技有限公司 Voice pickup method and related product
CN109243490A (en) * 2018-10-11 2019-01-18 平安科技(深圳)有限公司 Driver emotion recognition method and terminal device
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data processing method, device, computer equipment and storage medium
CN109858330A (en) * 2018-12-15 2019-06-07 深圳壹账通智能科技有限公司 Video-based expression analysis method, apparatus, electronic equipment and storage medium
CN110001652A (en) * 2019-03-26 2019-07-12 深圳市科思创动科技有限公司 Driver state monitoring method, device and terminal equipment
CN110096600A (en) * 2019-04-16 2019-08-06 上海图菱新能源科技有限公司 Artificial intelligence emotion improvement interaction process and method
CN110110653A (en) * 2019-04-30 2019-08-09 上海迥灵信息技术有限公司 Multi-feature fusion emotion recognition method, apparatus and storage medium
CN110516593A (en) * 2019-08-27 2019-11-29 京东方科技集团股份有限公司 Emotion prediction device, emotion prediction method and display device
RU2711976C1 (en) * 2018-11-08 2020-01-23 Инна Юрьевна Жовнерчук Method for remote recognition and correction using a virtual reality of a psychoemotional state of a human
CN110826433A (en) * 2019-10-23 2020-02-21 上海能塔智能科技有限公司 Method, device and equipment for processing emotion analysis data of pilot driving user and storage medium
CN111145282A (en) * 2019-12-12 2020-05-12 科大讯飞股份有限公司 Virtual image synthesis method and device, electronic equipment and storage medium
CN111382664A (en) * 2018-12-28 2020-07-07 本田技研工业株式会社 Information processing apparatus and computer-readable storage medium
CN111401198A (en) * 2020-03-10 2020-07-10 广东九联科技股份有限公司 Audience emotion recognition method, device and system
CN111754761A (en) * 2019-07-31 2020-10-09 广东小天才科技有限公司 Traffic safety alarm prompting method and electronic equipment
CN112078590A (en) * 2019-05-27 2020-12-15 郑州宇通客车股份有限公司 Driving behavior monitoring method and system
CN112562267A (en) * 2020-11-27 2021-03-26 深圳腾视科技有限公司 Vehicle-mounted safety robot and safe driving assistance method
CN112699802A (en) * 2020-12-31 2021-04-23 青岛海山慧谷科技有限公司 Driver micro-expression detection device and method
CN112927721A (en) * 2019-12-06 2021-06-08 观致汽车有限公司 Human-vehicle interaction method, system, vehicle and computer readable storage medium
CN113646838A (en) * 2019-04-05 2021-11-12 华为技术有限公司 Method and system for providing mood modification during video chat
CN113808623A (en) * 2021-09-18 2021-12-17 武汉轻工大学 Emotion recognition glasses for blind people

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005293539A (en) * 2004-03-08 2005-10-20 Matsushita Electric Works Ltd Facial expression recognizing device
CN106650633A (en) * 2016-11-29 2017-05-10 上海智臻智能网络科技股份有限公司 Driver emotion recognition method and device
CN106897706A (en) * 2017-03-02 2017-06-27 上海帆煜自动化科技有限公司 Emotion recognition device
CN206639274U (en) * 2016-12-24 2017-11-14 惠州市云鼎科技有限公司 Safe driving monitoring device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hong Ou: "Microphone Array Speech Enhancement Technology and Its Applications", Microcomputer Information *
Wang Jianchao: "Establishment of a Micro-Expression Database and Research on Micro-Expression Detection Technology", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
CN108958699A (en) * 2018-07-24 2018-12-07 Oppo(重庆)智能科技有限公司 Voice pick-up method and Related product
CN109243490A (en) * 2018-10-11 2019-01-18 平安科技(深圳)有限公司 Driver's Emotion identification method and terminal device
RU2711976C1 (en) * 2018-11-08 2020-01-23 Инна Юрьевна Жовнерчук Method for remote recognition and correction using a virtual reality of a psychoemotional state of a human
CN109858330A (en) * 2018-12-15 2019-06-07 深圳壹账通智能科技有限公司 Expression analysis method, apparatus, electronic equipment and storage medium based on video
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data handling procedure, device, computer equipment and storage medium
CN111382664A (en) * 2018-12-28 2020-07-07 本田技研工业株式会社 Information processing apparatus and computer-readable storage medium
CN110001652B (en) * 2019-03-26 2020-06-23 深圳市科思创动科技有限公司 Driver state monitoring method and device and terminal equipment
CN110001652A (en) * 2019-03-26 2019-07-12 深圳市科思创动科技有限公司 Monitoring method, device and the terminal device of driver status
CN113646838B (en) * 2019-04-05 2022-10-11 华为技术有限公司 Method and system for providing mood modification during video chat
CN113646838A (en) * 2019-04-05 2021-11-12 华为技术有限公司 Method and system for providing mood modification during video chat
CN110096600A (en) * 2019-04-16 2019-08-06 上海图菱新能源科技有限公司 Artificial intelligence mood improves interactive process and method
CN110110653A (en) * 2019-04-30 2019-08-09 上海迥灵信息技术有限公司 The Emotion identification method, apparatus and storage medium of multiple features fusion
CN112078590A (en) * 2019-05-27 2020-12-15 郑州宇通客车股份有限公司 Driving behavior monitoring method and system
CN112078590B (en) * 2019-05-27 2022-04-05 宇通客车股份有限公司 Driving behavior monitoring method and system
CN111754761A (en) * 2019-07-31 2020-10-09 广东小天才科技有限公司 Traffic safety alarm prompting method and electronic equipment
CN111754761B (en) * 2019-07-31 2022-09-20 广东小天才科技有限公司 Traffic safety alarm prompting method and electronic equipment
CN110516593A (en) * 2019-08-27 2019-11-29 京东方科技集团股份有限公司 A kind of emotional prediction device, emotional prediction method and display device
CN110826433A (en) * 2019-10-23 2020-02-21 上海能塔智能科技有限公司 Method, device and equipment for processing emotion analysis data of pilot driving user and storage medium
CN112927721A (en) * 2019-12-06 2021-06-08 观致汽车有限公司 Human-vehicle interaction method, system, vehicle and computer readable storage medium
CN111145282A (en) * 2019-12-12 2020-05-12 科大讯飞股份有限公司 Virtual image synthesis method and device, electronic equipment and storage medium
CN111145282B (en) * 2019-12-12 2023-12-05 科大讯飞股份有限公司 Avatar composition method, apparatus, electronic device, and storage medium
CN111401198B (en) * 2020-03-10 2024-04-23 广东九联科技股份有限公司 Audience emotion recognition method, device and system
CN111401198A (en) * 2020-03-10 2020-07-10 广东九联科技股份有限公司 Audience emotion recognition method, device and system
CN112562267A (en) * 2020-11-27 2021-03-26 深圳腾视科技有限公司 Vehicle-mounted safety robot and safe driving assistance method
CN112699802A (en) * 2020-12-31 2021-04-23 青岛海山慧谷科技有限公司 Driver micro-expression detection device and method
CN113808623A (en) * 2021-09-18 2021-12-17 武汉轻工大学 Emotion recognition glasses for blind people

Also Published As

Publication number Publication date
CN107705808B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN107705808A (en) A kind of Emotion identification method based on facial characteristics and phonetic feature
US11321385B2 (en) Visualization of image themes based on image content
CN107004287B (en) Avatar video apparatus and method
EP3338217B1 (en) Feature detection and masking in images based on color distributions
CN105373218B (en) Scene analysis for improving eyes tracking
US8879847B2 (en) Image processing device, method of controlling image processing device, and program for enabling computer to execute same method
CN105659200B (en) For showing the method, apparatus and system of graphic user interface
WO2017015949A1 (en) Emotion augmented avatar animation
DE102018125629A1 (en) DEEP learning-based real-time detection and correction of compromised sensors in autonomous machines
KR20210142177A (en) Methods and devices for detecting children's conditions, electronic devices, memory
CN112041891A (en) Expression enhancing system
US11775054B2 (en) Virtual models for communications between autonomous vehicles and external observers
US9979894B1 (en) Modifying images with simulated light sources
CN109961037A (en) A kind of examination hall video monitoring abnormal behavior recognition methods
WO2007020789A1 (en) Face image display, face image display method, and face image display program
KR102161052B1 (en) Method and appratus for segmenting an object in an image
CN107024989A (en) A kind of husky method for making picture based on Leap Motion gesture identifications
CN109858375A (en) Living body faces detection method, terminal and computer readable storage medium
US10755087B2 (en) Automated image capture based on emotion detection
KR101820456B1 (en) Method And Apparatus for Generating Depth MAP
KR101802062B1 (en) Method and system to measure stereoscopic 3d content-induced visual discomfort based facial expression recognition for sensitivity tv
KR20200049936A (en) Biometric device and method
Zhao et al. Improved AdaBoost Algorithm for Robust Real-Time Multi-face Detection.
Revathi Sign Language Recognition Using Principal Component Analysis
CN117755315A (en) Method and device for controlling equipment operation based on user gesture and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201225

Termination date: 20211120