CN107705808B - Emotion recognition method based on facial features and voice features


Info

Publication number: CN107705808B
Application number: CN201711160533.6A
Authority: CN (China)
Prior art keywords: emotion, recognition result, processing unit, video, driver
Legal status: Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN107705808A
Inventor: 王超
Current and original assignee: Heguang Zhengjin Panjin Robot Technology Co ltd
Filing date: 2017-11-20
Publication date: 2020-12-25

Classifications

    • G10L25/63: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00, specially adapted for particular use, for comparison or discrimination, for estimating an emotional state (Section G: Physics; Class G10: Musical instruments; Acoustics; Subclass G10L: Speech analysis or synthesis, speech recognition, speech or voice processing, speech or audio coding or decoding)
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness (Scenes; scene-specific elements; context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions)
    • G06V40/174: Facial expression recognition (Recognition of biometric, human-related or animal-related patterns in image or video data; human or animal bodies; human faces, e.g. facial parts, sketches or expressions)

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an emotion recognition method based on facial features and voice features, implemented by a camera, a microphone and an emotion processing unit, comprising the following steps: the camera collects video data of a driver and sends the video data to the emotion processing unit; the microphone collects voice data of the driver and sends the voice data to the emotion processing unit; the emotion processing unit recognizes the driver's emotion from the video data and the voice data separately to obtain a video emotion recognition result and a voice emotion recognition result, wherein the video emotion recognition result includes a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features; and the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result that serves as the final emotion recognition result. The method provided by the invention can improve the accuracy of driver emotion recognition.

Description

Emotion recognition method based on facial features and voice features
Technical Field
The invention relates to the field of emotion recognition, in particular to an emotion recognition method based on facial features and voice features.
Background
Emotion recognition is necessary in many scenarios. In road transportation, particularly for long-distance trucks, fatigue driving is likely to occur during transportation, and in short-distance driving an angry driver is also prone to cause traffic accidents. It is therefore necessary to determine the driver's emotion with emotion recognition technology, judge whether the driver is still fit to drive, and remind a driver in an unsuitable emotional state to stop driving, so as to avoid potential traffic accidents.
Current commercial face recognition methods extract and analyze only texture features and geometric features, so their recognition accuracy is not high.
Disclosure of Invention
In order to solve the above problems, the present invention provides an emotion recognition method based on facial features and voice features.
The invention provides an emotion recognition method based on facial features and voice features, implemented by a camera, a microphone and an emotion processing unit, comprising the following steps:
the camera collects video data of a driver and sends the video data to the emotion processing unit;
the microphone collects voice data of the driver and sends the voice data to the emotion processing unit;
the emotion processing unit recognizes the driver's emotion from the video data and the voice data separately to obtain a video emotion recognition result and a voice emotion recognition result, wherein the video emotion recognition result includes a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features;
and the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result that serves as the final emotion recognition result.
Preferably, the video emotion recognition result further includes a macro-expression recognition result.
Preferably, the micro-expression recognition method based on motion-field features is implemented as follows:
the emotion processing unit acquires an image frame of the driver's neutral expression and stores it as a reference frame image;
the emotion processing unit acquires a current frame image of the driver from the video data;
the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames;
the emotion processing unit computes a strain image of the motion field from the motion field between the two frames;
and the emotion processing unit determines the micro-expression in the current frame image from the strain image of the motion field and a preset threshold.
Preferably, the motion field between the two frames is obtained by a feature-based method: the emotion processing unit performs feature recognition and segmentation on the driver's face image, and then determines the motion field by measuring the displacement of the features.
Preferably, the emotion processing unit fusing the video emotion recognition result and the voice emotion recognition result includes:
the emotion processing unit fuses the micro-expression recognition result and the macro-expression recognition result to obtain a video emotion recognition result that carries a weight; the micro-expression recognition result and the macro-expression recognition result each have a preset weight, with the weight of the micro-expression recognition result greater than that of the macro-expression recognition result; when the micro-expression recognition result is consistent with the macro-expression recognition result, the weight of the video emotion recognition result is a preset first weight, and when the two are inconsistent it is a preset second weight, where the first weight is greater than the second weight;
and the emotion processing unit fuses the video emotion recognition result with the voice emotion recognition result, which carries its own preset weight; the fused result is the emotion recognition result.
Preferably, there are multiple microphones, and the multiple microphones form a microphone array.
Preferably, the emotions include three categories, namely: neutral or happy, sad, and angry.
Preferably, the emotion processing unit acquiring an image frame of the driver's neutral expression includes:
the emotion processing unit acquires the video data of the driver collected by the camera;
the emotion processing unit extracts a preset number of image frames from the video data, taken in order from the beginning;
the emotion processing unit selects, according to a preset rule, one of the preset number of image frames as the image frame of the driver's neutral expression.
Some of the benefits of the present invention may include:
the emotion recognition method based on the facial features and the voice features can improve the emotion recognition accuracy of the driver, so that a foundation is laid for reminding the driver of driving reasonably and avoiding potential traffic accidents.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an emotion recognition method based on facial features and speech features according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 is a flowchart of an emotion recognition method based on facial features and speech features according to an embodiment of the present invention. As shown in Fig. 1, the method is implemented by a camera, a microphone and an emotion processing unit, and includes:
Step S101: the camera collects video data of a driver and sends the video data to the emotion processing unit;
Step S102: the microphone collects voice data of the driver and sends the voice data to the emotion processing unit;
Step S103: the emotion processing unit recognizes the driver's emotion from the video data and the voice data separately to obtain a video emotion recognition result and a voice emotion recognition result, wherein the video emotion recognition result includes a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features;
Step S104: the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result that serves as the final emotion recognition result.
The method recognizes the driver's emotion through micro-expressions, which are harder to conceal, and at the same time through voice data, so the driver's emotion can be recognized more accurately.
To improve the accuracy of driver emotion recognition while making full use of existing equipment, since additional data can be acquired through the camera, in a preferred embodiment of the invention the video emotion recognition result also includes a macro-expression recognition result. By fusing the macro-expression recognition result and the micro-expression recognition result, a more accurate recognition result can be obtained.
Classifier-based micro-expression recognition either has difficulty distinguishing which expression is present, yields indistinct results, or requires a large amount of experimental data, which makes it hard to apply to vehicle drivers. In a preferred embodiment of the present invention, the micro-expression recognition method based on motion-field features is therefore implemented as follows:
the emotion processing unit acquires an image frame of the driver's neutral expression and stores it as a reference frame image;
the emotion processing unit acquires a current frame image of the driver from the video data;
the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames;
the emotion processing unit computes a strain image of the motion field from the motion field between the two frames;
and the emotion processing unit determines the micro-expression in the current frame image from the strain image of the motion field and a preset threshold.
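As an illustration of the steps above, the following minimal Python sketch (an assumption for illustration, not the patent's implementation) uses OpenCV's Farnebäck dense optical flow as a stand-in for the motion field between the reference frame and the current frame, derives an optical strain image from the spatial gradients of the flow, and flags a micro-expression when the mean strain exceeds a preset threshold. The strain formula (the Frobenius norm of the 2-D strain tensor) and the threshold value are assumed choices; the next paragraph describes the feature-based motion field the embodiment actually prefers.

```python
import cv2
import numpy as np

def strain_image(flow):
    """Optical strain magnitude of a dense motion field (H x W x 2).

    Uses the 2-D strain tensor e = 0.5 * (J + J^T), where J is the
    Jacobian of the flow; the per-pixel magnitude is its Frobenius
    norm (an assumed, commonly used choice).
    """
    du_dx = cv2.Sobel(flow[..., 0], cv2.CV_64F, 1, 0, ksize=3)
    du_dy = cv2.Sobel(flow[..., 0], cv2.CV_64F, 0, 1, ksize=3)
    dv_dx = cv2.Sobel(flow[..., 1], cv2.CV_64F, 1, 0, ksize=3)
    dv_dy = cv2.Sobel(flow[..., 1], cv2.CV_64F, 0, 1, ksize=3)
    e_xy = 0.5 * (du_dy + dv_dx)
    return np.sqrt(du_dx ** 2 + dv_dy ** 2 + 2.0 * e_xy ** 2)

def detect_micro_expression(reference_bgr, current_bgr, threshold=0.15):
    """Compare the current frame against the neutral reference frame."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    # Motion field between the two frames (dense Farneback optical flow).
    flow = cv2.calcOpticalFlowFarneback(ref, cur, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    strain = strain_image(flow)
    # A preset threshold on the strain decides whether a micro-expression
    # is present; 0.15 is an arbitrary illustrative value.
    return strain.mean() > threshold, strain
```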
In a preferred embodiment of the present invention, the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames by a feature-based method: it performs feature recognition and segmentation on the driver's face image, and then determines the motion field by measuring the displacement of the features.
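A feature-based determination of the motion field, as just described, might be sketched as follows. Shi-Tomasi corner detection stands in for the feature recognition step, the segmented face region is assumed to arrive as a binary mask from an upstream step, and pyramidal Lucas-Kanade tracking measures the displacement of the detected features; none of these algorithm choices are named in the patent.

```python
import cv2

def feature_motion_field(ref_gray, cur_gray, face_mask=None):
    """Sparse motion field: displacements of tracked facial features.

    face_mask restricts feature detection to the segmented face region
    (the segmentation itself is assumed to happen upstream).
    """
    # Feature recognition: Shi-Tomasi corners inside the face region.
    pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5,
                                  mask=face_mask)
    # Displacement measurement: pyramidal Lucas-Kanade tracking from the
    # reference frame to the current frame.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray,
                                                     pts, None)
    ok = status.ravel() == 1
    positions = pts[ok].reshape(-1, 2)
    displacements = (new_pts[ok] - pts[ok]).reshape(-1, 2)
    # The motion field, sampled at the tracked feature positions.
    return positions, displacements
```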
To obtain a more accurate emotion recognition result, in a preferred embodiment of the present invention, the emotion processing unit fusing the video emotion recognition result and the voice emotion recognition result includes:
the emotion processing unit fuses the micro-expression recognition result and the macro-expression recognition result to obtain a video emotion recognition result that carries a weight; the micro-expression recognition result and the macro-expression recognition result each have a preset weight, with the weight of the micro-expression recognition result greater than that of the macro-expression recognition result; when the micro-expression recognition result is consistent with the macro-expression recognition result, the weight of the video emotion recognition result is a preset first weight, and when the two are inconsistent it is a preset second weight, where the first weight is greater than the second weight;
and the emotion processing unit fuses the video emotion recognition result with the voice emotion recognition result, which carries its own preset weight; the fused result is the emotion recognition result.
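A minimal sketch of this weighted fusion is shown below. The patent fixes only the ordering constraints (the micro-expression weight exceeds the macro-expression weight, and the first weight exceeds the second weight), so the concrete values 0.8, 0.5 and 0.6 and the rule of keeping the label with the larger accumulated weight are illustrative assumptions.

```python
def fuse_video(micro_label, macro_label, w_first=0.8, w_second=0.5):
    """Fuse micro- and macro-expression results into a video result.

    Returns (video_label, video_weight). Because the micro-expression
    weight is the larger one, its label is kept; the video result
    carries the first (larger) weight only when both results agree.
    """
    video_label = micro_label
    video_weight = w_first if micro_label == macro_label else w_second
    return video_label, video_weight

def fuse_final(video_label, video_weight, voice_label, voice_weight=0.6):
    """Fuse the video result with the voice result by their weights."""
    scores = {}
    scores[video_label] = scores.get(video_label, 0.0) + video_weight
    scores[voice_label] = scores.get(voice_label, 0.0) + voice_weight
    return max(scores, key=scores.get)

# Example: micro and macro both say "angry", voice says "sad":
label, weight = fuse_video("angry", "angry")
print(fuse_final(label, weight, "sad"))  # -> "angry" (0.8 > 0.6)
```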
Voice recognition is susceptible to interference from environmental noise, and the voices of other passengers are an especially serious source of it. Since only the driver's voice information is needed, sounds from other directions must be filtered out and environmental noise suppressed as far as possible; to achieve this, a microphone array is used to receive the voice information.
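As a sketch of how an array can favor the driver's direction, the following delay-and-sum beamformer time-aligns each channel toward the driver and averages, so that sound from the driver's seat adds coherently while off-axis voices and noise partially cancel. The patent does not specify the array processing; the integer per-channel delays are assumed to be precomputed from the microphone geometry and the driver's position.

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Steer a microphone array toward the driver by delay-and-sum.

    channels: list of equal-length 1-D arrays, one per microphone.
    delays_samples: integer delay per channel, precomputed from the
    array geometry and the (assumed known) driver position.
    """
    out = np.zeros(len(channels[0]))
    for signal, delay in zip(channels, delays_samples):
        # Advance each channel so the driver's wavefront lines up,
        # then sum; np.roll wraps at the edges, acceptable for a sketch.
        out += np.roll(signal, -delay)
    return out / len(channels)
```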
In speech emotion recognition, the recognition accuracy differs widely between emotions: sadness and anger are recognized much more accurately than other emotions, and for a driver they are also the emotions with the greatest influence on driving. In a preferred embodiment of the invention, the emotions therefore include three categories, namely: neutral or happy, sad, and angry.
Obtaining a neutral-expression image by photographing each driver in a calm mood would not only take a long time; drivers are numerous, and it is difficult to ensure that every driver is actually calm when photographed. In a preferred embodiment of the invention, the emotion processing unit therefore acquires the image frame of the driver's neutral expression as follows:
the emotion processing unit acquires the video data of the driver collected by the camera;
the emotion processing unit extracts a preset number of image frames from the video data, taken in order from the beginning;
the emotion processing unit selects, according to a preset rule, one of the preset number of image frames as the image frame of the driver's neutral expression. The preset rule includes using artificial intelligence to identify, among the preset number of image frames, the frame closest to a neutral expression.
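A sketch of this selection rule follows, assuming a placeholder model neutral_score that maps a frame to the probability of a neutral expression (the patent requires only that some artificial-intelligence model identifies the frame closest to neutral):

```python
def select_neutral_frame(frames, neutral_score, num_candidates=30):
    """Pick the neutral-expression reference frame for a driver.

    frames: image frames taken in order from the start of the video.
    neutral_score: assumed placeholder model returning the probability
    that a frame shows a neutral expression.
    """
    # A preset number of frames, taken from front to back.
    candidates = frames[:num_candidates]
    # Preset rule: keep the frame the model rates closest to neutral.
    return max(candidates, key=neutral_score)
```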
The emotion recognition method based on facial features and voice features provided by the invention can improve the accuracy of driver emotion recognition, laying a foundation for reminding the driver to drive sensibly and avoiding potential traffic accidents.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (3)

1. An emotion recognition method based on facial features and voice features, implemented by a camera, a microphone and an emotion processing unit, characterized by comprising the following steps:
the camera collects video data of a driver and sends the video data to the emotion processing unit;
the microphone collects voice data of the driver and sends the voice data to the emotion processing unit;
the emotion processing unit recognizes the driver's emotion from the video data and the voice data separately to obtain a video emotion recognition result and a voice emotion recognition result, wherein the video emotion recognition result includes a micro-expression recognition result obtained by a micro-expression recognition method based on motion-field features;
the emotion processing unit fuses the video emotion recognition result and the voice emotion recognition result to obtain a fused emotion recognition result that serves as the final emotion recognition result;
there are multiple microphones, and the multiple microphones form a microphone array;
the video emotion recognition result further includes a macro-expression recognition result;
the micro-expression recognition method based on motion-field features is implemented as follows:
the emotion processing unit acquires an image frame of the driver's neutral expression and stores it as a reference frame image;
the emotion processing unit acquires a current frame image of the driver from the video data;
the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames;
the emotion processing unit computes a strain image of the motion field from the motion field between the two frames;
the emotion processing unit determines the micro-expression in the current frame image from the strain image of the motion field and a preset threshold;
the emotion processing unit compares the current frame image with the reference frame image to obtain the motion field between the two frames by a feature-based method: it performs feature recognition and segmentation on the driver's face image, and then determines the motion field by measuring the displacement of the features;
the emotion processing unit fusing the video emotion recognition result and the voice emotion recognition result comprises the following steps:
the emotion processing unit fuses the micro-expression recognition result and the macro-expression recognition result to obtain a video emotion recognition result that carries a weight; the micro-expression recognition result and the macro-expression recognition result each have a preset weight, with the weight of the micro-expression recognition result greater than that of the macro-expression recognition result; when the micro-expression recognition result is consistent with the macro-expression recognition result, the weight of the video emotion recognition result is a preset first weight, and when the two are inconsistent it is a preset second weight, wherein the first weight is greater than the second weight;
and the emotion processing unit fuses the video emotion recognition result with the voice emotion recognition result, which carries its own preset weight; the fused result is the emotion recognition result.
2. The method of claim 1, wherein the emotions include three categories, namely: neutral or happy, sad, and angry.
3. The method of claim 1, wherein the emotion processing unit acquiring the image frame of the driver's neutral expression comprises:
the emotion processing unit acquires the video data of the driver collected by the camera;
the emotion processing unit extracts a preset number of image frames from the video data, taken in order from the beginning;
the emotion processing unit selects, according to a preset rule, one of the preset number of image frames as the image frame of the driver's neutral expression.
CN201711160533.6A, Emotion recognition method based on facial features and voice features: filed 2017-11-20, priority date 2017-11-20; granted as CN107705808B (en); status: Expired - Fee Related.

Priority Applications (1)

Application Number: CN201711160533.6A (granted as CN107705808B)
Priority Date: 2017-11-20; Filing Date: 2017-11-20
Title: Emotion recognition method based on facial features and voice features

Publications (2)

CN107705808A: published 2018-02-16 (application publication)
CN107705808B: published 2020-12-25 (granted patent)

Family ID: 61180438

Family Applications (1)

Application Number: CN201711160533.6A (Expired - Fee Related)
Title: Emotion recognition method based on facial features and voice features
Priority Date: 2017-11-20; Filing Date: 2017-11-20
Country: CN; granted publication: CN107705808B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
CN108958699B (en) * 2018-07-24 2021-12-07 Oppo(重庆)智能科技有限公司 Voice pickup method and related product
CN109243490A (en) * 2018-10-11 2019-01-18 平安科技(深圳)有限公司 Driver's Emotion identification method and terminal device
RU2711976C1 (en) * 2018-11-08 2020-01-23 Инна Юрьевна Жовнерчук Method for remote recognition and correction using a virtual reality of a psychoemotional state of a human
CN109858330A (en) * 2018-12-15 2019-06-07 深圳壹账通智能科技有限公司 Expression analysis method, apparatus, electronic equipment and storage medium based on video
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data handling procedure, device, computer equipment and storage medium
JP6999540B2 (en) * 2018-12-28 2022-01-18 本田技研工業株式会社 Information processing equipment and programs
CN110001652B (en) * 2019-03-26 2020-06-23 深圳市科思创动科技有限公司 Driver state monitoring method and device and terminal equipment
JP7185072B2 (en) * 2019-04-05 2022-12-06 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Method and system for providing emotional modification during video chat
CN110096600A (en) * 2019-04-16 2019-08-06 上海图菱新能源科技有限公司 Artificial intelligence mood improves interactive process and method
CN110110653A (en) * 2019-04-30 2019-08-09 上海迥灵信息技术有限公司 The Emotion identification method, apparatus and storage medium of multiple features fusion
CN112078590B (en) * 2019-05-27 2022-04-05 宇通客车股份有限公司 Driving behavior monitoring method and system
CN111754761B (en) * 2019-07-31 2022-09-20 广东小天才科技有限公司 Traffic safety alarm prompting method and electronic equipment
CN110516593A (en) * 2019-08-27 2019-11-29 京东方科技集团股份有限公司 A kind of emotional prediction device, emotional prediction method and display device
CN110826433B (en) * 2019-10-23 2022-06-03 上海能塔智能科技有限公司 Emotion analysis data processing method, device and equipment for test driving user and storage medium
CN112927721A (en) * 2019-12-06 2021-06-08 观致汽车有限公司 Human-vehicle interaction method, system, vehicle and computer readable storage medium
CN111145282B (en) * 2019-12-12 2023-12-05 科大讯飞股份有限公司 Avatar composition method, apparatus, electronic device, and storage medium
CN111401198B (en) * 2020-03-10 2024-04-23 广东九联科技股份有限公司 Audience emotion recognition method, device and system
CN112562267A (en) * 2020-11-27 2021-03-26 深圳腾视科技有限公司 Vehicle-mounted safety robot and safe driving assistance method
CN112699802A (en) * 2020-12-31 2021-04-23 青岛海山慧谷科技有限公司 Driver micro-expression detection device and method
CN113808623A (en) * 2021-09-18 2021-12-17 武汉轻工大学 Emotion recognition glasses for blind people

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206639274U (en) * 2016-12-24 2017-11-14 惠州市云鼎科技有限公司 A kind of safe driving monitoring device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005293539A (en) * 2004-03-08 2005-10-20 Matsushita Electric Works Ltd Facial expression recognizing device
CN106650633A (en) * 2016-11-29 2017-05-10 上海智臻智能网络科技股份有限公司 Driver emotion recognition method and device
CN106897706B (en) * 2017-03-02 2019-11-22 利辛县诚创科技中介服务有限公司 A kind of Emotion identification device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Microphone Array Speech Enhancement Technology and Its Applications; Hong Ou (洪鸥); Microcomputer Information (《微计算机信息》); 2006-01-31; Vol. 22, Issue 2006(01); page 1, paragraphs 1-2 *



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee
    Granted publication date: 2020-12-25
    Termination date: 2021-11-20