CN111368590B - Emotion recognition method and device, electronic equipment and storage medium


Info

Publication number
CN111368590B
CN111368590B (application CN201811594128.XA)
Authority
CN
China
Prior art keywords
emotion
target
target object
determining
facial
Prior art date
Legal status
Active
Application number
CN201811594128.XA
Other languages
Chinese (zh)
Other versions
CN111368590A (en)
Inventor
李海波
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN201811594128.XA
Publication of CN111368590A
Application granted
Publication of CN111368590B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an emotion recognition method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring a plurality of first facial images of a target object during service provision; processing the plurality of first facial images by using a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image; and determining a first target emotion of the target object according to the first emotions. The device is configured to perform the above method. In the embodiments of the invention, the pre-established recognition model is used to obtain the facial emotion of the target object while the service is being provided, so the emotion of the target object during service provision can be known, which provides a basis for subsequent monitoring of the driver's state and for adjudicating complaints.

Description

Emotion recognition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of Location Based Services (LBS) technology, and in particular, to a method and apparatus for emotion recognition, an electronic device, and a storage medium.
Background
With the rapid development and popularization of online ride-hailing platforms, people can place ride orders from a mobile phone whenever they need a car, which greatly facilitates daily travel.
However, during actual trips, verbal disputes, physical conflicts, and even crimes occasionally arise from friction between the service requester and the driver. The background server can monitor the driver's route but cannot observe the situation inside the vehicle, so it only learns of a conflict after receiving an alarm or a complaint and cannot determine which party is responsible.
Disclosure of Invention
Accordingly, an object of the embodiments of the present invention is to provide an emotion recognition method and apparatus, an electronic device, and a storage medium, which can provide a basis for monitoring a target object and for adjudicating complaints.
In a first aspect, an embodiment of the present invention provides a method for identifying emotion, including:
Acquiring a plurality of first facial images of a target object during service provision;
Processing a plurality of first facial images by using a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image;
and determining the first target emotion of the target object according to the first emotion.
Further, prior to processing the plurality of first facial images using the pre-established machine learning identification model, the method further comprises:
Extracting characteristic points of each first facial image to obtain a plurality of characteristic points corresponding to each first facial image;
and performing alignment operation on the first facial image of the target object according to the feature points corresponding to each first facial image.
Further, the aligning operation for the first facial image according to the feature points corresponding to each first facial image includes:
and carrying out alignment operation on the first facial images by adopting an affine transformation algorithm according to the feature points corresponding to each first facial image.
Further, before acquiring the plurality of first facial images of the target object during the providing of the service, the method further comprises:
And acquiring a video image of the target object in the service providing period, and extracting the first facial image from the video image according to a preset frame number.
Further, the machine learning identification model is obtained by:
Acquiring a historical video image of a training object in a historical time period during service providing, taking the complained historical video image as a sample negative example, and taking the historical video images except the complained historical video image as a sample positive example;
And training the convolutional neural network according to the sample positive example and the sample negative example to obtain the machine learning identification model.
Further, the determining the first target emotion of the target object according to the first emotion includes:
determining the first target emotion of the target object according to the time proportion corresponding to each first emotion.
Further, the determining the first target emotion of the target object according to the time proportion corresponding to each first emotion includes:
taking the first emotion with the largest time proportion as the first target emotion of the target object.
Further, the determining the first target emotion of the target object according to the time proportion corresponding to each first emotion includes:
taking the first emotion whose time proportion is the largest and exceeds a preset threshold as the first target emotion of the target object.
Further, the determining the first target emotion of the target object according to the time proportion corresponding to each first emotion includes:
acquiring the first N first emotions when the time proportions are sorted in descending order, and determining the first target emotion of the target object according to the first N first emotions.
Further, the determining the first target emotion of the target object according to the first emotion includes:
obtaining an emotion sequence of the first emotions ordered by time, and determining the first target emotion of the target object according to the emotion sequence.
Further, the method further comprises:
acquiring a plurality of second face images of the service requester during the receiving of the service;
processing the plurality of second facial images by using the pre-established machine learning identification model to obtain a second emotion corresponding to each second facial image;
and determining a second target emotion of the service requester according to the time proportion corresponding to each second emotion.
Further, the method further comprises:
Acquiring voice information of a target object during service providing;
carrying out emotion recognition according to the voice information to obtain a third emotion corresponding to the target object;
correspondingly, the determining the first target emotion of the target object according to the time proportion corresponding to each first emotion includes:
determining the first target emotion of the target object according to the time proportion corresponding to each first emotion and the third emotion.
Further, the method further comprises:
and sending the first target emotion to a server.
Further, the sending the first target emotion to a server includes:
Judging whether the first target emotion is a preset emotion or not, if so, sending the first target emotion to the server; the preset emotion is a preset emotion to be sent to the server.
In a second aspect, an embodiment of the present invention further provides an emotion recognition device, including:
A first acquisition module for acquiring a plurality of first facial images of a target object during provision of a service;
The first recognition module is used for processing the plurality of first facial images by utilizing a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image;
And the first determining module is used for determining a first target emotion of the target object according to the first emotion.
Further, the apparatus further comprises:
the feature extraction module is used for extracting feature points of each first facial image to obtain a plurality of feature points corresponding to each first facial image;
And the alignment module is used for performing alignment operation on the first facial images of the target object according to the feature points corresponding to each first facial image.
Further, the alignment module is specifically configured to:
and carrying out alignment operation on the first facial images by adopting an affine transformation algorithm according to the feature points corresponding to each first facial image.
Further, the apparatus further comprises:
And the image extraction module is used for acquiring the video image of the target object in the service providing period and extracting the first facial image from the video image according to a preset frame number.
Further, the machine learning identification model is obtained by:
the sample acquisition module is used for acquiring historical video images of the training object in the historical time period during the service providing period, taking the complained historical video images as sample negative examples and taking the historical video images except the complained historical video images as sample positive examples;
and the training module is used for training the convolutional neural network according to the sample positive example and the sample negative example to obtain the machine learning identification model.
Further, the first determining module is specifically configured to:
determine the first target emotion of the target object according to the time proportion corresponding to each first emotion.
Further, the first determining module is specifically configured to:
take the first emotion with the largest time proportion as the first target emotion of the target object.
Further, the first determining module is specifically configured to:
take the first emotion whose time proportion is the largest and exceeds a preset threshold as the first target emotion of the target object.
Further, the first determining module is specifically configured to:
acquire the first N first emotions when the time proportions are sorted in descending order, and determine the first target emotion of the target object according to the first N first emotions.
Further, the first determining module is specifically configured to:
obtain an emotion sequence of the first emotions ordered by time, and determine the first target emotion of the target object according to the emotion sequence.
Further, the apparatus further comprises:
A second acquisition module for acquiring a plurality of second face images of the service requester during the reception of the service;
the second recognition module is used for processing the second facial images by utilizing the machine learning recognition model which is built in advance to obtain a second emotion corresponding to each second facial image;
And the second determining module is used for determining the second target emotion of the service requester according to the time proportion corresponding to each second emotion.
Further, the apparatus further comprises:
A third acquisition module for acquiring voice information of the target object during the service providing period;
the third recognition module is used for carrying out emotion recognition according to the voice information to obtain a third emotion corresponding to the target object;
correspondingly, the first determining module is specifically configured to:
determine the first target emotion of the target object according to the time proportion corresponding to each first emotion and the third emotion.
Further, the apparatus further comprises:
And the sending module is used for sending the first target emotion to a server.
Further, the sending module is specifically configured to:
judge whether the first target emotion is a preset emotion, and if so, send the first target emotion to the server; the preset emotion is an emotion that is set in advance and is to be sent to the server.
In a third aspect, an embodiment of the present invention further provides an emotion recognition method, including:
Receiving a plurality of first face images of a target object sent by a terminal during the service providing period;
processing the first facial images by using a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image;
and determining the first target emotion of the target object according to the first emotion.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of any one of the possible implementations of the first aspect.
In a fifth aspect, the present embodiment also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any one of the possible implementations of the first aspect.
According to the emotion recognition method and device, the electronic device, and the storage medium provided by the embodiments of the present invention, the facial emotion of the target object during service provision is obtained with the pre-established recognition model, the first target emotion is determined from it, and the first target emotion is sent to the server. The emotion of the target object during service provision can therefore be known, which provides a basis for subsequent monitoring of the target object's state and for adjudicating complaints.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an emotion recognition method according to an embodiment of the present invention;
FIG. 2 (a) is a pre-alignment facial image provided by an embodiment of the present invention;
FIG. 2 (b) is an aligned facial image provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a neural network according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an emotion recognition device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of interaction between a terminal and a server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
At present, the background server cannot monitor the state of an online ride-hailing driver while the driver is providing a service, so some drivers occasionally come into conflict with passengers and threaten the passengers' safety and interests. It should be noted that the method provided by the embodiments of the present invention is applicable not only to online ride-hailing services but also to other application scenarios; the embodiments of the present invention are not specifically limited in this respect.
For the convenience of understanding the present embodiment, a detailed description will be given of an emotion recognition method disclosed in the embodiment of the present invention.
The emotion recognition method comprises the following steps: acquiring a plurality of first facial images of a target object during service provision; processing the plurality of first facial images by using a pre-established recognition model to obtain a first emotion corresponding to each first facial image; and determining the first target emotion of the target object according to all the first emotions. It should be noted that the execution subject of the above method may be either a terminal or a server. The following will be described with respect to a terminal and a server, respectively:
Fig. 1 is a schematic flow chart of a method for emotion recognition according to an embodiment of the present invention, where, as shown in fig. 1, an execution subject is a terminal, and the method includes:
step 101: the terminal acquires a plurality of first face images of a target object during the service providing period;
In a specific implementation process, the terminal acquires a plurality of first facial images of the target object while the target object is providing the service. The terminal is provided with a camera and can communicate with the server, and it may be installed in the vehicle or held by the service requester. If the terminal is installed in the vehicle, the server sends an image acquisition instruction to the terminal after determining that the target object has picked up the service requester, and the terminal starts acquisition after receiving the instruction. If the terminal is held by the service requester, the service requester can turn on the camera of the terminal at any required moment to acquire images of the target object.
Step 102: the terminal processes the plurality of first facial images by utilizing a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image;
In a specific implementation process, after the terminal acquires the plurality of first facial images, it inputs them one by one into the pre-established machine learning recognition model, and the model recognizes each input first facial image to obtain the first emotion corresponding to that image. The first emotion may be, for example: neutral (no expression), happy, extremely happy, angry, extremely angry, surprised, extremely surprised, afraid, extremely afraid, disgusted, extremely disgusted, sad, or extremely sad. Specifically, the machine learning recognition model outputs a probability value for each candidate first emotion of the first facial image, and the first emotion with the largest probability value is taken. It should be noted that the recognition may be performed in the time order in which the first facial images were acquired.
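For illustration only (not part of the original disclosure), the per-image classification step described above can be sketched in Python as follows; the model object, the preprocessing already applied to the face tensors, and the label order are assumptions.

```python
import torch
import torch.nn.functional as F

# Assumed label order for the 13 emotion classes listed above.
EMOTIONS = ["neutral", "happy", "extremely happy", "angry", "extremely angry",
            "surprised", "extremely surprised", "afraid", "extremely afraid",
            "disgusted", "extremely disgusted", "sad", "extremely sad"]

def classify_faces(model, face_tensors):
    """Return the most probable first emotion for each aligned face image.

    face_tensors: (N, C, H, W) tensor of face crops, ordered by capture time.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(face_tensors), dim=1)  # probability per emotion class
        best = probs.argmax(dim=1)                     # class with the largest probability
    return [EMOTIONS[i] for i in best.tolist()]
```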
Step 103: and the terminal determines the first target emotion of the target object according to the first emotion.
In a specific implementation process, after the first emotion corresponding to each first facial image is obtained, the first target emotion of the target object is determined according to all the first emotions and sent to the server. In addition, the server can monitor the first facial images of the target object in real time during service provision and judge, according to the first target emotion, whether the target object may come into conflict, or has already come into conflict, with the service requester; if so, relevant customer service personnel are immediately asked to contact the target object and calm the situation, so that monitoring the target object reduces the probability of a conflict.
With the driver emotion recognition method and device, the electronic device, and the storage medium provided by the embodiments of the present invention, the first facial images of the target object captured during service provision are processed with the pre-established recognition model to obtain the first target emotion, so the emotion of the target object during service provision can be known, which provides a basis for subsequent monitoring of the target object's state and for adjudicating complaints.
The embodiment of the invention provides another driver emotion recognition method, in which the server is the execution subject: the server acquires a plurality of first facial images of the target object during service provision; it should be noted that these first facial images are sent to the server by a terminal with an image capture function. The server processes the plurality of first facial images by using the pre-established recognition model to obtain a first emotion corresponding to each first facial image, and then determines the first target emotion of the target object according to all the first emotions.
It can be understood that the method for identifying the emotion of the driver at the server side is consistent with that at the terminal side, and will not be described here again.
On the basis of the above embodiment, before processing the plurality of first facial images using a machine learning recognition model established in advance, the method further includes:
Extracting characteristic points of each first facial image to obtain a plurality of characteristic points corresponding to each first facial image;
and performing alignment operation on the first facial image of the target object according to the feature points corresponding to each first facial image.
In a specific implementation process, before recognition with the machine learning recognition model, feature points are extracted from each of the acquired first facial images to obtain the feature points of each image; for example, the shapes of the facial contour, eyebrows, eyes, nose, and mouth can be represented by 60 feature points. It should be noted that there are various methods for extracting feature points, such as the ASM algorithm and the AAM algorithm, and the invention does not specifically limit the feature point extraction method.
After the feature points are extracted, the first facial images are aligned by using the feature points of each first facial image and their spatial positions; that is, if a first facial image contains a side face or a tilted face, a frontal face of the target object can be obtained after the alignment operation. An affine transformation method may be used for the alignment. Fig. 2 (a) is a facial image before alignment provided by an embodiment of the present invention, and Fig. 2 (b) is the aligned facial image; as can be seen from Fig. 2 (a) and Fig. 2 (b), a frontal face image is obtained after the alignment operation. It should be noted that the first facial image may also be aligned by interpolation-based alignment.
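A minimal alignment sketch (an illustration, not the patent's implementation) using OpenCV's affine-transform routines; the three anchor landmarks, their indices, and the canonical positions are assumptions that depend on the landmark detector actually used.

```python
import cv2
import numpy as np

# Hypothetical canonical positions (in a 256x256 output) for three anchor
# landmarks: left eye centre, right eye centre, nose tip.
REF_POINTS = np.float32([[88, 100], [168, 100], [128, 150]])

def align_face(image, landmarks, size=256):
    """Warp a tilted or side-leaning face toward a frontal pose."""
    # Indices 36, 45, 30 assume a 68-point landmark layout; adjust them for
    # the 60-point set mentioned in the text.
    src = np.float32([landmarks[36], landmarks[45], landmarks[30]])
    M = cv2.getAffineTransform(src, REF_POINTS)     # affine map from 3 point pairs
    return cv2.warpAffine(image, M, (size, size))   # aligned, frontal-looking crop
```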
According to the embodiment of the invention, the characteristic points of the first facial image are extracted, the extracted characteristic points are utilized to align the first facial image, the front face image is obtained, and then the aligned first facial image is input into the recognition model for recognition, so that the recognition accuracy can be greatly improved.
On the basis of the above embodiment, before acquiring the plurality of first face images of the target object during the service providing, the method further includes:
And acquiring a video image of the target object in the service providing period, and extracting the first facial image from the video image according to a preset frame number.
In a specific implementation process, if the terminal does not have an automatic continuous-photographing function, obtaining a continuous series of first facial images would require the photographing function to be triggered manually again and again. Instead, the terminal can record video while the target object is providing the service to obtain a video image, and frames can then be extracted from the video image at a preset frame interval, thereby obtaining the plurality of first facial images. For example, one image may be extracted as a first facial image every 5 frames, and the preset frame interval may be adjusted according to the actual situation; the embodiment of the present invention does not specifically limit it. After the plurality of first facial images are obtained at the preset frame interval, any first facial image that does not contain the facial information of the target object needs to be removed.
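A sketch of the frame-sampling step, assuming OpenCV is used for video decoding; the 5-frame interval mirrors the example above and is adjustable.

```python
import cv2

def extract_frames(video_path, every_n_frames=5):
    """Sample one candidate first facial image every N frames from the trip video."""
    capture = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # Frames in which no face of the target object is detected
            # would be removed in a later step.
            frames.append(frame)
        index += 1
    capture.release()
    return frames
```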
According to the embodiment of the invention, a video image is acquired and the first facial images are extracted from it at a preset frame interval, so that first facial images evenly spaced in time can be obtained, and the problem of having to trigger the photographing function manually and repeatedly when the terminal cannot photograph continuously by itself is avoided.
On the basis of the above embodiment, the machine learning identification model is obtained by:
Acquiring a historical video image of a training object in a historical time period during service providing, taking the complained historical video image as a sample negative example, and taking the historical video images except the complained historical video image as a sample positive example;
And training the convolutional neural network according to the sample positive example and the sample negative example to obtain the machine learning identification model.
In a specific implementation process, the machine learning recognition model needs to be established in advance so that it is available when emotion recognition is performed. Historical video images of training objects during service provision are obtained over a certain historical time period; these include trips that were complained about and trips that were not. A training object who was complained about is considered to have come into conflict with the service requester, so the complained-about historical video images are taken as negative samples, while the remaining historical video images can be regarded as showing the training object driving in a relatively calm emotional state and are therefore taken as positive samples. The emotions in the images of the negative samples and the positive samples are labeled respectively, the convolutional neural network is trained with the positive and negative samples, and the machine learning recognition model is obtained after training is completed. It should be noted that, before training, frames need to be extracted from the historical video images at the preset frame interval. Fig. 3 is a schematic structural diagram of a neural network provided in an embodiment of the present invention. As shown in Fig. 3, the network includes 4 convolutional stages and 1 fully connected head, with feature-map sizes of 256, 128, 64, and 32 for the 4 stages. The stage at size 256 has two convolutional layers with 3x3 kernels and 64 filters, followed by max pooling; the stage at size 128 has two convolutional layers with 3x3 kernels and 128 filters, followed by max pooling; the stage at size 64 has two convolutional layers with 3x3 kernels and 256 filters, followed by max pooling; and the stage at size 32 has two convolutional layers with 3x3 kernels and 512 filters, followed by max pooling. The fully connected head consists of two fully connected (fc) layers followed by a 13-class classification output.
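A sketch of a convolutional network with the layout described above (four stages of two 3x3 convolutions plus max pooling, 64/128/256/512 filters, two fully connected layers, 13 output classes), written in PyTorch for illustration; the 256x256 input size, input channel count, ReLU activations, and the width of the first fully connected layer are assumptions not fixed by the text.

```python
import torch.nn as nn

def conv_stage(in_ch, out_ch):
    # Two 3x3 convolutional layers followed by 2x2 max pooling, as described.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class EmotionNet(nn.Module):
    """Feature-map sizes 256 -> 128 -> 64 -> 32 with 64/128/256/512 filters,
    two fully connected layers, and a 13-class output."""

    def __init__(self, in_channels=3, num_classes=13):
        super().__init__()
        self.features = nn.Sequential(
            conv_stage(in_channels, 64),   # 256x256 -> 128x128
            conv_stage(64, 128),           # 128x128 -> 64x64
            conv_stage(128, 256),          # 64x64   -> 32x32
            conv_stage(256, 512),          # 32x32   -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 16 * 16, 1024), nn.ReLU(inplace=True),  # fc width assumed
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```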
According to the embodiment of the invention, the complained-about historical video images are taken as negative samples and the remaining historical video images as positive samples to train the convolutional neural network and obtain the machine learning recognition model. Using this model to recognize the emotion of the target object makes it possible to judge in real time whether there is a conflict, or a potential risk of conflict, between the driver and the service requester in the vehicle, so that the ride-hailing platform can intervene and mediate in time, more serious incidents are avoided, the safety of the service requester is protected, and the reputation of the platform is preserved.
On the basis of the foregoing embodiment, the determining, according to the first emotion, a first target emotion of the target object includes:
and determining the first target emotion of the target object according to the time proportion corresponding to each first emotion.
In a specific implementation process, since the plurality of first facial images are acquired in time order, after the first emotions corresponding to the first facial images of the target object are obtained, the time covered by each first emotion can be computed against the total time covered by all the first facial images, which gives the time proportion corresponding to each first emotion; the first target emotion of the target object is then determined according to these time proportions. The first emotion with the largest time proportion may be taken as the first target emotion of the target object. For example, suppose there are 20 first facial images, each representing 5 seconds, for a total time of 100 seconds, and the first emotions corresponding to the 20 images are: images 1 to 3 neutral, images 4 to 15 angry, images 16 to 18 surprised, and images 19 to 20 angry. The time proportion of the angry emotion is the largest, so the first target emotion of the target object is angry. It should be noted that the time proportion of angry is the ratio of the total time of images 4 to 15 and 19 to 20 to the total time.
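A sketch of the time-proportion rule, assuming frames are sampled at a fixed interval so that each image stands for the same amount of time; the emotion strings are illustrative.

```python
from collections import Counter

def first_target_emotion(per_image_emotions):
    """Take the first emotion with the largest time proportion as the first target emotion."""
    counts = Counter(per_image_emotions)
    total = len(per_image_emotions)
    proportions = {emotion: n / total for emotion, n in counts.items()}
    return max(proportions, key=proportions.get)

# The example above: 20 images covering 100 seconds.
sample = ["neutral"] * 3 + ["angry"] * 12 + ["surprised"] * 3 + ["angry"] * 2
assert first_target_emotion(sample) == "angry"   # angry covers 14/20 = 70% of the time
```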
When determining the first target emotion of the target object, the first emotion whose time proportion is the largest and exceeds a preset threshold may also be taken as the first target emotion. The preset threshold can be set according to the actual situation. This has the advantage of improving the accuracy of the target emotion.
When determining the first target emotion of the target object, the first emotions can also be sorted by time proportion, in ascending or descending order; the first N first emotions, starting from the one with the largest time proportion, are taken, an average score is calculated from the scores of these N first emotions, and the first emotion whose score is closest to the average is taken as the first target emotion of the target object.
When determining the first target emotion of the target object, the first emotions can also be ordered by time to obtain an emotion sequence, and the first target emotion of the target object is determined according to the emotion sequence. For example, if the emotion sequence is: extremely happy, neutral, angry, extremely angry, it can be seen from this sequence that the emotion of the target object is evolving toward extremely angry, so the first target emotion of the target object can be determined to be extremely angry. It should be noted that an emotion determination table may be built in advance, in which different emotion sequences correspond to different first target emotions. In addition, the weights of the first emotions can be set according to the emotion sequence, with a first emotion that occurs later in time weighted higher than one that occurs earlier; a weighted average is then calculated over the emotions, and the first emotion closest to the average is taken as the first target emotion.
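A sketch of the weighted-sequence variant described in the preceding paragraph; the numeric scores assigned to each emotion and the linear later-is-heavier weighting are assumptions, since the text does not fix them.

```python
def weighted_target_emotion(emotion_sequence, score_of):
    """Later emotions get higher weights; return the emotion closest to the weighted mean."""
    weights = [i + 1 for i in range(len(emotion_sequence))]   # later in time -> heavier
    mean = sum(w * score_of[e] for w, e in zip(weights, emotion_sequence)) / sum(weights)
    return min(set(emotion_sequence), key=lambda e: abs(score_of[e] - mean))

# Illustrative scores only.
scores = {"extremely happy": 2, "neutral": 0, "angry": -2, "extremely angry": -3}
sequence = ["extremely happy", "neutral", "angry", "extremely angry"]
print(weighted_target_emotion(sequence, scores))   # "angry" under these illustrative values
```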
According to the embodiment of the invention, the first target emotion is determined according to the time proportion corresponding to each first emotion, so that the first target emotion can be obtained more accurately and reasonably.
On the basis of the foregoing embodiment, the sending the first target emotion to a server includes:
Judging whether the first target emotion is a preset emotion or not, if so, sending the first target emotion to the server; the preset emotion is a preset emotion to be sent to the server.
In a specific implementation process, several preset emotions are set in advance; the preset emotions may include: angry, extremely angry, surprised, extremely surprised, afraid, extremely afraid, disgusted, extremely disgusted, and so on. After the first target emotion of the target object is obtained, it is judged whether the first target emotion is one of the preset emotions; if so, the first target emotion is sent to the server, and if not, it is not sent to the server.
According to the embodiment of the invention, whether the first target emotion is the preset emotion or not is judged, and if so, the first target emotion is sent to the server, so that the energy consumption of the terminal can be reduced.
On the basis of the above embodiment, the method further includes:
acquiring a plurality of second face images of the service requester during the receiving of the service;
In a specific implementation process, in order to protect the safety of the target object (i.e., the service provider), or to further improve the accuracy of in-vehicle monitoring, the terminal may also acquire a plurality of second facial images of the service requester while the service is being received.
Processing the plurality of second facial images by using the pre-established machine learning identification model to obtain a second emotion corresponding to each second facial image;
In a specific implementation process, after acquiring the plurality of second facial images, the terminal inputs them one by one into the pre-established machine learning recognition model, and the model recognizes each input second facial image to obtain the second emotion corresponding to that image. Specifically, the machine learning recognition model outputs a probability value for each candidate second emotion of the second facial image, and the second emotion with the largest probability value is taken. It should be noted that the recognition may be performed in the time order in which the second facial images were acquired.
And determining the second target emotion of the service requester according to the time proportion corresponding to each second emotion.
In a specific implementation process, after the second emotion corresponding to each second facial image is obtained, the second target emotion of the service requester is determined according to all the second emotions and sent to the server. In addition, the server can monitor the facial emotion of the service requester in real time while the service is being received and judge, according to the second target emotion, whether the service requester may come into conflict, or has already come into conflict, with the target object; if so, relevant customer service personnel are immediately asked to contact the service requester and calm the situation, so that monitoring the service requester reduces the probability of a conflict.
According to the embodiment of the invention, the facial emotions of the service requester and the service provider are monitored at the same time, so the server obtains both the first target emotion of the service provider and the second target emotion of the service requester and can judge accurately whether a conflict has occurred.
On the basis of the above embodiment, the method further includes:
Acquiring voice information of a target object during service providing;
carrying out emotion recognition according to the voice information to obtain a third emotion corresponding to the target object;
In a specific implementation process, the terminal may further acquire voice information and perform emotion recognition on the acquired voice information to obtain the third emotion of the target object. It should be noted that there are many algorithms for emotion recognition from voice information, and the embodiment of the present invention does not limit them. The third emotion may likewise include: neutral (no expression), happy, extremely happy, angry, extremely angry, surprised, extremely surprised, afraid, extremely afraid, disgusted, extremely disgusted, sad, extremely sad, and so on.
Correspondingly, the determining the first target emotion of the target object according to the time proportion corresponding to each first emotion includes:
determining the first target emotion of the target object according to the time proportion corresponding to each first emotion and the third emotion.
In a specific implementation process, when determining the first target emotion of the target object, the time proportion corresponding to each first emotion and the third emotion can be analyzed together to obtain the first target emotion. An analysis table may be drawn up in advance with three fields: the first emotion, the third emotion, and the first target emotion, and the correspondence between each pair of first and third emotions and the first target emotion is preset. For example: if the first emotion is angry and the third emotion is angry, the first target emotion is angry; if the first emotion is angry and the third emotion is extremely happy, the first target emotion is happy. It should be noted that the analysis table is obtained empirically. Of course, an analysis model may also be built; the first emotion and the third emotion are input into the analysis model, and the analysis model outputs the first target emotion.
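A sketch of the analysis-table lookup that combines the face-based first emotion with the voice-based third emotion; only the two rows quoted in the text are filled in, and the fallback to the face-based result is an assumption.

```python
# Pre-built analysis table: (first emotion, third emotion) -> first target emotion.
ANALYSIS_TABLE = {
    ("angry", "angry"): "angry",
    ("angry", "extremely happy"): "happy",
}

def combine_emotions(first_emotion, third_emotion):
    """Determine the first target emotion from the face-based and voice-based results."""
    # When a pair is missing from the table, fall back to the face-based emotion (assumption).
    return ANALYSIS_TABLE.get((first_emotion, third_emotion), first_emotion)
```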
According to the method and the device for determining the first target emotion, the first target emotion is obtained through comprehensive analysis of the first emotion and the third emotion, and accuracy of determining the first target emotion is greatly improved.
Fig. 4 is a schematic structural diagram of an emotion recognition device according to an embodiment of the present invention, as shown in fig. 4, where the device includes: a first acquisition module 401, a first identification module 402, and a first determination module 403, wherein:
The first acquisition module 401 is configured to acquire a plurality of first facial images of a target object during service provision; the first recognition module 402 is configured to process the plurality of first facial images by using a pre-established machine learning recognition model, so as to obtain a first emotion corresponding to each of the first facial images; the first determining module 403 is configured to determine a first target emotion of the target object according to the first emotion.
On the basis of the above embodiment, the apparatus further includes:
the feature extraction module is used for extracting feature points of each first facial image to obtain a plurality of feature points corresponding to each first facial image;
And the alignment module is used for performing alignment operation on the first facial images of the target object according to the feature points corresponding to each first facial image.
On the basis of the above embodiment, the alignment module is specifically configured to:
and carrying out alignment operation on the first facial images by adopting an affine transformation algorithm according to the feature points corresponding to each first facial image.
On the basis of the above embodiment, the apparatus further includes:
And the image extraction module is used for acquiring the video image of the target object in the service providing period and extracting the first facial image from the video image according to a preset frame number.
On the basis of the above embodiment, the machine learning identification model is obtained by:
the sample acquisition module is used for acquiring historical video images of the training object in the historical time period during the service providing period, taking the complained historical video images as sample negative examples and taking the historical video images except the complained historical video images as sample positive examples;
and the training module is used for training the convolutional neural network according to the sample positive example and the sample negative example to obtain the machine learning identification model.
On the basis of the foregoing embodiment, the first determining module is specifically configured to:
determine the first target emotion of the target object according to the time proportion corresponding to each first emotion.
On the basis of the foregoing embodiment, the first determining module is specifically configured to:
take the first emotion with the largest time proportion as the first target emotion of the target object.
On the basis of the foregoing embodiment, the first determining module is specifically configured to:
take the first emotion whose time proportion is the largest and exceeds a preset threshold as the first target emotion of the target object.
On the basis of the foregoing embodiment, the first determining module is specifically configured to:
acquire the first N first emotions when the time proportions are sorted in descending order, and determine the first target emotion of the target object according to the first N first emotions.
On the basis of the foregoing embodiment, the first determining module is specifically configured to:
obtain an emotion sequence of the first emotions ordered by time, and determine the first target emotion of the target object according to the emotion sequence.
On the basis of the foregoing embodiment, the sending module is specifically configured to:
judge whether the first target emotion is a preset emotion, and if so, send the first target emotion to the server; the preset emotion is an emotion that is set in advance and is to be sent to the server.
On the basis of the above embodiment, the apparatus further includes:
A second acquisition module for acquiring a plurality of second face images of the service requester during the reception of the service;
the second recognition module is used for processing the plurality of second facial images by utilizing the pre-established machine learning recognition model to obtain a second emotion corresponding to each second facial image;
And the second determining module is used for determining the second target emotion of the service requester according to the time proportion corresponding to each second emotion.
On the basis of the above embodiment, the apparatus further includes:
A third acquisition module for acquiring voice information of the target object during the service providing period;
the third recognition module is used for carrying out emotion recognition according to the voice information to obtain a third emotion corresponding to the target object;
correspondingly, the first determining module is specifically configured to:
determine the first target emotion of the target object according to the time proportion corresponding to each first emotion and the third emotion.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
According to the embodiment of the invention, the pre-established recognition model is used to obtain the facial emotion of the target object during service provision, the first target emotion is determined from it and sent to the server, so the emotion of the target object during service provision can be known, which provides a basis for subsequent monitoring of the driver's state and for adjudicating complaints.
The embodiment of the present invention further relates to a terminal 500, which may be a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sale (POS) terminal, an in-vehicle computer, or the like.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Referring to fig. 5, a terminal 500 according to an embodiment of the present invention includes: processor 510, memory 520, input unit 530, power supply 550, radio Frequency (RF) circuit 560, audio circuit 570, wireless fidelity (WiFi) module 580.
The memory 520 includes an internal memory 521 and an external memory 522. The internal memory 521 temporarily stores operation data in the processor 510 and data exchanged with the external memory 522, such as a hard disk, and the processor 510 exchanges data with the external memory 522 through the internal memory 521. The internal memory 521 may be a nonvolatile memory (Non-Volatile Random Access Memory, NVRAM), a dynamic random access memory (Dynamic Random Access Memory, DRAM), a static random access memory (Static Random Access Memory, SRAM), a flash memory, or the like; the external memory 522 may be a hard disk, an optical disk, a USB disk, a floppy disk, a tape drive, or the like.
Processor 510 executes instructions in memory 520 in a user state: acquiring a plurality of first facial images of a target object during service provision; processing the plurality of first facial images by using a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image; and determining the first target emotion of the target object according to the first emotion.
The input unit 530 may be used to receive input digital or character information and to generate signal inputs related to user settings and function control of the terminal 500. Specifically, in the embodiment of the present invention, the input unit 530 may include a touch panel 531. The touch panel 531, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 531 or on the touch panel 531 using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 510, and can receive commands from the processor 510 and execute them. In addition, the touch panel 531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 531, the input unit 530 may include other input devices 532, and the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, etc.
The terminal 500 may further include a display unit 540, and the display unit 540 may be used to display information input by a user or information provided to the user and various menu interfaces of the terminal 500. The display unit 540 may include a display panel 541; optionally, the display panel 541 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
In the embodiment of the present invention, the touch panel 531 covers the display panel 541 to form a touch display screen, and when the touch display screen detects a touch operation on or near the touch display screen, the touch display screen is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the touch display screen according to the type of the touch event.
In the embodiment of the invention, the touch display screen comprises an application program interface display area and a common control display area. The arrangement modes of the application program interface display area and the common control display area are not limited, and can be up-down arrangement, left-right arrangement and the like, and the arrangement modes of the two display areas can be distinguished. The application interface display area may be used to display an interface of an application. Each interface may contain at least one application's icon and/or interface elements such as a widget desktop control. The application interface display area may be an empty interface that does not contain any content. The common control display area is used for displaying controls with higher use rate, such as application icons including setting buttons, interface numbers, scroll bars, phone book icons and the like.
The processor 510 is a control center of the terminal 500, connects various parts of the entire handset using various interfaces and lines, and performs various functions and processes of the terminal 500 by running or executing software programs and/or modules stored in the memory 521, and calling data stored in the external memory 522, thereby performing overall monitoring of the terminal 500.
Furthermore, the embodiment of the present invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor performs the steps of the driver emotion recognition method described in the above method embodiment.
The computer program product of the emotion recognition method provided by the embodiment of the present invention includes a computer readable storage medium storing program code, where the program code includes instructions for executing the steps of the emotion recognition method for a driver described in the above method embodiment, and specifically, reference may be made to the above method embodiment, and details thereof are not repeated herein.
Fig. 6 is a schematic diagram of interaction between a terminal and a server according to an embodiment of the present invention, where the server 602 is communicatively connected to one or more terminals 601 through a network 603 for data communication or interaction. The server 602 may be a web server, database server, or the like. The terminal 601 may be a personal computer (personal computer, PC), a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a wearable device, or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the present invention, and the present invention is not limited thereto; any change or substitution that can be readily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (26)

1. A method of emotion recognition, comprising:
acquiring a plurality of first facial images of a target object during service provision;
processing the plurality of first facial images by using a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image; and
determining a first target emotion of the target object according to the first emotion;
wherein determining the first target emotion of the target object according to the first emotion comprises:
obtaining a time-ordered sequence of the first emotions;
setting a weight for each first emotion according to the sequence, wherein a first emotion occurring later in time has a higher weight than a first emotion occurring earlier in time; and
calculating a weighted average score according to the number of first facial images corresponding to each first emotion and the corresponding weights, and taking the first emotion closest to the weighted average score as the first target emotion of the target object;
acquiring a plurality of second facial images of a service requester during service acceptance;
processing the plurality of second facial images by using the pre-established machine learning recognition model to obtain a second emotion corresponding to each second facial image;
determining a second target emotion of the service requester according to the time proportion corresponding to each second emotion; and
sending the first target emotion and the second target emotion to a server, so that the server judges whether a conflict occurs based on the first target emotion and the second target emotion.
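A minimal Python sketch of one possible reading of the aggregation recited in claim 1 follows. The emotion labels, their numeric scores, the linear time-based weights, and the rule that the second target emotion is the one with the largest time proportion are illustrative assumptions, not limitations of the claim.

```python
from collections import Counter

# Hypothetical numeric scores for the emotion labels produced by the model.
EMOTION_SCORES = {"angry": 0, "sad": 1, "neutral": 2, "happy": 3}

def first_target_emotion(frame_emotions):
    """frame_emotions: one first-emotion label per first facial image,
    ordered by capture time (earliest first)."""
    if not frame_emotions:
        return None
    # Each distinct first emotion gets a weight that grows with the time
    # order of its latest occurrence, so emotions seen later weigh more.
    weight = {}
    for order, emotion in enumerate(frame_emotions, start=1):
        weight[emotion] = order
    counts = Counter(frame_emotions)
    total = sum(counts[e] * weight[e] for e in counts)
    weighted_avg = sum(counts[e] * weight[e] * EMOTION_SCORES[e] for e in counts) / total
    # The first target emotion is the one whose score is closest to the average.
    return min(counts, key=lambda e: abs(EMOTION_SCORES[e] - weighted_avg))

def second_target_emotion(frame_emotions):
    """Assumed rule: the second emotion with the largest time proportion."""
    counts = Counter(frame_emotions)
    return counts.most_common(1)[0][0] if counts else None

# Illustrative per-frame outputs of the recognition model.
driver = ["neutral", "neutral", "angry", "angry", "angry"]
passenger = ["neutral", "sad", "sad", "sad"]
payload = {"first": first_target_emotion(driver), "second": second_target_emotion(passenger)}
print(payload)  # the terminal would send this pair to the server for conflict judgment
```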
2. The method of claim 1, wherein, prior to processing the plurality of first facial images by using the pre-established machine learning recognition model, the method further comprises:
extracting feature points from each first facial image to obtain a plurality of feature points corresponding to each first facial image; and
performing an alignment operation on the first facial images of the target object according to the feature points corresponding to each first facial image.
3. The method according to claim 2, wherein performing the alignment operation on the first facial images according to the feature points corresponding to each first facial image comprises:
performing the alignment operation on the first facial images by using an affine transformation algorithm according to the feature points corresponding to each first facial image.
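As an illustration of the affine alignment in claims 2 and 3, the OpenCV sketch below warps a face image so that three detected feature points (left eye, right eye, nose tip) land on fixed template positions. The template coordinates, the 112x112 output size, and the choice of exactly three landmarks are assumptions; the landmark detector itself is left out.

```python
import cv2
import numpy as np

# Template positions (in a 112x112 aligned face) for the left eye, right eye
# and nose tip; these coordinates are an illustrative assumption.
TEMPLATE = np.float32([[38.0, 46.0], [74.0, 46.0], [56.0, 72.0]])

def align_face(image, landmarks, size=(112, 112)):
    """image: a first facial image (BGR); landmarks: np.float32 array of shape
    (3, 2) with the detected left-eye, right-eye and nose-tip feature points,
    produced by any facial landmark detector."""
    # Affine transform that maps the detected feature points onto the template.
    matrix = cv2.getAffineTransform(landmarks, TEMPLATE)
    # Warp so that the feature points of every face land on the same positions.
    return cv2.warpAffine(image, matrix, size)
```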
4. The method of claim 1, wherein, prior to acquiring the plurality of first facial images of the target object during service provision, the method further comprises:
acquiring a video image of the target object during service provision, and extracting the first facial images from the video image according to a preset number of frames.
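A small sketch, assuming OpenCV is used for decoding, of how frames could be sampled from the service-period video at a preset frame interval as described in claim 4; the interval of 30 frames and the function name are hypothetical.

```python
import cv2

def extract_face_frames(video_path, frame_step=30):
    """Keep every frame_step-th frame of the service-period video as a
    candidate first facial image."""
    capture = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames
```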
5. The method of claim 1, wherein the machine learning recognition model is obtained by:
acquiring historical video images of training objects during service provision in a historical time period, taking the historical video images that were complained about as negative samples, and taking the historical video images other than the complained-about ones as positive samples; and
training a convolutional neural network according to the positive samples and the negative samples to obtain the machine learning recognition model.
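A compact PyTorch sketch of how such a convolutional network might be trained on face frames labelled by whether the corresponding trip was complained about. The network architecture, the 64x64 grayscale input, the optimizer settings, and the tensor formats are all assumptions, since the claim does not fix them.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class EmotionCNN(nn.Module):
    """Minimal CNN; layer sizes are illustrative only."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train_recognition_model(faces, complained, epochs=5):
    """faces: float tensor (N, 1, 64, 64) of face frames from historical
    service videos; complained: long tensor (N,) with 1 for frames from
    complained-about trips (negative samples) and 0 otherwise (positive samples)."""
    model = EmotionCNN()
    loader = DataLoader(TensorDataset(faces, complained), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for batch_faces, batch_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_faces), batch_labels)
            loss.backward()
            optimizer.step()
    return model
```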
6. The method of claim 1, wherein determining the first target emotion of the target object according to the first emotion comprises:
determining the first target emotion of the target object according to the time proportion corresponding to each first emotion.
7. The method of claim 6, wherein determining the first target emotion of the target object according to the time proportion corresponding to each first emotion comprises:
taking the first emotion with the largest time proportion as the first target emotion of the target object.
8. The method of claim 6, wherein determining the first target emotion of the target object according to the time proportion corresponding to each first emotion comprises:
taking the first emotion whose time proportion is the largest and greater than a preset threshold as the first target emotion of the target object.
9. The method of claim 6, wherein determining the first target emotion of the target object according to the time proportion corresponding to each first emotion comprises:
acquiring the first N first emotions when the time proportions are sorted in descending order, and determining the first target emotion of the target object according to the first N first emotions.
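The following sketch illustrates the three time-proportion variants in claims 7, 8, and 9. How the top-N emotions are combined into a single first target emotion is left open by claim 9, so returning the ranked list is only an illustrative choice.

```python
from collections import Counter

def target_emotion_by_time_proportion(frame_emotions, threshold=None, top_n=None):
    """With no options: the emotion with the largest time proportion (claim 7).
    With a threshold: that emotion only if its proportion exceeds it (claim 8).
    With top_n: the N emotions with the highest proportions (claim 9)."""
    if not frame_emotions:
        return None
    counts = Counter(frame_emotions)
    total = len(frame_emotions)
    proportions = {emotion: count / total for emotion, count in counts.items()}
    if top_n is not None:
        ranked = sorted(proportions, key=proportions.get, reverse=True)
        return ranked[:top_n]
    best = max(proportions, key=proportions.get)
    if threshold is not None and proportions[best] <= threshold:
        return None
    return best
```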
10. The method according to claim 6, further comprising:
acquiring voice information of the target object during service provision; and
performing emotion recognition on the voice information to obtain a third emotion corresponding to the target object;
correspondingly, determining the first target emotion of the target object according to the time proportion corresponding to each first emotion comprises:
determining the first target emotion of the target object according to the time proportion corresponding to each first emotion and the third emotion.
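Claim 10 does not fix how the voice-based third emotion is combined with the facial time proportions. One assumed fusion rule, sketched below, lets the voice emotion break near-ties between the two most frequent facial emotions; both the rule and the margin value are illustrative.

```python
def fuse_facial_and_voice(facial_proportions, voice_emotion, margin=0.1):
    """facial_proportions: dict mapping each first emotion to its time
    proportion; voice_emotion: the third emotion recognized from the voice
    information of the target object."""
    ranked = sorted(facial_proportions, key=facial_proportions.get, reverse=True)
    if not ranked:
        return voice_emotion
    if len(ranked) >= 2:
        top, runner_up = ranked[0], ranked[1]
        near_tie = facial_proportions[top] - facial_proportions[runner_up] < margin
        # Let the voice emotion decide only when the facial evidence is ambiguous.
        if near_tie and voice_emotion == runner_up:
            return runner_up
    return ranked[0]
```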
11. The method according to any one of claims 1-10, further comprising:
sending the first target emotion to a server.
12. The method of claim 11, wherein sending the first target emotion to the server comprises:
judging whether the first target emotion is a preset emotion, and if so, sending the first target emotion to the server, wherein the preset emotion is an emotion that is preset to be sent to the server.
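A tiny sketch of the reporting gate in claim 12; the set of preset emotions and the send_fn callback are placeholders for whatever the terminal actually configures and uses as transport.

```python
# Emotions that are preset to be reported to the server (an assumed set).
PRESET_EMOTIONS = {"angry", "sad"}

def maybe_send_to_server(first_target_emotion, send_fn):
    """Send the first target emotion only when it is a preset emotion."""
    if first_target_emotion in PRESET_EMOTIONS:
        send_fn(first_target_emotion)
        return True
    return False
```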
13. An emotion recognition device, characterized by comprising:
a first acquisition module, configured to acquire a plurality of first facial images of a target object during service provision;
a first recognition module, configured to process the plurality of first facial images by using a pre-established machine learning recognition model to obtain a first emotion corresponding to each first facial image; and
a first determining module, configured to determine a first target emotion of the target object according to the first emotion;
wherein the first determining module is specifically configured to:
obtain a time-ordered sequence of the first emotions;
set a weight for each first emotion according to the sequence, wherein a first emotion occurring later in time has a higher weight than a first emotion occurring earlier in time; and
calculate a weighted average score according to the number of first facial images corresponding to each first emotion and the corresponding weights, and take the first emotion closest to the weighted average score as the first target emotion of the target object;
the device further comprises:
a second acquisition module, configured to acquire a plurality of second facial images of a service requester during service acceptance;
a second recognition module, configured to process the plurality of second facial images by using the pre-established machine learning recognition model to obtain a second emotion corresponding to each second facial image; and
a second determining module, configured to determine a second target emotion of the service requester according to the time proportion corresponding to each second emotion;
and the device is further configured to send the first target emotion and the second target emotion to a server, so that the server judges whether a conflict occurs based on the first target emotion and the second target emotion.
14. The apparatus of claim 13, wherein the apparatus further comprises:
a feature extraction module, configured to extract feature points from each first facial image to obtain a plurality of feature points corresponding to each first facial image; and
an alignment module, configured to perform an alignment operation on the first facial images of the target object according to the feature points corresponding to each first facial image.
15. The apparatus according to claim 14, wherein the alignment module is specifically configured to:
perform the alignment operation on the first facial images by using an affine transformation algorithm according to the feature points corresponding to each first facial image.
16. The apparatus of claim 13, wherein the apparatus further comprises:
an image extraction module, configured to acquire a video image of the target object during service provision and to extract the first facial images from the video image according to a preset number of frames.
17. The apparatus of claim 13, wherein the machine learning recognition model is obtained by means of:
a sample acquisition module, configured to acquire historical video images of training objects during service provision in a historical time period, take the historical video images that were complained about as negative samples, and take the historical video images other than the complained-about ones as positive samples; and
a training module, configured to train a convolutional neural network according to the positive samples and the negative samples to obtain the machine learning recognition model.
18. The apparatus according to claim 13, wherein the first determining module is specifically configured to:
determine the first target emotion of the target object according to the time proportion corresponding to each first emotion.
19. The apparatus according to claim 18, wherein the first determining module is specifically configured to:
take the first emotion with the largest time proportion as the first target emotion of the target object.
20. The apparatus according to claim 18, wherein the first determining module is specifically configured to:
take the first emotion whose time proportion is the largest and greater than a preset threshold as the first target emotion of the target object.
21. The apparatus according to claim 18, wherein the first determining module is specifically configured to:
acquire the first N first emotions when the time proportions are sorted in descending order, and determine the first target emotion of the target object according to the first N first emotions.
22. The apparatus of claim 13, wherein the apparatus further comprises:
a third acquisition module, configured to acquire voice information of the target object during service provision; and
a third recognition module, configured to perform emotion recognition on the voice information to obtain a third emotion corresponding to the target object;
wherein, correspondingly, the first determining module is specifically configured to:
determine the first target emotion of the target object according to the time proportion corresponding to each first emotion and the third emotion.
23. The apparatus of claim 13, wherein the apparatus further comprises:
a sending module, configured to send the first target emotion to a server.
24. The apparatus of claim 23, wherein the sending module is specifically configured to:
judge whether the first target emotion is a preset emotion, and if so, send the first target emotion to the server, wherein the preset emotion is an emotion that is preset to be sent to the server.
25. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate with each other over the bus, and when the machine-readable instructions are executed by the processor, the steps of the method of any one of claims 1 to 12 are performed.
26. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any one of claims 1 to 12.
CN201811594128.XA 2018-12-25 2018-12-25 Emotion recognition method and device, electronic equipment and storage medium Active CN111368590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811594128.XA CN111368590B (en) 2018-12-25 2018-12-25 Emotion recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111368590A CN111368590A (en) 2020-07-03
CN111368590B true CN111368590B (en) 2024-04-23

Family

ID=71208141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811594128.XA Active CN111368590B (en) 2018-12-25 2018-12-25 Emotion recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111368590B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633965A (en) * 2020-12-11 2021-04-09 汉海信息技术(上海)有限公司 Order processing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101317047B1 (en) * 2012-07-23 2013-10-11 충남대학교산학협력단 Emotion recognition appatus using facial expression and method for controlling thereof
CN106022676A (en) * 2016-05-09 2016-10-12 华南理工大学 Method and apparatus for rating complaint willingness of logistics client
CN107358169A (en) * 2017-06-21 2017-11-17 厦门中控智慧信息技术有限公司 A kind of facial expression recognizing method and expression recognition device
CN108710820A (en) * 2018-03-30 2018-10-26 百度在线网络技术(北京)有限公司 Infantile state recognition methods, device and server based on recognition of face

Also Published As

Publication number Publication date
CN111368590A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
JP7110414B2 (en) Image-based vehicle damage determination method, apparatus, and electronic device
US11062124B2 (en) Face pose detection method, device and storage medium
CN110751043B (en) Face recognition method and device based on face visibility and storage medium
EP3572974B1 (en) Method and device for sending warning message
CN111310705A (en) Image recognition method and device, computer equipment and storage medium
CN110991249A (en) Face detection method, face detection device, electronic equipment and medium
CN111301280A (en) Dangerous state identification method and device
CN107633164A (en) Pay control method, device, computer installation and computer-readable recording medium
CN112908325B (en) Voice interaction method and device, electronic equipment and storage medium
WO2019033567A1 (en) Method for capturing eyeball movement, device and storage medium
CN113361468A (en) Business quality inspection method, device, equipment and storage medium
CN111368590B (en) Emotion recognition method and device, electronic equipment and storage medium
CN114299587A (en) Eye state determination method and apparatus, electronic device, and storage medium
CN111124109B (en) Interactive mode selection method, intelligent terminal, equipment and storage medium
CN113051958A (en) Driver state detection method, system, device and medium based on deep learning
CN110728206A (en) Fatigue driving detection method and device, computer readable storage medium and terminal
CN115171222A (en) Behavior detection method and device, computer equipment and storage medium
CN114844985A (en) Data quality inspection method, device, equipment and storage medium
CN110049316B (en) Method and device for detecting set number of terminals, portable terminal and storage medium
CN111611804A (en) Danger identification method and device, electronic equipment and storage medium
CN111243605A (en) Service processing method, device, equipment and storage medium
CN114170030B (en) Method, apparatus, electronic device and medium for remote damage assessment of vehicle
CN113850198B (en) Behavior detection method, device, medium and computer equipment based on edge calculation
CN111797784B (en) Driving behavior monitoring method and device, electronic equipment and storage medium
US11250242B2 (en) Eye tracking method and user terminal performing same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant