CN109670393B - Face data acquisition method, equipment, device and computer readable storage medium - Google Patents

Face data acquisition method, equipment, device and computer readable storage medium

Info

Publication number
CN109670393B
Authority
CN
China
Prior art keywords
expression
user
matching degree
outputting
face data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811127324.6A
Other languages
Chinese (zh)
Other versions
CN109670393A (en)
Inventor
黄文泱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811127324.6A
Publication of CN109670393A
Application granted
Publication of CN109670393B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/175 Static expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention discloses a face data acquisition method, which comprises the following steps: receiving a user expression; recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression; and collecting the user expression. The invention also discloses face data acquisition equipment, a face data acquisition device, and a computer-readable storage medium. The technical solution of the invention helps to improve the effectiveness of face data acquisition.

Description

Face data acquisition method, equipment, device and computer readable storage medium
Technical Field
The present invention relates to the field of face recognition technology, and in particular to a face data acquisition method, equipment, device, and computer-readable storage medium.
Background
Currently, face recognition technology is widely used in fields such as payment, financial credit, and identity authentication. When such an operation is performed, a camera collects face data of the user, face recognition is carried out with an existing face recognition model, and the specific operation is executed only after recognition succeeds. In addition, during face recognition, micro-expressions of the face may be recognized to judge whether the user is lying, further improving information or fund security. A face recognition model is usually built with a machine learning algorithm and is continuously optimized as recognitions accumulate, so that it achieves higher recognition efficiency and accuracy. Optimizing a face recognition model therefore depends on a large volume of reliable face data. However, the fields of payment, financial credit, identity authentication, and the like involve only a limited range of scenes, so the face data that can be acquired is also very limited. Moreover, in such scenes the user easily becomes nervous or embarrassed and the micro-expressions change accordingly, so the acquired face data is inaccurate.
The foregoing is provided merely to facilitate understanding of the technical solution of the present invention, and does not constitute an admission that it is prior art.
Disclosure of Invention
The main object of the present invention is to provide a face data acquisition method, which aims to solve the technical problems that the volume of acquired face data is small and the acquisition is not accurate enough, and to improve the effectiveness of face data acquisition.
In order to achieve the above object, the present invention provides a face data acquisition method, comprising the steps of:
receiving a user expression;
recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression;
and collecting the user expression.
Preferably, after the step of collecting the user expression, the face data collecting method further includes the steps of:
respectively constructing a training set and a testing set according to the acquired user expression and the corresponding identification result;
and optimizing the expression recognition model based on a machine learning method according to the training set and the testing set.
Preferably, the step of receiving the user expression includes:
outputting an expression prompt signal, wherein the expression prompt signal is randomly generated according to the expression types in a preset expression library;
collecting image data of a user;
and extracting the user expression according to the image data, wherein the user expression is obtained according to the expression prompt signal.
Preferably, the step of recognizing the user expression based on the expression recognition model and outputting the recognition result of the user expression includes:
decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions;
calculating the sum of the decomposition matching degrees of all the micro expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree;
comparing the matching degree with a first preset matching degree;
when the matching degree is greater than or equal to the first preset matching degree, converting expression scores according to the matching degree;
and outputting the expression score.
Preferably, after the step of comparing the matching degree with the first preset matching degree, the method further comprises the following steps:
outputting an expression confirmation signal when the matching degree is smaller than the first preset matching degree;
receiving a user expression obtained according to the expression confirmation signal;
and returning to the step of decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions until the accumulated times of continuously outputting the expression confirmation signals are greater than or equal to the preset times.
Preferably, the step of recognizing the user expression based on the expression recognition model and outputting the recognition result of the user expression includes:
identifying the user expression based on an expression identification model, and outputting expression identification information corresponding to the user expression;
receiving a confirmation instruction obtained according to the expression identification information;
and determining the matching degree between the expression identification information and the user expression according to the confirmation instruction.
Preferably, the step of collecting the user expression includes:
comparing the matching degree corresponding to the identification result with a second preset matching degree;
and when the matching degree corresponding to the identification result is greater than or equal to a second preset matching degree, acquiring the user expression.
In order to achieve the above object, the present invention further provides face data acquisition equipment, including: a memory, a processor, and a face data acquisition program stored on the memory and executable on the processor; when executed by the processor, the face data acquisition program implements the steps of the face data acquisition method described above: receiving a user expression; recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression; and collecting the user expression.
In order to achieve the above object, the present invention further provides a face data acquisition device, including:
the receiving module is used for receiving the expression of the user;
the recognition module is used for recognizing the user expression based on the expression recognition model and outputting a recognition result of the user expression;
and the acquisition module is used for acquiring the user expression.
In order to achieve the above object, the present invention also provides a computer-readable storage medium having a face data collection program stored thereon; when executed by a processor, the face data collection program implements the steps of the face data collection method: receiving a user expression; recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression; and collecting the user expression.
In the technical solution of the present invention, the face data acquisition method comprises the following steps: receiving a user expression; recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression; and collecting the user expression. The face data acquisition method can be implemented as a WeChat mini program or a standalone game application; the game format stimulates the user's enthusiasm, enriching the scenes in which expressions are acquired, so that a large amount of face data can be acquired, strengthening the data foundation required for face recognition. The game format also helps to relieve the user's nervousness, so that user expressions are expressed naturally across a variety of scenes, avoiding the problem of inaccurately acquired expressions. In summary, the face data acquisition method of this solution helps to increase both the quantity and the quality of acquired user expressions, thereby improving the effectiveness of face data acquisition. Combined with subsequent optimization of the expression recognition model, a sticky operational closed loop can be formed, promoting the vitality of the client-side product.
Drawings
FIG. 1 is a flowchart of a face data acquisition method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a face data acquisition method according to a second embodiment of the present invention;
fig. 3 is a schematic diagram of a refinement flow of step S100 in a third embodiment of the face data acquisition method of the present invention;
fig. 4 is a schematic diagram of a refinement flow of step S200 in a fourth embodiment of the face data acquisition method of the present invention;
fig. 5 is a schematic diagram of a refinement flow of step S200 in a fifth embodiment of the face data acquisition method of the present invention;
fig. 6 is a schematic diagram of a refinement flow of step S200 in a sixth embodiment of the face data acquisition method of the present invention;
fig. 7 is a schematic diagram of a refinement flow of step S300 in a seventh embodiment of the face data acquisition method of the present invention;
fig. 8 is a schematic structural diagram of a face data acquisition device in a hardware operating environment according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solution of the embodiments of the present invention is: in the process of receiving and recognizing a user expression, the user expression is collected.
In the prior art, user expressions are usually collected in specific scenes, so the quantity collected is small, and the collection is easily made inaccurate by the user's nervousness and the like.
The present invention provides a solution: the user expression is collected during the process of receiving and recognizing it, which increases the quantity of user expressions collected, helps to relieve the user's nervousness, improves the accuracy of expression collection, and thereby improves the effectiveness of expression collection.
The first embodiment of the present invention provides a face data acquisition method, as shown in fig. 1, including the following steps:
step S100, receiving a user expression;
step S200, recognizing the user expression based on the expression recognition model, and outputting a recognition result of the user expression;
and step S300, collecting the expression of the user.
Specifically, the face data acquisition method can be implemented as a WeChat mini program or as a standalone game application; the game format stimulates the user's interest, so that a sufficient quantity of user expressions can be acquired while the user's nervousness is relieved, improving the accuracy of expression acquisition. In the process of collecting face data, the game may be a single-player game, in which recognition of the user expression is achieved through interaction between the user and the face data acquisition program, and the received user expression is collected at the same time. The game may also be a social game, in which several users each give a corresponding expression, and the face data acquisition program recognizes and compares them, increasing the fun of the game while acquiring the expressions of multiple different users. When receiving a user expression, the expression may be received directly, and the emotion it represents recognized by the expression recognition model for the user to confirm; alternatively, a keyword may first be given, and the expression the user makes according to the keyword is then received and scored. Both game modes are described in detail later. Since the main purpose of this solution is to collect user expressions, the recognition accuracy required of the expression recognition model is not high; nevertheless, recognizing user expressions and repeatedly training the expression recognition model with them improves its recognition accuracy to some extent. The recognition result may be output directly, or output after a certain conversion, so that it better matches the user's expectations and increases the user's enthusiasm.
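As an illustration of the three-step flow above, the following is a minimal sketch in Python; the model object with a recognize() method and the capture_user_expression() helper are hypothetical placeholders, since the disclosure prescribes no concrete interfaces:

# Minimal sketch of the three-step acquisition flow (steps S100 to S300).
# `model` stands for an expression recognition model with a recognize()
# method, and capture_user_expression() for the code that receives a user
# expression; both are hypothetical placeholders.

def acquire_face_data(model, capture_user_expression, collected):
    # Step S100: receive a user expression (e.g. a cropped face image).
    user_expression = capture_user_expression()

    # Step S200: recognize the expression with the expression recognition
    # model and output the recognition result.
    recognition_result = model.recognize(user_expression)

    # Step S300: collect the user expression together with its recognition
    # result for later optimization of the model.
    collected.append((user_expression, recognition_result))
    return recognition_result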
In this embodiment, the face data acquisition method comprises the following steps: receiving a user expression; recognizing the user expression based on the expression recognition model, and outputting a recognition result of the user expression; and collecting the user expression. The method can be implemented as a WeChat mini program or a standalone game application; the game format stimulates the user's enthusiasm, enriching the scenes in which expressions are acquired, so that a large amount of face data can be acquired, strengthening the data foundation required for face recognition. The game format also helps to relieve the user's nervousness, so that user expressions are expressed naturally across a variety of scenes, avoiding the problem of inaccurately acquired expressions. In summary, the face data acquisition method of this solution helps to increase both the quantity and the quality of acquired user expressions, thereby improving the effectiveness of face data acquisition. Combined with subsequent optimization of the expression recognition model, a sticky operational closed loop can be formed, promoting the vitality of the client-side product.
Based on the first embodiment, as shown in fig. 2, in a second embodiment of the present invention, after the step of collecting the user expression, the face data collecting method further includes the following steps:
step S410, respectively constructing a training set and a testing set according to the acquired user expression and the corresponding recognition result;
step S420, optimizing the expression recognition model based on the machine learning method according to the training set and the testing set.
In this embodiment, after the user expressions are collected, the expression recognition model is optimized based on the collected user expressions. Specifically, according to the matching degree of each previously recognized user expression, its role in optimizing the expression recognition model can be determined, so that a reasonable training set and test set are constructed; the parameters of the expression recognition model are trained with the training set, and the trained model is verified with the test set, improving the optimization effect of the expression recognition model.
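The following Python sketch illustrates this optimization step; model.fit() and model.evaluate() are hypothetical placeholders, and the 80/20 split ratio is an assumption, since the disclosure only requires that a training set and a test set be constructed:

import random

def optimize_model(model, collected, test_ratio=0.2, seed=0):
    """Steps S410 to S420: build a training set and a test set from the
    collected (expression, recognition result) pairs, train the expression
    recognition model on the former, and verify it on the latter."""
    data = list(collected)
    random.Random(seed).shuffle(data)
    split = int(len(data) * (1 - test_ratio))
    train_set, test_set = data[:split], data[split:]
    model.fit(train_set)                   # train the model parameters
    return model.evaluate(test_set)        # verify the trained model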
Based on the above embodiments, as shown in fig. 3, in a third embodiment of the present invention, the step of receiving the user expression includes:
step S110, outputting an expression prompt signal, wherein the expression prompt signal is randomly generated according to the expression types in a preset expression library;
step S120, collecting image data of a user;
and step S130, extracting a user expression according to the image data, wherein the user expression is obtained according to the expression prompt signal.
In this embodiment, in order to collect face data in a targeted manner and improve the optimization effect of the expression recognition model, the user is guided by an output expression prompt signal to make an expression matching that signal. The expression prompt signal is randomly generated according to the expression categories in a preset expression library, and may be a refined signal covering a variety of emotions so as to guide the user to attend to changes in micro-expression: for example, refined emotion prompts such as "beaming with smiles" and "calm and content" under the broad emotion category of "happiness", and refined emotion prompts such as "glaring in anger" and "burning with rage" under the broad emotion category of "anger". Refined emotion prompts make it easy for the user to adjust their own micro-expression to match the indication of the expression prompt signal, so that the expression recognition model can be optimized at a finer granularity. After the expression prompt signal is output, image data of the user is collected, and the user expression is extracted from the image data, yielding the expression the user made in response to the expression prompt signal for further recognition and collection.
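A minimal sketch of the prompt generation in step S110 (Python); the contents of the preset expression library are illustrative, loosely following the examples above:

import random

# Hypothetical preset expression library: broad emotion categories mapped to
# refined emotion prompts, loosely following the examples in the description.
EXPRESSION_LIBRARY = {
    "happiness": ["beaming with smiles", "calm and content"],
    "anger": ["glaring in anger", "burning with rage"],
}

def output_expression_prompt(library=EXPRESSION_LIBRARY):
    """Step S110: randomly generate an expression prompt signal from the
    expression categories in the preset expression library."""
    category = random.choice(list(library))       # pick a broad emotion category
    keyword = random.choice(library[category])    # pick a refined emotion prompt
    return category, keyword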
Based on the above-described third embodiment, as shown in fig. 4, in a fourth embodiment of the present invention, the step of recognizing the user expression based on the expression recognition model and outputting the recognition result of the user expression includes:
s210, decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions;
step S220, calculating the sum of the decomposition matching degrees of all the micro-expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree;
step S230, comparing the matching degree with a first preset matching degree;
step S240, converting expression scores according to the matching degree when the matching degree is greater than or equal to a first preset matching degree;
step S250, outputting expression scores.
In order to encourage the user to make the correct expression according to the expression prompt signal, so that the acquired face data is more accurate, the matching degree between the received user expression and the expression prompt signal can be calculated based on the expression recognition model, and an expression score converted from that matching degree can be given; this provides the user with a game, or provides reference data for a game among several users, increasing the user's interest.
In the process of recognizing face data, the expression recognition model first decomposes the user expression and matches it against each kind of micro-expression, obtaining a decomposition matching degree for each micro-expression. Because this matching is very fine-grained, each individual decomposition matching degree is often low; the sum of the decomposition matching degrees of all micro-expressions in the expression category corresponding to the expression prompt signal is therefore calculated to raise the matching degree. An expression category is a class of expressions summarized from the micro-expressions that correspond to a broad emotion category; for example, micro-expressions corresponding to emotions such as "beaming with smiles", "optimism", "admiration", "inspiration", "elation", "trust", and "calm contentment" may all be grouped into the expression category "joy".
However, the matching degree obtained at this point is often still not high, and directly outputting it as a score would easily dampen the user's enthusiasm. Meanwhile, the main purpose of the face data acquisition method in this solution is to acquire face data, and high recognition accuracy is not required; the matching degree can therefore be processed through a conversion to obtain a relatively higher expression score that matches the user's expectations. In a specific example, for a given user expression, the expression recognition model can often identify several corresponding refined emotions, each with a corresponding weight. When the original matching degree is processed, all refined emotions under the same broad emotion are aggregated, the weights of the refined emotions under the same broad emotion are added, and the result is matched against the expression prompt signal. If the correct expression category has the highest weight in the recognition result, the user expression is judged essentially correct, and a correspondingly higher score is given in the subsequent conversion. If the correct expression category does not have the highest weight, the user expression is judged wrong, and in the subsequent conversion a corresponding correction score is added before the expression score is output. Further, even when the correct expression category has the highest weight after aggregation, if the resulting expression score is still low, a correction score determined by a preset rule and by the interval in which the expression score falls is added to raise the value of the expression score.
As shown in the following table, the correction score determined for each expression score interval is:
Expression score (a) interval    Correction score (Δa)    Corrected expression score (a+Δa)
0 ≤ a ≤ 40                       40                       a+40
40 < a ≤ 50                      30                       a+30
50 < a ≤ 70                      25                       a+25
70 < a ≤ 90                      10                       a+10
90 < a ≤ 100                     0                        a
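The following Python sketch illustrates this scoring scheme; the summation follows step S220 and the interval boundaries follow the table above, while the data structures (a mapping of micro-expressions to decomposition matching degrees, and the set of micro-expressions in the prompted category) are illustrative assumptions not fixed by the disclosure:

def matching_degree(decomposition, category_members):
    """Step S220: sum the decomposition matching degrees of all
    micro-expressions belonging to the prompted expression category.

    decomposition    -- dict mapping micro-expression name to its
                        decomposition matching degree (step S210 output)
    category_members -- set of micro-expression names in the prompted category
    """
    return sum(v for k, v in decomposition.items() if k in category_members)

def correct_expression_score(a):
    """Apply the interval-based correction score from the table above to a
    raw expression score a in [0, 100]."""
    if a <= 40:
        return a + 40
    if a <= 50:
        return a + 30
    if a <= 70:
        return a + 25
    if a <= 90:
        return a + 10
    return a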
Meanwhile, when several users hold an expression contest, in order to avoid several users ending up with identical expression scores, which would reduce their enthusiasm, the corrected expression scores may be readjusted according to a correction probability, so that the scores of the users participating in the comparison are distributed approximately normally. This differentiates the expression scores of different users, enabling an effective expression contest among several people while the scores still meet the users' expectations, increasing the users' enthusiasm and allowing more face data to be collected.
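One possible realization of this readjustment, given only as an illustrative sketch because the disclosure does not specify the "correction probability" mechanism, is a rank-preserving remapping of the corrected scores onto a normal curve (the target mean and standard deviation below are assumptions):

from statistics import NormalDist

def redistribute_scores(scores, mean=80.0, std=8.0):
    """Rank-preserving remapping of corrected expression scores onto a
    normal curve, so that contestants' scores are distinguishable while
    following an approximately normal distribution."""
    n = len(scores)
    target = NormalDist(mean, std)
    order = sorted(range(n), key=lambda i: scores[i])   # ranks preserve ordering
    adjusted = [0.0] * n
    for rank, i in enumerate(order):
        q = (rank + 1) / (n + 1)                        # quantile in (0, 1)
        adjusted[i] = min(100.0, max(0.0, target.inv_cdf(q)))
    return adjusted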
Based on the fourth embodiment, as shown in fig. 5, in a fifth embodiment of the present invention, after the step of comparing the matching degree with the first preset matching degree, the method further comprises the following steps:
step S260, outputting an expression confirmation signal when the matching degree is smaller than a first preset matching degree;
step S270, receiving the user expression obtained according to the expression confirmation signal;
returning to step S210 until the cumulative number of consecutively output expression confirmation signals is greater than or equal to the preset number.
In this embodiment, when the matching degree between the user expression and the expression prompt signal is smaller than the first preset matching degree, that is, when the user expression is wrong, the user expression is received again by outputting an expression confirmation signal, so that the user can correct the expression without their enthusiasm being dampened. The expression confirmation signal may be a specific expression prompt identical to the expression prompt signal; alternatively, several other specific expression prompts may be given first, and the prompt identical to the expression prompt signal given only after the user has relaxed. The latter approach can to some extent identify whether an abnormal situation exists, improving the accuracy of expression acquisition. Of course, in order to keep the game efficient, if the user expression still differs greatly from the expression prompt signal after the expression confirmation signal has been output a preset number of consecutive times, the loop above is exited.
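A minimal sketch of this confirmation loop (Python); capture(), output_confirmation(), and model.matching_degree() are hypothetical placeholders, since the disclosure names no concrete interfaces:

def confirm_expression_loop(model, capture, output_confirmation, prompt,
                            first_threshold, preset_times):
    """Sketch of steps S260 to S270 plus the return to step S210."""
    confirmations = 0
    expression = capture()                     # user expression from step S100
    while True:
        degree = model.matching_degree(expression, prompt)   # steps S210-S220
        if degree >= first_threshold:
            return expression, degree          # correct: proceed to scoring
        confirmations += 1                     # one more consecutive confirmation
        if confirmations >= preset_times:
            return None, degree                # exit the loop after the preset count
        output_confirmation(prompt)            # step S260: ask the user to retry
        expression = capture()                 # step S270: receive the new expression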
Based on the above-described first and second embodiments, as shown in fig. 6, in a sixth embodiment of the present invention, the step of recognizing the user expression based on the expression recognition model and outputting the recognition result of the user expression includes:
step S281, recognizing the user expression based on the expression recognition model, and outputting expression recognition information corresponding to the user expression;
step S282, receiving a confirmation instruction obtained according to the expression recognition information;
step S283, according to the confirmation instruction, the matching degree between the expression recognition information and the expression of the user is determined.
Unlike the third to fifth embodiments described above, in which the user makes a corresponding expression according to a given expression prompt signal, in this embodiment the user directly gives an expression, and the face data acquisition program recognizes it based on the expression recognition model and outputs corresponding expression recognition information, such as "beaming with smiles", "calm and content", "glaring in anger", or "burning with rage". The user then confirms the output expression recognition information; specifically, this may be done by having the user give a score or select a corresponding grade, so that the matching degree between the expression recognition information and the user expression is determined according to the information the user gives, which helps improve the optimization of micro-expression recognition when the expression recognition model is optimized. Meanwhile, because the interaction is between the user and the face data acquisition program, the game can be played even when only one user is present, enlarging the scenes in which face data is collected and thus the amount of face data that can be acquired.
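A minimal sketch of this confirmation-based embodiment (Python); model.recognize(), ask_user(), and the grade-to-matching-degree mapping are illustrative assumptions:

# Illustrative mapping from a user-selected grade to a matching degree;
# the disclosure does not fix these grades or values.
GRADE_TO_MATCHING_DEGREE = {
    "exact": 1.0,
    "close": 0.7,
    "poor": 0.3,
    "wrong": 0.0,
}

def confirm_recognition(model, expression, ask_user):
    """Steps S281 to S283: recognize the expression, present the expression
    recognition information to the user, and derive a matching degree from
    the user's confirmation instruction."""
    info = model.recognize(expression)     # e.g. "beaming with smiles"
    grade = ask_user(info)                 # user confirms by choosing a grade
    return info, GRADE_TO_MATCHING_DEGREE[grade]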
Based on the above embodiments, as shown in fig. 7, in a seventh embodiment of the present invention, the step of collecting the user expression includes:
step S310, comparing the matching degree corresponding to the identification result with a second preset matching degree;
step S320, when the matching degree corresponding to the recognition result is greater than or equal to a second preset matching degree, the user expression is collected.
Considering that the collected face data is mainly used to optimize the expression recognition model, the face data can be screened preliminarily during collection: face data whose recognition result has too low a matching degree is discarded, and only face data whose matching degree is greater than or equal to the second preset matching degree is kept. This prevents low-matching face data from interfering with the optimization of the expression recognition model and improves the accuracy of the collected face data.
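A minimal sketch of this screening step (Python); the data structures are illustrative:

def collect_expression(collected, expression, degree, second_threshold):
    """Steps S310 to S320: keep a user expression only when the matching
    degree of its recognition result reaches the second preset matching
    degree, so that low-matching samples do not disturb later training."""
    if degree >= second_threshold:
        collected.append((expression, degree))
        return True
    return False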
As shown in fig. 8, fig. 8 is a schematic structural diagram of a terminal of a hardware operation environment, that is, a face data acquisition device according to an embodiment of the present invention.
The terminal of the embodiment of the present invention may be a server, a PC, or a mobile terminal device with a display function, such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
As shown in fig. 8, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the terminal may further include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and so on. The sensors may include, for example, a light sensor and a motion sensor. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display according to the ambient light, and a proximity sensor, which turns off the display and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes) and, when the terminal is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 8 does not constitute a limitation of the terminal, and the terminal may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 8, an operating system, a network communication module, a user interface module, and a face data collection program may be included in a memory 1005 as one type of computer storage medium.
In the terminal shown in fig. 8, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call a face data collection program stored in the memory 1005, and perform the following operations:
receiving a user expression;
recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression;
and collecting the user expression.
Further, the processor 1001 may be configured to invoke a face data collection program stored in the memory 1005, and after the operation of collecting the user expression, further perform the following operations:
respectively constructing a training set and a testing set according to the acquired user expression and the corresponding identification result;
and optimizing the expression recognition model based on a machine learning method according to the training set and the testing set.
Further, the processor 1001 may be configured to invoke a face data collection program stored in the memory 1005, and the operations for receiving the expression of the user include:
outputting an expression prompt signal, wherein the expression prompt signal is randomly generated according to the expression types in a preset expression library;
collecting image data of a user;
and extracting the user expression according to the image data, wherein the user expression is obtained according to the expression prompt signal.
Further, the processor 1001 may be configured to invoke a face data collection procedure stored in the memory 1005, identify the user expression based on the expression recognition model, and output a recognition result of the user expression, where the operations include:
decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions;
calculating the sum of the decomposition matching degrees of all the micro expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree;
comparing the matching degree with a first preset matching degree;
when the matching degree is greater than or equal to the first preset matching degree, converting expression scores according to the matching degree;
and outputting the expression score.
Further, the processor 1001 may be configured to invoke the face data collection program stored in the memory 1005, and after comparing the matching degree with the first preset matching degree, further perform the following operations:
outputting an expression confirmation signal when the matching degree is smaller than the first preset matching degree;
receiving a user expression obtained according to the expression confirmation signal;
and returning to the step of decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions until the accumulated times of continuously outputting the expression confirmation signals are greater than or equal to the preset times.
Further, the processor 1001 may be configured to invoke a face data collection procedure stored in the memory 1005, identify the user expression based on the expression recognition model, and output a recognition result of the user expression, where the operations include:
identifying the user expression based on an expression identification model, and outputting expression identification information corresponding to the user expression;
receiving a confirmation instruction obtained according to the expression identification information;
and determining the matching degree between the expression identification information and the user expression according to the confirmation instruction.
Further, the processor 1001 may be configured to invoke a face data collection program stored in the memory 1005, and the operations for collecting the user expression include:
comparing the matching degree corresponding to the identification result with a second preset matching degree;
and when the matching degree corresponding to the identification result is greater than or equal to a second preset matching degree, acquiring the user expression.
In addition, the embodiment of the invention also provides a face data acquisition device, which comprises:
the receiving module is used for receiving the expression of the user;
the recognition module is used for recognizing the user expression based on the expression recognition model and outputting a recognition result of the user expression;
and the acquisition module is used for acquiring the user expression.
Further, the face data acquisition device further includes:
the construction module is used for respectively constructing a training set and a testing set according to the acquired user expression and the corresponding recognition result;
and the model optimization module is used for optimizing the expression recognition model based on a machine learning method according to the training set and the testing set.
Further, the receiving module includes:
the signal output unit is used for outputting an expression prompt signal, wherein the expression prompt signal is randomly generated according to the expression types in a preset expression library;
the image acquisition unit is used for acquiring image data of a user;
and the expression extraction unit is used for extracting the expression of the user according to the image data, wherein the expression of the user is obtained according to the expression prompt signal.
Further, the identification module includes:
the decomposing unit is used for decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions;
the matching unit is used for calculating the sum of the decomposition matching degrees of all the micro expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree;
the first comparison unit is used for comparing the matching degree with a first preset matching degree;
the conversion unit is used for converting the expression score according to the matching degree when the matching degree is larger than or equal to the first preset matching degree;
and the score output unit is used for outputting the expression score.
Further, the signal output unit is further configured to output an expression confirmation signal when the matching degree is smaller than the first preset matching degree;
the receiving unit is also used for receiving the user expression obtained according to the expression confirmation signal;
the recognition module further comprises an accumulation unit for accumulating the accumulated times of continuously outputting the expression confirmation signals.
Further, the identification module includes:
the recognition information output unit is used for recognizing the user expression based on the expression recognition model and outputting expression recognition information corresponding to the user expression;
the instruction receiving unit is used for receiving a confirmation instruction obtained according to the expression identification information;
and the matching degree determining unit is used for determining the matching degree between the expression identifying information and the user expression according to the confirmation instruction.
Further, the acquisition module includes:
the second comparison unit is used for comparing the matching degree corresponding to the identification result with a second preset matching degree;
the acquisition unit is used for acquiring the user expression when the matching degree corresponding to the identification result is greater than or equal to a second preset matching degree.
In addition, the embodiment of the invention also provides a computer readable storage medium, the computer readable storage medium stores a face data acquisition program, and the face data acquisition program realizes the following operations when being executed by a processor:
receiving a user expression;
recognizing the user expression based on an expression recognition model, and outputting a recognition result of the user expression;
and collecting the user expression.
Further, when the face data acquisition program is executed by the processor, after the operation of acquiring the user expression, the following operations are further executed:
respectively constructing a training set and a testing set according to the acquired user expression and the corresponding identification result;
and optimizing the expression recognition model based on a machine learning method according to the training set and the testing set.
Further, when the face data collection program is executed by the processor, the operations of receiving the user expression include:
outputting an expression prompt signal, wherein the expression prompt signal is randomly generated according to the expression types in a preset expression library;
collecting image data of a user;
and extracting the user expression according to the image data, wherein the user expression is obtained according to the expression prompt signal.
Further, when the face data collection program is executed by the processor, the operations of identifying the user expression based on the expression identification model and outputting the identification result of the user expression include:
decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions;
calculating the sum of the decomposition matching degrees of all the micro expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree;
comparing the matching degree with a first preset matching degree;
when the matching degree is greater than or equal to the first preset matching degree, converting expression scores according to the matching degree;
and outputting the expression score.
Further, when the face data acquisition program is executed by the processor, after the operation of comparing the matching degree with the first preset matching degree, the following operations are further executed:
outputting an expression confirmation signal when the matching degree is smaller than the first preset matching degree;
receiving a user expression obtained according to the expression confirmation signal;
and returning to the step of decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions until the accumulated times of continuously outputting the expression confirmation signals are greater than or equal to the preset times.
Further, when the face data collection program is executed by the processor, the operations of identifying the user expression based on the expression identification model and outputting the identification result of the user expression include:
identifying the user expression based on an expression identification model, and outputting expression identification information corresponding to the user expression;
receiving a confirmation instruction obtained according to the expression identification information;
and determining the matching degree between the expression identification information and the user expression according to the confirmation instruction.
Further, when the face data collection program is executed by the processor, the operations for collecting the user expression include:
comparing the matching degree corresponding to the identification result with a second preset matching degree;
and when the matching degree corresponding to the identification result is greater than or equal to a second preset matching degree, acquiring the user expression.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the embodiments above may be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods of the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the content of the present specification and drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (6)

1. The face data acquisition method is characterized by comprising the following steps of:
outputting an expression prompt signal, wherein the expression prompt signal comprises a keyword, and the expression prompt signal is randomly generated according to the expression types in a preset expression library;
collecting image data of a user;
extracting a user expression according to the image data, wherein the user expression is obtained according to the expression prompt signal;
identifying the user expression based on an expression identification model, and outputting an identification result of the user expression, wherein the identification result comprises an expression score;
comparing the matching degree corresponding to the identification result with a second preset matching degree;
when the matching degree corresponding to the identification result is greater than or equal to a second preset matching degree, acquiring the user expression;
the step of identifying the user expression based on the expression identification model and outputting the identification result of the user expression comprises the following steps:
decomposing the user expression, and matching the user expression with various micro-expressions to obtain the decomposition matching degree of the user expression relative to various micro-expressions;
calculating the sum of the decomposition matching degrees of all the micro expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree;
comparing the matching degree with a first preset matching degree;
when the matching degree is greater than or equal to the first preset matching degree, converting expression scores according to the matching degree;
outputting the expression score;
the step of identifying the user expression based on the expression identification model and outputting the identification result of the user expression comprises the following steps:
identifying the user expression based on an expression identification model, identifying a refined emotion corresponding to the user expression, and determining a weight corresponding to the refined emotion;
outputting expression scores of the user expressions based on the weights;
the step of identifying the user expression based on the expression identification model and outputting the identification result of the user expression comprises the following steps:
identifying the user expression based on an expression identification model, and outputting expression identification information corresponding to the user expression;
receiving a confirmation instruction obtained according to the expression identification information;
and determining the matching degree between the expression identification information and the user expression according to the confirmation instruction.
2. The face data collection method according to claim 1, wherein after the step of collecting the user expression, the face data collection method further comprises the steps of:
respectively constructing a training set and a testing set according to the acquired user expression and the corresponding identification result;
and optimizing the expression recognition model based on a machine learning method according to the training set and the testing set.
3. The face data acquisition method according to claim 1, further comprising, after the step of comparing the matching degree with a first preset matching degree, the steps of:
outputting an expression confirmation signal when the matching degree is smaller than the first preset matching degree;
receiving a user expression obtained according to the expression confirmation signal;
and returning to the step of decomposing the user expression to obtain the decomposition matching degree of the user expression relative to various micro expressions until the accumulated times of continuously outputting the expression confirmation signals are greater than or equal to the preset times.
4. A face data acquisition device, characterized in that the face data acquisition device comprises: a memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor, performs the steps of the face data acquisition method as claimed in any one of claims 1 to 3.
5. A face data acquisition device, characterized in that the face data acquisition device comprises:
the receiving module is used for outputting an expression prompt signal, wherein the expression prompt signal comprises a keyword, and the expression prompt signal is randomly generated according to the expression types in a preset expression library; collecting image data of a user; extracting a user expression according to the image data, wherein the user expression is obtained according to the expression prompt signal;
the recognition module is used for recognizing the user expression based on the expression recognition model and outputting a recognition result of the user expression, wherein the recognition result comprises expression scores;
the acquisition module is used for comparing the matching degree corresponding to the identification result with a second preset matching degree; when the matching degree corresponding to the identification result is greater than or equal to a second preset matching degree, acquiring the user expression;
the recognition module is also used for decomposing the user expression, and matching the user expression with various micro-expressions to obtain the decomposition matching degree of the user expression relative to various micro-expressions; calculating the sum of the decomposition matching degrees of all the micro expressions in the expression category corresponding to the expression prompting signal to obtain the matching degree; comparing the matching degree with a first preset matching degree; when the matching degree is greater than or equal to the first preset matching degree, converting expression scores according to the matching degree; outputting the expression score;
the recognition module is further used for recognizing the user expression based on an expression recognition model, recognizing a refined emotion corresponding to the user expression and determining a weight corresponding to the refined emotion; outputting expression scores of the user expressions based on the weights;
the recognition module is further used for recognizing the user expression based on an expression recognition model and outputting expression recognition information corresponding to the user expression; receiving a confirmation instruction obtained according to the expression identification information; and determining the matching degree between the expression identification information and the user expression according to the confirmation instruction.
6. A computer-readable storage medium, wherein a face data acquisition program is stored on the computer-readable storage medium, which when executed by a processor, implements the steps of the face data acquisition method according to any one of claims 1 to 3.
CN201811127324.6A 2018-09-26 2018-09-26 Face data acquisition method, equipment, device and computer readable storage medium Active CN109670393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811127324.6A CN109670393B (en) 2018-09-26 2018-09-26 Face data acquisition method, equipment, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811127324.6A CN109670393B (en) 2018-09-26 2018-09-26 Face data acquisition method, equipment, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109670393A CN109670393A (en) 2019-04-23
CN109670393B (en) 2023-12-19

Family

ID=66142007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811127324.6A Active CN109670393B (en) 2018-09-26 2018-09-26 Face data acquisition method, equipment, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109670393B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110825808A (en) * 2019-09-23 2020-02-21 重庆特斯联智慧科技股份有限公司 Distributed human face database system based on edge calculation and generation method thereof
CN113361415A (en) * 2021-06-08 2021-09-07 浙江工商大学 Micro-expression data set collection method based on crowdsourcing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636734A (en) * 2015-02-28 2015-05-20 深圳市中兴移动通信有限公司 Terminal face recognition method and device
WO2017006872A1 (en) * 2015-07-03 2017-01-12 学校法人慶應義塾 Facial expression identification system, facial expression identification method, and facial expression identification program
CN106649712A (en) * 2016-12-20 2017-05-10 北京小米移动软件有限公司 Method and device for inputting expression information
CN106778621A (en) * 2016-12-19 2017-05-31 四川长虹电器股份有限公司 Facial expression recognizing method
CN108564007A (en) * 2018-03-27 2018-09-21 深圳市智能机器人研究院 A kind of Emotion identification method and apparatus based on Expression Recognition


Also Published As

Publication number Publication date
CN109670393A (en) 2019-04-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant