CN110717377B - Face driving risk prediction model training and prediction method thereof and related equipment - Google Patents


Info

Publication number
CN110717377B
CN110717377B (application CN201910789702.5A)
Authority
CN
China
Prior art keywords
face
driving risk
face image
risk
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910789702.5A
Other languages
Chinese (zh)
Other versions
CN110717377A (en)
Inventor
肖嵘 (Xiao Rong)
顾青山 (Gu Qingshan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910789702.5A priority Critical patent/CN110717377B/en
Priority to PCT/CN2019/118607 priority patent/WO2021035983A1/en
Publication of CN110717377A publication Critical patent/CN110717377A/en
Application granted granted Critical
Publication of CN110717377B publication Critical patent/CN110717377B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; enterprise or organisation planning; enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities

Abstract

A training method for a face driving risk prediction model comprises the following steps: constructing a quadruple from the face images and driving risk scores of two different persons selected from a first face database, together with two different face images and the identification of one person selected from a second face database; calculating a first risk loss value and a second risk loss value according to the quadruple and the face driving risk prediction model; calculating a target risk loss value based on the two risk loss values; and determining the parameters at which the target risk loss value reaches its minimum as the optimal parameters, and outputting the face driving risk prediction model corresponding to the optimal parameters. The invention also provides a face driving risk prediction method, a face driving risk prediction model training device, a terminal and a storage medium. The method constructs its data source in a heterogeneous manner, solving the problem of insufficient data for training the face driving risk prediction model, and the trained face driving risk prediction model has higher prediction accuracy and prediction stability.

Description

Face driving risk prediction model training and prediction method thereof and related equipment
Technical Field
The invention relates to the technical field of risk prediction, and in particular to a face driving risk prediction model training method, a face driving risk prediction method, a related device, a terminal and a storage medium.
Background
In recent years, China's transportation industry has developed rapidly and residents' car ownership has risen sharply. Road traffic accidents, such as rear-end collisions and vehicle rollovers, have become correspondingly frequent; most of these accidents are caused by dangerous driving behaviors such as excessive speed and fatigued driving.
Analyzing a driver's driving risk accurately and objectively in advance is valuable: a vehicle insurance company can determine the risk level of a policy application according to the accurate driving risk estimate, and a traffic management department can determine, for example, the duration of a driver's safe-driving training course according to the same estimate.
At present, a driver's driving risk score is evaluated by training a driving risk score recognition model. In practice, however, multiple constraints make the available training samples severely insufficient, so the trained driving risk score recognition model generalizes poorly; face images of the same driver under different angles, lighting, times, backgrounds and other conditions are also difficult to obtain, so the trained model's accuracy is low.
Therefore, a new training scheme for the face driving risk prediction model is needed to solve the technical problems of insufficient training samples and the limited viewpoint coverage of those samples.
Disclosure of Invention
In view of the above, it is necessary to provide a face driving risk prediction model training method, apparatus, terminal and storage medium that construct a data source in a heterogeneous manner, solve the problem of insufficient training data for the face driving risk prediction model, and yield a trained model with higher prediction accuracy and prediction stability.
The invention provides a training method of a human face driving risk prediction model, which comprises the following steps:
selecting a first face image of a first person and a corresponding first driving risk score, and a second face image of a second person and a corresponding second driving risk score from a first face database;
selecting a third face image and a fourth face image of a third person and corresponding identifications from a second face database;
constructing a four-tuple according to the first face image, the first driving risk score, the second face image, the second driving risk score, the third face image, the fourth face image and the identifier;
calculating a first risk loss value and a second risk loss value according to the quadruple and a face driving risk prediction model, wherein the face driving risk prediction model is denoted G(·|w), and w is the target parameter to be optimized;
calculating a target risk loss value based on the first risk loss value and the second risk loss value;
and determining the parameters at which the target risk loss value calculated by the gradient back-propagation algorithm reaches its minimum as the optimal parameters, and outputting the face driving risk prediction model corresponding to the optimal parameters.
In an alternative embodiment, the first risk loss value is calculated using the following formula:
T0(w) = (1/N) Σ_{u,v} max(0, m − (ŷ_u − ŷ_v))
wherein T0(w) is the first risk loss value, m is the margin, N = Σ_{u,v} 1 is the number of selected image pairs, ŷ_u is the driving risk prediction value corresponding to the first face image u arbitrarily selected from the first face database, and ŷ_v is the driving risk prediction value corresponding to the second face image v arbitrarily selected from the first face database, each pair being ordered so that the true driving risk score of u is greater than that of v.
In an alternative embodiment, the second risk loss value is calculated using the following formula:
T3(w) = (1/M) Σ_{s,t} max(0, (ŷ_s − ŷ_t)² − (ŷ_u − ŷ_v)²)
wherein T3(w) is the second risk loss value, (ŷ_u − ŷ_v)² represents the variance between the driving risk prediction values of different people, (ŷ_s − ŷ_t)² represents the variance between the driving risk prediction values of the same person, M = Σ_{s,t} 1 is the number of selected image pairs, ŷ_s is the driving risk prediction value corresponding to the third face image s arbitrarily selected from the second face database, and ŷ_t is the driving risk prediction value corresponding to the fourth face image t arbitrarily selected from the second face database.
In an alternative embodiment, the target risk loss value is calculated using the following formula:
L(w)=λT0(w)+(1-λ)T3(w)
where L (w) is the target risk loss value and λ ∈ [0, 1] is the weighting coefficient.
In an optional embodiment, said constructing a quadruple from said first facial image, said first driving risk score, said second facial image, said second driving risk score, said third facial image, said fourth facial image and said identification comprises:
taking the first face image and the first driving risk score as a first data pair;
taking the second face image and the second driving risk score as a second data pair;
taking the third face image and the identifier as a third data pair;
taking the fourth face image and the identification as a fourth data pair;
and taking the first data pair, the second data pair, the third data pair and the fourth data pair each as one unit of the quadruple.
A second aspect of the present invention provides a method for predicting a driving risk of a human face, the method including:
acquiring a face image of a driver to be detected;
inputting the face image into a face driving risk prediction model, wherein the face driving risk prediction model is obtained by pre-training according to the face driving risk prediction model training method;
acquiring an output result of the human face driving risk prediction model;
and determining the driving risk score of the driver to be tested according to the output result.
A third aspect of the present invention provides a facial driving risk prediction model training apparatus, including:
the first selection module is used for selecting a first face image of a first person and a corresponding first driving risk score, and a second face image of a second person and a corresponding second driving risk score from a first face database;
the second selection module is used for selecting a third face image and a fourth face image of a third person and corresponding identifications from the second face database;
an array construction module, configured to construct a quadruple according to the first face image, the first driving risk score, the second face image, the second driving risk score, the third face image, the fourth face image, and the identifier;
the first calculation module is used for calculating a first risk loss value and a second risk loss value according to the quadruple and a face driving risk prediction model, wherein the face driving risk prediction model is denoted G(·|w), and w is the target parameter to be optimized;
a second calculation module to calculate a target risk loss value based on the first risk loss value and the second risk loss value;
and the model determining module is used for determining the parameters at which the target risk loss value calculated by the gradient back-propagation algorithm reaches its minimum as the optimal parameters, and outputting the face driving risk prediction model corresponding to the optimal parameters.
A fourth aspect of the present invention provides a human face driving risk prediction apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a face image of a driver to be detected;
the image input module is used for inputting the face image into a face driving risk prediction model, wherein the face driving risk prediction model is obtained by pre-training according to the face driving risk prediction model training method;
the second acquisition module is used for acquiring an output result of the human face driving risk prediction model;
and the risk determination module is used for determining the driving risk score of the driver to be tested according to the output result.
A fifth aspect of the present invention provides a terminal, including a processor, configured to implement the facial driving risk prediction model training method or the facial driving risk prediction method when executing a computer program stored in a memory.
A sixth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the facial driving risk prediction model training method or implements the facial driving risk prediction method.
In summary, the face driving risk prediction model training method, the face driving risk prediction method, the device, the terminal and the storage medium of the invention adopt a heterogeneous mode to construct a data source, solve the problem of insufficient data of a training face driving risk prediction model by using internal business data (a first face database comprising one face image and driving risk score per person) and an external public face recognition database (a second face database comprising a plurality of face images per person and no driving risk score), and improve the prediction capability of the face driving risk prediction model; a new face risk prediction model algorithm is obtained by constructing a quadruple (different driving risk score pairs and different photo pairs of the same person) in combination with the Tetrad Loss and the SoftRank Loss, and the trained face driving risk prediction model has high prediction accuracy and prediction stability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a facial driving risk prediction model training method according to an embodiment of the present invention.
Fig. 2 is a flowchart of a facial driving risk prediction method according to a second embodiment of the present invention.
Fig. 3 is a structural diagram of a facial driving risk prediction model training device according to a third embodiment of the present invention.
Fig. 4 is a structural diagram of a facial driving risk prediction apparatus according to a fourth embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a terminal according to a fifth embodiment of the present invention.
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. The described embodiments are merely some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Example one
Fig. 1 is a flowchart of a facial driving risk prediction model training method according to an embodiment of the present invention.
In this embodiment, the face driving risk prediction model training method may be applied to a terminal. For a terminal that needs to perform face driving risk prediction model training, the training function provided by the method of the present invention may be directly integrated on the terminal, or may run on the terminal in the form of a Software Development Kit (SDK).
As shown in fig. 1, the method for training a facial driving risk prediction model specifically includes the following steps, and the order of the steps in the flowchart may be changed and some may be omitted according to different requirements.
S11, selecting a first face image of a first person and a corresponding first driving risk score, and a second face image of a second person and a corresponding second driving risk score from the first face database.
The first face database may include pre-constructed face images and the corresponding driving risk scores.
The pre-constructed face image can be derived from a car insurance service database, banking service data, a credit investigation database and the like.
The driving risk score may be calculated by a pre-trained driving risk score recognition model, which may be any suitable intelligent image recognition decision model, such as a deep convolutional neural network. The driving risk score recognition model is trained on face images from a car insurance business database, banking business data and a credit investigation database; experiments show, however, that its accuracy is low and its generalization ability poor, so a face driving risk prediction model needs to be retrained to improve driving risk prediction for face images captured in any environment. Since the focus of the present invention is not the driving risk score recognition model, its training process is not described in detail herein.
After the first face database is established, the face images of two different persons and the corresponding driving risk scores are arbitrarily selected from it. The face image of the first person is called the first face image, and the driving risk score corresponding to it is called the first driving risk score; the face image of the second person is called the second face image, and the driving risk score corresponding to it is called the second driving risk score. The first driving risk score differs from the second driving risk score.
The first face database is a face risk database.
And S12, selecting a third face image and a fourth face image of a third person and corresponding identifications from the second face database.
The second face database may include pre-selected face images and the corresponding identifications.
The pre-selected face image may be derived from some well-known face databases, such as LFW face database, MegaFace face database, and the like.
The identification refers to an identity attribute used to distinguish different people, and may be a numeric label.
After the second face database is selected, two different face images of the same person and the corresponding identification are arbitrarily selected from it. For ease of distinction, the person selected from the second face database is called the third person, and the two different face images of the third person are called the third face image and the fourth face image respectively.
The second face database is a face recognition database.
Because face images are affected by many environmental factors (different illumination, backgrounds, expressions, poses, imaging devices such as mobile phones and cameras, and even different compression modes or storage formats), face images of the same person can differ greatly from one another. Generally, only one face image of a driver is stored in a car insurance business database, banking business data or a credit investigation database, and a face driving risk prediction model trained on a single face image is very sensitive to environmental factors. That is, driving risk scores calculated from face images of the same person in different environments differ greatly, so the face driving risk prediction has low accuracy and low reliability.
As is known, each person in a well-known face database has from more than ten to hundreds of face images, covering different imaging conditions. A face recognition model trained on such a database has a high recognition rate and high stability under different imaging conditions. However, the data in these well-known face databases carry no driving risk labels, so they cannot be used directly to train a face driving risk prediction model.
Therefore, the first face database and the second face database can jointly serve as the data set: the data in the face risk database and the face recognition database are integrated in this heterogeneous manner to train the face driving risk prediction model, which solves the problem of insufficient data in the face risk database and gives the trained model strong generalization ability.
S13, constructing a quadruple according to the first face image, the first driving risk score, the second face image, the second driving risk score, the third face image, the fourth face image and the identification.
The selected first face image, first driving risk score, second face image, second driving risk score, third face image, fourth face image and identification are each taken as a factor, and a quadruple is constructed from these factors.
In an optional embodiment, said constructing a quadruple from said first facial image, said first driving risk score, said second facial image, said second driving risk score, said third facial image, said fourth facial image and said identification comprises:
taking the first face image and the first driving risk score as a first data pair;
taking the second face image and the second driving risk score as a second data pair;
taking the third face image and the identifier as a third data pair;
taking the fourth face image and the identification as a fourth data pair;
and taking the first data pair, the second data pair, the third data pair and the fourth data pair each as one unit of the quadruple.
Illustratively, assume that the data in the first face database is D1 = {(x_i, y_i)} and the data in the second face database is D2 = {(z_j, c_j)}, wherein x_i and z_j respectively represent the face images in the first face database and the second face database, y_i represents the driving risk score of the corresponding face image in the first face database, and c_j represents the identification of the corresponding face image in the second face database.
Face images and driving risk scores of two different persons are randomly selected from the first face database to form a first data pair (x_u, y_u) and a second data pair (x_v, y_v), assuming y_u ≠ y_v. Two different photos of the same person and the corresponding identification are randomly selected from the second face database to form a third data pair (z_s, c_s) and a fourth data pair (z_t, c_t); obviously, c_s = c_t. Connecting the first data pair (x_u, y_u), the second data pair (x_v, y_v), the third data pair (z_s, c_s) and the fourth data pair (z_t, c_t) then forms a quadruple. Because the quadruple contains the face images and driving risk scores of different people as well as different face images and the identification of the same person, the data in the quadruple can comprehensively cover the shooting environments of face images, so a face driving risk prediction model trained on such quadruples is highly stable, and the driving risk scores predicted from face images of the same driver collected in different environments tend to be the same.
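The selection and pairing procedure described above can be sketched as follows. This is a minimal illustration only: the function and variable names are hypothetical, and the "images" are toy string stand-ins for real face data.

```python
import random

def build_quadruple(risk_db, recog_db, rng=None):
    """Construct one training quadruple from the two heterogeneous databases.

    risk_db:  list of (face_image, driving_risk_score) pairs, one per person
              (the "first face database" / face risk database).
    recog_db: dict mapping person identification -> list of face images
              (the "second face database" / face recognition database).
    """
    rng = rng or random.Random(0)

    # First and second data pairs: two people with different risk scores.
    (img_u, score_u), (img_v, score_v) = rng.sample(risk_db, 2)
    while score_u == score_v:
        (img_u, score_u), (img_v, score_v) = rng.sample(risk_db, 2)

    # Third and fourth data pairs: two different images of the same person.
    person_id = rng.choice(
        [pid for pid, imgs in recog_db.items() if len(imgs) >= 2])
    img_s, img_t = rng.sample(recog_db[person_id], 2)

    return ((img_u, score_u), (img_v, score_v),
            (img_s, person_id), (img_t, person_id))
```

Each returned element corresponds to one data pair of the quadruple, so the same-person constraint (shared identification) and the different-score constraint are enforced by construction.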
Preferably, after the face image is selected, before the quadruple is constructed, the method further includes:
extracting face risk features in each face image, wherein the face risk features comprise: glasses, lip thickness, eye openness, gender, age, face shape.
The face risk features comprehensively describe multiple face attributes related to driving risk; the face shape, for example, may be round or long, wide or narrow.
Analysis of a large amount of vehicle insurance claim data reveals an objective pattern: unstructured features, such as face image features and GPS track features, show very significant differences in vehicle accident liability. For example, a person with poor eyesight tends to squint when observing objects, so the eye openness is usually smaller than normal, and many vehicle accidents are related to eyesight, especially under low visibility. The severity of accidents involving drivers who wear glasses is generally greater than for those who do not; objective reasons may include uncorrected vision, continuing decline of vision, glasses not replaced in time, and brief loss of vision caused by lens fogging. The probability of an accident for a driver in a good mood is much smaller than for one in a poor mood, and certain facial features of a driver in a good mood are usually relaxed and natural. The probability of a vehicle accident is generally greater for female drivers than for male drivers, and greater for elderly drivers than for young and middle-aged drivers.
Compared with using the whole face image, extracting these objectively present features from the face image effectively reduces the feature dimension input to the face driving risk prediction model: the extracted face risk feature dimension is far smaller than the dimension of the whole image, and the reduced dimension lowers the amount of computation, which in turn helps speed up the convergence of training the face driving risk prediction model.
It should be understood that if the face risk features in the face images are extracted, all the face images in the first data pair, the second data pair, the third data pair and the fourth data pair are replaced by the face risk features.
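The replacement of whole images by a low-dimensional risk feature vector can be sketched as follows. The encoding is illustrative only: the patent names the features (glasses, lip thickness, eye openness, gender, age, face shape) but does not specify how each is measured or scaled, so the fields and normalizations below are assumptions.

```python
from dataclasses import dataclass, astuple

@dataclass
class FaceRiskFeatures:
    """The six face risk features named in the text (illustrative encoding).

    How each value would be measured (e.g. via a facial landmark detector)
    is not specified by the patent; the scales here are assumed.
    """
    wears_glasses: float  # 0.0 or 1.0
    lip_thickness: float  # normalized to [0, 1]
    eye_openness: float   # normalized to [0, 1]
    gender: float         # 0.0 or 1.0 (assumed binary encoding)
    age: float            # years scaled by 1/100
    face_shape: float     # e.g. width/height aspect ratio

    def to_vector(self):
        # A 6-dimensional vector replaces the full image in each data pair,
        # shrinking the model input from thousands of pixels to 6 values.
        return list(astuple(self))
```

In the quadruple, each face image slot would then hold such a vector instead of raw pixels, as the paragraph above describes.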
And S14, calculating a first risk loss value and a second risk loss value according to the quadruple and the human face driving risk prediction model.
The model G(·|w) is the face driving risk prediction model, and w is the target parameter to be optimized. The final aim is for the driving risk prediction values output by the trained model to clearly distinguish different people. That is, the difference between the driving risk prediction values of two different face images of the same person should be smaller than the difference between the driving risk prediction values of the face images of two different persons.
Expressed in notation:
|G(z_s|w) − G(z_t|w)| < |G(x_u|w) − G(x_v|w)|
wherein z_s and z_t are two different face images of the same person selected from the second face database, and x_u and x_v are face images of two different persons selected from the first face database.
in an alternative embodiment, the first risk loss value is calculated using the following formula:
Figure RE-GDA0002295593830000103
wherein, T0(w) is the first risk loss value, m is margin, and N ═ Σu,v1,
Figure RE-GDA0002295593830000111
For the driving risk prediction value corresponding to the first face image u arbitrarily selected from the first face database,
Figure RE-GDA0002295593830000112
for the second face image v pair arbitrarily selected from the first face databaseThe expected driving risk prediction value.
The first risk loss value is defined as the Softmargin Loss.
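A soft-margin pairwise ranking loss of the kind described can be sketched as follows. The exact hinge form is an assumption, since only the loss's described behavior (penalizing pairs whose predictions are not separated by at least the margin) is available; `predict` stands in for the model G(·|w).

```python
def softmargin_rank_loss(pairs, predict, margin=1.0):
    """First risk loss T0(w): soft-margin pairwise ranking loss (assumed form).

    pairs:   list of ((img_u, score_u), (img_v, score_v)) with
             score_u != score_v, drawn from the first face database.
    predict: callable mapping a face image to a driving risk prediction.
    """
    total = 0.0
    for (img_u, score_u), (img_v, score_v) in pairs:
        # Order each pair so `hi` is the face with the higher true risk score.
        hi, lo = (img_u, img_v) if score_u > score_v else (img_v, img_u)
        # Penalize the model unless the higher-risk face is predicted
        # higher by at least `margin`.
        total += max(0.0, margin - (predict(hi) - predict(lo)))
    return total / len(pairs)  # N = number of selected pairs
```

With a sufficiently small margin the loss vanishes once the ranking is respected, which matches the ranking role this term plays in the target loss.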
In an alternative embodiment, the second risk loss value is calculated using the following formula:
T3(w) = (1/M) Σ_{s,t} max(0, (ŷ_s − ŷ_t)² − (ŷ_u − ŷ_v)²)
wherein T3(w) is the second risk loss value, (ŷ_u − ŷ_v)² represents the variance between the driving risk prediction values of different people, (ŷ_s − ŷ_t)² represents the variance between the driving risk prediction values of the same person, M = Σ_{s,t} 1 is the number of selected image pairs, ŷ_s is the driving risk prediction value corresponding to the third face image s arbitrarily selected from the second face database, and ŷ_t is the driving risk prediction value corresponding to the fourth face image t arbitrarily selected from the second face database.
The second risk loss value is defined as the Tetrad Loss. It can be seen that the Tetrad Loss drives the driving risk prediction values of the same person to be as close as possible and those of different persons to be as far apart as possible.
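The described Tetrad Loss behavior (same-person predictions pulled together, different-person predictions pushed apart) can be sketched as follows. The hinge form and the optional margin are assumptions; `predict` again stands in for the model G(·|w).

```python
def tetrad_loss(quads, predict, margin=0.0):
    """Second risk loss T3(w) over quadruples (assumed hinge instantiation).

    quads: list of quadruples
           ((img_u, score_u), (img_v, score_v), (img_s, ident), (img_t, ident))
           where u, v are different people and s, t are the same person.
    """
    total = 0.0
    for (img_u, _), (img_v, _), (img_s, _), (img_t, _) in quads:
        d_same = (predict(img_s) - predict(img_t)) ** 2  # same person
        d_diff = (predict(img_u) - predict(img_v)) ** 2  # different people
        # Loss is zero once different-person spread exceeds same-person
        # spread by at least `margin`.
        total += max(0.0, margin + d_same - d_diff)
    return total / len(quads)  # M = number of selected quadruples
```

Minimizing this term shrinks the same-person spread and widens the different-person spread, which is exactly the stability property the quadruple construction is meant to provide.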
S15, calculating a target risk loss value based on the first risk loss value and the second risk loss value.
In this embodiment, after the first risk loss value and the second risk loss value are obtained through calculation, a compromise needs to be found, so that the first risk loss value and the second risk loss value are balanced.
In an alternative embodiment, the target risk loss value is calculated using the following formula:
L(w)=λT0(w)+(1-λ)T3(w)
where L (w) is the target risk loss value and λ ∈ [0, 1] is the weighting coefficient.
A larger λ places the optimization objective more on the ranking of driving risks, while a smaller λ places it more on the consistency of driving risk predictions for the same person.
All that is needed at this point is to find the parameter w such that the target risk loss value is minimal.
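The weighted combination above can be sketched directly; the function and variable names are illustrative:

```python
def target_risk_loss(t0, t3, lam):
    """L(w) = λ·T0(w) + (1 − λ)·T3(w), with λ ∈ [0, 1]."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("λ must lie in [0, 1]")
    return lam * t0 + (1.0 - lam) * t3
```

Setting λ = 1 optimizes only the ranking term T0(w), while λ = 0 optimizes only the structure term T3(w), reflecting the trade-off described above.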
And S16, determining the parameter at which the target risk loss value calculated by the gradient back-propagation algorithm reaches its minimum as the optimal parameter, and outputting the face driving risk prediction model corresponding to the optimal parameter.
In this embodiment, when the target risk loss value is minimized through the gradient back-propagation algorithm, it indicates that the model G(·|w) has become stable; the parameter w at this point reaches its optimal value, and the model G(·|w) corresponding to this parameter w is the trained optimal face driving risk prediction model.
Since the gradient back-propagation algorithm is prior art, it is not described further herein.
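The parameter search described in S16 can be illustrated with a plain gradient-descent loop on a toy one-dimensional loss; the real model would back-propagate through G(·|w), so everything below is a simplified stand-in:

```python
def gradient_descent(grad, w0, lr=0.1, steps=200):
    """Iteratively move w against the gradient until the loss bottoms out."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# toy target: L(w) = (w - 3)^2, whose gradient is 2(w - 3); minimum at w = 3
w_opt = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

The parameter w at which the loss stops decreasing is taken as the optimal parameter, mirroring the role of the gradient back-propagation step in the patent.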
In conclusion, the training method of the face driving risk prediction model constructs its data source in a heterogeneous mode: it solves the problem of insufficient data for training the face driving risk prediction model by combining internal business data (a first face database comprising one face image and one driving risk score per person) with an external public face recognition database (a second face database comprising a plurality of face images per person and no driving risk scores), which improves the prediction capability of the face driving risk prediction model. A new face risk prediction model algorithm is obtained by constructing a quadruple (pairs with different driving risk scores, and pairs of different photos of the same person) and combining the Tetrad Loss with the SoftRank Loss, so that the trained face driving risk prediction model has high prediction accuracy and prediction stability.
Example two
Fig. 2 is a flowchart of a human face driving risk prediction method provided by the invention.
In this embodiment, the face driving risk prediction method may be applied to a terminal. For a terminal that needs to perform face driving risk prediction, the method of the present invention may be directly integrated on the terminal, or may run on the terminal in the form of a Software Development Kit (SDK).
As shown in fig. 2, the method for predicting the driving risk of the human face specifically includes the following steps, and the order of the steps in the flowchart may be changed and some may be omitted according to different requirements.
And S21, acquiring the face image of the driver to be detected.
In this embodiment, if a driving risk prediction is to be performed on a certain driver, a face image of the driver may be obtained, and the driving risk of the driver is predicted by identifying a driving risk score of the face image.
And S22, inputting the face image into a face driving risk prediction model.
The facial driving risk prediction model is obtained by pre-training according to all or part of the steps of the training method of the facial driving risk prediction model in the first embodiment.
The training process of the face driving risk prediction model may be an off-line process.
After the training process of the face driving risk prediction model is finished, the face image can be input into the trained face driving risk prediction model online for prediction.
And S23, acquiring an output result of the human face driving risk prediction model.
In this embodiment, the facial driving risk prediction model predicts the facial image and outputs a prediction result.
And when the terminal detects that the prediction of the face driving risk prediction model is finished, acquiring an output result of the face driving risk prediction model.
And S24, determining the driving risk score of the driver to be tested according to the output result.
The output result may be a driving risk score, for example, 90 points.
The output may be a driving risk category, e.g., high, medium, low.
When the output result is a driving risk score: if the driving risk score is larger than a preset first score threshold, the driving risk of the driver to be tested is high; if the driving risk score is smaller than or equal to the preset first score threshold and larger than or equal to a preset second score threshold, the driving risk of the driver to be tested is moderate; and if the driving risk score is smaller than the preset second score threshold, the driving risk of the driver to be tested is low.
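The two-threshold decision rule above can be sketched as follows; the concrete threshold values (80 and 40) are assumptions for illustration only:

```python
def risk_level(score, first_threshold=80, second_threshold=40):
    """Map a driving risk score to high / medium / low per the two-threshold rule."""
    if score > first_threshold:
        return "high"
    if score >= second_threshold:   # second_threshold <= score <= first_threshold
        return "medium"
    return "low"
```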
In an optional embodiment, after the determining the driving risk score of the driver to be tested according to the output result, the method further comprises:
and matching out insurance corresponding to the driving risk score.
Wherein, the correspondence between driving risk scores and insurance amounts can be stored in advance. After the driving risk score of the driver to be tested is determined, insurance can be matched according to this correspondence, so that the payout risk borne by insurance companies can be reduced.
The insurance can be car insurance or life insurance.
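A minimal sketch of the pre-stored correspondence between driving risk scores and insurance products; the score bands and product names are hypothetical:

```python
# hypothetical pre-stored correspondence: (minimum score, insurance product)
SCORE_TO_INSURANCE = (
    (80, "high-risk premium plan"),
    (40, "standard premium plan"),
    (0, "preferred premium plan"),
)

def match_insurance(score):
    """Return the insurance product matching a driving risk score."""
    for min_score, product in SCORE_TO_INSURANCE:
        if score >= min_score:
            return product
    raise ValueError("score below all bands")
```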
In the embodiment, the face driving risk prediction model obtained by training has strong generalization capability, so that the face image of the target driver in any environment can be obtained, and higher prediction accuracy can be obtained.
EXAMPLE III
Fig. 3 is a structural diagram of a facial driving risk prediction model training device provided by the present invention.
In some embodiments, the facial driving risk prediction model training apparatus 30 may include a plurality of functional modules composed of program code segments. The program code of the various program segments in the facial driving risk prediction model training apparatus 30 may be stored in the memory of the terminal and executed by at least one processor to perform the facial driving risk prediction model training function (see fig. 1 for details).
In this embodiment, the facial driving risk prediction model training device 30 may be divided into a plurality of functional modules according to the functions performed by the device. The functional module may include: a first selecting module 301, a second selecting module 302, an array constructing module 303, a first calculating module 304, a second calculating module 305, and a model determining module 306. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory. In the present embodiment, the functions of the modules will be described in detail in the following embodiments.
The first selecting module 301 is configured to select a first face image of a first person and a corresponding first driving risk score, and a second face image of a second person and a corresponding second driving risk score from a first face database.
Wherein the first face database may include: the method comprises the steps of constructing a face image in advance and corresponding driving risk scores.
The pre-constructed face image can be derived from a car insurance service database, banking service data, a credit investigation database and the like.
The driving risk score may be calculated by a pre-trained driving risk score recognition model, which may be any suitable image recognition intelligent decision model, such as a deep convolutional neural network. The driving risk score recognition model is trained on face images in a car insurance business database, banking business data and a credit investigation database; however, experiments show that this driving risk score recognition model has low accuracy and poor generalization capability, so a face driving risk prediction model needs to be retrained to improve the accuracy of driving risk prediction on face images captured in any environment. Since the focus of the present invention is not the driving risk score recognition model, its training process is not described in detail herein.
After the first face database is established, the face images of two different persons and the corresponding driving risk scores are arbitrarily selected from it. The face image of the first person is called the first face image, and its corresponding driving risk score is called the first driving risk score; the face image of the second person is called the second face image, and its corresponding driving risk score is called the second driving risk score. The first driving risk score is different from the second driving risk score.
The first face database is a face risk database.
A second selecting module 302, configured to select a third face image and a fourth face image of a third person and a corresponding identifier from the second face database.
Wherein the second face database may include: the face image and the corresponding identification are selected in advance.
The pre-selected face image may be derived from some well-known face databases, such as LFW face database, MegaFace face database, and the like.
The identification refers to identity attributes for distinguishing different people, and can be identified by using a number label.
After the second face database is constructed, two different face images of the same person and the corresponding identifier are arbitrarily selected from the second face database. For convenience of distinction, the person selected from the second face database is called the third person, and the two different face images of the third person are respectively called the third face image and the fourth face image.
The second face database is a face recognition database.
Because the facial images are affected by various environmental factors, such as different illumination, different backgrounds, different expressions, different gestures, different imaging devices (mobile phones, cameras, etc.), and even different compression modes or storage formats, the facial images of the same person are greatly different from one another. Generally speaking, only one facial image of a driver is stored in a car insurance business database, banking business data and a credit investigation database, and a facial driving risk prediction model trained based on a single facial image is very sensitive to environmental factors. That is to say, the driving risk scores obtained by calculation of the face images of the same person in different environments have great differences, so that the accuracy of the face driving risk prediction is low, and the reliability is low.
As is known, each person in a known face database has more than ten to hundreds of face images, which cover different imaging conditions. A face recognition model trained on the basis of a known face database has high recognition rate and high stability under different imaging conditions. However, the data of these well-known face databases have no driving risk label, so that these well-known face databases cannot be directly used for training a face driving risk prediction model.
Therefore, the first face database and the second face database can be jointly used as data sets, namely, the data in the face risk database and the data in the face recognition database are integrated together in the heterogeneous mode to train the face driving risk prediction model, the problem of insufficient data in the face risk database is solved, and the trained face driving risk prediction model has strong generalization capability.
An array construction module 303, configured to construct a quadruple according to the first face image, the first driving risk score, the second face image, the second driving risk score, the third face image, the fourth face image, and the identifier.
The selected first face image, first driving risk score, second face image, second driving risk score, third face image, fourth face image and identifier are respectively taken as factors, and the quadruple is constructed based on these factors.
In an optional embodiment, the constructing the four-tuple by the array construction module 303 according to the first facial image, the first driving risk score, the second facial image, the second driving risk score, the third facial image, the fourth facial image and the identification includes:
taking the first face image and the first driving risk score as a first data pair;
taking the second face image and the second driving risk score as a second data pair;
taking the third face image and the identifier as a third data pair;
taking the fourth face image and the identification as a fourth data pair;
and respectively taking the first data pair, the second data pair, the third data pair and the fourth data pair as units in the quadruple.
Illustratively, assume that the data in the first face database is D₁ = {(x_i, y_i)} and the data in the second face database is D₂ = {(z_j, c_j)}, where x_i and z_j respectively represent the face images in the first face database and the second face database, y_i represents the driving risk score of the corresponding face image in the first face database, and c_j represents the identity of the corresponding face image in the second face database.

Arbitrarily select the face images and driving risk scores of two different persons from the first face database to form a first data pair (x_u, y_u) and a second data pair (x_v, y_v), and suppose that y_u ≠ y_v. Arbitrarily select two different photos of the same person and the corresponding identifier from the second face database to form a third data pair (z_s, c_s) and a fourth data pair (z_t, c_t); obviously c_s = c_t. Then the first data pair (x_u, y_u), the second data pair (x_v, y_v), the third data pair (z_s, c_s) and the fourth data pair (z_t, c_t) form a quadruple. Because the quadruple comprises the face images and driving risk scores of different people, together with different face images and the identifier of the same person, the data in the quadruple can comprehensively cover the shooting environments of the face images; the face driving risk prediction model trained on such quadruples therefore has stronger stability, so that the driving risk scores predicted from face images of the same driver acquired in different environments tend to be the same.
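The quadruple sampling just illustrated can be sketched as follows, assuming the first database is a list of (image, risk_score) pairs and the second a list of (image, person_id) pairs; all names here are illustrative:

```python
import random
from collections import defaultdict

def build_quadruple(first_db, second_db):
    """Sample one quadruple: two scored images of different persons, plus
    two different photos of one person sharing an identity label."""
    # first and second data pairs: require different driving risk scores
    while True:
        (x_u, y_u), (x_v, y_v) = random.sample(first_db, 2)
        if y_u != y_v:
            break
    # third and fourth data pairs: two different photos of the same person
    photos_by_person = defaultdict(list)
    for image, person_id in second_db:
        photos_by_person[person_id].append(image)
    candidates = [p for p, photos in photos_by_person.items() if len(photos) >= 2]
    person = random.choice(candidates)
    z_s, z_t = random.sample(photos_by_person[person], 2)
    return (x_u, y_u), (x_v, y_v), (z_s, person), (z_t, person)
```

`random.sample` draws distinct elements, which guarantees the two photos of the same person are different images.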
Preferably, after the face image is selected, before the quadruple is constructed, the method further includes:
extracting face risk features in each face image, wherein the face risk features comprise: glasses, lip thickness, eye openness, gender, age, face shape.
The face risk features comprehensively describe a plurality of facial characteristics related to driving risk; the face shape feature may be, for example, round or long, wide or narrow.
Analysis of a large amount of vehicle insurance claim data reveals an objective pattern: the differences in accident liability reflected by unstructured features such as face image features and GPS track features are very significant. For example, when observing objects, a person with poor eyesight tends to squint, so the eye openness is usually smaller than normal, and many vehicle accidents are influenced by eyesight, especially under low-visibility conditions. The severity of vehicle accidents of drivers wearing glasses is generally greater than that of drivers not wearing glasses; objective reasons may include uncorrected vision, continuously declining vision, glasses not replaced in time, and momentary loss of sight caused by the glasses fogging up. The probability of a vehicle accident for a driver in a good mood is much smaller than for a driver in a poor mood, and some facial features of a driver in a good mood are usually relaxed and natural. The probability of a female driver having a vehicle accident is generally greater than that of a male driver, and the probability of vehicle accidents for elderly drivers is generally greater than for young and middle-aged drivers.
Compared with the whole human face image, the method has the advantages that some objectively existing features in the human face image are extracted, the feature dimension input into the human face driving risk prediction model can be effectively reduced, the extracted human face risk feature dimension is far smaller than the feature dimension corresponding to the whole human face image, the reduction of the feature dimension can reduce the data calculation amount, and therefore the convergence speed of the training human face driving risk prediction model can be improved in an auxiliary mode.
It should be understood that if the face risk features in the face images are extracted, all the face images in the first data pair, the second data pair, the third data pair and the fourth data pair are replaced by the face risk features.
And the first calculation module 304 is used for calculating a first risk loss value and a second risk loss value according to the quadruple and the human face driving risk prediction model.
Assume that the model G(·|w) is the face driving risk prediction model and w is the target parameter to be optimized; the final purpose is for the driving risk prediction values output by the trained face driving risk prediction model to be clearly distinguishable between different people. That is, the difference between the driving risk prediction values of two different face images of the same person is smaller than the difference between the driving risk prediction values of the face images of two different persons.
Expressed formally: |ŷ_s − ŷ_t| < |ŷ_u − ŷ_v|, where ŷ_s and ŷ_t are the driving risk prediction values of two different face images of the same person, and ŷ_u and ŷ_v are the driving risk prediction values of the face images of two different persons.
in an alternative embodiment, the first risk loss value is calculated using the following formula:
Figure RE-GDA0002295593830000183
wherein, T0(w) is the first risk loss value, m is margin, and N ═ Σu,v1,
Figure RE-GDA0002295593830000191
For the driving risk prediction value corresponding to the first face image u arbitrarily selected from the first face database,
Figure RE-GDA0002295593830000192
and the driving risk prediction value corresponding to the second face image v selected from the first face database at will.
This first risk loss value is defined as the SoftRank Loss (a soft-margin ranking loss).
In an alternative embodiment, the second risk loss value T3(w) is calculated from two variance terms: one representing the variance between the driving risk prediction values of different people, and one representing the variance between the driving risk prediction values of the same person, with M = Σ_{s,t} 1 the number of selected image pairs. Here ŷ_s is the driving risk prediction value corresponding to the third face image s arbitrarily selected from the second face database, and ŷ_t is the driving risk prediction value corresponding to the fourth face image t arbitrarily selected from the second face database.
The second risk loss value is defined as the Tetrad Loss. It can be seen that the Tetrad Loss drives the driving risk prediction values of the same person to be as close as possible, and the driving risk prediction values of different persons to be as far apart as possible.
A second calculation module 305 for calculating a target risk loss value based on the first risk loss value and the second risk loss value.
In this embodiment, after the first risk loss value and the second risk loss value are obtained through calculation, a compromise needs to be found, so that the first risk loss value and the second risk loss value are balanced.
In an alternative embodiment, the target risk loss value is calculated using the following formula:
L(w)=λT0(w)+(1-λ)T3(w)
where L (w) is the target risk loss value and λ ∈ [0, 1] is the weighting coefficient.
A larger λ places the optimization objective more on the ranking of driving risks, while a smaller λ places the optimization objective more on the structure of driving risks.
All that is needed at this point is to find the parameter w such that the target risk loss value is minimal.
And the model determining module 306 is configured to determine the parameter at which the target risk loss value calculated by the gradient back-propagation algorithm reaches its minimum as the optimal parameter, and to output the face driving risk prediction model corresponding to the optimal parameter.
In this embodiment, when the target risk loss value is minimized through the gradient back-propagation algorithm, it indicates that the model G(·|w) has become stable; the parameter w at this point reaches its optimal value, and the model G(·|w) corresponding to this parameter w is the trained optimal face driving risk prediction model.
Since the gradient back-propagation algorithm is prior art, it is not described further herein.
In conclusion, the training device for the face driving risk prediction model constructs its data source in a heterogeneous mode: it solves the problem of insufficient data for training the face driving risk prediction model by combining internal business data (a first face database comprising one face image and one driving risk score per person) with an external public face recognition database (a second face database comprising a plurality of face images per person and no driving risk scores), which improves the prediction capability of the face driving risk prediction model. A new face risk prediction model algorithm is obtained by constructing a quadruple (pairs with different driving risk scores, and pairs of different photos of the same person) and combining the Tetrad Loss with the SoftRank Loss, so that the trained face driving risk prediction model has high prediction accuracy and prediction stability.
Example four
Fig. 4 is a block diagram of a human face driving risk prediction device according to the present invention.
In some embodiments, the facial driving risk prediction apparatus 40 may include a plurality of functional modules composed of program code segments. The program code of the various program segments in the facial driving risk prediction apparatus 40 may be stored in a memory of the terminal and executed by at least one processor to perform the facial driving risk prediction function (see fig. 2 for details).
In this embodiment, the human face driving risk prediction device 40 may be divided into a plurality of functional modules according to the functions performed by the device. The functional module may include: a first acquisition module 401, an image input module 402, a second acquisition module 403, and a risk determination module 404. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory. In the present embodiment, the functions of the modules will be described in detail in the following embodiments.
The first obtaining module 401 is configured to obtain a face image of a driver to be detected.
In this embodiment, if a driving risk prediction is to be performed on a certain driver, a face image of the driver may be obtained, and the driving risk of the driver is predicted by identifying a driving risk score of the face image.
An image input module 402, configured to input the facial image into a facial driving risk prediction model.
The facial driving risk prediction model is obtained by pre-training according to all or part of the steps of the training method of the facial driving risk prediction model in the first embodiment.
The training process of the face driving risk prediction model may be an off-line process.
After the training process of the face driving risk prediction model is finished, the face image can be input into the trained face driving risk prediction model online for prediction.
A second obtaining module 403, configured to obtain an output result of the facial driving risk prediction model.
In this embodiment, the facial driving risk prediction model predicts the facial image and outputs a prediction result.
And when the terminal detects that the prediction of the face driving risk prediction model is finished, acquiring an output result of the face driving risk prediction model.
And a risk determination module 404, configured to determine a driving risk score of the driver to be tested according to the output result.
The output result may be a driving risk score, for example, 90 points.
The output may be a driving risk category, e.g., high, medium, low.
When the output result is a driving risk score: if the driving risk score is larger than a preset first score threshold, the driving risk of the driver to be tested is high; if the driving risk score is smaller than or equal to the preset first score threshold and larger than or equal to a preset second score threshold, the driving risk of the driver to be tested is moderate; and if the driving risk score is smaller than the preset second score threshold, the driving risk of the driver to be tested is low.
In an optional embodiment, after determining the driving risk score of the driver to be tested according to the output result, the facial driving risk prediction apparatus 40 further includes:
and the insurance matching module is used for matching insurance corresponding to the driving risk score.
Wherein, the correspondence between driving risk scores and insurance amounts can be stored in advance. After the driving risk score of the driver to be tested is determined, insurance can be matched according to this correspondence, so that the payout risk borne by insurance companies can be reduced.
The insurance can be car insurance or life insurance.
In the embodiment, the face driving risk prediction model obtained by training has strong generalization capability, so that the face image of the target driver in any environment can be obtained, and higher prediction accuracy can be obtained.
EXAMPLE five
Fig. 5 is a schematic structural diagram of the terminal according to the present invention. In the preferred embodiment of the present invention, the terminal 5 includes a memory 51, at least one processor 52, at least one communication bus 53, and a transceiver 54.
It will be appreciated by those skilled in the art that the structure of the terminal shown in fig. 5 does not constitute a limitation of the embodiments of the present invention; it may be a bus-type or a star-type structure, and the terminal 5 may include more or less hardware or software than shown, or a different arrangement of components.
In some embodiments, the terminal 5 includes a terminal capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and the hardware includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The terminal 5 may further include a client device, which includes, but is not limited to, any electronic product capable of performing human-computer interaction with a client through a keyboard, a mouse, a remote controller, a touch panel, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, and the like.
It should be noted that the terminal 5 is only an example, and other existing or future electronic products, such as those that can be adapted to the present invention, should also be included in the scope of the present invention, and are included herein by reference.
In some embodiments, the memory 51 is used for storing program code and various data, such as the program code of the apparatus installed in the terminal 5, and realizes high-speed, automatic access to programs or data during the operation of the terminal 5. The memory 51 may include a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium capable of carrying or storing data.
In some embodiments, the at least one processor 52 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The at least one processor 52 is a control unit (control unit) of the terminal 5, connects various components of the entire terminal 5 by using various interfaces and lines, and executes various functions of the terminal 5 and processes data by running or executing programs or modules stored in the memory 51 and calling data stored in the memory 51.
In some embodiments, the at least one communication bus 53 is arranged to enable connection communication between the memory 51 and the at least one processor 52, etc.
Although not shown, the terminal 5 may further include a power supply (such as a battery) for supplying power to various components, and preferably, the power supply may be logically connected to the at least one processor 52 through a power management device, so as to implement functions of managing charging, discharging, and power consumption through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The terminal 5 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a terminal, or a network device) or a processor (processor) to execute parts of the methods according to the embodiments of the present invention.
In a further embodiment, in conjunction with fig. 3 and/or fig. 4, the at least one processor 52 may execute operating means of the terminal 5 as well as installed various types of applications, program codes, etc., such as the various modules described above.
The memory 51 has program code stored therein, and the at least one processor 52 can call the program code stored in the memory 51 to perform related functions. For example, the respective modules described in fig. 3 and/or fig. 4 are program codes stored in the memory 51 and executed by the at least one processor 52, thereby implementing the functions of the respective modules.
In one embodiment of the invention, the memory 51 stores a plurality of instructions that are executed by the at least one processor 52 to implement all or a portion of the steps of the method of the invention.
Specifically, for the implementation of the above instructions by the at least one processor 52, reference may be made to the description of the relevant steps in the embodiments corresponding to fig. 1 and/or fig. 2, which is not repeated here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, and do not imply any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (8)

1. A training method of a human face driving risk prediction model is characterized by comprising the following steps:
selecting a first face image of a first person and a corresponding first driving risk score, and a second face image of a second person and a corresponding second driving risk score from a first face database;
selecting a third face image and a fourth face image of a third person and corresponding identifications from a second face database;
constructing a four-tuple according to the first face image, the first driving risk score, the second face image, the second driving risk score, the third face image, the fourth face image and the identifier;
calculating a first risk loss value and a second risk loss value according to the quadruple and a face driving risk prediction model, wherein the face driving risk prediction model is represented by G(·|w), w is a target parameter to be optimized, and the first risk loss value is calculated using the following formula:

T0(w) = [formula rendered only as an image in the original document]

wherein T0(w) is the first risk loss value, m is a margin, N = Σ_{u,v} 1 is the number of selected image pairs, G(u|w) is the driving risk prediction value corresponding to the first face image u arbitrarily selected from the first face database, and G(v|w) is the driving risk prediction value corresponding to the second face image v arbitrarily selected from the first face database; the second risk loss value is calculated using the following formula:

T3(w) = [formula rendered only as an image in the original document]

wherein T3(w) is the second risk loss value, a first image-rendered term represents the variance between driving risk predictions for different people, a second image-rendered term represents the variance between driving risk predictions for the same person, M = Σ_{s,t} 1 is the number of selected image pairs, G(s|w) is the driving risk prediction value corresponding to the third face image s arbitrarily selected from the second face database, and G(t|w) is the driving risk prediction value corresponding to the fourth face image t arbitrarily selected from the second face database;
calculating a target risk loss value based on the first risk loss value and the second risk loss value;
and determining, as the optimal parameters, the parameters at which the target risk loss value calculated by the gradient back-propagation algorithm reaches its minimum, and outputting the face driving risk prediction model corresponding to the optimal parameters.
2. The method of claim 1, wherein the target risk loss value is calculated using the formula:
L(w)=λT0(w)+(1-λ)T3(w)
where L (w) is the target risk loss value and λ ∈ [0, 1] is the weighting coefficient.
3. The method of claim 1 or 2, wherein said constructing a quadruple from the first facial image, the first driving risk score, the second facial image, the second driving risk score, the third facial image, the fourth facial image and the identification comprises:
taking the first face image and the first driving risk score as a first data pair;
taking the second face image and the second driving risk score as a second data pair;
taking the third face image and the identifier as a third data pair;
taking the fourth face image and the identification as a fourth data pair;
and taking the first data pair, the second data pair, the third data pair, and the fourth data pair each as one unit of the quadruple.
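The four pairings of claim 3 can be sketched directly. The type and field names below are illustrative stand-ins, not terms from the patent:

```python
from collections import namedtuple

# Illustrative stand-in: each unit of the quadruple pairs a face image with
# either a driving risk score or a person identification.
DataPair = namedtuple("DataPair", ["image", "label"])

def build_quadruple(img1, score1, img2, score2, img3, img4, person_id):
    """Assemble the four data pairs described in claim 3 into one quadruple."""
    return (
        DataPair(img1, score1),     # first face image + first driving risk score
        DataPair(img2, score2),     # second face image + second driving risk score
        DataPair(img3, person_id),  # third face image + identification
        DataPair(img4, person_id),  # fourth face image + same identification
    )
```

Note that the third and fourth units deliberately share one identification, since both images come from the same (third) person.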
4. A face driving risk prediction method, the method comprising:
acquiring a face image of a driver to be detected;
inputting the face image into a face driving risk prediction model, wherein the face driving risk prediction model is obtained by pre-training according to the method of any one of claims 1 to 3;
acquiring an output result of the human face driving risk prediction model;
and determining the driving risk score of the driver to be tested according to the output result.
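The prediction flow of claim 4 is sequential: acquire the face image, input it to the trained model, obtain the output, and determine the score. A minimal sketch follows; the trained model G(·|w) is stood in by a trivial callable here, which is purely an assumption for illustration — the real model is the network trained per claims 1 to 3:

```python
def predict_driving_risk(model, face_image):
    """Run the claim-4 pipeline for one driver: feed the acquired face
    image into the (already trained) prediction model and return the
    driving risk score read from its output."""
    output = model(face_image)  # input the face image into the model
    return float(output)        # determine the driving risk score

# Purely illustrative stand-in for a trained model G(.|w):
def toy_model(image_pixels):
    # Pretend "risk" is just the mean pixel intensity; a real deployment
    # would call the trained network's forward pass instead.
    return sum(image_pixels) / len(image_pixels)
```

In practice the face image would come from an in-cabin camera after face detection; the stand-in list of pixel values only demonstrates the call pattern.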
5. A device for training a human face driving risk prediction model, which is characterized by comprising:
the first selection module is used for selecting a first face image of a first person and a corresponding first driving risk score, and a second face image of a second person and a corresponding second driving risk score from a first face database;
the second selection module is used for selecting a third face image and a fourth face image of a third person and corresponding identifications from the second face database;
an array construction module, configured to construct a quadruple according to the first face image, the first driving risk score, the second face image, the second driving risk score, the third face image, the fourth face image, and the identifier;
the first calculation module is used for calculating a first risk loss value and a second risk loss value according to the quadruple and the face driving risk prediction model, wherein the face driving risk prediction model is represented by G(·|w), w is a target parameter to be optimized, and the first risk loss value is calculated using the following formula:

T0(w) = [formula rendered only as an image in the original document]

wherein T0(w) is the first risk loss value, m is a margin, N = Σ_{u,v} 1 is the number of selected image pairs, G(u|w) is the driving risk prediction value corresponding to the first face image u arbitrarily selected from the first face database, and G(v|w) is the driving risk prediction value corresponding to the second face image v arbitrarily selected from the first face database; the second risk loss value is calculated using the following formula:

T3(w) = [formula rendered only as an image in the original document]

wherein T3(w) is the second risk loss value, a first image-rendered term represents the variance between driving risk predictions for different people, a second image-rendered term represents the variance between driving risk predictions for the same person, M = Σ_{s,t} 1 is the number of selected image pairs, G(s|w) is the driving risk prediction value corresponding to the third face image s arbitrarily selected from the second face database, and G(t|w) is the driving risk prediction value corresponding to the fourth face image t arbitrarily selected from the second face database;
a second calculation module to calculate a target risk loss value based on the first risk loss value and the second risk loss value;
and the model determining module is used for determining, as the optimal parameters, the parameters at which the target risk loss value calculated by the gradient back-propagation algorithm reaches its minimum, and for outputting the face driving risk prediction model corresponding to the optimal parameters.
6. A facial driving risk prediction apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a face image of a driver to be detected;
an image input module, configured to input the facial image into a facial driving risk prediction model, where the facial driving risk prediction model is obtained by pre-training according to the method according to any one of claims 1 to 3;
the second acquisition module is used for acquiring an output result of the human face driving risk prediction model;
and the risk determination module is used for determining the driving risk score of the driver to be tested according to the output result.
7. A terminal, characterized in that the terminal comprises a processor for implementing the facial driving risk prediction model training method according to any one of claims 1 to 3 or the facial driving risk prediction method according to claim 4 when executing a computer program stored in a memory.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a facial driving risk prediction model training method according to any one of claims 1 to 3, or implements a facial driving risk prediction method according to claim 4.
CN201910789702.5A 2019-08-26 2019-08-26 Face driving risk prediction model training and prediction method thereof and related equipment Active CN110717377B (en)

Priority Applications (2)

- CN201910789702.5A, filed 2019-08-26: Face driving risk prediction model training and prediction method thereof and related equipment
- PCT/CN2019/118607 (WO2021035983A1), filed 2019-11-14, priority 2019-08-26: Method for training face-based driving risk prediction model, driving risk prediction method based on face, and related devices
Publications (2)

- CN110717377A (application publication), published 2020-01-21
- CN110717377B (granted patent), published 2021-01-12

Family ID: 69209462





Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- REG: Reference to a national code (country: HK; legal event code: DE; document number: 40019487)
- GR01: Patent grant