CN113947790A - Financial big data face recognition method and financial management system - Google Patents


Info

Publication number
CN113947790A
CN113947790A (application CN202111111968.8A)
Authority
CN
China
Prior art keywords
face
feature vector
residual block
user
vector
Prior art date
Legal status
Pending
Application number
CN202111111968.8A
Other languages
Chinese (zh)
Inventor
秦桂珍
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202111111968.8A
Publication of CN113947790A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes

Abstract

The invention discloses a financial big data face recognition method and a financial management system. A face image of the current user is collected, the face image comprising a front face image, a left side face image and a right side face image; a face feature vector is obtained from the face image through a face recognition model, the face feature vector comprising a front-face feature vector, a left-face feature vector and a right-face feature vector; the face feature vector is compared with the face feature vectors of other users stored in a database to obtain face information, the face information being a user number. The face recognition model comprises a front-face convolutional neural network, a left-face convolutional neural network and a right-face convolutional neural network. Because the face feature vectors extracted by the face recognition model accurately express the features of the user, accurate identity information of the user can be obtained from them, which improves the accuracy of user information recognition and thus the accuracy of face recognition.

Description

Financial big data face recognition method and financial management system
Technical Field
The invention relates to the technical field of computers, in particular to a financial big data face recognition method and a financial management system.
Background
In the field of financial technology, identifying user identity information is essential: only when user identity recognition is accurate can the funds of financial users and of financial institutions be kept secure and the operating efficiency of financial institutions be improved.
For banks in particular, a user entering a bank hall must undergo face recognition. An accurate method of recognizing the user's face is therefore needed.
Disclosure of Invention
The invention aims to provide a financial big data face recognition method and a financial management system, which are used for solving the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a financial big data face recognition method, including:
acquiring a face image of a current user, wherein the face image comprises a front face image, a left side face image and a right side face image;
obtaining a face feature vector from the face image of the current user through a face recognition model, wherein the face feature vector comprises a front-face feature vector, a left-face feature vector and a right-face feature vector;
comparing the face feature vector with face feature vectors of other users stored in a database to obtain face information; the face information is a user number;
the face recognition model comprises a front-face convolutional neural network, a left-face convolutional neural network and a right-face convolutional neural network. The front-face convolutional neural network comprises three front-face residual blocks and a front-face fully connected network: the front face image is the input of the first front-face residual block, the input of the second front-face residual block is the output of the first, the input of the third front-face residual block is the output of the second, the input of the front-face fully connected network is the output of the third front-face residual block, and the output of the front-face fully connected network is the front-face feature vector. The left-face convolutional neural network comprises five left-face residual blocks and a left-face fully connected network: the left side face image is the input of the first left-face residual block, each subsequent left-face residual block takes the output of the preceding block as its input, the input of the left-face fully connected network is the output of the fifth left-face residual block, and the output of the left-face fully connected network is the left-face feature vector. The right-face convolutional neural network comprises five right-face residual blocks and a right-face fully connected network, chained in the same way: the right side face image is the input of the first right-face residual block, each subsequent right-face residual block takes the output of the preceding block as its input, the input of the right-face fully connected network is the output of the fifth right-face residual block, and the output of the right-face fully connected network is the right-face feature vector.
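For illustration only, the three-branch structure described above can be sketched in PyTorch as follows. The channel widths, feature length n, number of user categories K and the pooling layer are assumptions made for this sketch; only the block counts and the chaining of residual blocks into a fully connected head come from the text, and the internal wiring of a residual block is abbreviated here (it is detailed in the embodiment below).

```python
import torch.nn as nn

class SimpleResidualBlock(nn.Module):
    # Abbreviated block; the embodiment specifies four convolution networks,
    # four normalization layers and three activation function layers.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out))
        self.shortcut = nn.Sequential(nn.Conv2d(c_in, c_out, 1), nn.BatchNorm2d(c_out))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))

def branch(num_blocks, out_len):
    """Chain residual blocks into a fully connected head producing out_len values."""
    chans = [3] + [32 * 2 ** i for i in range(num_blocks)]
    blocks = [SimpleResidualBlock(chans[i], chans[i + 1]) for i in range(num_blocks)]
    return nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(chans[-1], out_len))

class FaceRecognitionModel(nn.Module):
    def __init__(self, n=128, num_users=1000):
        super().__init__()
        self.front = branch(3, n)              # three front-face residual blocks + FC
        self.left = branch(5, n + num_users)   # five blocks; the output also carries
        self.right = branch(5, n + num_users)  # user-category elements

    def forward(self, front_img, left_img, right_img):
        return self.front(front_img), self.left(left_img), self.right(right_img)
```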
Optionally, the training process of the face recognition model includes:
obtaining a training set, wherein the training set comprises face images and user numbers, and the face images comprise basic faces and difficult faces; the difficult faces comprise difficult error faces and difficult correct faces; a difficult error face is a face image which is similar to the basic face but does not belong to the same user as the basic face; a difficult correct face is a face image which is not similar to the basic face but belongs to the same user as the basic face; each face image comprises a front face, a left side face and a right side face; the face images are those of a plurality of users in a bank database, and the user numbers are the numbers of those users in the bank database; each number is unique;
inputting the front face of the face image into the front-face convolutional neural network to obtain a front-face feature vector; the front-face feature vector comprises elements representing front-face facial features;
inputting the left side face of the face image into the left-face convolutional neural network to obtain a left-face feature vector; the left-face feature vector comprises elements representing left-face facial features and elements representing left-face category features;
inputting the right side face of the face image into the right-face convolutional neural network to obtain a right-face feature vector; the right-face feature vector comprises elements representing right-face facial features and elements representing right-face category features;
obtaining a loss value based on the front face feature vector, the left face feature vector and the right face feature vector;
and obtaining the maximum number of iterations for training the face recognition model, and stopping training when the loss value is not greater than a first threshold or the number of iterations reaches the maximum, to obtain the trained face recognition model.
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector includes:
obtaining a front face loss value based on the front-face feature vector of the basic face, the front-face feature vector of the difficult error face and the front-face feature vector of the difficult correct face;
the front face loss value is calculated according to the following formula:

loss_1 = \max\left(0,\ \sum_{i=1}^{n}\left(x_i^{b}-x_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(x_i^{b}-x_i^{e}\right)^{2}+margin_x\right)

wherein loss_1 is the front face loss value; x_i^{b}, x_i^{c} and x_i^{e} are the ith elements of the front-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of the feature vector; and margin_x is the front face threshold used for judging whether the basic face is the face of the user.
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector, further includes:
obtaining a left side face loss value based on the left-face feature vector of the basic face, the left-face feature vector of the difficult error face and the left-face feature vector of the difficult correct face;
the left side face loss value is calculated according to the following formula:

loss_2 = \max\left(0,\ \sum_{i=1}^{n}\left(y_i^{b}-y_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(y_i^{b}-y_i^{e}\right)^{2}+margin_y\right)-\sum_{k=1}^{K}\hat{y}_{k}\log c_{k}^{b}

wherein loss_2 is the left side face loss value; y_i^{b}, y_i^{c} and y_i^{e} are the ith elements of the partial vectors characterizing facial features in the left-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of that partial vector; margin_y is the left side face threshold used for judging whether the basic face is the face of the user; c_{k}^{b} is the kth element of the partial vector representing the user category in the left-face feature vector of the basic face; \hat{y}_{k} is the kth element of the labelled user category vector, whose elements are 1 for the user to whom the basic face belongs and 0 otherwise; K is the number of user categories, k represents a user category, and k is an integer from 1 to K.
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector, further includes:
obtaining a right side face loss value based on the right-face feature vector of the basic face, the right-face feature vector of the difficult error face and the right-face feature vector of the difficult correct face;
the right side face loss value is calculated according to the following formula:

loss_3 = \max\left(0,\ \sum_{i=1}^{n}\left(z_i^{b}-z_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(z_i^{b}-z_i^{e}\right)^{2}+margin_z\right)-\sum_{k=1}^{K}\hat{y}_{k}\log d_{k}^{b}

wherein loss_3 is the right side face loss value; z_i^{b}, z_i^{c} and z_i^{e} are the ith elements of the partial vectors characterizing facial features in the right-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of that partial vector; margin_z is the right side face threshold used for judging whether the basic face is the face of the user; d_{k}^{b} is the kth element of the partial vector representing the user category in the right-face feature vector of the basic face; \hat{y}_{k} is the kth element of the labelled user category vector, whose elements are 1 for the user to whom the basic face belongs and 0 otherwise; K is the number of user categories, k represents a user category, and k is an integer from 1 to K;
the loss value is calculated according to the following formula:

Loss = loss_1 + 0.5\,loss_2 + 0.5\,loss_3

wherein Loss is the total loss value, loss_1 is the front face loss value, loss_2 is the left side face loss value, and loss_3 is the right side face loss value.
In a second aspect, an embodiment of the present invention provides a financial management system, including:
the system comprises an acquisition module and a face recognition module, wherein the acquisition module is used for acquiring a face image of the current user, and the face image comprises a front face image, a left side face image and a right side face image;
the face recognition module is used for obtaining a face feature vector from the face image of the current user through a face recognition model, wherein the face feature vector comprises a front-face feature vector, a left-face feature vector and a right-face feature vector, and for comparing the face feature vector with the face feature vectors of other users stored in a database to obtain face information, the face information being a user number. The face recognition model comprises a front-face convolutional neural network, a left-face convolutional neural network and a right-face convolutional neural network. The front-face convolutional neural network comprises three front-face residual blocks and a front-face fully connected network: the front face image is the input of the first front-face residual block, the input of the second front-face residual block is the output of the first, the input of the third front-face residual block is the output of the second, the input of the front-face fully connected network is the output of the third front-face residual block, and the output of the front-face fully connected network is the front-face feature vector. The left-face convolutional neural network comprises five left-face residual blocks and a left-face fully connected network: the left side face image is the input of the first left-face residual block, each subsequent left-face residual block takes the output of the preceding block as its input, the input of the left-face fully connected network is the output of the fifth left-face residual block, and the output of the left-face fully connected network is the left-face feature vector. The right-face convolutional neural network comprises five right-face residual blocks and a right-face fully connected network, chained in the same way: the right side face image is the input of the first right-face residual block, each subsequent right-face residual block takes the output of the preceding block as its input, the input of the right-face fully connected network is the output of the fifth right-face residual block, and the output of the right-face fully connected network is the right-face feature vector.
Optionally, the training process of the face recognition model includes:
obtaining a training set, wherein the training set comprises face images and user numbers, and the face images comprise basic faces and difficult faces; the difficult faces comprise difficult error faces and difficult correct faces; a difficult error face is a face image which is similar to the basic face but does not belong to the same user as the basic face; a difficult correct face is a face image which is not similar to the basic face but belongs to the same user as the basic face; each face image comprises a front face, a left side face and a right side face; the face images are those of a plurality of users in a bank database, and the user numbers are the numbers of those users in the bank database; each number is unique;
inputting the front face of the face image into the front-face convolutional neural network to obtain a front-face feature vector; the front-face feature vector comprises elements representing front-face facial features;
inputting the left side face of the face image into the left-face convolutional neural network to obtain a left-face feature vector; the left-face feature vector comprises elements representing left-face facial features and elements representing left-face category features;
inputting the right side face of the face image into the right-face convolutional neural network to obtain a right-face feature vector; the right-face feature vector comprises elements representing right-face facial features and elements representing right-face category features;
obtaining a loss value based on the front face feature vector, the left face feature vector and the right face feature vector;
and obtaining the maximum number of iterations for training the face recognition model, and stopping training when the loss value is not greater than a first threshold or the number of iterations reaches the maximum, to obtain the trained face recognition model.
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector includes:
obtaining a front face loss value based on the front-face feature vector of the basic face, the front-face feature vector of the difficult error face and the front-face feature vector of the difficult correct face;
the front face loss value is calculated according to the following formula:

loss_1 = \max\left(0,\ \sum_{i=1}^{n}\left(x_i^{b}-x_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(x_i^{b}-x_i^{e}\right)^{2}+margin_x\right)

wherein loss_1 is the front face loss value; x_i^{b}, x_i^{c} and x_i^{e} are the ith elements of the front-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of the feature vector; and margin_x is the front face threshold used for judging whether the basic face is the face of the user.
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector, further includes:
obtaining a left side face loss value based on the left-face feature vector of the basic face, the left-face feature vector of the difficult error face and the left-face feature vector of the difficult correct face;
the left side face loss value is calculated according to the following formula:

loss_2 = \max\left(0,\ \sum_{i=1}^{n}\left(y_i^{b}-y_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(y_i^{b}-y_i^{e}\right)^{2}+margin_y\right)-\sum_{k=1}^{K}\hat{y}_{k}\log c_{k}^{b}

wherein loss_2 is the left side face loss value; y_i^{b}, y_i^{c} and y_i^{e} are the ith elements of the partial vectors characterizing facial features in the left-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of that partial vector; margin_y is the left side face threshold used for judging whether the basic face is the face of the user; c_{k}^{b} is the kth element of the partial vector representing the user category in the left-face feature vector of the basic face; \hat{y}_{k} is the kth element of the labelled user category vector, whose elements are 1 for the user to whom the basic face belongs and 0 otherwise; K is the number of user categories, k represents a user category, and k is an integer from 1 to K.
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector, further includes:
obtaining a right side face loss value based on the right-face feature vector of the basic face, the right-face feature vector of the difficult error face and the right-face feature vector of the difficult correct face;
the right side face loss value is calculated according to the following formula:

loss_3 = \max\left(0,\ \sum_{i=1}^{n}\left(z_i^{b}-z_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(z_i^{b}-z_i^{e}\right)^{2}+margin_z\right)-\sum_{k=1}^{K}\hat{y}_{k}\log d_{k}^{b}

wherein loss_3 is the right side face loss value; z_i^{b}, z_i^{c} and z_i^{e} are the ith elements of the partial vectors characterizing facial features in the right-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of that partial vector; margin_z is the right side face threshold used for judging whether the basic face is the face of the user; d_{k}^{b} is the kth element of the partial vector representing the user category in the right-face feature vector of the basic face; \hat{y}_{k} is the kth element of the labelled user category vector, whose elements are 1 for the user to whom the basic face belongs and 0 otherwise; K is the number of user categories, k represents a user category, and k is an integer from 1 to K;
the loss value is calculated according to the following formula:

Loss = loss_1 + 0.5\,loss_2 + 0.5\,loss_3

wherein Loss is the total loss value, loss_1 is the front face loss value, loss_2 is the left side face loss value, and loss_3 is the right side face loss value.
Compared with the prior art, the embodiment of the invention achieves the following beneficial effects:
the embodiment of the invention provides a financial big data face recognition method and a financial management system, wherein face images of a current user are collected, and the face images comprise a front face image, a left side face image and a right side face image; obtaining a face feature vector based on a face image of a current user through a face recognition model; the face feature vector comprises a face front face feature vector, a face left face feature vector and a face right face feature vector; comparing the face feature vector with face feature vectors of other users stored in a database to obtain face information; the face information is a user number; the face recognition model comprises a face convolution neural network, a left face convolution neural network and a right face convolution neural network; the face-positive convolution neural network comprises three face-positive residual blocks and a face-positive full-connection network; the front face facial image is a first front face residual block input, the first front face residual block input is the second front face residual block output, the third front face residual block input is the second front face residual block output, the fully connected network input is the third front face residual block output, and the front face fully connected network output is a front face feature vector; the left face neural convolution network comprises five left face residual blocks and a left face full-connection network; the left face facial image is a first left face residual block input, the first left face residual block input is the second left face residual block output, the third left face residual block input is the second left face residual block output, the fourth left face residual block input is the third left face residual block output, the fifth left face residual block input is the fourth left face residual block output, the left face fully-connected network input is the fifth left face residual block output, the left face fully-connected network output is a left face feature vector; the right face convolutional neural network comprises five right face residual blocks and a right face full-connection network; the right side face facial image is the input of first right side face residual block, the input of first right side face residual block is the output of second right side face residual block, the input of third right side face residual block is the output of second right side face residual block, the input of fourth right side face residual block is the output of third right side face residual block, the input of fifth right side face residual block is the output of fourth right side face residual block, the input of right side face full connection network is the output of fifth right side face residual block, the output of right side face full connection network is right side face eigenvector.
Because the face feature vectors extracted by the face recognition model accurately express the features of the user, accurate identity information of the user can be obtained from them, which improves the accuracy of user information recognition and thus the accuracy of face recognition.
Drawings
Fig. 1 is a flowchart of a financial big data face recognition method according to an embodiment of the present invention.
Fig. 2 is a schematic block structure diagram of an electronic device according to an embodiment of the present invention.
The labels in the figure are: a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Examples
As shown in fig. 1, an embodiment of the present invention provides a financial big data face recognition method, where the method includes:
s101: and acquiring a face image of the current user.
The face image comprises a front face image, a left side face image and a right side face image.
S102: and obtaining a face feature vector based on the face image of the current user through the face recognition model.
The face feature vector comprises a face front face feature vector, a face left side face feature vector and a face right side face feature vector.
S103: and comparing the face characteristic vector with face characteristic vectors of other users stored in a database to obtain face information.
The face information is a user number; the face recognition model comprises a face convolution neural network, a left face convolution neural network and a right face convolution neural network; the face-positive convolution neural network comprises three face-positive residual blocks and a face-positive full-connection network; the front face facial image is a first front face residual block input, the first front face residual block input is the second front face residual block output, the third front face residual block input is the second front face residual block output, the fully connected network input is the third front face residual block output, and the front face fully connected network output is a front face feature vector; the left face neural convolution network comprises five left face residual blocks and a left face full-connection network; the left face facial image is a first left face residual block input, the first left face residual block input is the second left face residual block output, the third left face residual block input is the second left face residual block output, the fourth left face residual block input is the third left face residual block output, the fifth left face residual block input is the fourth left face residual block output, the left face fully-connected network input is the fifth left face residual block output, the left face fully-connected network output is a left face feature vector; the right face convolutional neural network comprises five right face residual blocks and a right face full-connection network; the right side face facial image is the input of first right side face residual block, the input of first right side face residual block is the output of second right side face residual block, the input of third right side face residual block is the output of second right side face residual block, the input of fourth right side face residual block is the output of third right side face residual block, the input of fifth right side face residual block is the output of fourth right side face residual block, the input of right side face full connection network is the output of fifth right side face residual block, the output of right side face full connection network is right side face eigenvector.
Because the face feature vectors extracted by the face recognition model accurately express the features of the user, accurate identity information of the user can be obtained from them, which improves the accuracy of user information recognition and thus the accuracy of face recognition.
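As an illustration of step S103, the comparison against the database can be sketched as follows, assuming Euclidean distance over the concatenated front, left and right feature vectors and a hypothetical match threshold; the text does not specify the distance measure or the threshold.

```python
import numpy as np

def match_user(query_feats, database, threshold=1.0):
    """query_feats: (front, left, right) feature vectors of the current user;
    database: mapping user_number -> stored concatenated feature vector."""
    query = np.concatenate(query_feats)
    best_user, best_dist = None, float("inf")
    for user_number, stored in database.items():
        dist = np.linalg.norm(query - stored)
        if dist < best_dist:
            best_user, best_dist = user_number, dist
    # The face information is the user number of the closest stored face,
    # provided it is close enough to be judged the same person.
    return best_user if best_dist <= threshold else None
```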
Optionally, the method comprises:
collecting a face image of the current user, wherein the face image comprises a front face image, a left side face image and a right side face image;
obtaining a face feature vector from the face image of the current user through the face recognition model, the face feature vector comprising a front-face feature vector, a left-face feature vector and a right-face feature vector;
and comparing the face feature vector with the face feature vectors of other users stored in a database to obtain face information; the face information is a user number.
User information stored in the database is obtained, the user information comprising attribute states of a plurality of user attributes, where the user attributes comprise the user credit level, user identity information, user real estate information and user payroll information. The attribute states of the user credit level include negative level, zero level, first level and second level. The attribute states of the user identity information include personal loan and enterprise loan. The attribute states of the user real estate information include mortgageable and non-mortgageable. The attribute states of the user payroll information include low payroll, medium payroll and high payroll.
A loan classification tree is obtained based on the user information.
The current user's information is input into the loan classification tree to obtain the set of loan categories available to the user. The category attributes of a loan category comprise the loan amount, repayment date, purpose and repayment method.
According to the user's loan category set and the loan form submitted by the current user, a loan is issued to the current user.
By adopting the above scheme, loans are issued automatically, which saves manpower. Building the face recognition model over the front face, the left side face and the right side face increases the accuracy of face recognition; because the model recognizes faces accurately, the user information obtained is more convenient and secure, avoiding the inaccuracy and wasted time of filling in information manually. The loan classification tree presents the loan categories available to the user for selection, and the user information limits those categories, so the user can only choose a loan type within his or her own repayment capacity.
In conclusion, the user can take out a loan safely, conveniently and quickly, and the burden on the staff is reduced.
The face recognition model comprises a front-face convolutional neural network, a left-face convolutional neural network and a right-face convolutional neural network. The front-face convolutional neural network comprises three front-face residual blocks and a front-face fully connected network: the front face image is the input of the first front-face residual block, the input of the second front-face residual block is the output of the first, the input of the third front-face residual block is the output of the second, the input of the front-face fully connected network is the output of the third front-face residual block, and the output of the front-face fully connected network is the front-face feature vector. The left-face convolutional neural network comprises five left-face residual blocks and a left-face fully connected network: the left side face image is the input of the first left-face residual block, each subsequent left-face residual block takes the output of the preceding block as its input, the input of the left-face fully connected network is the output of the fifth left-face residual block, and the output of the left-face fully connected network is the left-face feature vector. The right-face convolutional neural network comprises five right-face residual blocks and a right-face fully connected network, chained in the same way: the right side face image is the input of the first right-face residual block, each subsequent right-face residual block takes the output of the preceding block as its input, the input of the right-face fully connected network is the output of the fifth right-face residual block, and the output of the right-face fully connected network is the right-face feature vector.
The current user submits a loan form characterizing the user's loan request, which includes the user's loan type, amount and time length. The user's loan category set comprises a plurality of loan items, each loan item comprising a loan category, an amount and a time length; the loan category of a loan item corresponds to the user's loan type.
Issuing a loan to the current user according to the user's loan category set and the submitted loan form specifically comprises: selecting one or more target loan items from the user's loan category set according to the submitted form, and issuing the loan for the current user accordingly, as sketched below. The loan category of a target loan item corresponds to the user's loan type; specifically, it may be the same as the user's loan type. The amount of a target loan item corresponds to the amount in the form; specifically, it is greater than or equal to that amount. The time length of a target loan item is consistent with the time length submitted by the user; specifically, it is greater than or equal to that time length.
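A minimal sketch of selecting target loan items under the rules just stated (same loan category, amount and time length at least those requested); the LoanItem structure and field names are assumptions, since the text does not define the form's data layout.

```python
from dataclasses import dataclass

@dataclass
class LoanItem:
    category: str          # loan category, e.g. "personal loan"
    amount: float          # loan amount
    duration_months: int   # time length of the loan

def select_target_items(loan_form, loanable_items):
    """Return the loan items the user may actually be issued."""
    return [item for item in loanable_items
            if item.category == loan_form.category
            and item.amount >= loan_form.amount
            and item.duration_months >= loan_form.duration_months]

# Example: a form asking for a 24-month personal loan of 50,000 matches any
# personal-loan item of at least that amount and duration.
form = LoanItem("personal loan", 50000.0, 24)
```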
The residual block comprises four convolution networks, four normalization layers and three activation function layers.
The front face image is the input of the first convolution network in the first residual block; the input of the first normalization layer is the output of the first convolution network; the input of the first activation function is the output of the first normalization layer; the input of the second convolution network is the output of the first activation function; the input of the second normalization layer is the output of the second convolution network; the input of the second activation function is the output of the second normalization layer; the input of the third convolution network is the output of the second activation function; the input of the third normalization layer is the output of the third convolution network; the input of the fourth convolution network is the front face image; the input of the fourth normalization layer is the output of the fourth convolution network; the input of the third activation function is the sum of the output of the third normalization layer and the output of the fourth normalization layer; and the output of the third activation function is the output of the first residual block.
By adopting this scheme, the face recognition model is built from residual modules, so that a convolutional neural network of sufficient depth can still learn its parameters.
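A sketch of the residual block wiring just described, assuming 3x3 convolutions on the main path and a 1x1 convolution on the shortcut path (the text does not give kernel sizes):

```python
import torch.nn as nn

class PatentResidualBlock(nn.Module):
    """Four convolution networks, four normalization layers, three activation
    function layers, wired exactly as in the description above."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.norm1 = nn.BatchNorm2d(c_out)
        self.act1 = nn.ReLU()
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.norm2 = nn.BatchNorm2d(c_out)
        self.act2 = nn.ReLU()
        self.conv3 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.norm3 = nn.BatchNorm2d(c_out)
        self.conv4 = nn.Conv2d(c_in, c_out, 1)   # fourth conv takes the block input
        self.norm4 = nn.BatchNorm2d(c_out)
        self.act3 = nn.ReLU()

    def forward(self, x):
        h = self.act1(self.norm1(self.conv1(x)))
        h = self.act2(self.norm2(self.conv2(h)))
        h = self.norm3(self.conv3(h))
        s = self.norm4(self.conv4(x))            # shortcut path
        return self.act3(h + s)                  # third activation on the sum
```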
Optionally, the training process of the face recognition model includes:
a training set is obtained, wherein the training set comprises face images and user numbers, and the face images comprise basic faces and difficult faces. The difficult faces include difficult false faces and difficult correct faces. The difficult error face is a face image that is similar to the base face but is not the same user as the base face. The difficult correct face is an image of a face of the same user that is not similar to the base face but is the same as the base face. The face image comprises a front face, a left side face and a right side face; the facial images are facial images of a plurality of users in a bank database. The user number is the number of a plurality of users in the bank database. The number is unique.
The front face of the face image is input into the front-face convolutional neural network to obtain a front-face feature vector; the front-face feature vector comprises elements representing front-face facial features.
The left side face of the face image is input into the left-face convolutional neural network to obtain a left-face feature vector; the left-face feature vector comprises elements representing left-face facial features and elements representing left-face category features.
The right side face of the face image is input into the right-face convolutional neural network to obtain a right-face feature vector; the right-face feature vector comprises elements representing right-face facial features and elements representing right-face category features.
A loss value is obtained based on the front-face feature vector, the left-face feature vector and the right-face feature vector.
The maximum number of iterations for training the face recognition model is obtained, and training stops when the loss value is not greater than a first threshold or the maximum number of iterations is reached, giving the trained face recognition model.
By adopting this scheme, the parameters of the front-face, left-face and right-face convolutional neural networks are each trained on many face images, so that an input face yields a feature vector capable of identifying the user. Training three separate networks suits the differing characteristics of the front face, the left side face and the right side face, so the facial features of the user in the face image are output more accurately.
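A sketch of the training loop with the stopping criterion described above; the optimizer, learning rate and the default threshold and iteration limit are assumptions, and compute_loss stands for the loss functions formalized below.

```python
import torch

def train(model, batches, compute_loss, first_threshold=0.01, max_iters=10000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for it, (anchor, positive, negative, labels) in enumerate(batches):
        # Each of anchor/positive/negative is a (front, left, right) image
        # triple for the basic face, the difficult correct face and the
        # difficult error face of the same training sample.
        feats_a = model(*anchor)
        feats_p = model(*positive)
        feats_n = model(*negative)
        loss = compute_loss(feats_a, feats_p, feats_n, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Stop when the loss is not greater than the first threshold or the
        # maximum number of iterations is reached.
        if loss.item() <= first_threshold or it + 1 >= max_iters:
            break
    return model
```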
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector includes:
obtaining a front face loss value based on the front-face feature vector of the basic face, the front-face feature vector of the difficult error face and the front-face feature vector of the difficult correct face;
the front face loss value is calculated according to the following formula:

loss_1 = \max\left(0,\ \sum_{i=1}^{n}\left(x_i^{b}-x_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(x_i^{b}-x_i^{e}\right)^{2}+margin_x\right)

wherein loss_1 is the front face loss value; x_i^{b}, x_i^{c} and x_i^{e} are the ith elements of the front-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of the feature vector; and margin_x is the front face threshold used for judging whether the basic face is the face of the user.
Obtaining a left side face loss value based on the left-face feature vector of the basic face, the left-face feature vector of the difficult error face and the left-face feature vector of the difficult correct face;
the left side face loss value is calculated according to the following formula:

loss_2 = \max\left(0,\ \sum_{i=1}^{n}\left(y_i^{b}-y_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(y_i^{b}-y_i^{e}\right)^{2}+margin_y\right)-\sum_{k=1}^{K}\hat{y}_{k}\log c_{k}^{b}

wherein loss_2 is the left side face loss value; y_i^{b}, y_i^{c} and y_i^{e} are the ith elements of the partial vectors characterizing facial features in the left-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of that partial vector; margin_y is the left side face threshold used for judging whether the basic face is the face of the user; c_{k}^{b} is the kth element of the partial vector representing the user category in the left-face feature vector of the basic face; \hat{y}_{k} is the kth element of the labelled user category vector, whose elements are 1 for the user to whom the basic face belongs and 0 otherwise; K is the number of user categories, k represents a user category, and k is an integer from 1 to K.
Obtaining a right side face loss value based on the right-face feature vector of the basic face, the right-face feature vector of the difficult error face and the right-face feature vector of the difficult correct face;
the right side face loss value is calculated according to the following formula:

loss_3 = \max\left(0,\ \sum_{i=1}^{n}\left(z_i^{b}-z_i^{c}\right)^{2}-\sum_{i=1}^{n}\left(z_i^{b}-z_i^{e}\right)^{2}+margin_z\right)-\sum_{k=1}^{K}\hat{y}_{k}\log d_{k}^{b}

wherein loss_3 is the right side face loss value; z_i^{b}, z_i^{c} and z_i^{e} are the ith elements of the partial vectors characterizing facial features in the right-face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; n is the length of that partial vector; margin_z is the right side face threshold used for judging whether the basic face is the face of the user; d_{k}^{b} is the kth element of the partial vector representing the user category in the right-face feature vector of the basic face; \hat{y}_{k} is the kth element of the labelled user category vector, whose elements are 1 for the user to whom the basic face belongs and 0 otherwise; K is the number of user categories, k represents a user category, and k is an integer from 1 to K.
The loss value is calculated according to the following formula:

Loss = loss_1 + 0.5\,loss_2 + 0.5\,loss_3

wherein Loss is the total loss value, loss_1 is the front face loss value, loss_2 is the left side face loss value, and loss_3 is the right side face loss value.
In this embodiment, the front face threshold margin_x is 1, the left side face threshold margin_y is 0.5, and the right side face threshold margin_z is 0.5.
The front face provides more features for judging identity, so the front face loss value is obtained by feature comparison alone. The left-face and right-face feature vectors provide fewer features for judging identity, so the left and right side face loss values combine feature comparison with classification of which user the face belongs to.
By adopting the scheme, the judgment on the characteristics of the left side face and the right side face is enhanced.
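The loss functions above can be sketched as follows, assuming the first n elements of a left-face or right-face feature vector form the facial-feature part and the remaining K elements form the user-category part (the text says both parts exist but not their layout), and using this embodiment's thresholds as defaults:

```python
import torch
import torch.nn.functional as F

def face_part_loss(b, c, e, margin):
    """Triplet-style term: basic face vs difficult correct and difficult error."""
    pos = ((b - c) ** 2).sum(dim=-1)
    neg = ((b - e) ** 2).sum(dim=-1)
    return torch.clamp(pos - neg + margin, min=0).mean()

def side_loss(b, c, e, labels, n, margin):
    """Triplet term on the facial part plus cross-entropy on the category part;
    cross_entropy applies log-softmax to the category elements, treated as logits."""
    trip = face_part_loss(b[..., :n], c[..., :n], e[..., :n], margin)
    ce = F.cross_entropy(b[..., n:], labels)   # -sum_k y_hat_k log c_k
    return trip + ce

def total_loss(feats_b, feats_c, feats_e, labels, n=128,
               margin_x=1.0, margin_y=0.5, margin_z=0.5):
    fb, lb, rb = feats_b   # front/left/right vectors of the basic face
    fc, lc, rc = feats_c   # ... of the difficult correct face
    fe, le, re = feats_e   # ... of the difficult error face
    loss1 = face_part_loss(fb, fc, fe, margin_x)
    loss2 = side_loss(lb, lc, le, labels, n, margin_y)
    loss3 = side_loss(rb, rc, re, labels, n, margin_z)
    return loss1 + 0.5 * loss2 + 0.5 * loss3
```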
Optionally, obtaining the loan classification tree based on the user information includes:
a root node is obtained, wherein the root node comprises a plurality of loan categories.
A first classification attribute is obtained based on the attribute states and weights of the user information; the information entropy of the first classification attribute for classifying the loan categories in the root node is smaller than that of the other user attributes, and the attribute weights represent the importance of each attribute.
The loan categories in the root node are classified based on the first classification attribute to obtain the first-layer child nodes; the number of first-layer child nodes equals the number of attribute states of the first classification attribute, and each first-layer child node contains a set of the classified loan categories.
The other classification attributes are sorted by weight from large to small to obtain the node attributes, which comprise a second, a third and a fourth classification attribute; the weight of the second classification attribute is greater than that of the third, and the weight of the third is greater than that of the fourth.
The loan categories in the first-layer child nodes are classified based on the second classification attribute to obtain the second-layer child nodes; the number of second-layer child nodes equals the number of attribute states of the second classification attribute, and each second-layer child node contains a set of the classified loan categories.
Classification is repeated in this way until the last layer of node attributes is reached, at which point classification stops and a classification tree containing empty nodes is obtained.
And pruning the classification tree containing the empty nodes to obtain the loan classification tree.
In this embodiment, the loan attribute states of the loan amount include large-amount loan, medium-amount loan and small-amount loan; the repayment dates include one-year, two-year, five-year and ten-year repayment; the loan purposes include personal loan and company loan; and the repayment methods include lump-sum repayment and installment repayment. Some of the loan categories of this embodiment are described in Table 1.
TABLE 1

Loan category      Loan amount         Repayment date       Purpose         Repayment method
First-kind loan    Large-amount loan   Two-year repayment   Company loan    Lump-sum repayment
Second-kind loan   Large-amount loan   One-year repayment   Personal loan   Lump-sum repayment
Third-kind loan    Small-amount loan   Five-year repayment  Personal loan   Installment repayment
Fourth-kind loan   Medium-amount loan  Ten-year repayment   Personal loan   Installment repayment
Fifth-kind loan    Medium-amount loan  One-year repayment   Company loan    Lump-sum repayment
Loan categories are classified through the relationship between the attribute states of the user information and the loan attribute states of the loan categories.
For example, with the relationship between the attribute state of the user's payroll and the loan attribute state shown in Table 2, if the attribute state of the user's payroll is low payroll, the loan categories whose loan amount is a small-amount loan are obtained.
TABLE 2

Attribute state of user payroll      Low payroll        Medium payroll      High payroll
Loan attribute state of loan amount  Small-amount loan  Medium-amount loan  Large-amount loan
The loan classification tree is built from the historical user information previously stored in the database, together with the classification attributes used to judge whether a loan can be granted and, if so, of which type. The classification attribute used to classify the loan categories in the root node is selected through the information entropy, which represents the degree of disorder of the classified loan categories: the smaller the disorder, the more easily non-loanable attribute states are separated out, and the more effective the later pruning operation. Since the branch where an empty node sits is pruned later, the remaining classification attributes are chosen by weight alone; and since every node in the same layer uses the same classification attribute, lookup is convenient in later use.
By adopting this scheme, the loan classification tree is built by calculating information entropy, which yields a tree with fewer nodes, saves computer space and computation time, and makes searching convenient.
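A sketch of building the classification tree layer by layer as described above; the Node structure, the row and attribute names, and the Table 2 style mapping from user attribute states to loan attribute states are assumptions.

```python
class Node:
    def __init__(self, label=None, children=None):
        self.label = label              # loan categories at a leaf; None = empty node
        self.children = children or {}  # attribute state -> child Node

def build_tree(loan_rows, ordered_attributes, state_map):
    """loan_rows: dicts shaped like the Table 1 rows; ordered_attributes: the
    first classification attribute followed by the rest sorted by weight;
    state_map[attr]: user attribute state -> matching loan attribute state."""
    if not ordered_attributes:
        categories = [row["Loan category"] for row in loan_rows]
        return Node(label=categories or None)   # no category left -> empty node
    attr, rest = ordered_attributes[0], ordered_attributes[1:]
    node = Node()
    for user_state, loan_state in state_map[attr].items():
        subset = [row for row in loan_rows if row[attr] == loan_state]
        node.children[user_state] = build_tree(subset, rest, state_map)
    return node
```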
Optionally, pruning the classification tree containing empty nodes to obtain the loan classification tree includes:
obtaining, from the classification tree containing empty nodes, the last-layer leaf nodes which are not empty;
traversing from bottom to top starting at those non-empty last-layer leaf nodes to obtain the key nodes, the key nodes being all nodes through which a non-empty last-layer leaf node can be reached from the root node;
traversing the key nodes from top to bottom, deleting the subtrees where non-key nodes are located and replacing those non-key nodes with empty nodes to obtain the loan classification tree, the non-key nodes being all nodes of the tree that are not key nodes.
An empty node in the loan classification tree represents an attribute state under which no loan can be granted, so the branch carrying that attribute state is deleted and replaced by an empty node.
By adopting this scheme, the paths searched in the loan classification tree are reduced.
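A sketch of the pruning step, reusing the Node class from the building sketch above: a subtree is kept only if it reaches a non-empty last-layer leaf, and pruned subtrees are replaced with empty nodes.

```python
def prune(node):
    """Return True if this subtree reaches a non-empty last-layer leaf."""
    if not node.children:                    # last-layer leaf
        return node.label is not None
    alive = False
    for state, child in list(node.children.items()):
        if prune(child):
            alive = True                     # child lies on a key-node path
        else:
            node.children[state] = Node()    # replace dead subtree with empty node
    return alive
```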
Optionally, the first classification attribute is obtained based on the attribute state and the weight of the user information; the information entropy of the first classification attribute for classifying the loan categories in the root node is smaller than the information entropy of the plurality of other user attributes for classifying the loan categories in the root node; the attribute weight represents the importance of each attribute, including:
obtaining information entropies of various user attributes based on the user attributes;
the information entropy is calculated according to the following formula:

H(D) = -\sum_{k=1}^{K} p_k \log p_k

wherein H(D) is the information entropy of the user attribute D; k represents the kth state of the user attribute and takes integer values from 1 to K; K is the number of all states of the user attribute; and p_k is the probability that the state is k;
obtaining a weighted information entropy for each user attribute based on its information entropy, wherein the weighted information entropy of a user attribute is the reciprocal of its information entropy multiplied by the attribute's weight;
obtaining the first classification attribute based on the weighted information entropy, wherein the weighted information entropy of the first classification attribute is larger than that of the other user attributes.
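A sketch of selecting the first classification attribute by weighted reciprocal entropy, following H(D) = -\sum_k p_k \log p_k; the epsilon guard against zero entropy and the data layout are assumptions.

```python
import math
from collections import Counter

def entropy(states):
    """states: the attribute-state values observed for one user attribute."""
    total = len(states)
    return -sum((c / total) * math.log(c / total) for c in Counter(states).values())

def first_classification_attribute(user_rows, attributes, weights, eps=1e-9):
    """Pick the attribute whose weight times the reciprocal of its entropy is
    largest, i.e. whose weighted information entropy exceeds the others'."""
    def weighted(attr):
        h = entropy([row[attr] for row in user_rows])
        return weights[attr] / (h + eps)
    return max(attributes, key=weighted)
```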
optionally, inputting the current user information into the loan classification tree to obtain a loan classification set which can be loaned by the user; the category attribute of the loan category comprises loan amount, repayment date, purpose and repayment mode;
in this embodiment, the loan attribute status of the loan amount includes a large-amount loan, a medium-amount loan, and a small-amount loan. The repayment date comprises one-year repayment, two-year repayment, five-year repayment and ten-year repayment. The loan uses include personal loans and corporate loans. The repayment means includes one-time repayment and installment. The partial loan categories for this embodiment are described in table 1.
TABLE 1
Loan category Amount of loan Date of repayment Use of Repayment method
Loan of the first kind Loan of great margin Two-year repayment Company loan Disposable repayment
Loan of the second kind Loan of great margin One year repayment Personal loan Disposable repayment
Loan of the third kind Loan of small amount Five-year repayment Personal loan Amortization
Loan of the fourth kind Loan of money in gold Ten years repayment Personal loan Amortization
Loan of the fifth kind Loan of money in gold One year repayment Company loan Disposable compensationAnd also
Based on the above financial big data face recognition method, an embodiment of the present invention further provides a financial management system configured to execute the method. The system comprises: an acquisition module, used for acquiring a face image of the current user, wherein the face image comprises a front face image, a left side face image and a right side face image;
and a face recognition module, used for obtaining a face feature vector from the face image of the current user through the face recognition model, the face feature vector comprising a front-face feature vector, a left-face feature vector and a right-face feature vector, and for comparing the face feature vector with the face feature vectors of other users stored in a database to obtain face information; the face information is a user number.
Optionally, another financial management system provided in an embodiment of the present invention includes:
an acquisition module: acquiring a face image of a current user, wherein the face image comprises a front face image, a left side face image and a right side face image; obtaining user information stored in a database, wherein the user information comprises attribute states of a plurality of user attributes, and the user attributes comprise user credit levels, user identity information, user real estate information and user wage information; the attribute states of the user credit level comprise a negative level, a zero level, a first level and a second level; the attribute state of the user identity information comprises personal loan and enterprise loan; the attribute state of the user real estate information comprises mortgageable and non-mortgageable; the attribute state of the user payroll information comprises low payroll, medium payroll and high payroll;
a face recognition module: obtaining a face feature vector based on a face image of a current user through a face recognition model; the face feature vector comprises a face front face feature vector, a face left face feature vector and a face right face feature vector; comparing the face feature vector with face feature vectors of other users stored in a database to obtain face information; the face information is a user number;
a loan classification module: obtaining a loan classification tree based on the user information; obtaining current user information based on the face information; the current user information represents the attribute state of the user who wants to loan; inputting the current user information into the loan classification tree to obtain a loan type set which can be loaned by the user; the attributes of the loan type comprise loan amount, loan interest rate, repayment date, usage and repayment mode;
a loan module: according to the user loan category set, based on the loan form submitted by the current user, a loan is issued to the current user;
optionally, the training process of the face recognition model includes:
obtaining a training set, wherein the training set comprises face images and user numbers, and the face images comprise basic faces and difficult faces; the difficult faces comprise difficult error faces and difficult correct faces; a difficult error face is a face image that looks similar to the basic face but belongs to a different user; a difficult correct face is a face image that looks dissimilar to the basic face but belongs to the same user; each face image comprises a front face, a left side face and a right side face; the face images are those of a plurality of users in a bank database; the user numbers are the numbers of those users in the bank database, and each number is unique;
inputting the front face of the face image into the front face convolutional neural network to obtain a front face feature vector; the front face feature vector comprises elements representing the facial features of the front face;
inputting the left side face of the face image into the left side face convolutional neural network to obtain a left side face feature vector; the left side face feature vector comprises elements representing left side facial features and elements representing left side face category features;
inputting the right side face of the face image into the right side face convolutional neural network to obtain a right side face feature vector; the right side face feature vector comprises elements representing right side facial features and elements representing right side face category features;
obtaining a loss value based on the front face feature vector, the left face feature vector and the right face feature vector;
and obtaining a maximum number of iterations for training the face recognition model; training stops when the loss value is no greater than a first threshold or the maximum number of iterations is reached, yielding the trained face recognition model.
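Before turning to the loss terms, the three-branch structure can be sketched in PyTorch as below. The residual block counts follow the text (three front face blocks, five per side face branch); the channel width, image size, feature length n, user count K, and the internals of each residual block are assumptions, and in training the loop would simply stop once the loss drops below the first threshold or the iteration cap is hit.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch norm and an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)

class FaceBranch(nn.Module):
    """A chain of residual blocks feeding a fully connected head, as in the text:
    three blocks for the front face branch, five for each side face branch."""
    def __init__(self, num_blocks, out_dim, channels=16, image_size=64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        self.fc = nn.Linear(channels * image_size * image_size, out_dim)

    def forward(self, x):
        return self.fc(self.blocks(self.stem(x)).flatten(1))

n, K = 128, 1000                      # assumed feature length and number of bank users
front_net = FaceBranch(3, n)          # front face: facial-feature vector only
left_net = FaceBranch(5, n + K)       # side faces: facial-feature part plus user-category part
right_net = FaceBranch(5, n + K)

x = torch.randn(2, 3, 64, 64)         # dummy batch of 64x64 RGB face crops
print(front_net(x).shape, left_net(x).shape)  # torch.Size([2, 128]) torch.Size([2, 1128])
```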
Optionally, obtaining a loss value based on the front face feature vector, the left side face feature vector, and the right side face feature vector includes:
obtaining a front face loss value based on the front face feature vector of the basic face, the front face feature vector of the difficult correct face and the front face feature vector of the difficult error face;
the front face loss value is computed as:

$$\mathrm{loss}_1 = \max\!\left(0,\ \sum_{i=1}^{n}\left(x_i^{a}-x_i^{p}\right)^2-\sum_{i=1}^{n}\left(x_i^{a}-x_i^{n}\right)^2+\mathrm{margin}_x\right)$$

where $\mathrm{loss}_1$ is the front face loss value, $x_i^{a}$ is the $i$th element of the front face feature vector of the basic face, $x_i^{p}$ is the $i$th element of the front face feature vector of the difficult correct face, $x_i^{n}$ is the $i$th element of the front face feature vector of the difficult error face, $n$ is the length of the feature vector, and $\mathrm{margin}_x$ is the front face threshold used to judge whether a face is that of the user to whom the basic face belongs.
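Read as the triplet-style margin loss reconstructed above, the front face loss could be computed as in this sketch; the margin value and feature length are assumptions:

```python
import numpy as np

def front_face_loss(anchor, hard_pos, hard_neg, margin_x=0.2):
    """loss1 = max(0, ||a - p||^2 - ||a - n||^2 + margin_x): the difficult correct
    face is pulled toward the basic face, the difficult error face is pushed away."""
    d_pos = np.sum((anchor - hard_pos) ** 2)   # distance to the difficult correct face
    d_neg = np.sum((anchor - hard_neg) ** 2)   # distance to the difficult error face
    return max(0.0, d_pos - d_neg + margin_x)

rng = np.random.default_rng(0)
a, p, ne = rng.random(128), rng.random(128), rng.random(128)  # 128-d vectors (assumed length)
print(front_face_loss(a, p, ne))
```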
Obtaining a left side face loss value based on the left side face feature vector of the basic face, the left side face feature vector of the difficult correct face and the left side face feature vector of the difficult error face;
the left side face loss value is computed as:

$$\mathrm{loss}_2 = \max\!\left(0,\ \sum_{i=1}^{n}\left(y_i^{a}-y_i^{p}\right)^2-\sum_{i=1}^{n}\left(y_i^{a}-y_i^{n}\right)^2+\mathrm{margin}_y\right)-\sum_{k=1}^{K} t_k \log c_k^{a}$$

where $\mathrm{loss}_2$ is the left side face loss value; $y_i^{a}$, $y_i^{p}$ and $y_i^{n}$ are the $i$th elements of the partial vectors characterizing facial features in the left side face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; $n$ is the length of that partial vector; $\mathrm{margin}_y$ is the left side face threshold used to judge whether a face is that of the user to whom the basic face belongs; $c_k^{a}$ is the $k$th element of the partial vector representing the user category in the left side face feature vector of the basic face; $t_k$ is the $k$th element of the labeled user category vector, whose elements are 1 when the basic face belongs to the corresponding user and 0 otherwise; $K$ is the number of user categories, $k$ denotes a user category, and $k$ is an integer from 1 to $K$.
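Under this reading, the left (and, symmetrically, the right) side face loss combines the triplet term with a cross-entropy term over the category part of the vector. A sketch follows; the softmax over the category logits and all sizes are assumptions:

```python
import numpy as np

def side_face_loss(feat_a, feat_p, feat_n, class_part_a, label_one_hot, margin=0.2):
    """Triplet margin term over the facial-feature part of the side face vector,
    plus a cross-entropy term over its user-category part."""
    triplet = max(0.0, np.sum((feat_a - feat_p) ** 2)
                      - np.sum((feat_a - feat_n) ** 2) + margin)
    probs = np.exp(class_part_a) / np.exp(class_part_a).sum()   # softmax over category logits
    cross_entropy = -np.sum(label_one_hot * np.log(probs + 1e-12))
    return triplet + cross_entropy

rng = np.random.default_rng(1)
feats = [rng.random(128) for _ in range(3)]   # assumed facial-feature length n = 128
logits = rng.random(1000)                     # assumed K = 1000 user categories
label = np.zeros(1000); label[42] = 1.0       # one-hot labeled user category vector
print(side_face_loss(*feats, logits, label))
```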
Obtaining a right side face loss value, through a loss function of the same form as the left side face loss, based on the right side face feature vector of the basic face, the right side face feature vector of the difficult correct face and the right side face feature vector of the difficult error face;
the right side face loss value is computed as:

$$\mathrm{loss}_3 = \max\!\left(0,\ \sum_{i=1}^{n}\left(z_i^{a}-z_i^{p}\right)^2-\sum_{i=1}^{n}\left(z_i^{a}-z_i^{n}\right)^2+\mathrm{margin}_z\right)-\sum_{k=1}^{K} t_k \log c_k^{a}$$

where $\mathrm{loss}_3$ is the right side face loss value; $z_i^{a}$, $z_i^{p}$ and $z_i^{n}$ are the $i$th elements of the partial vectors characterizing facial features in the right side face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; $n$ is the length of that partial vector; $\mathrm{margin}_z$ is the right side face threshold used to judge whether a face is that of the user to whom the basic face belongs; $c_k^{a}$ is the $k$th element of the partial vector representing the user category in the right side face feature vector of the basic face; $t_k$ is the $k$th element of the labeled user category vector, whose elements are 1 when the basic face belongs to the corresponding user and 0 otherwise; $K$ is the number of user categories, $k$ denotes a user category, and $k$ is an integer from 1 to $K$.
The total loss value is computed as:

$$\mathrm{Loss} = \mathrm{loss}_1 + 0.5\,\mathrm{loss}_2 + 0.5\,\mathrm{loss}_3$$

where $\mathrm{Loss}$ is the total loss value, $\mathrm{loss}_1$ is the front face loss value, $\mathrm{loss}_2$ is the left side face loss value, and $\mathrm{loss}_3$ is the right side face loss value.
Optionally, obtaining the loan classification tree based on the user information includes:
obtaining a root node, wherein the root node comprises a plurality of loan categories;
obtaining a first classification attribute based on the user attributes and the root node; the information entropy with which the first classification attribute classifies the loan categories in the root node is smaller than that of every other user attribute; the attribute weight represents the importance of each attribute;
classifying the loan categories in the root node based on the first classification attribute to obtain first-layer child nodes, the number of which equals the number of attribute states of the first classification attribute; each first-layer child node contains a set of the classified loan categories;
sorting the other classification attributes by weight from largest to smallest to obtain the node attributes, which comprise a second, a third and a fourth classification attribute; the weight of the second classification attribute is greater than that of the third, and the weight of the third is greater than that of the fourth;
classifying the loan categories in each first-layer child node based on the second classification attribute to obtain second-layer child nodes, the number of which equals the number of attribute states of the second classification attribute; each second-layer child node contains a set of the classified loan categories;
classification is repeated in this way until the last layer of node attributes is reached, yielding a classification tree that may contain empty nodes;
pruning the empty nodes from this classification tree yields the loan classification tree.
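A minimal sketch of this layer-by-layer construction, reusing the LOAN_CATEGORIES records from the sketch after Table 1; the weight-descending attribute order below is an assumption standing in for the node attributes:

```python
def build_classification_tree(categories, attributes):
    """Split the loan category set by one attribute per tree level, attributes
    ordered by weight from largest to smallest; child nodes that would be
    empty never appear, which plays the role of the pruning step."""
    if not attributes or len(categories) <= 1:
        return categories                                   # leaf: a set of loan categories
    attr, rest = attributes[0], attributes[1:]
    children = {
        state: build_classification_tree(
            [c for c in categories if c[attr] == state], rest)
        for state in sorted({c[attr] for c in categories})  # only non-empty states survive
    }
    return {attr: children}

# LOAN_CATEGORIES: the records sketched after Table 1.
tree = build_classification_tree(LOAN_CATEGORIES, ["amount", "repayment", "use", "date"])
```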
The specific manner in which the respective modules perform operations has been described in detail in the embodiments related to the method, and will not be elaborated upon here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 2, including a memory 504, a processor 502, and a computer program stored on the memory 504 and executable on the processor 502, where the processor 502 implements the steps of any one of the above-mentioned financial big data face recognition methods when executing the program.
In fig. 2, a bus architecture (represented by bus 500) is shown. Bus 500 may include any number of interconnected buses and bridges, and links together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore will not be described further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the above-mentioned financial big data face recognition methods.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A financial big data face recognition method is characterized by comprising the following steps:
acquiring a face image of a current user, wherein the face image comprises a front face image, a left side face image and a right side face image;
obtaining a face feature vector based on a face image of a current user through a face recognition model; the face feature vector comprises a face front face feature vector, a face left face feature vector and a face right face feature vector;
comparing the face feature vector with face feature vectors of other users stored in a database to obtain face information; the face information is a user number;
the face recognition model comprises a front face convolutional neural network, a left side face convolutional neural network and a right side face convolutional neural network; the front face convolutional neural network comprises three front face residual blocks and a front face fully connected network; the front face image is the input of the first front face residual block, the output of the first front face residual block is the input of the second front face residual block, the output of the second front face residual block is the input of the third front face residual block, the output of the third front face residual block is the input of the front face fully connected network, and the output of the front face fully connected network is the front face feature vector; the left side face convolutional neural network comprises five left side face residual blocks and a left side face fully connected network; the left side face image is the input of the first left side face residual block, the output of each left side face residual block is the input of the next, the output of the fifth left side face residual block is the input of the left side face fully connected network, and the output of the left side face fully connected network is the left side face feature vector; the right side face convolutional neural network comprises five right side face residual blocks and a right side face fully connected network; the right side face image is the input of the first right side face residual block, the output of each right side face residual block is the input of the next, the output of the fifth right side face residual block is the input of the right side face fully connected network, and the output of the right side face fully connected network is the right side face feature vector.
2. The method of claim 1, wherein the training process of the face recognition model comprises:
obtaining a training set, wherein the training set comprises face images and user numbers, and the face images comprise basic faces and difficult faces; the difficult faces comprise difficult error faces and difficult correct faces; a difficult error face is a face image that looks similar to the basic face but belongs to a different user; a difficult correct face is a face image that looks dissimilar to the basic face but belongs to the same user; each face image comprises a front face, a left side face and a right side face; the face images are those of a plurality of users in a bank database; the user numbers are the numbers of those users in the bank database, and each number is unique;
inputting the front face of the face image into the front face convolutional neural network to obtain a front face feature vector; the front face feature vector comprises elements representing the facial features of the front face;
inputting the left side face of the face image into the left side face convolutional neural network to obtain a left side face feature vector; the left side face feature vector comprises elements representing left side facial features and elements representing left side face category features;
inputting the right side face of the face image into the right side face convolutional neural network to obtain a right side face feature vector; the right side face feature vector comprises elements representing right side facial features and elements representing right side face category features;
obtaining a loss value based on the front face feature vector, the left face feature vector and the right face feature vector;
and obtaining a maximum number of iterations for training the face recognition model, and stopping the training when the loss value is not greater than a first threshold or the maximum number of iterations is reached, to obtain the trained face recognition model.
3. The method of claim 2, wherein deriving a loss value based on the front face feature vector, the left face feature vector, and the right face feature vector comprises:
obtaining a front face loss value based on the front face feature vector of the basic face, the front face feature vector of the difficult correct face and the front face feature vector of the difficult error face;
the front face loss value is computed as:

$$\mathrm{loss}_1 = \max\!\left(0,\ \sum_{i=1}^{n}\left(x_i^{a}-x_i^{p}\right)^2-\sum_{i=1}^{n}\left(x_i^{a}-x_i^{n}\right)^2+\mathrm{margin}_x\right)$$

where $\mathrm{loss}_1$ is the front face loss value, $x_i^{a}$ is the $i$th element of the front face feature vector of the basic face, $x_i^{p}$ is the $i$th element of the front face feature vector of the difficult correct face, $x_i^{n}$ is the $i$th element of the front face feature vector of the difficult error face, $n$ is the length of the feature vector, and $\mathrm{margin}_x$ is the front face threshold used to judge whether a face is that of the user to whom the basic face belongs.
4. The method of claim 2, wherein deriving a loss value based on the front face feature vector, the left face feature vector, and the right face feature vector further comprises:
obtaining a left side face loss value based on the left side face feature vector of the basic face, the left side face feature vector of the difficult correct face and the left side face feature vector of the difficult error face;
the left side face loss value is computed as:

$$\mathrm{loss}_2 = \max\!\left(0,\ \sum_{i=1}^{n}\left(y_i^{a}-y_i^{p}\right)^2-\sum_{i=1}^{n}\left(y_i^{a}-y_i^{n}\right)^2+\mathrm{margin}_y\right)-\sum_{k=1}^{K} t_k \log c_k^{a}$$

where $\mathrm{loss}_2$ is the left side face loss value; $y_i^{a}$, $y_i^{p}$ and $y_i^{n}$ are the $i$th elements of the partial vectors characterizing facial features in the left side face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; $n$ is the length of that partial vector; $\mathrm{margin}_y$ is the left side face threshold used to judge whether a face is that of the user to whom the basic face belongs; $c_k^{a}$ is the $k$th element of the partial vector representing the user category in the left side face feature vector of the basic face; $t_k$ is the $k$th element of the labeled user category vector, whose elements are 1 when the basic face belongs to the corresponding user and 0 otherwise; $K$ is the number of user categories, $k$ denotes a user category, and $k$ is an integer from 1 to $K$.
5. The method of claim 2, wherein deriving a loss value based on the front face feature vector, the left face feature vector, and the right face feature vector further comprises:
obtaining a right side face loss value, through a loss function of the same form as the left side face loss, based on the right side face feature vector of the basic face, the right side face feature vector of the difficult correct face and the right side face feature vector of the difficult error face;
the right side face loss value is computed as:

$$\mathrm{loss}_3 = \max\!\left(0,\ \sum_{i=1}^{n}\left(z_i^{a}-z_i^{p}\right)^2-\sum_{i=1}^{n}\left(z_i^{a}-z_i^{n}\right)^2+\mathrm{margin}_z\right)-\sum_{k=1}^{K} t_k \log c_k^{a}$$

where $\mathrm{loss}_3$ is the right side face loss value; $z_i^{a}$, $z_i^{p}$ and $z_i^{n}$ are the $i$th elements of the partial vectors characterizing facial features in the right side face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; $n$ is the length of that partial vector; $\mathrm{margin}_z$ is the right side face threshold used to judge whether a face is that of the user to whom the basic face belongs; $c_k^{a}$ is the $k$th element of the partial vector representing the user category in the right side face feature vector of the basic face; $t_k$ is the $k$th element of the labeled user category vector, whose elements are 1 when the basic face belongs to the corresponding user and 0 otherwise; $K$ is the number of user categories, $k$ denotes a user category, and $k$ is an integer from 1 to $K$;
the total loss value is computed as:

$$\mathrm{Loss} = \mathrm{loss}_1 + 0.5\,\mathrm{loss}_2 + 0.5\,\mathrm{loss}_3$$

where $\mathrm{Loss}$ is the total loss value, $\mathrm{loss}_1$ is the front face loss value, $\mathrm{loss}_2$ is the left side face loss value, and $\mathrm{loss}_3$ is the right side face loss value.
6. A financial management system, comprising:
an acquisition module for acquiring a face image of a current user, wherein the face image comprises a front face image, a left side face image and a right side face image;
the face recognition module is used for obtaining a face feature vector based on a face image of a current user through a face recognition model; the face feature vector comprises a face front face feature vector, a face left face feature vector and a face right face feature vector; comparing the face feature vector with face feature vectors of other users stored in a database to obtain face information; the face information is a user number;
the face recognition model comprises a front face convolutional neural network, a left side face convolutional neural network and a right side face convolutional neural network; the front face convolutional neural network comprises three front face residual blocks and a front face fully connected network; the front face image is the input of the first front face residual block, the output of the first front face residual block is the input of the second front face residual block, the output of the second front face residual block is the input of the third front face residual block, the output of the third front face residual block is the input of the front face fully connected network, and the output of the front face fully connected network is the front face feature vector; the left side face convolutional neural network comprises five left side face residual blocks and a left side face fully connected network; the left side face image is the input of the first left side face residual block, the output of each left side face residual block is the input of the next, the output of the fifth left side face residual block is the input of the left side face fully connected network, and the output of the left side face fully connected network is the left side face feature vector; the right side face convolutional neural network comprises five right side face residual blocks and a right side face fully connected network; the right side face image is the input of the first right side face residual block, the output of each right side face residual block is the input of the next, the output of the fifth right side face residual block is the input of the right side face fully connected network, and the output of the right side face fully connected network is the right side face feature vector.
7. The system of claim 6, wherein the training process of the face recognition model comprises:
obtaining a training set, wherein the training set comprises face images and user numbers, and the face images comprise basic faces and difficult faces; the difficult faces comprise difficult error faces and difficult correct faces; a difficult error face is a face image that looks similar to the basic face but belongs to a different user; a difficult correct face is a face image that looks dissimilar to the basic face but belongs to the same user; each face image comprises a front face, a left side face and a right side face; the face images are those of a plurality of users in a bank database; the user numbers are the numbers of those users in the bank database, and each number is unique;
inputting the front face of the face image into the front face convolutional neural network to obtain a front face feature vector; the front face feature vector comprises elements representing the facial features of the front face;
inputting the left side face of the face image into the left side face convolutional neural network to obtain a left side face feature vector; the left side face feature vector comprises elements representing left side facial features and elements representing left side face category features;
inputting the right side face of the face image into the right side face convolutional neural network to obtain a right side face feature vector; the right side face feature vector comprises elements representing right side facial features and elements representing right side face category features;
obtaining a loss value based on the front face feature vector, the left face feature vector and the right face feature vector;
and obtaining a maximum number of iterations for training the face recognition model, and stopping the training when the loss value is not greater than a first threshold or the maximum number of iterations is reached, to obtain the trained face recognition model.
8. The system of claim 7, wherein deriving a loss value based on the front face feature vector, the left face feature vector, and the right face feature vector comprises:
obtaining a front face loss value based on the front face feature vector of the basic face, the front face feature vector of the difficult correct face and the front face feature vector of the difficult error face;
the front face loss value is computed as:

$$\mathrm{loss}_1 = \max\!\left(0,\ \sum_{i=1}^{n}\left(x_i^{a}-x_i^{p}\right)^2-\sum_{i=1}^{n}\left(x_i^{a}-x_i^{n}\right)^2+\mathrm{margin}_x\right)$$

where $\mathrm{loss}_1$ is the front face loss value, $x_i^{a}$ is the $i$th element of the front face feature vector of the basic face, $x_i^{p}$ is the $i$th element of the front face feature vector of the difficult correct face, $x_i^{n}$ is the $i$th element of the front face feature vector of the difficult error face, $n$ is the length of the feature vector, and $\mathrm{margin}_x$ is the front face threshold used to judge whether a face is that of the user to whom the basic face belongs.
9. The system of claim 7, wherein deriving a loss value based on the front face feature vector, the left face feature vector, and the right face feature vector further comprises:
obtaining a left side face loss value based on the left side face feature vector of the basic face, the left side face feature vector of the difficult correct face and the left side face feature vector of the difficult error face;
the left side face loss value is computed as:

$$\mathrm{loss}_2 = \max\!\left(0,\ \sum_{i=1}^{n}\left(y_i^{a}-y_i^{p}\right)^2-\sum_{i=1}^{n}\left(y_i^{a}-y_i^{n}\right)^2+\mathrm{margin}_y\right)-\sum_{k=1}^{K} t_k \log c_k^{a}$$

where $\mathrm{loss}_2$ is the left side face loss value; $y_i^{a}$, $y_i^{p}$ and $y_i^{n}$ are the $i$th elements of the partial vectors characterizing facial features in the left side face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; $n$ is the length of that partial vector; $\mathrm{margin}_y$ is the left side face threshold used to judge whether a face is that of the user to whom the basic face belongs; $c_k^{a}$ is the $k$th element of the partial vector representing the user category in the left side face feature vector of the basic face; $t_k$ is the $k$th element of the labeled user category vector, whose elements are 1 when the basic face belongs to the corresponding user and 0 otherwise; $K$ is the number of user categories, $k$ denotes a user category, and $k$ is an integer from 1 to $K$.
10. The system of claim 7, wherein deriving a loss value based on the front face feature vector, the left face feature vector, and the right face feature vector further comprises:
obtaining a right side face loss value, through a loss function of the same form as the left side face loss, based on the right side face feature vector of the basic face, the right side face feature vector of the difficult correct face and the right side face feature vector of the difficult error face;
the right side face loss value is computed as:

$$\mathrm{loss}_3 = \max\!\left(0,\ \sum_{i=1}^{n}\left(z_i^{a}-z_i^{p}\right)^2-\sum_{i=1}^{n}\left(z_i^{a}-z_i^{n}\right)^2+\mathrm{margin}_z\right)-\sum_{k=1}^{K} t_k \log c_k^{a}$$

where $\mathrm{loss}_3$ is the right side face loss value; $z_i^{a}$, $z_i^{p}$ and $z_i^{n}$ are the $i$th elements of the partial vectors characterizing facial features in the right side face feature vectors of the basic face, the difficult correct face and the difficult error face, respectively; $n$ is the length of that partial vector; $\mathrm{margin}_z$ is the right side face threshold used to judge whether a face is that of the user to whom the basic face belongs; $c_k^{a}$ is the $k$th element of the partial vector representing the user category in the right side face feature vector of the basic face; $t_k$ is the $k$th element of the labeled user category vector, whose elements are 1 when the basic face belongs to the corresponding user and 0 otherwise; $K$ is the number of user categories, $k$ denotes a user category, and $k$ is an integer from 1 to $K$;
the total loss value is computed as:

$$\mathrm{Loss} = \mathrm{loss}_1 + 0.5\,\mathrm{loss}_2 + 0.5\,\mathrm{loss}_3$$

where $\mathrm{Loss}$ is the total loss value, $\mathrm{loss}_1$ is the front face loss value, $\mathrm{loss}_2$ is the left side face loss value, and $\mathrm{loss}_3$ is the right side face loss value.