CN111461654A - Face recognition sign-in method and device based on deep learning algorithm - Google Patents

Face recognition sign-in method and device based on deep learning algorithm

Info

Publication number
CN111461654A
CN111461654A (application CN202010241640.7A)
Authority
CN
China
Prior art keywords
face
image
unit
expressing
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010241640.7A
Other languages
Chinese (zh)
Inventor
谷雨
王伟
王朝阳
徐洪福
蒋曦
刘志远
刘盼盼
刘海峰
刘东亮
王曼曼
杨越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
State Grid Hebei Electric Power Co Ltd
Cangzhou Power Supply Co of State Grid Hebei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
State Grid Hebei Electric Power Co Ltd
Cangzhou Power Supply Co of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Hebei Electric Power Co Ltd, Cangzhou Power Supply Co of State Grid Hebei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010241640.7A priority Critical patent/CN111461654A/en
Publication of CN111461654A publication Critical patent/CN111461654A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/109 Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q 10/1091 Recording time for administrative or management purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Abstract

The application discloses a face recognition check-in method and device based on a deep learning algorithm. The face recognition check-in method comprises the following steps: collecting a person image; processing the person image based on Haar features and the integral image principle to determine a face region in the person image; determining the position of each key point in the face region; cropping the face region from the person image and correcting the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image; inputting the face image into a preset convolutional neural network to obtain face feature information; and matching the face feature information against a preset personnel database; if the matching succeeds and the matched person has not yet checked in, a check-in is recorded for the matched person at the current time. The technical scheme provided by the application improves the accuracy of face recognition check-in while maintaining high efficiency.

Description

Face recognition sign-in method and device based on deep learning algorithm
Technical Field
The application relates to the technical field of electronic check-in, in particular to a face recognition check-in method and device based on a deep learning algorithm.
Background
The human face is the most common biometric feature used to determine a person's identity, and research on face recognition and related technologies has significant theoretical and practical value. As a credential for identity authentication, the face embodies convenience, since it is always carried with the person, and security, since it cannot be lost. Authenticating people by their own biometric features also matches human intuition about identity, and is the development trend at present and in the future.
Existing face recognition check-in methods require the user to stand at a preset distance from the check-in device and remain still during check-in, which is neither convenient nor efficient. Moreover, if the user moves or the lighting is poor, the captured image becomes unclear, which degrades the detection result and in turn reduces the accuracy of face recognition check-in.
Disclosure of Invention
The application provides a face recognition check-in method and device based on a deep learning algorithm, which can improve the accuracy of face recognition check-in while maintaining high efficiency.
In order to achieve this technical effect, a first aspect of the present application provides a face recognition check-in method based on a deep learning algorithm, where the face recognition check-in method includes:
acquiring a person image, wherein the person image is an image of a person entering a preset area;
processing the person image based on Haar features and the integral image principle to determine a face region in the person image;
determining the position of each key point in the face region;
cropping the face region from the person image, and correcting the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
inputting the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, wherein the convolutional neural network is trained in advance on face image samples;
matching the face feature information against a preset personnel database, wherein the personnel database contains face feature information entered in advance for each person;
and, if the matching succeeds and the matched person has not yet checked in, recording a check-in for the matched person at the current time.
Based on the first aspect of the present application, in a first possible implementation manner, the processing of the person image based on Haar features and the integral image principle includes:
calculating the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
training a weak classifier for each Haar feature based on the feature values;
generating strong classifiers based on the weak classifiers;
cascading the strong classifiers to form a cascade classifier;
and processing the person image with the cascade classifier to determine the face region in the person image.
Based on the first aspect of the present application or the first possible implementation manner of the first aspect, in a second possible implementation manner, determining the position of each key point in the face region specifically includes: determining the position of each key point in the face region based on a random forest algorithm.
Based on the first aspect of the present application or the first possible implementation manner of the first aspect, in a third possible implementation manner, the position correction of each key point in the face region based on a spatial transformation principle and preset key point reference positions includes:
normalizing the face region and each key point in the face region based on a transformation matrix, wherein the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
and correcting the position of each key point in the normalized face region based on the key point reference positions.
A second aspect of the present application provides a face recognition check-in device based on a deep learning algorithm, where the face recognition check-in device includes:
an acquisition unit, configured to acquire a person image, where the person image is an image of a person entering a preset area;
a first processing unit, configured to process the person image based on Haar features and the integral image principle to determine a face region in the person image;
a determining unit, configured to determine the position of each key point in the face region;
a cropping unit, configured to crop the face region from the person image;
a correction unit, configured to correct the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
an input unit, configured to input the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, where the convolutional neural network is trained in advance on face image samples;
a matching unit, configured to match the face feature information against a preset personnel database, where the personnel database contains face feature information entered in advance for each person;
and a check-in unit, configured to record a check-in for the matched person at the current time when the matching succeeds and the matched person has not yet checked in.
Based on the second aspect of the present application, in a first possible implementation manner, the first processing unit includes:
a calculation unit, configured to calculate the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
a training unit, configured to train a weak classifier for each Haar feature based on the feature values;
a generating unit, configured to generate strong classifiers based on the weak classifiers;
a cascade unit, configured to cascade the strong classifiers to form a cascade classifier;
and a sub-processing unit, configured to process the person image with the cascade classifier to determine the face region in the person image.
Based on the second aspect of the present application or the first possible implementation manner of the second aspect, in a second possible implementation manner, the determining unit is specifically configured to: determine the position of each key point in the face region based on a random forest algorithm.
Based on the second aspect of the present application or the first possible implementation manner of the second aspect, in a third possible implementation manner, the correction unit includes:
a second processing unit, configured to normalize the face region and each key point in the face region based on a transformation matrix, where the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
and a sub-correction unit, configured to correct the position of each key point in the normalized face region based on the key point reference positions.
A third aspect of the present application provides a face recognition check-in device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 4 when executing the computer program.
A fourth aspect of the application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
In view of the above, the present application discloses a face recognition check-in method and device based on a deep learning algorithm. In the method, a person image is collected; the person image is processed based on Haar features and the integral image principle to determine the face region in it; the position of each key point in the face region is determined; the face region is cropped from the person image and the key point positions are corrected based on a spatial transformation principle and preset key point reference positions to obtain a face image; the face image is input into a preset convolutional neural network, which outputs face feature information; the face feature information is matched against a preset personnel database; and if the matching succeeds and the matched person has not yet checked in, a check-in is recorded for the matched person at the current time. On the one hand, because the person image is an image of a person entering a preset area, and the preset area is a region that can be configured according to user requirements, person images captured within that region are collected and recognized automatically; people therefore do not need to stand at a fixed position to be recognized, which improves check-in efficiency. On the other hand, the face region is obtained by processing the person image with Haar features and the integral image principle, and is cropped and corrected before being input into the convolutional neural network; the more accurate the image fed into the convolutional neural network, the more accurate its result, so the accuracy of face recognition check-in can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a face recognition check-in method provided by the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a face recognition check-in method provided by the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a face recognition check-in device provided by the present application;
fig. 4 is a schematic structural diagram of another embodiment of a face recognition check-in device provided by the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways than those described herein, and it will be apparent to those of ordinary skill in the art that the present application is not limited by the specific embodiments disclosed below.
Example one
The application provides a face recognition check-in method based on a deep learning algorithm. As shown in FIG. 1, the face recognition check-in method comprises the following steps:
Step 101, acquiring a person image;
the person image is an image of a person entering a preset area;
in this embodiment of the application, the execution subject of step 101 may be set according to actual requirements; the device used to acquire the person image may be deployed at the entrance of the preset area, or at any other place in the preset area suitable for check-in, which is not limited herein.
Step 102, processing the person image based on Haar features and the integral image principle to determine the face region in the person image;
optionally, as shown in FIG. 2, the processing of the person image based on Haar features and the integral image principle includes:
Step 1021, calculating the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
specifically, the Haar features comprise four types: edge features, linear features, central features and diagonal features. These are combined into feature templates, each containing white and black rectangles, and the feature value of a template is the sum of the pixels under the white rectangles minus the sum of the pixels under the black rectangles;
it should be noted that the features of a human face can be described by the rectangle features of the person image: for example, the eyes are darker than the cheeks, the two sides of the nose bridge are darker than the bridge itself, and the mouth is darker than its surroundings. The feature value therefore reflects the gray-level variation of the person image;
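As an illustration of how the integral image accelerates these rectangle sums, the following minimal NumPy sketch (function names and the two-rectangle feature layout are illustrative, not taken from the patent) computes a summed-area table once and then evaluates any rectangle sum with four lookups:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table, zero-padded on the top and left:
    ii[y, x] holds the sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return ii

def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> int:
    """Sum of the w-by-h rectangle with top-left corner (x, y),
    in four lookups regardless of rectangle size."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def haar_edge_feature(ii: np.ndarray, x: int, y: int, w: int, h: int) -> int:
    """Two-rectangle edge feature: white (left) half minus black (right) half."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

Because every rectangle sum costs four lookups, the tens of thousands of Haar feature values per detection window can be evaluated without re-summing pixels.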
Step 1022, training a weak classifier for each Haar feature based on the feature values;
in this embodiment of the present application, training one weak classifier for each Haar feature based on the feature values includes:
for each Haar feature, calculating its feature value on all training samples and sorting the feature values;
scanning the sorted feature values and, for each element of the sorted table, calculating the following quantities:
the weight sum $T_+$ of all positive examples;
the weight sum $T_-$ of all negative examples;
the weight sum $S_+$ of the positive examples before the current element;
the weight sum $S_-$ of the negative examples before the current element;
a number between the feature value of the current element and the previous feature value is selected as a threshold, and the classification error $e$ of that threshold is calculated by a first formula:

$$e = \min\bigl(S_+ + (T_- - S_-),\; S_- + (T_+ - S_+)\bigr)$$

the sorted table is scanned from beginning to end, and the threshold with the minimum classification error is selected as the weak classifier;
for example, assume 2000 face samples and 4000 non-face samples with a sample size of 20 × 20, which yields 78,460 Haar features. For any one feature $f_i$, calculating $f_i$ on the 2000 face samples and the 4000 non-face samples gives 6000 feature values, which are then sorted;
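A compact sketch of this threshold search is given below, assuming a vector of feature values, binary labels and AdaBoost-style sample weights (the function name and the returned polarity convention are illustrative):

```python
import numpy as np

def train_stump(values, labels, weights):
    """Pick the threshold with minimum weighted error for one Haar feature,
    following e = min(S+ + (T- - S-), S- + (T+ - S+)).
    labels: 1 for face samples, 0 for non-face samples."""
    order = np.argsort(values)
    v, l, w = values[order], labels[order], weights[order]

    t_pos = w[l == 1].sum()           # T+: total weight of positive examples
    t_neg = w[l == 0].sum()           # T-: total weight of negative examples
    s_pos = np.cumsum(w * (l == 1))   # S+: positive weight up to each element
    s_neg = np.cumsum(w * (l == 0))   # S-: negative weight up to each element

    # error when samples below the threshold are labelled negative vs. positive
    err_neg_below = s_pos + (t_neg - s_neg)
    err_pos_below = s_neg + (t_pos - s_pos)
    errs = np.minimum(err_neg_below, err_pos_below)

    best = int(np.argmin(errs))
    polarity = 1 if err_pos_below[best] <= err_neg_below[best] else -1
    # threshold between the current feature value and the next one
    thr = v[best] if best == len(v) - 1 else (v[best] + v[best + 1]) / 2.0
    return thr, polarity, float(errs[best])
```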
Step 1023, generating strong classifiers based on the weak classifiers;
in this embodiment of the application, generating a strong classifier based on the weak classifiers includes: combining the weak classifiers trained for the individual Haar features to form a strong classifier;
Step 1024, cascading the strong classifiers to form a cascade classifier;
Step 1025, processing the person image with the cascade classifier to determine the face region in the person image.
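In practice this detection step can be exercised with OpenCV's cascade implementation; the sketch below uses a pre-trained frontal-face cascade that ships with OpenCV, whereas the patent trains its own cascade as described above (file names and parameter values are illustrative):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("person.jpg")                 # collected person image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # cascades run on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                     # detected face regions
    face_region = img[y:y + h, x:x + w]        # crop for the later steps
```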
Step 103, determining the position of each key point in the face region;
optionally, determining the position of each key point in the face region specifically includes: determining the position of each key point in the face region based on a random forest algorithm;
specifically, the position of each key point in the face region may be determined by classification: one random forest classifier is trained per key point. A random forest classifier consists of N decision trees; each decision tree is itself a classifier, each node of a decision tree is a weak classifier, and the decision of the random forest is the average of the classification results of all its decision trees;
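The sketch below illustrates the idea with scikit-learn, training one forest per key point on flattened face patches; the training data, patch size and the use of a regression forest (whose prediction is likewise the average over all trees) are stand-ins for the patent's own per-keypoint classifier training:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in training data: flattened 32x32 face patches and the labelled
# (x, y) position of one key point in each patch.
patches = np.random.rand(500, 32 * 32)
positions = np.random.rand(500, 2) * 32

forest = RandomForestRegressor(n_estimators=50)  # N decision trees
forest.fit(patches, positions)

# The forest's output is the average over all trees, mirroring the
# "average of the classification results of all the decision trees" above.
keypoint_xy = forest.predict(patches[:1])[0]
```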
Step 104, cropping the face region from the person image, and correcting the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
optionally, the position correction of each key point in the face region based on a spatial transformation principle and preset key point reference positions includes:
normalizing the face region and each key point in the face region based on a transformation matrix, where the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
correcting the position of each key point in the normalized face region based on the key point reference positions;
in an embodiment of the present application, correcting the position of each normalized key point based on the key point reference positions may include:
transforming the face region to a preset size, and then correcting the position of each key point in the transformed face region based on the preset key point reference positions, which are listed in Table 1;
(Table 1, the preset key point reference positions, is reproduced only as an image in the original publication.)
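A minimal NumPy sketch of this normalization follows; the helper names and the example parameter values are illustrative:

```python
import numpy as np

def make_transform(theta, sx, sy, p, q):
    """Homogeneous rotate-scale-translate matrix matching the definitions above."""
    return np.array([
        [sx * np.cos(theta), -sy * np.sin(theta), p],
        [sx * np.sin(theta),  sy * np.cos(theta), q],
        [0.0,                 0.0,                1.0],
    ])

def transform_points(points, m):
    """Apply the matrix to an (n, 2) array of key-point coordinates."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ m.T)[:, :2]

# e.g. rotate by 10 degrees, scale by 1.2, translate toward the reference frame
m = make_transform(np.deg2rad(10), 1.2, 1.2, -30.0, -42.0)
aligned = transform_points(np.array([[120.0, 80.0], [160.0, 82.0]]), m)
```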
Step 105, inputting the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, where the convolutional neural network is trained in advance on face image samples;
in an embodiment of the present application, the convolutional neural network includes: an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer and a second pooling layer;
inputting the face image into the preset convolutional neural network includes: feeding the face image through the input layer and then, in order, through the first convolutional layer, the first pooling layer, the second convolutional layer and the second pooling layer;
the first convolutional layer uses 16 filters with 5 × 5 convolution kernels, a sliding stride of 1 and no boundary padding; the first pooling layer has 16 channels, a 2 × 2 pooling window and a stride of 2; the second convolutional layer uses 32 filters with 3 × 3 convolution kernels, a stride of 1, no boundary padding and 16 input channels; the second pooling layer has 32 channels, a 2 × 2 pooling window and a stride of 2;
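A sketch of this architecture in PyTorch is shown below; the input size, the ReLU activations and the final flatten are assumptions, since the description above fixes only the convolution and pooling parameters:

```python
import torch.nn as nn

# Assumes, for illustration, a single-channel (grayscale) face image input.
feature_extractor = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=0),   # 16 filters, 5x5, no padding
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),                  # 2x2 window, stride 2, 16 channels
    nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=0),  # 32 filters, 3x3, 16 input channels
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),                  # 2x2 window, stride 2, 32 channels
    nn.Flatten(),                                           # face feature vector
)
```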
Step 106, matching the face feature information against a preset personnel database, where the personnel database contains face feature information entered in advance for each person;
Step 107, if the matching succeeds and the matched person has not yet checked in, recording a check-in for the matched person at the current time.
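Steps 106 and 107 can be pictured with the following sketch, where the cosine-similarity measure, the 0.6 threshold and all helper names are illustrative choices rather than details from the patent:

```python
import numpy as np

def match_person(feature, database, threshold=0.6):
    """Return the id of the closest enrolled person, or None if no entry
    exceeds the similarity threshold. `database` maps person ids to
    feature vectors entered in advance."""
    best_id, best_sim = None, threshold
    for person_id, enrolled in database.items():
        sim = float(np.dot(feature, enrolled) /
                    (np.linalg.norm(feature) * np.linalg.norm(enrolled)))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id

# person = match_person(cnn_feature, staff_db)
# if person is not None and not already_checked_in(person):
#     record_check_in(person, current_time)   # hypothetical check-in helpers
```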
It can be seen that, in the scheme of the application, a person image is collected; the person image is processed based on Haar features and the integral image principle to determine the face region and the position of each key point in it; the face region is cropped from the person image and the key point positions are corrected based on a spatial transformation principle and preset key point reference positions to obtain a face image; the face image is input into a preset convolutional neural network, which outputs face feature information; the face feature information is matched against a preset personnel database; and if the matching succeeds and the matched person has not yet checked in, a check-in is recorded for the matched person at the current time. On the one hand, because the person image is an image of a person entering a preset area, and the preset area is a region that can be configured according to user requirements, person images captured within that region are collected and recognized automatically; people therefore do not need to stand at a fixed position to be recognized, which improves check-in efficiency. On the other hand, the face region is obtained by processing the person image with Haar features and the integral image principle, and is cropped and corrected before being input into the convolutional neural network; the more accurate the image fed into the convolutional neural network, the more accurate its result, so the accuracy of face recognition check-in can be improved.
Example two
An embodiment of the present application further provides a face recognition check-in device based on a deep learning algorithm. As shown in FIG. 3, the face recognition check-in device 30 includes:
an acquisition unit 301, configured to acquire a person image, where the person image is an image of a person entering a preset area;
in this embodiment of the application, the acquisition unit may be set up according to actual requirements, which is not limited herein;
a first processing unit 302, configured to process the person image based on Haar features and the integral image principle to determine the face region in the person image;
a determining unit 303, configured to determine the position of each key point in the face region;
a cropping unit 304, configured to crop the face region from the person image;
a correction unit 305, configured to correct the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
an input unit 306, configured to input the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, where the convolutional neural network is trained in advance on face image samples;
a matching unit 307, configured to match the face feature information against a preset personnel database, where the personnel database contains face feature information entered in advance for each person;
and a check-in unit 308, configured to record a check-in for the matched person at the current time when the matching succeeds and the matched person has not yet checked in.
Optionally, the first processing unit 302 includes:
a calculation unit 3021, configured to calculate the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
a training unit 3022, configured to train a weak classifier for each Haar feature based on the feature values;
a generating unit 3023, configured to generate strong classifiers based on the weak classifiers;
a cascade unit 3024, configured to cascade the strong classifiers to form a cascade classifier;
and a sub-processing unit 3025, configured to process the person image with the cascade classifier to determine the face region in the person image.
Optionally, the determining unit is specifically configured to: determine the position of each key point in the face region based on a random forest algorithm.
Optionally, the correction unit includes:
a second processing unit, configured to normalize the face region and each key point in the face region based on a transformation matrix, where the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
and a sub-correction unit, configured to correct the position of each key point in the normalized face region based on the key point reference positions.
Thus, in the scheme of the application, the face recognition check-in device collects a person image; processes the person image based on Haar features and the integral image principle to determine the face region and the position of each key point in it; crops the face region from the person image and corrects the key point positions based on a spatial transformation principle and preset key point reference positions to obtain a face image; inputs the face image into a preset convolutional neural network, which outputs face feature information; and matches the face feature information against a preset personnel database. If the matching succeeds and the matched person has not yet checked in, a check-in is recorded for the matched person at the current time. On the one hand, because the person image is an image of a person entering a preset area, and the preset area is a region that can be configured according to user requirements, person images captured within that region are collected and recognized automatically; people therefore do not need to stand at a fixed position to be recognized, which improves check-in efficiency. On the other hand, the face region is obtained by processing the person image with Haar features and the integral image principle, and is cropped and corrected before being input into the convolutional neural network; the more accurate the image fed into the convolutional neural network, the more accurate its result, so the accuracy of face recognition check-in can be improved.
EXAMPLE III
An embodiment of the application provides a face recognition check-in device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the memory stores software programs and modules, the processor performs the various functional applications and data processing by running the software programs and modules stored in the memory, and the memory and the processor are connected by a bus.
In this embodiment of the present application, the face recognition check-in device may implement the following steps when the processor runs the computer program stored in the memory:
acquiring a person image, wherein the person image is an image of a person entering a preset area;
processing the person image based on Haar features and the integral image principle to determine a face region in the person image;
determining the position of each key point in the face region;
cropping the face region from the person image, and correcting the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
inputting the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, wherein the convolutional neural network is trained in advance on face image samples;
matching the face feature information against a preset personnel database, wherein the personnel database contains face feature information entered in advance for each person;
and, if the matching succeeds and the matched person has not yet checked in, recording a check-in for the matched person at the current time.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first, the processing of the person image based on Haar features and the integral image principle includes:
calculating the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
training a weak classifier for each Haar feature based on the feature values;
generating strong classifiers based on the weak classifiers;
cascading the strong classifiers to form a cascade classifier;
and processing the person image with the cascade classifier to determine the face region in the person image.
In a third possible implementation manner provided on the basis of the first or the second possible implementation manner, determining the position of each key point in the face region specifically includes: determining the position of each key point in the face region based on a random forest algorithm.
In a fourth possible implementation manner provided on the basis of the first or the second possible implementation manner, the position correction of each key point in the face region based on a spatial transformation principle and preset key point reference positions includes:
normalizing the face region and each key point in the face region based on a transformation matrix, wherein the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
and correcting the position of each key point in the normalized face region based on the key point reference positions.
Therefore, in the scheme of the application, a person image is collected; the person image is processed based on Haar features and the integral image principle to determine the face region and the position of each key point in it; the face region is cropped from the person image and the key point positions are corrected based on a spatial transformation principle and preset key point reference positions to obtain a face image; the face image is input into a preset convolutional neural network, which outputs face feature information; the face feature information is matched against a preset personnel database; and if the matching succeeds and the matched person has not yet checked in, a check-in is recorded for the matched person at the current time. On the one hand, because the person image is an image of a person entering a preset area, and the preset area is a region that can be configured according to user requirements, person images captured within that region are collected and recognized automatically; people therefore do not need to stand at a fixed position to be recognized, which improves check-in efficiency. On the other hand, the face region is obtained by processing the person image with Haar features and the integral image principle, and is cropped and corrected before being input into the convolutional neural network; the more accurate the image fed into the convolutional neural network, the more accurate its result, so the accuracy of face recognition check-in can be improved.
It should be understood that in the embodiments of the present application, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory may include read-only memory and random access memory, and provides instructions and data to the processor; some or all of the memory may also include non-volatile random access memory.
The integrated modules described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-described computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the contents contained in the computer-readable storage medium can be increased or decreased as required by legislation and patent practice in the jurisdiction.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It should be noted that, the methods and the details thereof provided by the foregoing embodiments may be combined with the apparatuses and devices provided by the embodiments, which are referred to each other and are not described again.
Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and the actual implementation may be implemented by another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition check-in method based on a deep learning algorithm, characterized by comprising the following steps:
acquiring a person image, wherein the person image is an image of a person entering a preset area;
processing the person image based on Haar features and the integral image principle to determine a face region in the person image;
determining the position of each key point in the face region;
cropping the face region from the person image, and correcting the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
inputting the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, wherein the convolutional neural network is trained in advance on face image samples;
matching the face feature information against a preset personnel database, wherein the personnel database contains face feature information entered in advance for each person;
and, if the matching succeeds and the matched person has not yet checked in, recording a check-in for the matched person at the current time.
2. The face recognition check-in method of claim 1, wherein the processing of the person image based on Haar features and the integral image principle comprises:
calculating the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
training a weak classifier for each Haar feature based on the feature values;
generating strong classifiers based on the weak classifiers;
cascading the strong classifiers to form a cascade classifier;
and processing the person image with the cascade classifier to determine the face region in the person image.
3. The face recognition check-in method according to claim 1 or 2, wherein determining the position of each key point in the face region specifically comprises: determining the position of each key point in the face region based on a random forest algorithm.
4. The face recognition check-in method according to claim 1 or 2, wherein the position correction of each key point in the face region based on a spatial transformation principle and preset key point reference positions comprises:
normalizing the face region and each key point in the face region based on a transformation matrix, wherein the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
and correcting the position of each key point in the normalized face region based on the key point reference positions.
5. A face recognition check-in device based on a deep learning algorithm, characterized by comprising:
an acquisition unit, configured to acquire a person image, wherein the person image is an image of a person entering a preset area;
a first processing unit, configured to process the person image based on Haar features and the integral image principle to determine a face region in the person image;
a determining unit, configured to determine the position of each key point in the face region;
a cropping unit, configured to crop the face region from the person image;
a correction unit, configured to correct the position of each key point in the face region based on a spatial transformation principle and preset key point reference positions to obtain a face image;
an input unit, configured to input the face image into a preset convolutional neural network to obtain face feature information output by the convolutional neural network, wherein the convolutional neural network is trained in advance on face image samples;
a matching unit, configured to match the face feature information against a preset personnel database, wherein the personnel database contains face feature information entered in advance for each person;
and a check-in unit, configured to record a check-in for the matched person at the current time when the matching succeeds and the matched person has not yet checked in.
6. The face recognition check-in device of claim 5, wherein the first processing unit comprises:
a calculation unit, configured to calculate the feature value of each Haar feature of the person image, the calculation being accelerated by the integral image principle;
a training unit, configured to train a weak classifier for each Haar feature based on the feature values;
a generating unit, configured to generate strong classifiers based on the weak classifiers;
a cascade unit, configured to cascade the strong classifiers to form a cascade classifier;
and a sub-processing unit, configured to process the person image with the cascade classifier to determine the face region in the person image.
7. The face recognition check-in device of claim 5 or 6, wherein the determining unit is specifically configured to: determine the position of each key point in the face region based on a random forest algorithm.
8. The face recognition check-in device of claim 5 or 6, wherein the correction unit comprises:
a second processing unit, configured to normalize the face region and each key point in the face region based on a transformation matrix, wherein the transformation matrix is:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta & p \\ s_x\sin\theta & s_y\cos\theta & q \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $\theta$ denotes the angle of counterclockwise rotation about the origin, $s_x$ the magnification of the abscissa, $s_y$ the magnification of the ordinate, $p$ the translation distance of the abscissa, $q$ the translation distance of the ordinate, $(x, y)$ the coordinates of a point before transformation, and $(x', y')$ the coordinates after transformation;
and a sub-correction unit, configured to correct the position of each key point in the normalized face region based on the key point reference positions.
9. A face recognition check-in device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN202010241640.7A 2020-03-31 2020-03-31 Face recognition sign-in method and device based on deep learning algorithm Pending CN111461654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010241640.7A CN111461654A (en) 2020-03-31 2020-03-31 Face recognition sign-in method and device based on deep learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010241640.7A CN111461654A (en) 2020-03-31 2020-03-31 Face recognition sign-in method and device based on deep learning algorithm

Publications (1)

Publication Number Publication Date
CN111461654A true CN111461654A (en) 2020-07-28

Family

ID=71685099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241640.7A Pending CN111461654A (en) 2020-03-31 2020-03-31 Face recognition sign-in method and device based on deep learning algorithm

Country Status (1)

Country Link
CN (1) CN111461654A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111931675A (en) * 2020-08-18 2020-11-13 熵基科技股份有限公司 Coercion alarm method, device, equipment and storage medium based on face recognition
CN112487955A (en) * 2020-11-27 2021-03-12 杭州电子科技大学 Intelligent pickup method for improving R-FCN (R-FCN-fiber channel communication) network based on RGB-D (Red, Green, blue and D) information
CN113434227A (en) * 2021-06-18 2021-09-24 深圳掌酷软件有限公司 Screen locking wallpaper switching method, device, equipment and storage medium
CN113656842A (en) * 2021-08-10 2021-11-16 支付宝(杭州)信息技术有限公司 Data verification method, device and equipment
WO2023142453A1 (en) * 2022-01-28 2023-08-03 中国银联股份有限公司 Biometric identification method, server, and client

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on face identification
CN104183029A (en) * 2014-09-02 2014-12-03 济南大学 Portable quick crowd attendance method
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
CN109800648A (en) * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face datection recognition methods and device based on the correction of face key point

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063683A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on face identification
CN104183029A (en) * 2014-09-02 2014-12-03 济南大学 Portable quick crowd attendance method
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
CN109800648A (en) * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face datection recognition methods and device based on the correction of face key point

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Chengcheng et al. (张成成等): "Application of face recognition technology based on deep learning to classroom check-in" (基于深度学习的人脸识别技术在课堂签到上的应用), Auto Time (时代汽车), no. 04, 5 April 2019 (2019-04-05), pages 26-27 *
SANG Gaoli (桑高丽): Research on Recognition Technology Based on Real Measured 3D Faces (基于真实测量三维人脸的识别技术研究), Xidian University Press, pages 18-19 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111931675A (en) * 2020-08-18 2020-11-13 熵基科技股份有限公司 Coercion alarm method, device, equipment and storage medium based on face recognition
CN112487955A (en) * 2020-11-27 2021-03-12 杭州电子科技大学 Intelligent pickup method for improving R-FCN (R-FCN-fiber channel communication) network based on RGB-D (Red, Green, blue and D) information
CN113434227A (en) * 2021-06-18 2021-09-24 深圳掌酷软件有限公司 Screen locking wallpaper switching method, device, equipment and storage medium
CN113656842A (en) * 2021-08-10 2021-11-16 支付宝(杭州)信息技术有限公司 Data verification method, device and equipment
CN113656842B (en) * 2021-08-10 2024-02-02 支付宝(杭州)信息技术有限公司 Data verification method, device and equipment
WO2023142453A1 (en) * 2022-01-28 2023-08-03 中国银联股份有限公司 Biometric identification method, server, and client

Similar Documents

Publication Publication Date Title
CN111461654A (en) Face recognition sign-in method and device based on deep learning algorithm
CN110326001B (en) System and method for performing fingerprint-based user authentication using images captured with a mobile device
Erdem et al. Combining Haar feature and skin color based classifiers for face detection
US20120294535A1 (en) Face detection method and apparatus
CN103914676A (en) Method and apparatus for use in face recognition
Stojanović et al. A novel neural network based approach to latent overlapped fingerprints separation
KR20170045813A (en) Detecting method and apparatus of biometrics region for user authentication
Vega et al. Biometric personal identification system based on patterns created by finger veins
Ilankumaran et al. Multi-biometric authentication system using finger vein and iris in cloud computing
US8971592B2 (en) Method for determining eye location on a frontal face digital image to validate the frontal face and determine points of reference
Tereikovska et al. Recognition of emotions by facial Geometry using a capsule neural network
CN109598235B (en) Finger vein image authentication method and device
Raja et al. Prognostic evaluation of multimodal biometric traits recognition based human face, finger print and iris images using ensembled SVM classifier
CN111612083B (en) Finger vein recognition method, device and equipment
Buddharpawar et al. Iris recognition based on pca for person identification
Ng et al. An effective segmentation method for iris recognition system
Baker et al. User identification system for inked fingerprint pattern based on central moments
CN111881789A (en) Skin color identification method and device, computing equipment and computer storage medium
Mohammed et al. Developing iris recognition system based on enhanced normalization
CN103226698A (en) Face detection method
CN110458004A (en) A kind of recongnition of objects method, apparatus, equipment and storage medium
Jose et al. Towards building a better biometric system based on vein patterns in human beings
Vélez et al. Robust ear detection for biometric verification
Viriri et al. Improving iris-based personal identification using maximum rectangular region detection
Punyani et al. Iris recognition system using morphology and sequential addition based grouping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200728