CN117037343B - Full-automatic face biological recognition intelligent lock unlocking method and system - Google Patents


Info

Publication number
CN117037343B
CN117037343B (Application CN202311293906.2A)
Authority
CN
China
Prior art keywords
sub
window
degree
gray level
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311293906.2A
Other languages
Chinese (zh)
Other versions
CN117037343A (en)
Inventor
Lai Xiangming (赖湘明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Goodum Electronic Co ltd
Original Assignee
Shenzhen Goodum Electronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Goodum Electronic Co ltd filed Critical Shenzhen Goodum Electronic Co ltd
Priority to CN202311293906.2A priority Critical patent/CN117037343B/en
Publication of CN117037343A publication Critical patent/CN117037343A/en
Application granted granted Critical
Publication of CN117037343B publication Critical patent/CN117037343B/en
Current legal status: Active

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/00174: Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C 9/00563: Electronically operated locks using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation

Abstract

The invention relates to the field of facial recognition and authorization security, and in particular to a fully automatic facial biometric intelligent lock unlocking method and system. The method comprises the following steps: performing multi-scale processing on a facial gray image to obtain reference images at different scales; analyzing how the gray values of the pixel points change within the sub-windows into which each reference image is divided to obtain the initial gray level gradient degree of each sub-window, and correcting the initial gray level gradient degree in combination with the scale level of the reference image to obtain the final gray level gradient degree and the gray gradient direction of the sub-window; obtaining an illumination interference degree based on the difference between the gray gradient directions of the sub-window and its adjacent windows and the final gray level gradient degree; and obtaining a final matching degree based on the illumination interference degree of the sub-windows and the extracted feature values to be detected, thereby completing the unlocking of the intelligent lock. The invention reduces the influence of illumination on face recognition, improves the accuracy of face recognition, and thereby improves the success rate of unlocking the intelligent lock.

Description

Full-automatic face biological recognition intelligent lock unlocking method and system
Technical Field
The invention relates to the field of facial recognition and authorization security, in particular to a full-automatic facial biometric intelligent lock unlocking method and system.
Background
The arrival of the intelligent age has greatly facilitated people's way of life, and facial biometric recognition, an important product of the intelligent age, is applied to many aspects of daily life. An intelligent lock with facial biometric recognition solves the problems that a traditional key is easily lost and that a person whose hands are occupied cannot unlock the door. The lock initially collects the facial information of the user, extracts the feature data of the user's face and stores it in a database; when unlocking, the lock collects the feature data of the current user's face and matches it against the data stored in the database, thereby completing the unlocking process.
In the prior art, an intelligent lock generally collects feature data of the user's face with the Local Binary Patterns (LBP) algorithm and matches it against the feature data stored in a database to complete unlocking. However, because the LBP algorithm is easily influenced by illumination when extracting the features of the user's face, the matching degree between the extracted feature data and the feature data of the user in the database is low, which reduces the accuracy of face recognition and the success rate of unlocking the intelligent lock.
Disclosure of Invention
In order to solve the technical problem that, when the LBP algorithm is used to extract the features of a user's face, the face image is easily influenced by illumination, so that the matching degree between the extracted feature data of the user's face and the feature data of the user in the database is low, reducing the accuracy of face recognition and the success rate of unlocking the intelligent lock, the invention aims to provide a fully automatic facial biometric intelligent lock unlocking method and system. The adopted technical scheme is as follows:
the invention provides a full-automatic face biological recognition intelligent lock unlocking method, which comprises the following steps:
acquiring a facial gray image of a user, and performing multi-scale processing on the facial gray image to obtain reference images with the same size under different scales, wherein the reference images comprise the facial gray image;
performing window division processing on all the reference images to obtain at least two sub-windows of the same size, and taking the other sub-windows within a preset neighborhood range centered on a sub-window as its adjacent windows; drawing straight lines in different directions through the central pixel point of the sub-window, and obtaining the initial gray level gradient degree of the sub-window in each direction according to the change of the gray values of the pixel points on the same straight line in the sub-window and the adjacent windows;
correcting the initial gray level gradient degree of the sub-window at the same position in all the reference images according to the scale levels corresponding to all the reference images to obtain the final gray level gradient degree of the sub-window in each direction; obtaining the gray level gradient direction of the sub-window according to the final gray level gradient degree; obtaining the illumination interference degree of the sub-window according to the difference of the gray level gradient directions between the sub-window and the adjacent window and the final gray level gradient degree of the sub-window;
carrying out gray scale feature extraction processing on the sub-window of the face gray scale image to obtain a feature value to be detected of the sub-window; obtaining the final matching degree of the face gray level image according to the feature values to be detected and the illumination interference degree of all the sub-windows; and unlocking the intelligent lock according to the final matching degree.
Further, obtaining the initial gray level gradient degree of the sub-window in each direction according to the change of the gray values of the pixel points on the same straight line in the sub-window and the adjacent window includes:
taking the pixel points of the sub-window and the adjacent window on the same straight line as target pixel points;
taking any adjacent three target pixel points as a target pixel point group;
taking the difference value of gray values of two adjacent target pixel points in the target pixel point group as a gray difference value, and calculating all gray difference values in the target pixel point group;
calculating the product value of all gray difference values in the target pixel point group as a judging parameter, if the judging parameter is larger than or equal to 0, taking a preset first constant as a gradual change parameter of the target pixel point group, and if the judging parameter is smaller than 0, taking a preset second constant as the gradual change parameter of the target pixel point group, wherein the preset first constant is larger than the preset second constant;
and respectively carrying out normalization processing on the accumulated values of the gradual change parameters of all the target pixel point groups on each straight line to obtain the initial gray level gradual change degree of the sub-window in each direction.
Further, correcting the initial gray level gradient degree of the sub-window at the same position in all the reference images according to the scale levels corresponding to all the reference images, and obtaining the final gray level gradient degree of the sub-window in each direction includes:
carrying out normalization processing on the scale level corresponding to each reference image to obtain the scale weight of the reference image;
taking the scale weight as the weight corresponding to the initial gray level gradient degree of the sub-window in the reference image;
and carrying out weighted summation on the initial gray level gradient degree of the sub-windows at the same position in all the reference images to obtain the final gray level gradient degree of the sub-windows in each direction.
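The scale-weighted summation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes scale weights proportional to the scale level plus an adjustment factor of 1 (the value the embodiment later mentions), and the function and variable names are illustrative.

```python
import numpy as np

def final_gradient_degree(initial_degrees, adjust=1.0):
    """Fuse per-scale initial gray-gradient degrees into a final degree.

    initial_degrees: array of shape (S, D) for one sub-window position,
    where S is the number of scale levels (level 0 is the original face
    gray image) and D is the number of directions. Each scale contributes
    in proportion to its scale level plus an adjustment factor,
    normalized over all levels.
    """
    degrees = np.asarray(initial_degrees, dtype=float)
    levels = np.arange(degrees.shape[0], dtype=float)      # 0, 1, ..., S-1
    weights = (levels + adjust) / np.sum(levels + adjust)  # scale weights
    return weights @ degrees                               # shape (D,)
```

For three scales the weights come out as 1/6, 2/6 and 3/6, so coarser (more smoothed) levels dominate the sum, matching the stated intuition that larger scale levels carry greater reference value.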
Further, obtaining the gray level gradient direction of the sub-window according to the final gray level gradient degree includes:
and taking the direction corresponding to the maximum value of the final gray level gradient degree in the sub-window as the gray level gradient direction of the sub-window.
Further, the obtaining the illumination interference degree of the sub-window according to the difference of the gray level gradient directions between the sub-window and the adjacent window and the final gray level gradient degree of the sub-window includes:
taking the absolute value of the difference value between the gray scale gradient direction of the sub-window and the gray scale gradient direction of each adjacent window as a direction difference;
carrying out negative correlation normalization processing on all accumulated values of the direction differences to obtain a first interference parameter;
the final gray level gradient degree of the sub-window corresponding to the gray level gradient direction is used as a second interference parameter;
and taking the product value of the first interference parameter and the second interference parameter as the illumination interference degree of a sub-window.
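A minimal sketch of the two interference terms and their product follows. The patent does not fix a particular negative-correlation normalization, so `exp(-x)` is used here as one common choice, and all names are illustrative; the directions may be given as angles or as direction indices, as long as the same convention is used throughout.

```python
import math

def illumination_interference(sub_dir, neighbor_dirs, final_degree_at_dir):
    """Illumination interference degree of one sub-window.

    sub_dir:             gray-gradient direction of the sub-window
    neighbor_dirs:       gray-gradient directions of its adjacent windows
    final_degree_at_dir: the sub-window's final gray-gradient degree in
                         its own gradient direction (second term)
    """
    # First term: small direction differences vs. neighbors mean a
    # consistent gradient field across windows, typical of illumination,
    # so the negative-correlation mapping pushes the value toward 1.
    diff_sum = sum(abs(sub_dir - d) for d in neighbor_dirs)
    first = math.exp(-diff_sum)   # one choice of negative-correlation normalization
    return first * final_degree_at_dir
```

When all eight neighbors share the sub-window's gradient direction, the first term is 1 and the interference degree equals the final gradient degree itself; any direction disagreement shrinks it.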
Further, performing gray scale feature extraction processing on the sub-window of the facial gray image to obtain the feature value to be detected of the sub-window includes:
based on the LBP algorithm, obtaining an LBP binary code of the sub-window according to the difference between the gray value of the central pixel point of the sub-window and the gray values of the other pixel points, and converting the LBP binary code into a decimal value, which is taken as the feature value to be detected of the sub-window.
Further, the obtaining the final matching degree of the face gray-scale image according to the feature values to be detected and the illumination interference degree of all the sub-windows includes:
acquiring a preset standard characteristic value of a sub-window of a user face image in the same position in a database;
carrying out negative correlation normalization processing on the absolute value of the difference value between the feature value to be detected and the preset standard feature value to obtain initial matching degree of a sub-window;
performing negative correlation mapping on the illumination interference degree to obtain the confidence degree of the sub-window;
taking the confidence as the weight of the initial matching degree of the corresponding sub-window;
and carrying out weighted summation on all the initial matching degrees to obtain the final matching degree of the face gray level image.
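The fusion of per-sub-window matches into a final matching degree can be sketched as follows. The patent leaves the negative-correlation normalization and mapping unspecified, so `exp(-x)` and `1/(1+x)` are assumed stand-ins here, and dividing by the total confidence (to keep the result in [0, 1]) is likewise our choice; all names are illustrative.

```python
import math

def final_matching_degree(measured, standard, interference):
    """Final matching degree of the facial gray image.

    measured:     feature values to be detected, one per sub-window
    standard:     preset standard feature values from the database
    interference: illumination interference degree per sub-window
    """
    # initial match: negative-correlation normalization of |difference|
    initial = [math.exp(-abs(m - s)) for m, s in zip(measured, standard)]
    # confidence: negative-correlation mapping of the interference degree,
    # used as the weight of the corresponding initial matching degree
    conf = [1.0 / (1.0 + q) for q in interference]
    total = sum(conf)
    return sum(c * m for c, m in zip(conf, initial)) / total
```

Sub-windows heavily disturbed by illumination get low confidence and thus contribute little to the final matching degree, which is the stated purpose of the correction.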
Further, unlocking the intelligent lock according to the final matching degree includes:
if the final matching degree is larger than a preset matching degree threshold, unlocking is successful, and if the final matching degree is not larger than the preset matching degree threshold, unlocking is failed.
Further, the performing multi-scale processing on the facial gray scale image to obtain reference images with the same size under different scales includes:
performing multi-scale processing on the face gray level image based on a Gaussian pyramid to obtain target images under different scales;
and carrying out interpolation processing on each target image to obtain reference images with the same size under different scales.
The invention also provides a full-automatic face biological recognition intelligent lock unlocking system, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any one step of the full-automatic face biological recognition intelligent lock unlocking method when executing the computer program.
The invention has the following beneficial effects:
In the invention, considering the limitation in the prior art that feature extraction of the user's face with the LBP algorithm is easily influenced by illumination, the facial gray image of the user is first processed at multiple scales, and the images are analyzed at different scales so that the information in them can be captured more comprehensively in subsequent processing. The reference images at different scales are divided into windows so that each sub-window can be analyzed independently and the influence of illumination analyzed more accurately. The obtained initial gray level gradient degree preliminarily reflects the gray changes of the sub-window in each direction caused by illumination; considering that the influence of illumination differs between scales, the initial gray level gradient degree is corrected in combination with the scale corresponding to each reference image, so that the obtained final gray level gradient degree reflects the gray changes of the sub-window in each direction more accurately. The obtained gray gradient direction reflects the main direction of the gray change caused by illumination in the sub-window, and the obtained illumination interference degree further reflects the degree to which the sub-window is affected by illumination. Finally, the matching degree of the facial gray image is obtained by combining the illumination interference degree of the sub-windows with the feature values to be detected, and the unlocking of the intelligent lock is completed based on this matching degree. This reduces the influence of illumination on face recognition, improves the accuracy of face recognition, and thereby improves the success rate of unlocking the intelligent lock.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a fully automatic face biometric intelligent lock unlocking method according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means adopted by the invention to achieve the intended aim and their effects, a detailed description of the specific implementation, structure, features and effects of the fully automatic face biological recognition intelligent lock unlocking method and system provided by the invention is given below with reference to the accompanying drawings and the preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a full-automatic face biological recognition intelligent lock unlocking method and a system.
Referring to fig. 1, a flowchart of a fully automatic face biometric intelligent lock unlocking method according to an embodiment of the present invention is shown, where the method includes:
step S1: and acquiring a facial gray image of the user, and performing multi-scale processing on the facial gray image to obtain reference images with the same size under different scales, wherein the reference images comprise the facial gray image.
An intelligent lock with facial biometric recognition can photograph the face of a user, extract the feature information of the user's face and compare it with the facial feature information of the user stored in advance in a database, thereby completing the unlocking task. However, due to the influence of illumination or reflection, the matching degree between the facial features extracted by the intelligent lock and the facial features of the user in the database may be low, so the accuracy of face recognition is low and the success rate of unlocking the intelligent lock is reduced.
In the embodiment of the invention, the user is photographed by the camera carried by the intelligent lock device to acquire an initial image of the user. Because the initial image contains a large amount of background, which would reduce the accuracy of subsequent face recognition and hence the success rate of unlocking, the acquired initial image is input into a semantic segmentation network, which outputs the image of the user's face; the semantic segmentation network is a technical means well known to a person skilled in the art and is not described in detail here. In order to reduce the amount of computation in subsequent image processing and improve the processing speed, in one embodiment of the invention the collected facial image of the user is subjected to graying processing and converted into a single-channel facial gray image. The graying process is likewise a technical means well known to those skilled in the art and is not described here.
In order to capture the information in the facial gray image more comprehensively in subsequent processing and to improve the accuracy of face recognition and the success rate of unlocking, the embodiment of the invention performs multi-scale processing on the facial gray image of the user to obtain reference images at different scales; combining the reference images of different scales in subsequent processing allows the influence of illumination to be analyzed more comprehensively.
Preferably, in one embodiment of the present invention, the method for acquiring the reference image at different scales specifically includes:
the method comprises the steps of carrying out multi-scale processing on face gray images based on Gaussian pyramid to obtain target images under different scales, wherein the resolution of each target image is different after the multi-scale processing, so that interpolation processing is needed to be carried out on each target image to obtain reference images under different scales in order to ensure that sub-windows with the same positions exist in each target image when window division is carried out later, the resolution of each reference image is the same as the resolution of the face gray image, and the reference images contain the face gray image because the multi-scale processing is carried out based on the face gray image. It should be noted that, the interpolation processing of the gaussian pyramid and the image are all technical means well known to those skilled in the art, and are not described herein, wherein the interpolation processing is performed by using a bilinear interpolation method in one embodiment of the present invention, and the interpolation processing may be performed by using, for example, a nearest neighbor interpolation method or a cubic interpolation method in other embodiments of the present invention, which is not limited herein.
After the reference images of different scales are obtained, they can be analyzed in the subsequent steps, so that the influence of illumination on the reference images at different scales can be analyzed more comprehensively and the accuracy of the illumination analysis improved.
Step S2: performing window division processing on all the reference images to obtain at least two sub-windows with the same size, and taking other sub-windows in a preset neighborhood range with the sub-windows as centers as adjacent windows; and (3) making straight lines in different directions through the central pixel point of the sub-window, and obtaining the initial gray level gradual change degree of the sub-window in each direction according to the gray level value change of the pixel points in the same straight line in the sub-window and the adjacent window.
In order to more accurately analyze the degree to which illumination influences a reference image, the embodiment of the invention divides each reference image into windows to obtain a plurality of sub-windows of the same size; the specific size of the sub-window can be set by the implementer according to the specific implementation scenario and is not limited here. Under the influence of illumination, the gray values of the pixel points in a sub-window may gradually increase or gradually decrease, and the main direction of this gray gradient may differ from sub-window to sub-window. Straight lines in different directions are therefore drawn through the central pixel point of the sub-window; in one embodiment of the invention, straight lines at angles of 0, 45, 90 and 135 degrees to the horizontal direction are drawn through the central pixel point. Because the influence range of illumination is large, the other sub-windows within a preset neighborhood range centered on the sub-window are taken as its adjacent windows; in one embodiment of the invention the preset neighborhood range is the 3x3 neighborhood, i.e. the 8 surrounding sub-windows are taken as adjacent windows, and the specific value of the preset neighborhood range can be set by the implementer according to the specific implementation scenario, which is not limited here. The initial gray level gradient degree of the sub-window in each direction is then obtained according to the change of the gray values of the pixel points on the same straight line in the sub-window and its adjacent windows; this degree reflects how each direction of the sub-window is affected by illumination.
Preferably, in an embodiment of the present invention, the method for acquiring the initial gray scale gradient degree of the sub-window in each direction specifically includes:
taking the pixel points of the sub-window and the adjacent window on the same straight line as target pixel points; taking any adjacent three target pixel points as a target pixel point group; taking the difference value of gray values of two adjacent target pixel points in the target pixel point group as a gray difference value, and calculating all gray difference values in the target pixel point group; calculating the product value of all gray difference values in the target pixel point group as a judging parameter, if the judging parameter is larger than or equal to 0, taking a preset first constant as a gradual change parameter of the target pixel point group, and if the judging parameter is smaller than 0, taking a preset second constant as the gradual change parameter of the target pixel point group, wherein the preset first constant is larger than the preset second constant; and respectively carrying out normalization processing on the accumulated values of the gradient parameters of all the target pixel point groups on each straight line to obtain the initial gray level gradient degree of the sub-window in each direction. The expression of the initial gradation level may specifically be, for example:
$$T^{s}_{j,d}=\mathrm{Norm}\!\left(\sum_{i=1}^{n-2} t_i\right),\qquad t_i=\begin{cases}a, & \left(g_{i,2}-g_{i,1}\right)\left(g_{i,3}-g_{i,2}\right)\ge 0\\ b, & \left(g_{i,2}-g_{i,1}\right)\left(g_{i,3}-g_{i,2}\right)<0\end{cases}$$

wherein $T^{s}_{j,d}$ denotes the initial gray level gradient degree of the $j$-th sub-window of the $s$-th reference image in the $d$-th direction; $t_i$ represents the gradient parameter of the $i$-th target pixel point group on the same straight line; $n$ represents the number of target pixel points on the same straight line, so that $n-2$ is the number of target pixel point groups; $g_{i,1}$ denotes the gray value of the target pixel point located at one end of the $i$-th target pixel point group, $g_{i,2}$ the gray value of the target pixel point at the middle position, and $g_{i,3}$ the gray value of the target pixel point at the other end, so that $g_{i,1}$ and $g_{i,2}$ are the gray values of one pair of adjacent target pixel points in the group, and $g_{i,2}$ and $g_{i,3}$ those of the other pair; $a$ represents the preset first constant and $b$ the preset second constant; in one embodiment of the invention $a$ is set to 1 and $b$ to 0, and their specific values can be set by the implementer according to the specific implementation scenario, provided that $a>b$; $\mathrm{Norm}(\cdot)$ represents the normalization function.
During the acquisition of the initial gray level gradient degree of the sub-window in each direction, $\left(g_{i,2}-g_{i,1}\right)$ and $\left(g_{i,3}-g_{i,2}\right)$ both represent gray difference values within the $i$-th target pixel point group, and multiplying all the gray difference values in the group yields its judgment parameter. When the judgment parameter is greater than or equal to 0, the gray values of the target pixel points in the group change in the same sense, so the gradient parameter of the group is the preset first constant; when the judgment parameter is less than 0, the gray values change in different senses, so the gradient parameter is the preset second constant. Because the preset first constant is larger than the preset second constant, accumulating the gradient parameters of all the target pixel point groups gives an accumulated value $\sum_{i}t_i$ that is larger the more uniform the gray change caused by illumination in that direction, and hence the larger the initial gray level gradient degree of the sub-window in that direction. To limit the gray level gradient degree to the range $[0,1]$ and facilitate subsequent evaluation and analysis, the accumulated value is normalized to obtain the initial gray level gradient degree $T^{s}_{j,d}$.
In one embodiment of the present invention, the normalization processing may specifically be, for example, maximum-minimum normalization, and the normalization in the subsequent steps may likewise use maximum-minimum normalization; in other embodiments of the present invention, other normalization methods may be selected according to the specific range of values, which is not described here.
After the initial gray level gradient degree of the pixel points in the sub-window in each direction is obtained, the degree of the sub-window of the reference image under different scales, which is influenced by illumination in each direction, can be analyzed preliminarily.
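The per-line computation above can be sketched as follows. Dividing the accumulated value by the group count is used here as one concrete instance of the normalization function (each group contributes between b and a, so the result lies in [0, 1] with the default constants); the names are illustrative.

```python
def initial_gradient_degree(grays, a=1.0, b=0.0):
    """Initial gray-gradient degree along one line of target pixels.

    grays: gray values of the target pixels on one straight line through
    the sub-window and its adjacent windows. Every three adjacent pixels
    form a target pixel group; if the product of the group's two gray
    differences is non-negative the change is uniform and the group
    scores the first constant `a`, otherwise the second constant `b`.
    """
    n = len(grays)
    total = 0.0
    for i in range(n - 2):
        d1 = grays[i + 1] - grays[i]       # first gray difference
        d2 = grays[i + 2] - grays[i + 1]   # second gray difference
        total += a if d1 * d2 >= 0 else b  # judgment parameter d1*d2
    return total / (n - 2)                 # normalized to [b, a]
```

A strictly monotone line scores 1.0 (uniform gradation, typical of illumination), while a line that alternates up and down scores 0.0.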
Step S3: correcting the initial gray level gradient degree of the sub-window at the same position in all the reference images according to the scale levels corresponding to all the reference images to obtain the final gray level gradient degree of the sub-window in each direction; obtaining the gray level gradient direction of the sub-window according to the final gray level gradient degree; and obtaining the illumination interference degree of the sub-window according to the difference of the gray level gradient directions between the sub-window and the adjacent window and the final gray level gradient degree of the sub-window.
The scale levels of the reference images in the Gaussian pyramid differ between scales, where the scale level is the level of the reference image in the Gaussian pyramid. After the multi-scale processing of the facial gray image, the degree to which illumination influences the reference image differs between scales: the larger the scale level, the greater the degree of smoothing and the greater the reference value. Therefore, the initial gray level gradient degree of the sub-window at the same position in all the reference images can be corrected according to the scale levels corresponding to the reference images to obtain the final gray level gradient degree of the sub-window in each direction, which more accurately reflects the influence of illumination on the gray values of the pixel points of the sub-window in each direction.
Preferably, in one embodiment of the present invention, the method for acquiring the final gray level gradient degree of the sub-window in each direction specifically includes:
carrying out normalization processing on the scale level corresponding to each reference image to obtain the scale weight of the reference image; taking the scale weight as the weight corresponding to the initial gray level gradient degree of the sub-window in the reference image; and carrying out weighted summation on the initial gray level gradient degrees of the sub-windows at the same position in all the reference images to obtain the final gray level gradient degree of the sub-window in each direction. The expression of the final gray level gradient degree may specifically be, for example:

$$F_{i,j}=\sum_{k=1}^{N}\frac{l_k+\varepsilon}{\sum_{m=1}^{N}(l_m+\varepsilon)}\,D_{k,i,j}$$

wherein $F_{i,j}$ represents the final gray level gradient degree of the $i$-th sub-window in the $j$-th direction in any one of the reference images; $D_{k,i,j}$ represents the initial gray level gradient degree of the $i$-th sub-window in the $j$-th direction in the $k$-th reference image; $l_k$ represents the scale level of the reference image of the $k$-th scale in the Gaussian pyramid, namely the scale of the reference image; $N$ represents the number of scale levels of the Gaussian pyramid, which can also be understood as the number of reference images of different scales, the face gray level image being at the 0th scale level of the Gaussian pyramid; and $\varepsilon$ represents an adjustment factor that prevents the corresponding scale weight from being 0 when the numerator is 0. In one embodiment of the invention the adjustment factor $\varepsilon$ is set to 1; the specific value of the adjustment factor can be set by the implementer according to the specific implementation scenario and is not limited herein.

In the process of acquiring the final gray level gradient degree of the sub-window in each direction, $\frac{l_k+\varepsilon}{\sum_{m=1}^{N}(l_m+\varepsilon)}$ represents the scale weight of the reference image at the $k$-th scale, in which the scale level corresponding to the reference image is normalized by the accumulated value of the scale levels corresponding to all the reference images. Since the degree to which illumination influences the reference images differs across scales, the larger the scale level of a reference image in the Gaussian pyramid, the greater its smoothness, and the greater the reference value of the initial gray level gradient degree $D_{k,i,j}$ of a sub-window in that reference image. Therefore, in one embodiment of the invention, the scale weight is taken as the weight of the initial gray level gradient degree of the sub-window in the corresponding reference image, and the initial gray level gradient degrees of the sub-windows at the same position in all the reference images are weighted and summed to obtain the final gray level gradient degree $F_{i,j}$ of the sub-window in each direction.
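Under this reading (scale weight = normalized scale level plus adjustment factor), the fusion of one sub-window's initial degrees across scales can be sketched as follows; the function and parameter names are illustrative:

```python
import numpy as np

def final_gradient_degree(initial_degrees, scale_levels, eps=1.0):
    """Scale-weighted fusion of a sub-window's initial gray level
    gradient degrees across the reference images.

    initial_degrees: shape (num_scales, num_directions)
    scale_levels:    scale level of each reference image (0 = original)
    eps:             adjustment factor preventing a zero weight at level 0
    """
    d = np.asarray(initial_degrees, dtype=float)
    l = np.asarray(scale_levels, dtype=float)
    weights = (l + eps) / np.sum(l + eps)   # normalized scale weights
    return weights @ d                      # one final degree per direction
```

Coarser (smoother) levels receive larger weights, matching the stated intuition that smoother reference images have greater reference value.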
After the final gray level gradient degree of each sub-window in each direction is obtained, the main direction in which each sub-window is affected by illumination can be analyzed: the larger the final gray level gradient degree, the more strongly the sub-window is influenced by illumination in that direction. Preferably, in one embodiment of the invention, the direction corresponding to the maximum value of the final gray level gradient degree is taken as the gray level gradient direction of the sub-window, namely the main direction in which the sub-window is affected by illumination. Further considering the influence range of illumination, after being affected by illumination the difference between the gray level gradient directions of a sub-window and its adjacent windows is small, so the illumination interference degree of the sub-window can be obtained according to the difference of the gray level gradient directions between the sub-window and the adjacent windows and the final gray level gradient degree of the sub-window; the illumination interference degree reflects the final degree to which the sub-window is influenced by illumination.
Preferably, in an embodiment of the present invention, the method for obtaining the illumination interference degree of the sub-window specifically includes:
taking the absolute value of the difference between the gray level gradient direction of the sub-window and the gray level gradient direction of each adjacent window as a direction difference; carrying out negative correlation normalization processing on the accumulated value of all the direction differences to obtain a first interference parameter; taking the final gray level gradient degree of the sub-window in the gray level gradient direction as a second interference parameter; and taking the product of the first interference parameter and the second interference parameter as the illumination interference degree of the sub-window. The expression of the illumination interference degree may specifically be, for example:

$$R_i=\exp\left(-\sum_{n=1}^{M}\left|\theta_i-\theta_{i,n}\right|\right)\cdot F_{i,\theta_i}$$

wherein $R_i$ represents the illumination interference degree of the $i$-th sub-window in any one of the reference images; $\theta_i$ represents the gray level gradient direction of the $i$-th sub-window; $\theta_{i,n}$ represents the gray level gradient direction of the $n$-th adjacent window of the $i$-th sub-window; $M$ represents the number of adjacent windows: in one embodiment of the invention the other sub-windows in the 8-neighborhood around the sub-window are taken as adjacent windows, so $M=8$; $F_{i,\theta_i}$ represents the final gray level gradient degree of the $i$-th sub-window in the gray level gradient direction $\theta_i$; and $\exp(\cdot)$ represents an exponential function with the natural constant $e$ as the base, used for the negative correlation normalization processing.

In the process of acquiring the illumination interference degree of the sub-window, $\left|\theta_i-\theta_{i,n}\right|$ represents the absolute value of the difference between the gray level gradient direction of the sub-window and that of each adjacent window, namely the direction difference: the smaller the direction difference, the more consistent the gray level gradient direction of the sub-window is with those of its adjacent windows, the greater the degree to which the sub-window is affected by illumination, and the larger the illumination interference degree $R_i$. Therefore, in one embodiment of the invention, the accumulated value of all the direction differences is subjected to negative correlation normalization processing to obtain the first interference parameter $\exp(-\sum_{n=1}^{M}|\theta_i-\theta_{i,n}|)$. The second interference parameter $F_{i,\theta_i}$ is the final gray level gradient degree of the sub-window in the gray level gradient direction: the larger it is, the more uniformly the gray values of the pixel points of the sub-window change along the gray level gradient direction, and the greater the degree to which the sub-window is affected by illumination, so the illumination interference degree $R_i$ is larger. The product of the first and second interference parameters is therefore taken as the illumination interference degree of the sub-window: the greater $R_i$, the greater the influence of illumination on the sub-window.
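The two-factor interference measure described above can be sketched as follows; treating direction values as comparable scalars (e.g. direction indices) is an assumption about the representation, and the names are illustrative:

```python
import numpy as np

def illumination_interference(direction, neighbor_directions, final_degrees):
    """Illumination interference degree of one sub-window.

    direction:           gray level gradient direction (index) of the window
    neighbor_directions: gradient directions of the adjacent windows (8 here)
    final_degrees:       final gradient degree per direction for this window
    """
    diffs = np.abs(np.asarray(neighbor_directions, dtype=float) - direction)
    first = np.exp(-np.sum(diffs))      # small direction differences -> large
    second = final_degrees[direction]   # degree along the main direction
    return first * second
```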
After the illumination interference degree of each sub-window is obtained, the face recognition matching process can be corrected based on the illumination interference degree in the follow-up process, so that the influence of illumination on the face recognition is reduced.
Step S4: carrying out gray scale feature extraction processing on the sub-window of the face gray scale image to obtain a feature value to be detected of the sub-window; obtaining the final matching degree of the face gray level image according to the feature values to be detected and the illumination interference degree of all the sub-windows; and unlocking the intelligent lock according to the final matching degree.
The intelligent lock of the embodiment of the invention is unlocked through face recognition, which requires extracting feature data from the face gray level image of the user's face and matching it against the feature data of the user's face stored in the database. Therefore, gray level feature extraction processing can be carried out on the sub-windows of the face gray level image to obtain the feature value to be detected of each sub-window, and the unlocking task is completed subsequently by matching the feature value to be detected with the standard feature value in the database.
Preferably, in an embodiment of the present invention, a method for acquiring a feature value to be measured of a sub-window specifically includes:
in the face recognition process, the LBP algorithm is generally used for extracting the feature information of the user's face. Therefore, based on the LBP algorithm, the LBP binary code of the sub-window can be obtained according to the difference between the gray value of the central pixel point of the sub-window and the gray values of the other pixel points, and, to facilitate subsequent data processing, the LBP binary code is converted into a decimal value to serve as the feature value to be detected of the sub-window. In the subsequent steps, $T_i$ represents the feature value to be detected of the $i$-th sub-window in the face gray level image.
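A minimal sketch of the basic 3x3 LBP encoding described here; the clockwise bit order starting at the top-left corner is a common convention, not one fixed by the patent:

```python
import numpy as np

def lbp_feature_value(window):
    """LBP feature value of a 3x3 sub-window: each neighbour contributes a
    1 bit if its gray value is >= the centre pixel, else 0; the 8 bits are
    read clockwise from the top-left corner and returned as a decimal."""
    center = window[1, 1]
    neighbors = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                 window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbors]
    return sum(b << (7 - i) for i, b in enumerate(bits))
```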
After the feature value to be detected of each sub-window in the face gray level image is obtained, it can be matched with the preset standard feature value of the sub-window at the same position in the user's face image in the database, and the initial matching degree of the sub-window is preliminarily obtained according to the difference between the two. The preset standard feature value of each sub-window of the user's face image stored in the database is known data; in the following, $B_i$ represents the preset standard feature value of the $i$-th sub-window in the user's face image in the database.
Preferably, in an embodiment of the present invention, the method for obtaining the initial matching degree of the sub-window specifically includes:
acquiring the preset standard feature value of the sub-window at the same position in the user's face image in the database; and carrying out negative correlation normalization processing on the absolute value of the difference between the feature value to be detected and the preset standard feature value to obtain the initial matching degree of the sub-window. The expression of the initial matching degree may specifically be, for example:

$$P_i=\exp\left(-\left|T_i-B_i\right|\right)$$

wherein $P_i$ represents the initial matching degree of the $i$-th sub-window in the face gray level image; $T_i$ represents the feature value to be detected of the $i$-th sub-window in the face gray level image; $B_i$ represents the preset standard feature value of the $i$-th sub-window in the user's face image in the database; and $\exp(\cdot)$ represents an exponential function with the natural constant $e$ as the base, used for the negative correlation normalization processing.

In the process of acquiring the initial matching degree of the sub-window, $\left|T_i-B_i\right|$ represents the absolute value of the difference between the feature value to be detected and the preset standard feature value: the larger it is, the larger the difference between the features of the sub-window in the face gray level image of the current user's face and the features of the sub-window at the same position in the user's face image stored in the database, and the lower the initial matching degree of the sub-window. Therefore, in one embodiment of the invention, negative correlation normalization processing is performed on $\left|T_i-B_i\right|$ to obtain the initial matching degree $P_i$.
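The negative correlation normalization above reduces to a one-line helper (illustrative names):

```python
import math

def initial_matching_degree(feature_value, standard_value):
    """exp(-|T - B|): identical features give 1.0, larger
    differences decay smoothly toward 0."""
    return math.exp(-abs(feature_value - standard_value))
```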
The influence of illumination lowers the initial matching degree of the sub-windows, which reduces the accuracy of face recognition and therefore the unlocking success rate of the intelligent lock. The initial matching degree can thus be corrected based on the illumination interference degree of the sub-windows in the face gray level image to obtain the final matching degree of the user, thereby reducing the influence of illumination on face recognition and improving its accuracy.
Preferably, in one embodiment of the present invention, the method for acquiring the final matching degree of the face gray image specifically includes:
performing negative correlation mapping on the illumination interference degree to obtain the confidence degree of the sub-window; taking the confidence degree as the weight of the initial matching degree of the corresponding sub-window; and carrying out weighted summation on all the initial matching degrees to obtain the final matching degree of the face gray level image. The expression of the final matching degree may specifically be, for example:

$$P=\frac{1}{W}\sum_{i=1}^{W}\exp\left(-R_i\right)P_i$$

wherein $P$ represents the final matching degree of the face gray level image; $R_i$ represents the illumination interference degree of the $i$-th sub-window of the face gray level image; $P_i$ represents the initial matching degree of the $i$-th sub-window in the face gray level image; and $W$ represents the number of sub-windows in the face gray level image.

In the process of acquiring the final matching degree of the user, $\exp(-R_i)$ represents the confidence degree of the $i$-th sub-window: the smaller the illumination interference degree $R_i$ of the sub-window, the less it is disturbed by illumination, and the higher the credibility of the features extracted from it, so the confidence degree $\exp(-R_i)$ is larger. The confidence degree can therefore be used as the weight of the initial matching degree of the corresponding sub-window, and the weighted summation over all the sub-windows yields the final matching degree $P$ of the face gray level image, thereby reducing the influence of illumination on face recognition.
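The confidence-weighted fusion can be sketched as below; averaging over the number of sub-windows is one plausible reading of the patent's weighted summation (it keeps the result comparable to a fixed threshold such as 0.8), not a detail the text fixes, and the names are illustrative:

```python
import numpy as np

def final_matching_degree(initial_matches, interference_degrees):
    """Fuse per-window matching degrees, down-weighting windows that
    are strongly disturbed by illumination via confidence exp(-R)."""
    p = np.asarray(initial_matches, dtype=float)
    r = np.asarray(interference_degrees, dtype=float)
    confidence = np.exp(-r)              # negative correlation mapping
    return float(np.sum(confidence * p) / len(p))
```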
Furthermore, the intelligent lock can be unlocked according to the obtained final matching degree, and preferably, in one embodiment of the present invention, unlocking the intelligent lock according to the final matching degree specifically includes:
the obtained final matching degree significantly reduces the influence of illumination on face recognition, so the intelligent lock can be unlocked by setting a preset matching degree threshold: if the final matching degree is greater than the preset matching degree threshold, unlocking succeeds; otherwise, unlocking fails. In one embodiment of the present invention, the preset matching degree threshold is set to 0.8; the specific value of the preset matching degree threshold may be set by the implementer according to the specific implementation scenario and is not limited herein.
The embodiment of the invention provides a fully-automatic face biological recognition intelligent lock unlocking system, which comprises a memory, a processor and a computer program, wherein the memory is used for storing the corresponding computer program, the processor is used for running the corresponding computer program, and the computer program can realize the method described in the steps S1-S4 when running in the processor.
In summary, in the embodiment of the invention, multi-scale processing is first performed on the face gray level image of the user's face to obtain reference images at different scales; window division is performed on all the reference images to obtain a plurality of sub-windows in each reference image; straight lines in different directions are made in the sub-windows, and the initial gray level gradient degree of each sub-window in each direction is obtained according to the change of the gray values of the pixel points of the sub-window and its adjacent windows on the same straight line. Then, combining the reference images at all the scales, the initial gray level gradient degree of the sub-window at the same position in all the reference images is corrected according to the scale levels corresponding to the reference images to obtain the final gray level gradient degree of the sub-window in each direction; the direction corresponding to the maximum value of the final gray level gradient degree is taken as the gray level gradient direction of the sub-window, and the illumination interference degree of the sub-window is further obtained according to the difference of the gray level gradient directions between the sub-window and its adjacent windows and the final gray level gradient degree of the sub-window. Next, the feature value to be detected of each sub-window of the face gray level image is extracted, the initial matching degree of the sub-window is obtained according to the difference between the feature value to be detected and the standard feature value of the sub-window at the same position in the user's face image in the database, the initial matching degree is corrected based on the illumination interference degree to obtain the final matching degree of the user, and the intelligent lock is unlocked according to the final matching degree.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. The full-automatic face biological recognition intelligent lock unlocking method is characterized by comprising the following steps of:
acquiring a facial gray image of a user, and performing multi-scale processing on the facial gray image to obtain reference images with the same size under different scales, wherein the reference images comprise the facial gray image;
performing window division processing on all the reference images to obtain at least two sub-windows with the same size, and taking other sub-windows in a preset neighborhood range with the sub-windows as centers as adjacent windows; making straight lines in different directions through the central pixel point of the sub-window, and obtaining the initial gray level gradual change degree of the sub-window in each direction according to the change of the gray level value of the pixel point on the same straight line in the sub-window and the adjacent window;
correcting the initial gray level gradient degree of the sub-window at the same position in all the reference images according to the scale levels corresponding to all the reference images to obtain the final gray level gradient degree of the sub-window in each direction; obtaining the gray level gradient direction of the sub-window according to the final gray level gradient degree; obtaining the illumination interference degree of the sub-window according to the difference of the gray level gradient directions between the sub-window and the adjacent window and the final gray level gradient degree of the sub-window;
carrying out gray scale feature extraction processing on the sub-window of the face gray scale image to obtain a feature value to be detected of the sub-window; obtaining the final matching degree of the face gray level image according to the feature values to be detected and the illumination interference degree of all the sub-windows; and unlocking the intelligent lock according to the final matching degree.
2. The method for unlocking the full-automatic face biometric intelligent lock according to claim 1, wherein the obtaining the initial gray level gradient degree of the sub-window in each direction according to the change of the gray values of the pixel points on the same straight line in the sub-window and the adjacent windows comprises:
taking the pixel points of the sub-window and the adjacent window on the same straight line as target pixel points;
taking any adjacent three target pixel points as a target pixel point group;
taking the difference value of gray values of two adjacent target pixel points in the target pixel point group as a gray difference value, and calculating all gray difference values in the target pixel point group;
calculating the product value of all gray difference values in the target pixel point group as a judging parameter, if the judging parameter is larger than or equal to 0, taking a preset first constant as a gradual change parameter of the target pixel point group, and if the judging parameter is smaller than 0, taking a preset second constant as the gradual change parameter of the target pixel point group, wherein the preset first constant is larger than the preset second constant;
and respectively carrying out normalization processing on the accumulated values of the gradual change parameters of all the target pixel point groups on each straight line to obtain the initial gray level gradual change degree of the sub-window in each direction.
3. The method for unlocking the full-automatic face biometric intelligent lock according to claim 1, wherein the correcting the initial gray level gradient degree of the sub-window at the same position in all the reference images according to the scale levels corresponding to all the reference images to obtain the final gray level gradient degree of the sub-window in each direction comprises:
carrying out normalization processing on the scale level corresponding to each reference image to obtain the scale weight of the reference image;
taking the scale weight as the weight corresponding to the initial gray level gradient degree of the sub-window in the reference image;
and carrying out weighted summation on the initial gray level gradient degree of the sub-windows at the same position in all the reference images to obtain the final gray level gradient degree of the sub-windows in each direction.
4. The method for unlocking the full-automatic face biometric intelligent lock according to claim 1, wherein the obtaining the gray level gradient direction of the sub-window according to the final gray level gradient degree comprises:
and taking the direction corresponding to the maximum value of the final gray level gradient degree in the sub-window as the gray level gradient direction of the sub-window.
5. The method for unlocking the full-automatic face biometric intelligent lock according to claim 1, wherein the obtaining the illumination interference degree of the sub-window according to the difference of the gray level gradient directions between the sub-window and the adjacent window and the final gray level gradient degree of the sub-window comprises:
taking the absolute value of the difference value between the gray scale gradient direction of the sub-window and the gray scale gradient direction of each adjacent window as a direction difference;
carrying out negative correlation normalization processing on all accumulated values of the direction differences to obtain a first interference parameter;
the final gray level gradient degree of the sub-window corresponding to the gray level gradient direction is used as a second interference parameter;
and taking the product value of the first interference parameter and the second interference parameter as the illumination interference degree of a sub-window.
6. The method for unlocking a full-automatic face biometric intelligent lock according to claim 1, wherein the step of performing gray feature extraction processing on the sub-window of the face gray level image to obtain the feature value to be measured of the sub-window comprises:
based on an LBP algorithm, according to the difference between the gray value of the central pixel point of the sub-window and the gray values of other pixel points, an LBP binary code of the sub-window is obtained, and the LBP binary code is converted into a decimal numerical value to be used as a characteristic value to be measured of the sub-window.
7. The method for unlocking a full-automatic face biometric intelligent lock according to claim 1, wherein obtaining the final matching degree of the face gray-scale image according to the feature values to be detected and the illumination interference degree of all sub-windows comprises:
acquiring a preset standard characteristic value of a sub-window of a user face image in the same position in a database;
carrying out negative correlation normalization processing on the absolute value of the difference value between the feature value to be detected and the preset standard feature value to obtain initial matching degree of a sub-window;
performing negative correlation mapping on the illumination interference degree to obtain the confidence degree of the sub-window;
taking the confidence as the weight of the initial matching degree of the corresponding sub-window;
and carrying out weighted summation on all the initial matching degrees to obtain the final matching degree of the face gray level image.
8. The method for unlocking a fully automatic face biometric intelligent lock according to claim 1, wherein unlocking the intelligent lock according to the final matching degree comprises:
if the final matching degree is larger than a preset matching degree threshold, unlocking is successful, and if the final matching degree is not larger than the preset matching degree threshold, unlocking is failed.
9. The method for unlocking a full-automatic face biometric intelligent lock according to claim 1, wherein the performing multi-scale processing on the face gray level image to obtain reference images of the same size under different scales comprises:
performing multi-scale processing on the face gray level image based on a Gaussian pyramid to obtain target images under different scales;
and carrying out interpolation processing on each target image to obtain reference images with the same size under different scales.
10. A fully automatic face biometric intelligent lock unlocking system comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 9.
CN202311293906.2A 2023-10-09 2023-10-09 Full-automatic face biological recognition intelligent lock unlocking method and system Active CN117037343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311293906.2A CN117037343B (en) 2023-10-09 2023-10-09 Full-automatic face biological recognition intelligent lock unlocking method and system

Publications (2)

Publication Number Publication Date
CN117037343A CN117037343A (en) 2023-11-10
CN117037343B true CN117037343B (en) 2023-12-12

Family

ID=88635865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311293906.2A Active CN117037343B (en) 2023-10-09 2023-10-09 Full-automatic face biological recognition intelligent lock unlocking method and system

Country Status (1)

Country Link
CN (1) CN117037343B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232390A (en) * 2019-06-13 2019-09-13 长安大学 Image characteristic extracting method under a kind of variation illumination
CN116363133A (en) * 2023-06-01 2023-06-30 无锡斯达新能源科技股份有限公司 Illuminator accessory defect detection method based on machine vision

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI424360B (en) * 2007-12-31 2014-01-21 Altek Corp Multi-directional face detection method
KR101192365B1 (en) * 2008-12-18 2012-10-17 한국전자통신연구원 System and method for detecting of face
KR101656373B1 (en) * 2014-10-15 2016-09-12 서울시립대학교 산학협력단 Face identifying method, face identifying apparatus and computer program executing the method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face detection algorithm based on BP neural network; Zhou Jingli, Wu Guilin, Yu Shengsheng; Computer Engineering (Issue 11); full text *

Also Published As

Publication number Publication date
CN117037343A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN108520216B (en) Gait image-based identity recognition method
CN109299643B (en) Face recognition method and system based on large-posture alignment
CN111368683B (en) Face image feature extraction method and face recognition method based on modular constraint CenterFace
CN110532897A (en) The method and apparatus of components image recognition
CN110659586B (en) Gait recognition method based on identity-preserving cyclic generation type confrontation network
CN108629262A (en) Iris identification method and related device
CN111223063A (en) Finger vein image NLM denoising method based on texture features and binuclear function
CN111709305B (en) Face age identification method based on local image block
Song et al. Feature extraction and target recognition of moving image sequences
CN109344758B (en) Face recognition method based on improved local binary pattern
KR20080079798A (en) Method of face detection and recognition
Achban et al. Wrist hand vein recognition using local line binary pattern (LLBP)
CN117037343B (en) Full-automatic face biological recognition intelligent lock unlocking method and system
CN111666813B (en) Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN117372432A (en) Electronic cigarette surface defect detection method and system based on image segmentation
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
CN110826384A (en) System and method for enhancing iris recognition accuracy
CN115203663A (en) Small-visual-angle remote video gait accurate identification and identity authentication system
CN113658238A (en) Near-infrared vein image high-precision matching method based on improved feature detection
CN114973307A (en) Finger vein identification method and system for generating countermeasure and cosine ternary loss function
CN110533636B (en) Image analysis device
CN112509004A (en) Target searching method and device in infrared thermal imaging image
Punyani et al. Iris recognition system using morphology and sequential addition based grouping
CN111428643A (en) Finger vein image recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant