CN117275130B - Intelligent access control verification system based on face recognition - Google Patents

Intelligent access control verification system based on face recognition

Info

Publication number
CN117275130B
Authority
CN
China
Prior art keywords: image, space, DOG, point, retention degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311532722.7A
Other languages
Chinese (zh)
Other versions
CN117275130A (en)
Inventor
刘宏康
王雯娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Finance College
Original Assignee
Changchun Finance College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Finance College filed Critical Changchun Finance College
Priority to CN202311532722.7A
Publication of CN117275130A
Application granted
Publication of CN117275130B
Legal status: Active


Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 - Individual registration on entry or exit
    • G07C 9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Abstract

The invention relates to the technical field of face recognition, in particular to an intelligent access control verification system based on face recognition, which comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the following steps: acquiring a face area image of a face image to be identified, and further acquiring scale variation parameters of each sampling image; determining a first retention degree of each DOG space image based on the scale change parameters, and determining a second retention degree of each DOG space image based on the gray value of each pixel point in the DOG space image, thereby obtaining a feature retention degree; and obtaining the weight of each DOG space image based on the feature retention degree, and further determining the position of each feature point to obtain an access control verification result. According to the invention, the purpose of accurate identification is achieved by extracting the feature points with facial skeleton features, and the identification effect of access control verification is further improved.

Description

Intelligent access control verification system based on face recognition
Technical Field
The invention relates to the technical field of face recognition, in particular to an intelligent access control verification system based on face recognition.
Background
Face recognition, also commonly called portrait recognition or facial recognition, is a biometric technology that performs identity recognition based on facial feature information and can be applied in the field of intelligent access control verification. Regarding its implementation, traditional face recognition is mainly based on visible-light images; when the ambient illumination changes, the recognition effect drops sharply and cannot meet the requirements of a practical system. To overcome the defects of this conventional implementation, various technologies such as neural networks, face feature extraction, multi-modal face recognition and zero-shot face recognition have been proposed in the prior art, among which face recognition based on face feature extraction, e.g. the SIFT (Scale Invariant Feature Transform) operator, offers higher recognition accuracy.
Using the SIFT operator to identify and extract facial features ensures that the extracted feature points carry the basic features of the face and remain robust under large environmental changes such as illumination. However, the SIFT operator has low tolerance to slight variations of facial expression, i.e. the density of local feature points around such micro-variations is low. As a result, the SIFT operator preserves skeleton regions to different degrees at different scales during feature point extraction, the feature points subsequently computed from these different scales extract the face region only to a limited extent, and the face recognition accuracy during access control verification is therefore low, leading to a poor verification effect.
Disclosure of Invention
In order to solve the technical problem that the limited extraction effect of the feature points obtained by the SIFT operator on the face area leads to low face recognition accuracy during access control verification, the invention aims to provide an intelligent access control verification system based on face recognition, with the following technical scheme:
the embodiment of the invention provides an intelligent access control verification system based on face recognition, which comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the following steps:
acquiring a face image to be identified, and performing image preprocessing on the face image to be identified to acquire a face area image;
carrying out Gaussian pyramid ascending and descending sampling processing on the facial area image to obtain each sampling image of each layer; determining the scale change parameters of each sampling image of each layer according to the gray value of each pixel point in each sampling image;
performing scale change analysis according to the scale change parameters of each sampling image and each DOG space image of the same layer, and determining a first retention degree of each DOG space image;
for any DOG space image, determining the position of each marking space point according to the gray value of each pixel point in the DOG space image; performing bone feature fitting analysis according to the positions of the marked space points, and determining a second retention degree of the DOG space image;
Determining the feature retention degree of each DOG space image according to the first retention degree and the second retention degree of each DOG space image; obtaining the weight of each DOG space image according to the feature retention degree;
according to the weight of each DOG space image and each pixel point in each DOG space image, determining the position of each characteristic point by combining with a SIFT algorithm; and inputting the positions of the characteristic points into an access control system to obtain an access control verification result.
Further, determining a scale variation parameter of each sampled image of each layer according to the gray value of each pixel point in each sampled image, including:
for any sampling image of any layer, determining the gray variance of the sampling image according to the gray value of each pixel point in the sampling image; performing discrete Fourier transform processing on the sampled image to obtain each high-frequency part in the sampled image; counting the number of pixels of each high-frequency part and the number of pixels of the sampled image; determining the complexity of the sampled image according to the gray variance of the sampled image, the number of pixels of each high-frequency part and the number of pixels in the sampled image;
dividing each sampling image into two classes according to the complexity, setting Gaussian filter variance of each sampling image in each class, and taking the Gaussian filter variance as a scale change parameter of the corresponding sampling image.
Further, determining the complexity of the sampled image according to the gray variance of the sampled image, the number of pixels of each high frequency part and the number of pixels in the sampled image, includes:
the ratio of the number of pixels of the sampling image to the gray variance is determined as a first complex factor, the accumulated sum of the number of pixels of each high-frequency part is taken as a second complex factor, the first complex factor and the second complex factor are multiplied, and the multiplied value is taken as the complexity of the corresponding sampling image.
Further, performing scale change analysis according to the scale change parameters of each sampling image and each DOG space image of the same layer, and determining a first retention degree of each DOG space image, including:
calculating the absolute value of the difference between the scale change parameters of two adjacent sampling images in the same layer according to the scale change parameters of each sampling image in the same layer, and taking the absolute value of the difference between the scale change parameters as the corresponding scale change parameter of the DOG space image;
according to the scale change parameters of each DOG space image of the same layer, determining a first retention degree of each DOG space image, wherein a calculation formula of the first retention degree is as follows:
B_i = ((σ_i − σ_min) / t) · exp(−|σ_i − σ_{i−1}| / t); where B_i is the first retention degree of the i-th DOG space image of the same layer, σ_i is the scale change parameter of the i-th DOG space image of the same layer, σ_{i−1} is the scale change parameter of the (i−1)-th DOG space image, σ_min is the preset minimum scale change parameter, t is the preset maximum value of the scale difference range, and exp is the exponential function with the natural constant as its base.
Further, determining the position of each mark space point according to the gray value of each pixel point in the DOG space image comprises the following steps:
performing Otsu threshold segmentation on the DOG space image to obtain each space point in the foreground region, and obtaining the gray average value corresponding to the space points in the foreground region; marking the space points whose gray characteristic values are smaller than or equal to the preset gray difference value to obtain each marked space point, and further obtaining the position of each marked space point.
Further, performing a bone feature fitting analysis according to the positions of the marker spatial points to determine a second retention degree of the DOG spatial image, including:
according to the positions of all the marked space points in the DOG space image, a least square method circle fitting model is utilized to obtain fitting probability and fitting circle areas corresponding to the DOG space image; determining each marked space point in the fitting circle area as a high fitting rate point, and calculating the distance between any two high fitting rate points to further obtain variance values corresponding to all the distances; taking the sum of the variance value and the preset numerical value as a denominator of the second retention degree, and taking the fitting probability as a numerator of the second retention degree to obtain the second retention degree.
Further, determining the feature retention degree of each DOG-space image according to the first retention degree and the second retention degree of each DOG-space image includes:
for any DOG space image, calculating the product between the first retention degree and the first preset weight, further calculating the product between the second retention degree and the second preset weight, and taking the value obtained by adding the two products as the characteristic retention degree of the corresponding DOG space image.
Further, obtaining the weight of each DOG space image according to the feature retention degree comprises the following steps:
sequencing the feature retention degree of each DOG space image according to the sequence from big to small to obtain a feature retention degree sequence; dividing the feature preservation degree sequence into a preset number of subsequences, setting the weight of each DOG space image corresponding to the first subsequence as a first weight, setting the weight of each DOG space image corresponding to the second subsequence as a second weight, and setting the weight of each DOG space image corresponding to the third subsequence as a third weight; wherein the preset number is 3.
Further, the first weight is greater than the second weight, and the second weight is greater than the third weight.
Further, image preprocessing is performed on the face image to be identified to obtain a face area image, including:
graying treatment is carried out on the face image to be identified, and a gray image of the face image to be identified is obtained; performing image enhancement processing on the gray level image to obtain an image after the image enhancement processing; and (3) carrying out segmentation processing on the image after image enhancement by utilizing a semantic segmentation technology to obtain a face region image of the face image to be recognized.
The invention has the following beneficial effects:
The invention provides an intelligent access control verification system based on face recognition. First, the scale change parameters of each sampled image of each layer are determined; these parameters help to retain facial skeleton features effectively when the DOG space is constructed later. Secondly, the degree to which the DOG space retains facial bone features at different scales is computed in combination with the SIFT operator, and the weight of each DOG space image is obtained from this feature retention degree; this weight helps to preserve the bone features of the facial region. At the same time, the weight improves the SIFT operator in the face recognition scenario to a certain extent, so that the finally output feature points can effectively extract facial skeleton features, which alleviates the deviation in the acquisition of DOG space key points caused by micro-variations of facial expression. Then, based on the weight of each DOG space image, the position of each feature point is determined in combination with the SIFT algorithm, which overcomes the deviation in feature point acquisition caused by the operator's low tolerance to micro-variations of facial expression. Finally, the positions of the feature points are input into the access control system to obtain an access control verification result, which helps achieve accurate face recognition and improves the verification effect of the access control verification system to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an intelligent entrance guard verification system based on face recognition.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of the specific implementation, structure, features and effects of the technical solution according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The specific scenario addressed by the invention is as follows: in the process of computing and extracting the feature points of the face image to be recognized with a SIFT operator, the output feature points cannot effectively retain the features of the facial bones because of slight changes of facial expression and other causes, so the access control verification effect is poor or face recognition cannot be realized at all.
In order to overcome the defects of the scene description, the embodiment provides an intelligent access control verification system based on face recognition, which comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the following steps:
acquiring a face image to be identified, and performing image preprocessing on the face image to be identified to acquire a face area image;
carrying out Gaussian pyramid ascending and descending sampling processing on the facial area image to obtain each sampling image of each layer; determining the scale change parameters of each sampling image of each layer according to the gray value of each pixel point in each sampling image;
performing scale change analysis according to the scale change parameters of each sampling image and each DOG space image of the same layer, and determining a first retention degree of each DOG space image;
for any DOG space image, determining the position of each marking space point according to the gray value of each pixel point in the DOG space image; performing bone feature fitting analysis according to the positions of the marked space points, and determining a second retention degree of the DOG space image;
Determining the feature retention degree of each DOG space image according to the first retention degree and the second retention degree of each DOG space image; obtaining the weight of each DOG space image according to the feature retention degree;
according to the weight of each DOG space image and each pixel point in each DOG space image, determining the position of each characteristic point by combining with a SIFT algorithm; and inputting the positions of the characteristic points into an access control system to obtain an access control verification result.
The following detailed development of each step is performed:
referring to fig. 1, there is shown an execution flow chart of the intelligent access control verification system based on face recognition, which comprises the following steps:
s1, acquiring a face image to be recognized, and performing image preprocessing on the face image to be recognized to obtain a face area image.
To verify the identity information of a person in the access control verification system, a face image of the person to be verified needs to be acquired; it is called the face image to be recognized. In order to facilitate the subsequent analysis of the image characteristics of the face image to be recognized, graying processing is performed on it to obtain its gray image; graying can be implemented in various ways, its implementation process is prior art, and it is not described in detail here.
Secondly, in order to improve the image quality of the gray level image of the face image to be recognized, the gray level image is subjected to contrast increasing treatment by utilizing histogram equalization, so that an image after image enhancement treatment is obtained, and the implementation process of the histogram equalization is the prior art and is not described in detail herein.
Meanwhile, in order to prevent the surrounding environment from causing adverse effects on the subsequent feature extraction, the background part needs to be removed by semantic segmentation, and the face area is reserved, namely, the image after image enhancement is subjected to segmentation processing by utilizing the semantic segmentation technology, so that the face area image of the face image to be identified is obtained.
It should be noted that the semantic segmentation network adopts a convolutional neural network structure, a cross-entropy function is used as the loss function during training, and a validation set is generally used to monitor the performance of the model; the network architecture and hyper-parameters are adjusted according to the performance indices, and iteration continues until convergence or until the performance is optimal. Performance metrics of the model include, but are not limited to, IoU (Intersection over Union), the Dice Coefficient, and the like. The implementation process of the semantic segmentation technology is prior art, is not within the scope of the present invention, and is not described in detail here.
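For concreteness only, the preprocessing chain of this step can be sketched as follows; this is a minimal illustration assuming OpenCV, and segment_fn is a placeholder for the trained semantic segmentation network rather than a component named in this embodiment.

```python
import cv2
import numpy as np

def preprocess_face(image_bgr, segment_fn):
    """Graying, histogram equalization and semantic segmentation of the face
    image to be recognized; segment_fn stands in for the trained segmentation
    network and must return a binary face mask of the same size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # graying processing
    enhanced = cv2.equalizeHist(gray)                    # contrast enhancement
    mask = segment_fn(enhanced)                          # 1 = face region, 0 = background
    face_region = np.where(mask > 0, enhanced, 0).astype(np.uint8)
    return face_region
```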
Thus far, the present embodiment obtains the face area image of the person to be authenticated.
S2, carrying out Gaussian pyramid ascending and descending sampling processing on the facial area image to obtain each sampling image of each layer; and determining the scale change parameters of each sampling image of each layer according to the gray value of each pixel point in each sampling image.
First, the face region image is subjected to Gaussian pyramid up-and-down sampling processing to obtain each sampled image of each layer.
In order to analyze the facial region image, the facial region image needs to be converted into facial region images with different sampling scales so as to overcome the defect that the extraction effect of the bone region is limited due to different reservation degrees of the SIFT operator on the bone region under different scales.
In this embodiment, according to a preset sampling step length and the number of layers of the gaussian pyramid, the face area image is subjected to up-sampling processing and down-sampling processing of the gaussian pyramid, so that each sampled image of each layer can be obtained, and the implementation process of the gaussian pyramid sampling processing is in the prior art, and no description is repeated here. Each layer of the gaussian pyramid contains gaussian filtered sample images of different dimensions, i.e. the layers of the gaussian pyramid differ in size from one sample image to another.
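Purely as an illustration, the Gaussian pyramid up- and down-sampling can be organized as below; the layer count and the use of OpenCV's pyrUp/pyrDown are assumptions of the sketch, since this embodiment only presupposes a preset sampling step length and number of layers.

```python
import cv2

def gaussian_pyramid_layers(face_region, n_layers=4):
    """Return one base image per pyramid layer: an up-sampled base followed by
    successively down-sampled images (image sizes differ between layers); each
    base is later Gaussian-filtered with the per-image variances set in S2."""
    layers = [cv2.pyrUp(face_region)]      # up-sampling: doubled-resolution base
    current = face_region
    for _ in range(n_layers - 1):
        current = cv2.pyrDown(current)     # down-sampling: halved resolution
        layers.append(current)
    return layers
```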
And secondly, determining the scale change parameters of each sampling image of each layer according to the gray value of each pixel point in each sampling image.
The higher the information content contained in a sampled image, the larger its scale change span should be, the purpose being to display the information completely within a limited number of scale spaces; conversely, the lower the information content contained in a sampled image, the smaller its scale change span should be, the purpose being to present the information in detail within a limited number of scale spaces.
And a first sub-step of determining the complexity degree of each sampling image according to the gray value of each pixel point in each sampling image.
In order to display the information of the sampled image as much as possible, that is, to perform quantization processing, it is necessary to calculate the complexity of the data information included in the sampled image of the original image. In order to determine the complexity of the sampled image, it is necessary to analyze basic data relationships, such as the relationship between gray variance and the area of the sampled image, based on gray information of the sampled image.
Firstly, for any sampling image of any layer, determining gray variance of the sampling image according to gray value of each pixel point in the sampling image; performing discrete Fourier transform processing on the sampled image to obtain each high-frequency part in the sampled image; and counting the number of pixels of each high-frequency part and the number of pixels of the sampled image.
In this embodiment, any one sampled image of a single layer is taken as an example to analyze its complexity; the number of pixels of the sampled image is the area of the sampled image; the calculation of the gray variance and of the discrete Fourier transform of the sampled image is prior art and is not described here. It should be noted that, by default, the high-frequency part in this embodiment consists of the pixel points corresponding to the part of the FFT (Fast Fourier Transform) coefficients whose positions fall in the first 50% after the inverse transform.
And secondly, determining the complexity of the sampled image according to the gray variance of the sampled image, the number of pixels of each high-frequency part and the number of pixels in the sampled image.
The ratio of the number of pixels of the sampling image to the gray variance is determined as a first complex factor, the accumulated sum of the number of pixels of each high-frequency part is taken as a second complex factor, the first complex factor and the second complex factor are multiplied, and the multiplied value is taken as the complexity of the corresponding sampling image.
As an example, the calculation formula of the complexity of the sampled image may be:
C_i = (S_i / V_i) · Σ_j n_j, with C_i ∈ (0, k]; where C_i is the complexity of the i-th sampled image, S_i is the number of pixels of the i-th sampled image, S_i / V_i is the first complexity factor, V_i is the gray variance of the i-th sampled image, Σ_j n_j is the second complexity factor, i.e. the accumulated sum of the pixel numbers of the high-frequency parts, and (0, k] is the range of values of the complexity of the i-th sampled image.
In the calculation formula of the complexity: the greater the complexity, the more information the sampled image of the face region image contains. The larger the gray variance, the higher the filtering intensity of the sampled image, the more strongly the edge parts of the image are filtered out, and the smaller the complexity of the image, so the gray variance is negatively correlated with the complexity. The first complexity factor can be regarded as a variation coefficient of the sampled image; it is used as a factor of the complexity because the size of the sampled image is not fixed, so the first complexity factor controls for this variable. The more pixels the high-frequency parts contain, the more detail there is in the spatial domain and, correspondingly, the higher the complexity and the more pronounced the edge characteristics of the sampled image, so the second complexity factor is positively correlated with the complexity. k is the upper limit of the complexity of the sampled image; different sampled images may correspond to different upper limits, which are not specifically limited here.
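A minimal sketch of the complexity computation under the above definitions follows; the selection of the high-frequency part by the top 50% of FFT coefficient magnitudes and the clamping to the upper limit k are assumptions made only to keep the example self-contained.

```python
import numpy as np

def complexity(sample, high_freq_fraction=0.5, k=1e6):
    """Complexity = (pixel count / gray variance) * (pixel count of the
    high-frequency parts), clamped to the upper limit k."""
    s = sample.astype(np.float64)
    n_pixels = s.size                                   # area of the sampled image
    gray_var = s.var() + 1e-12                          # gray variance (guard against 0)
    first_factor = n_pixels / gray_var                  # first complexity factor
    spectrum = np.abs(np.fft.fft2(s))                   # discrete Fourier transform
    cut = np.quantile(spectrum, 1.0 - high_freq_fraction)
    second_factor = int(np.count_nonzero(spectrum >= cut))  # high-frequency pixel count
    return min(first_factor * second_factor, k)
```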
And a second sub-step of determining the scale variation parameters of each sampling image according to the complexity degree of each sampling image.
Dividing each sampling image into two classes according to the complexity, setting Gaussian filter variance of each sampling image in each class, and taking the Gaussian filter variance as a scale change parameter of the corresponding sampling image.
In this embodiment, all the obtained complexity values are sorted from high to low and divided into two classes: the class with the lower complexity is marked as not complex, and the class with the higher complexity is marked as complex. If the number of sampled images in each class is 5, the not-complex class is assigned a set of five Gaussian filter variances σ as the scale change parameters of its sampled images, and the complex class is assigned another set of five Gaussian filter variances σ with a larger span as the scale change parameters of its sampled images, where σ denotes the Gaussian filter variance; the concrete variance values can be set by the implementer according to the specific practical situation.
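As a sketch only, the two-class division and the assignment of Gaussian filter variances can be written as below; splitting at the median complexity and the placeholder variance lists simple_sigmas / complex_sigmas are assumptions, since the concrete values are left to the implementer.

```python
def assign_scale_parameters(complexities, simple_sigmas, complex_sigmas):
    """Sort the sampled images by complexity, mark the upper half as complex and
    the lower half as not complex, then give each image a Gaussian filter
    variance from its class's list; that variance is its scale change parameter."""
    order = sorted(range(len(complexities)), key=complexities.__getitem__, reverse=True)
    complex_ids = set(order[:len(order) // 2])
    params, used_simple, used_complex = {}, 0, 0
    for i in range(len(complexities)):
        if i in complex_ids:
            params[i] = complex_sigmas[used_complex % len(complex_sigmas)]
            used_complex += 1
        else:
            params[i] = simple_sigmas[used_simple % len(simple_sigmas)]
            used_simple += 1
    return params
```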
Thus far, the present embodiment obtains the scale variation parameters of the respective sampled images of each layer.
S3, performing scale change analysis according to the scale change parameters of each sampling image and each DOG space image of the same layer, and determining the first retention degree of each DOG space image.
In order to obtain the retention degree of the facial bone portions under different scales, each factor affecting the retention degree needs to be analyzed so as to obtain the final feature retention degree under the different scale differences in the DOG (Difference of Gaussian) space. The feature retention degree comprises a first retention degree and a second retention degree, where retention refers to the retention of facial skeleton features; the higher the feature retention degree, the higher the weight when the key points of the corresponding DOG space image are computed. The main purpose of computing the weight is to amplify the layers that originally have a higher feature retention degree, so that the main features can still be effectively retained after the DOG space images are obtained by differencing. First, the first retention degree of each DOG space image is calculated, as follows:
The first step, according to the scale change parameters of each sampling image of the same layer, determining the scale change parameters of each DOG space image.
In this embodiment, according to the scale variation parameters of each sampling image of the same layer, the absolute value of the difference between the scale variation parameters of two adjacent sampling images in the same layer is calculated, and the absolute value of the difference between the scale variation parameters is used as the scale variation parameter of the corresponding DOG space image. The DOG spatial image is obtained by differencing two adjacent sampling images in the same layer, the single layer comprises a plurality of Gaussian smoothed images, the images in the same layer are equal in size, and the determination mode of the DOG spatial image is the prior art and is not described in detail herein.
And a second step of determining a first retention degree of each DOG space image according to the scale change parameters of each DOG space image of the same layer.
In this embodiment, the larger the scale variation parameter, the smaller the weight of the central pixel in the convolution kernel, and the higher the retention degree of the detail part of the image, so the scale variation parameter and the first retention degree have a positive correlation.
As an example, the calculation formula of the first retention level may be:
B_i = ((σ_i − σ_min) / t) · exp(−|σ_i − σ_{i−1}| / t); where B_i is the first retention degree of the i-th DOG space image of the same layer, σ_i is the scale change parameter of the i-th DOG space image of the same layer, σ_{i−1} is the scale change parameter of the (i−1)-th DOG space image, σ_min is the preset minimum scale change parameter, t is the preset maximum value of the scale difference range, and exp is the exponential function with the natural constant as its base.
In the calculation formula of the first retention degree, t is the preset maximum value of the scale difference range; based on the scale change parameters of the sampled images recorded in step S2, it can be set to 128, and the implementer may set this maximum value according to the specific practical situation without specific limitation. The difference between the scale change parameter and the preset minimum scale change parameter measures the size of the scale factor; if this difference is 0, the retention degree of the bone features under that scale change parameter is the lowest. The preset minimum scale change parameter can be set to 1.6, and the implementer may likewise set it according to the specific practical situation without specific limitation. By analyzing the scale change difference between two adjacent DOG space images: the larger the difference, the larger the scale span and the smaller the retention degree of detail features. It should be noted that i in this embodiment is greater than 1, i.e. the analysis of the first retention degree starts from the second DOG space image of the same layer.
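For illustration, the scale change parameters of the DOG space images and the first retention degree can be computed as below; the closed form mirrors the equation given above, which is itself an interpretation of the stated relationships rather than a verbatim transcription, and the defaults σ_min = 1.6 and t = 128 are the example values quoted in this step.

```python
import numpy as np

def first_retention_degrees(sample_sigmas, sigma_min=1.6, t=128.0):
    """sample_sigmas: scale change parameters of the sampled images of one layer.
    Each DOG image's parameter is the absolute difference of two adjacent sampled
    images; the first retention degree is evaluated from the second DOG image on."""
    dog_sigmas = [abs(sample_sigmas[j + 1] - sample_sigmas[j])
                  for j in range(len(sample_sigmas) - 1)]
    retention = {}
    for i in range(1, len(dog_sigmas)):                 # analysis starts at i > 1
        diff = abs(dog_sigmas[i] - dog_sigmas[i - 1])   # scale change difference
        retention[i] = (dog_sigmas[i] - sigma_min) / t * np.exp(-diff / t)
    return retention
```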
Thus far, the present embodiment obtains the first retention degree of each DOG space image.
S4, determining the positions of all the mark space points according to the gray value of each pixel point in each DOG space image; and carrying out bone feature fitting analysis according to the positions of the marked space points, and determining the second retention degree of each DOG space image.
It should be noted that the first retention degree obtained above only represents the scale factor that affects the feature retention degree; in order to further obtain the retention degree of the facial bones, the analysis needs to be combined with the specific characteristics of the facial bones. Owing to differences in reflection or absorption characteristics between facial bones and skin tissue or other facial features, the facial bone regions may appear brighter or darker than the surrounding regions, i.e. they exhibit significant gray-level changes, together with the structural features of the facial bones. These structural features may include relatively clear boundary lines, the relative positions between these boundary lines and the contours of other facial structures, and a regular geometric structure. From the specific expression of these features in the DOG space of the SIFT operator, the feature retention degree under the different scale differences within the layer is obtained, where the scale differences are the scale change parameters.
And the first step, determining the position of each mark space point according to the gray value of each pixel point in each DOG space image.
Performing Otsu threshold segmentation on the DOG space image to obtain each space point in the foreground region, and obtaining the gray average value corresponding to the space points in the foreground region; marking the space points whose gray characteristic values are smaller than or equal to the preset gray difference value to obtain each marked space point, and further obtaining the position of each marked space point.
In this embodiment, each independent pixel point in the DOG space image may be referred to as a space point. First, Otsu threshold segmentation is performed on the DOG space image based on the gray value of each of its pixel points; the implementation process of Otsu thresholding is prior art and is not described in detail here. The foreground pixel points obtained after the Otsu segmentation are selected, and the gray average value of all foreground pixel points is calculated. Secondly, the difference between the gray value of each space point and this gray average value is taken as the gray characteristic value of the corresponding space point. The gray characteristic value of each space point is compared with a preset gray difference value, and the space points whose gray characteristic value is smaller than or equal to the preset gray difference value are marked, giving the marked space points; the preset gray difference value may be set to 30, and the implementer may set its magnitude according to the specific practical situation, which is not limited here. Finally, the position of each marked space point in the DOG space image is obtained.
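A compact sketch of the marking step, assuming OpenCV's Otsu implementation; whether the gray characteristic value is the signed or the absolute difference from the foreground mean is not spelled out above, so the absolute difference is used here as an assumption.

```python
import cv2
import numpy as np

def marked_space_points(dog_image, gray_diff=30):
    """Otsu segmentation of the DOG space image, then marking of the space
    points whose gray characteristic value (|gray - foreground mean|) is
    smaller than or equal to the preset gray difference value."""
    img = cv2.normalize(dog_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, fg_mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    fg_vals = img[fg_mask > 0]
    fg_mean = float(fg_vals.mean()) if fg_vals.size else 0.0
    ys, xs = np.nonzero(fg_mask)
    return [(int(y), int(x)) for y, x in zip(ys, xs)
            if abs(float(img[y, x]) - fg_mean) <= gray_diff]
```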
And secondly, performing bone feature fitting analysis according to the positions of the marked space points, and determining a second retention degree of each DOG space image.
According to the positions of all the marked space points in the DOG space image, a least square method circle fitting model is utilized to obtain fitting probability and fitting circle areas corresponding to the DOG space image; determining each marked space point in the fitting circle area as a high fitting rate point, and calculating the distance between any two high fitting rate points to further obtain variance values corresponding to all the distances; taking the sum of the variance value and the preset numerical value as a denominator of the second retention degree, and taking the fitting probability as a numerator of the second retention degree to obtain the second retention degree.
In this embodiment, the implementation process of the least square method circle fitting model is the prior art, and will not be described in detail here. The fitting probability refers to the probability that the fitting circle area is close to a circle after each marking space point is subjected to least square circle fitting to carry out marking space point fitting, and the probability can be directly obtained in the least square circle fitting process; the fitting rate of each marked space point in the fitting circle area is high, and in order to quantify the facial skeleton structural characteristics, the marked space points with high fitting rate are analyzed, namely, each marked space point in the fitting circle area is determined to be a high fitting rate point.
As an example, the calculation formula of the second retention degree of the DOG space image may be:
E_i = P_i / (D_i + 1); where E_i is the second retention degree of the i-th DOG space image of the same layer, P_i is the fitting probability corresponding to the i-th DOG space image of the same layer, 1 is the preset value, and D_i is the variance value corresponding to all distances of the i-th DOG space image of the same layer, the distances being the Euclidean distances between any two high-fitting-rate points; the Euclidean distance can be computed from the positions of the two points, and the detailed calculation process is prior art and is not described here.
In the calculation formula of the second retention degree, the larger the fitting probability, the closer the shape of the facial skeleton feature in the DOG space is to a circle, and the larger the second retention degree; the smaller the variance value corresponding to all the distances between high-fitting-rate points, the more concentrated and stable the high-fitting-rate points are, the more reasonable the facial skeleton distribution is, and the greater the second retention degree of the DOG space image; the value 1 in the denominator prevents the special case in which the denominator is 0.
It should be noted that, the fitting rate refers to the fitting degree to the facial bone characteristics, the higher the fitting rate, the closer it is to the facial bone portions, and the higher the second retention degree at this time.
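The circle fitting and the second retention degree can be sketched as follows; because the fitting probability is only said to be obtained in the fitting process, the inlier fraction of the fitted circle is used here as a stand-in for it, and the inlier tolerance inlier_tol is an assumed parameter.

```python
import numpy as np
from itertools import combinations

def second_retention(marked_points, inlier_tol=2.0):
    """Least-squares (Kasa) circle fit over the marked space points, then
    second retention degree = fitting probability / (distance variance + 1)."""
    pts = np.asarray(marked_points, dtype=np.float64)
    if len(pts) < 3:
        return 0.0
    # Circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense.
    A = np.c_[2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))]
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(max(c + a ** 2 + b ** 2, 0.0))
    resid = np.abs(np.hypot(pts[:, 0] - a, pts[:, 1] - b) - radius)
    high_fit = pts[resid <= inlier_tol]                  # high-fitting-rate points
    p_fit = len(high_fit) / len(pts)                     # fitting-probability proxy
    dists = [float(np.hypot(*(p - q))) for p, q in combinations(high_fit, 2)]
    var_d = float(np.var(dists)) if dists else 0.0       # variance of pairwise distances
    return p_fit / (var_d + 1.0)                         # preset value 1 in the denominator
```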
Thus far, the present embodiment obtains the second degree of retention of each DOG space image.
S5, determining the feature retention degree of each DOG space image according to the first retention degree and the second retention degree of each DOG space image; and obtaining the weight of each DOG space image according to the feature retention degree.
And a first step of determining the feature retention degree of each DOG space image according to the first retention degree and the second retention degree of each DOG space image.
For any DOG space image, calculating the product between the first retention degree and the first preset weight, further calculating the product between the second retention degree and the second preset weight, and taking the value obtained by adding the two products as the characteristic retention degree of the corresponding DOG space image.
In this embodiment, the importance degree of the first retention degree and the second retention degree are different, that is, the second retention degree representing the real feature of the face is amplified, so the first preset weight is smaller than the second preset weight, and the first preset weight and the second preset weight are added to be 1. In order to comprehensively quantify the feature retention degree of different DOG space images on facial bone features, the feature retention degree needs to be calculated by combining the first retention degree and the second retention degree, and a calculation formula of the feature retention degree can be as follows:
T_i = a·B_i + b·E_i; where T_i is the feature retention degree of the i-th DOG space image in the same layer, a is the first preset weight, B_i is the first retention degree of the i-th DOG space image of the same layer, b is the second preset weight, and E_i is the second retention degree of the i-th DOG space image of the same layer.
In the calculation formula of the feature retention degree, the first preset weight takes an empirical value of 0.3, so the second preset weight is 0.7; the first preset weight can be determined by the implementer according to the specific practical situation and is not specifically limited here.
And secondly, obtaining the weight of each DOG space image according to the feature retention degree.
It should be noted that, after the feature retention degree of each DOG space image is obtained, different scale change parameters correspond to different feature retention degrees, and at this time, weight division needs to be performed according to the feature retention degree, that is, the weight of each DOG space image is determined.
Sequencing the feature retention degree of each DOG space image according to the sequence from big to small to obtain a feature retention degree sequence; dividing the feature preservation degree sequence into a preset number of subsequences, setting the weight of each DOG space image corresponding to the first subsequence as a first weight, setting the weight of each DOG space image corresponding to the second subsequence as a second weight, and setting the weight of each DOG space image corresponding to the third subsequence as a third weight; wherein the preset number is 3, the first weight is greater than the second weight, and the second weight is greater than the third weight.
In this embodiment, each DOG space image is classified into three types according to the feature preservation degree, namely, firstly, the low feature preservation degree, secondly, the medium feature preservation degree, and finally, the high feature preservation degree, the weights of the DOG space images corresponding to different types are different, the weights of the DOG space images corresponding to the same type are the same, and the weights of the DOG space images are the weights of the key points. The first weight takes the experience value of 0.5, the second weight takes the experience value of 0.3, the third weight takes the experience value of 0.2, the value after the addition of the first weight, the second weight and the third weight is 1, and the implementer can set the classification number, namely the preset number of subsequences and the weights corresponding to different types according to specific actual conditions, so that the method is not particularly limited.
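A short sketch of the weight assignment by sorted feature retention degree, using the empirical group weights 0.5 / 0.3 / 0.2 quoted above; how a number of DOG space images that is not divisible by three is split into the sub-sequences is an assumption (ceiling division) of this sketch.

```python
def dog_image_weights(feature_retentions, group_weights=(0.5, 0.3, 0.2)):
    """Sort the DOG space images by feature retention degree in descending order,
    split the sequence into three sub-sequences and give every image inside a
    sub-sequence the same weight (first weight > second weight > third weight)."""
    order = sorted(range(len(feature_retentions)),
                   key=feature_retentions.__getitem__, reverse=True)
    group_size = max(1, -(-len(order) // len(group_weights)))   # ceiling division
    weights = {}
    for rank, idx in enumerate(order):
        weights[idx] = group_weights[min(rank // group_size, len(group_weights) - 1)]
    return weights
```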
Thus far, the embodiment obtains the weight of each DOG space image of the same layer.
S6, determining the position of each characteristic point according to the weight of each DOG space image and each pixel point in each DOG space image by combining with a SIFT algorithm; and inputting the positions of the characteristic points into an access control system to obtain an access control verification result.
It should be noted that, the conventional feature point extraction determines whether the point to be analyzed is a feature point by comparing the gray scale magnitude relationship between the point to be analyzed and 26 points. However, the embodiment converts the direct size comparison relation into the size comparison relation with added weight based on the feature retention degree, so that the finally obtained feature points can have the features of the skeleton region, and the defect that the extraction effect of the feature points on the face region is limited is overcome.
In this embodiment, taking a certain spatial point as an example, the gray value of the spatial point needs to be compared with the gray values of the spatial points in the eight neighborhoods in the DOG spatial image to which the spatial point belongs, and also the gray values of the 18 corresponding spatial points in the other two DOG spatial images in the same positions as the spatial point and the spatial points in the eight neighborhoods thereof need to be compared in size. And taking the space points compared with the space points as comparison space points, wherein each space point is provided with a plurality of corresponding comparison space points, and if the gray value of the space point is smaller or larger than the gray value of 26 comparison space points, judging the space point as a characteristic point.
In order to improve the accuracy of the obtained feature points, the weight of the DOG space image to which the space points belong needs to be combined, specifically: when the gray value is compared, the smaller the feature retention degree is, the smaller the weight is, and the larger tolerance is provided for the result after the comparison. Such as: the gray value of the space point A is 50, the weight of the DOG space image to which the space point A belongs is minimum, at this time, the gray value of 2 comparison space points is smaller than the space point A, but the other 24 comparison space points are larger than the space point A, the space point A is judged to be not the feature point according to the traditional feature point extraction method, but the weight determined by the feature retention degree can be known to be minimum, the tolerance range of the space image with low feature retention degree is increased, and the space point A can be judged to be the feature point.
Regarding the tolerance range of a DOG space image, take a space point belonging to the image with the minimum feature retention degree as an example of judging whether it is a feature point. When the gray values of all the comparison space points of this space point are all smaller, or all larger, than its gray value, it is output directly as a feature point. When this condition is not satisfied, the gray value of each comparison space point is reduced according to the weight of the DOG space image to which it belongs, so that the gray value of the space point can become smaller than the reduced gray values; the comparison tolerance is thereby increased, and feature points carrying facial skeleton features are output.
And (3) referring to the judging process of whether the space point A is a feature point, based on the weight of each DOG space image, calculating and judging whether each pixel point in the DOG space is a feature point, outputting final feature points, wherein each obtained feature point is a feature point with facial skeleton features, and further determining the position of each feature point in the corresponding DOG space image. The feature point extraction process combined with the SIFT algorithm is the prior art, and is not within the scope of the present invention, and will not be described in detail here.
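The weighted extremum test can be sketched as below; the exact rule by which a comparison space point's gray value is reduced according to the weight is not given in closed form above, so shrinking every neighbour towards the candidate by (1 − weight) · tol_scale is an illustrative choice, and tol_scale is an assumed parameter.

```python
import numpy as np

def is_feature_point(dog_stack, weights, s, y, x, tol_scale=1.0):
    """dog_stack: list of same-sized DOG space images of one layer;
    weights[s]: weight of the s-th DOG image; (y, x): candidate position,
    assumed to satisfy 1 <= s <= len(dog_stack) - 2 and to lie off the border.
    Classical 26-neighbour extremum test first, then a weighted retry."""
    centre = float(dog_stack[s][y, x])
    neighbours = []
    for ds in (-1, 0, 1):
        img = dog_stack[s + ds]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if ds == 0 and dy == 0 and dx == 0:
                    continue
                neighbours.append(float(img[y + dy, x + dx]))
    neighbours = np.array(neighbours)                 # the 26 comparison space points
    if np.all(centre > neighbours) or np.all(centre < neighbours):
        return True                                   # ordinary SIFT extremum
    slack = (1.0 - weights[s]) * tol_scale            # lower weight -> larger tolerance
    shrunk = neighbours - np.sign(neighbours - centre) * slack
    return bool(np.all(centre > shrunk) or np.all(centre < shrunk))
```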
And finally, inputting the positions of the characteristic points into an access control system for matching identification to obtain an access control verification result of the personnel to be verified.
So far, the embodiment realizes intelligent access control verification based on the face recognition technology.
The invention provides an intelligent access control verification system based on face recognition, which determines scale change parameters between each layer in a pyramid by analyzing preprocessed images; the acquisition of the parameters is beneficial to effectively reserving facial bone characteristics when the DOG space is constructed later; by analyzing the reservation degree of the skeleton feature under each scale, the weight when calculating the key points is determined, and the weight is acquired to be favorable for calculating the key points through the DOG space, so that the key points have corresponding features, and the method is favorable for avoiding errors of the access control verification result to a certain extent.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention and are intended to be included within the scope of the invention.

Claims (6)

1. The intelligent access control verification system based on face recognition is characterized by comprising a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the following steps:
acquiring a face image to be identified, and performing image preprocessing on the face image to be identified to acquire a face area image;
carrying out Gaussian pyramid ascending and descending sampling processing on the facial area image to obtain each sampling image of each layer; determining the scale change parameters of each sampling image of each layer according to the gray value of each pixel point in each sampling image;
performing scale change analysis according to the scale change parameters of each sampling image and each DOG space image of the same layer, and determining a first retention degree of each DOG space image;
for any DOG space image, determining the position of each marking space point according to the gray value of each pixel point in the DOG space image; performing bone feature fitting analysis according to the positions of the marked space points, and determining a second retention degree of the DOG space image;
determining the feature retention degree of each DOG space image according to the first retention degree and the second retention degree of each DOG space image; obtaining the weight of each DOG space image according to the feature retention degree;
According to the weight of each DOG space image and each pixel point in each DOG space image, determining the position of each characteristic point by combining with a SIFT algorithm; inputting the positions of the characteristic points into an access control system to obtain an access control verification result;
according to the gray value of each pixel point in each sampling image, determining the scale variation parameter of each sampling image of each layer comprises the following steps:
for any sampling image of any layer, determining the gray variance of the sampling image according to the gray value of each pixel point in the sampling image; performing discrete Fourier transform processing on the sampled image to obtain each high-frequency part in the sampled image; counting the number of pixels of each high-frequency part and the number of pixels of the sampled image; determining the complexity of the sampled image according to the gray variance of the sampled image, the number of pixels of each high-frequency part and the number of pixels in the sampled image;
dividing each sampling image into two classes according to the complexity, setting Gaussian filter variance of each sampling image in each class, and taking the Gaussian filter variance as a scale variation parameter of the corresponding sampling image;
performing scale change analysis according to scale change parameters of each sampling image and each DOG space image of the same layer, and determining a first retention degree of each DOG space image, wherein the method comprises the following steps:
Calculating the absolute value of the difference between the scale change parameters of two adjacent sampling images in the same layer according to the scale change parameters of each sampling image in the same layer, and taking the absolute value of the difference between the scale change parameters as the corresponding scale change parameter of the DOG space image;
according to the scale change parameters of each DOG space image of the same layer, determining a first retention degree of each DOG space image, wherein a calculation formula of the first retention degree is as follows:
B_i = ((σ_i − σ_min) / t) · exp(−|σ_i − σ_{i−1}| / t); where B_i is the first retention degree of the i-th DOG space image of the same layer, σ_i is the scale change parameter of the i-th DOG space image of the same layer, σ_{i−1} is the scale change parameter of the (i−1)-th DOG space image, σ_min is the preset minimum scale change parameter, t is the preset maximum value of the scale difference range, and exp is the exponential function with the natural constant as its base;
performing bone feature fitting analysis according to the positions of the marked space points to determine a second retention degree of the DOG space image, wherein the bone feature fitting analysis comprises the following steps:
according to the positions of all the marked space points in the DOG space image, a least square method circle fitting model is utilized to obtain fitting probability and fitting circle areas corresponding to the DOG space image; determining each marked space point in the fitting circle area as a high fitting rate point, and calculating the distance between any two high fitting rate points to further obtain variance values corresponding to all the distances; taking the sum of the variance value and the preset value as a denominator of the second retention degree, and taking the fitting probability as a numerator of the second retention degree to obtain the second retention degree;
Obtaining the weight of each DOG space image according to the feature retention degree, wherein the method comprises the following steps:
sorting the feature retention degrees of the DOG space images in descending order to obtain a feature retention degree sequence; dividing the feature retention degree sequence into a preset number of subsequences, setting the weight of each DOG space image corresponding to the first subsequence as a first weight, the weight of each DOG space image corresponding to the second subsequence as a second weight, and the weight of each DOG space image corresponding to the third subsequence as a third weight; wherein the preset number is 3;
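A minimal sketch of this weighting step, assuming the three subsequences are of (near-)equal length and using placeholder weight values that satisfy first > second > third (claim 5):

import numpy as np

def dog_image_weights(retention_degrees, weights=(1.0, 0.6, 0.3)):
    r = np.asarray(retention_degrees, dtype=np.float64)
    order = np.argsort(-r)                    # indices sorted by descending feature retention degree
    subsequences = np.array_split(order, 3)   # the preset number of subsequences is 3
    out = np.empty_like(r)
    for weight, idx in zip(weights, subsequences):
        out[idx] = weight                     # first, second and third weight respectively
    return out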
the determining the position of each feature point according to the weight of each DOG space image and each pixel point in each DOG space image by combining with a SIFT algorithm comprises the following steps:
the gray value of each space point is compared not only with the gray values of the space points in its eight neighborhood within the DOG space image to which it belongs, but also with the gray values of the 18 space points in the other two DOG space images that are located at the same position as the space point and in its eight neighborhood; each space point compared with the space point in question is taken as a comparison space point;
when the gray values are compared, the smaller the feature retention degree and hence the weight of a DOG space image, the larger the tolerance applied to the result of its gray value comparison; when the gray values of all comparison space points of a space point are all smaller than, or all larger than, the gray value of that space point, the space point is output as a feature point; when this condition is not satisfied, the gray value of each comparison space point is reduced according to the weight of the DOG space image to which it belongs, so that the comparison tolerance is increased and feature points carrying facial skeleton features can still be output.
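A sketch of this weighted extremum test over one group of three adjacent DOG space images; the specific shrink rule linking weight to tolerance is an assumption, the claim only requires that a smaller weight yield a larger comparison tolerance:

import numpy as np

def weighted_extrema(dog_stack, weights, shrink=0.9):
    """dog_stack: (3, H, W) array of adjacent DOG space images; extrema are searched in the middle one.
    weights: per-image weights from the retention degree ranking."""
    d = np.asarray(dog_stack, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64).reshape(3, 1, 1)
    _, h, width = d.shape
    mask = np.ones((3, 3, 3), dtype=bool)
    mask[1, 1, 1] = False                                  # exclude the space point itself
    points = []
    for y in range(1, h - 1):
        for x in range(1, width - 1):
            centre = d[1, y, x]
            cube = d[:, y - 1:y + 2, x - 1:x + 2]
            neighbours = cube[mask]                        # the 8 + 18 comparison space points
            if centre > neighbours.max() or centre < neighbours.min():
                points.append((y, x))                      # strict extremum, output as feature point
                continue
            tolerant = (cube * shrink ** (1.0 - w))[mask]  # lower weight -> stronger shrink -> larger tolerance
            if centre > tolerant.max() or centre < tolerant.min():
                points.append((y, x))                      # accepted under the relaxed comparison
    return points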
2. The intelligent access control verification system based on face recognition according to claim 1, wherein determining the complexity of the sampled image based on the gray variance of the sampled image, the number of pixels of each high-frequency part, and the number of pixels in the sampled image comprises:
the ratio of the number of pixels of the sampled image to the gray variance is determined as a first complex factor, the accumulated sum of the numbers of pixels of the high-frequency parts is taken as a second complex factor, the first complex factor is multiplied by the second complex factor, and the resulting product is taken as the complexity of the corresponding sampled image.
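A direct transcription of claim 2 into code; the small epsilon guarding the division is an implementation detail, not part of the claim:

def sampled_image_complexity(gray_variance, high_freq_pixel_counts, n_pixels):
    first_factor = n_pixels / (gray_variance + 1e-9)      # ratio of pixel count to gray variance
    second_factor = sum(high_freq_pixel_counts)           # accumulated pixel count of the high-frequency parts
    return first_factor * second_factor                   # complexity of the sampled image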
3. The intelligent access control verification system based on face recognition according to claim 1, wherein determining the position of each marked space point according to the gray value of each pixel point in the DOG space image comprises:
performing Otsu threshold segmentation on the DOG space image to obtain each space point in the foreground region, and obtaining the gray average value corresponding to the space points in the foreground region; marking the space points whose gray characteristic value is smaller than or equal to the preset gray difference value to obtain each marked space point, and further obtaining the position of each marked space point.
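A minimal sketch of claim 3, assuming the "gray characteristic value" of a space point is its absolute deviation from the foreground gray average; OpenCV's Otsu thresholding is used and the preset gray difference value is a placeholder:

import cv2
import numpy as np

def marked_space_points(dog_image, gray_diff=10.0):
    img = cv2.normalize(dog_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, fg = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu threshold segmentation
    fg_mask = fg > 0                                                          # space points of the foreground region
    mean_val = img[fg_mask].mean()                                            # gray average of the foreground
    marked = fg_mask & (np.abs(img.astype(np.float64) - mean_val) <= gray_diff)
    return np.argwhere(marked)                                                # positions of the marked space points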
4. The face recognition-based intelligent access control verification system according to claim 1, wherein determining the feature retention degree of each DOG-space image according to the first retention degree and the second retention degree of each DOG-space image comprises:
for any DOG space image, calculating the product of the first retention degree and a first preset weight, calculating the product of the second retention degree and a second preset weight, and taking the sum of the two products as the feature retention degree of the corresponding DOG space image.
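A one-line transcription of claim 4; the two preset weights are placeholders:

def feature_retention_degree(first_retention, second_retention, w1=0.5, w2=0.5):
    return w1 * first_retention + w2 * second_retention   # weighted sum of the two retention degrees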
5. The face recognition-based intelligent access control verification system of claim 1, wherein the first weight is greater than the second weight, and the second weight is greater than the third weight.
6. The intelligent access control verification system based on face recognition according to claim 1, wherein performing image preprocessing on the face image to be recognized to obtain a face region image comprises:
performing graying processing on the face image to be recognized to obtain a gray image of the face image to be recognized; performing image enhancement processing on the gray image to obtain an enhanced image; and segmenting the enhanced image by using a semantic segmentation technique to obtain the face region image of the face image to be recognized.
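A minimal sketch of claim 6, with histogram equalisation standing in for "image enhancement processing" and any semantic segmentation callable returning a binary face mask standing in for the segmentation step; both stand-ins are assumptions, the claim does not fix them:

import cv2
import numpy as np

def preprocess_face_image(bgr_image, segment_fn):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # graying processing
    enhanced = cv2.equalizeHist(gray)                    # image enhancement processing (stand-in)
    face_mask = segment_fn(enhanced)                     # semantic segmentation -> binary face mask
    return np.where(face_mask > 0, enhanced, 0)          # face region image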
CN202311532722.7A 2023-11-17 2023-11-17 Intelligent access control verification system based on face recognition Active CN117275130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311532722.7A CN117275130B (en) 2023-11-17 2023-11-17 Intelligent access control verification system based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311532722.7A CN117275130B (en) 2023-11-17 2023-11-17 Intelligent access control verification system based on face recognition

Publications (2)

Publication Number Publication Date
CN117275130A CN117275130A (en) 2023-12-22
CN117275130B true CN117275130B (en) 2024-01-26

Family

ID=89204776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311532722.7A Active CN117275130B (en) 2023-11-17 2023-11-17 Intelligent access control verification system based on face recognition

Country Status (1)

Country Link
CN (1) CN117275130B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140110044A (en) * 2012-01-02 2014-09-16 Telecom Italia S.p.A. Image analysis
CN105022835A (en) * 2015-08-14 2015-11-04 武汉大学 Public safety recognition method and system for crowd sensing big data
CN110473181A (en) * 2019-07-31 2019-11-19 天津大学 Screen content image based on edge feature information without ginseng quality evaluating method
US11468354B1 (en) * 2019-12-10 2022-10-11 Amazon Technologies, Inc. Adaptive target presence probability estimation
CN116630613A (en) * 2023-04-13 2023-08-22 宁波大学 Quality evaluation method for dynamic scene multi-exposure fusion light field image
US20230316386A1 (en) * 2016-10-05 2023-10-05 Digimarc Corporation Image processing arrangements

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013102503A1 (en) * 2012-01-02 2013-07-11 Telecom Italia S.P.A. Method and system for image analysis

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140110044A (en) * 2012-01-02 2014-09-16 Telecom Italia S.p.A. Image analysis
CN105022835A (en) * 2015-08-14 2015-11-04 武汉大学 Public safety recognition method and system for crowd sensing big data
US20230316386A1 (en) * 2016-10-05 2023-10-05 Digimarc Corporation Image processing arrangements
CN110473181A (en) * 2019-07-31 2019-11-19 天津大学 Screen content image based on edge feature information without ginseng quality evaluating method
US11468354B1 (en) * 2019-12-10 2022-10-11 Amazon Technologies, Inc. Adaptive target presence probability estimation
CN116630613A (en) * 2023-04-13 2023-08-22 宁波大学 Quality evaluation method for dynamic scene multi-exposure fusion light field image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A survey of image matching methods; Jia Di; Journal of Image and Graphics (No. 05); 677-699 *
Research on automatic urban road mapping methods based on remote sensing images; Sun Xian; Wang Hongqi; Zhang Zheng; Huang Yu; Acta Optica Sinica (No. 01); 86-92 *

Also Published As

Publication number Publication date
CN117275130A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN115861135B (en) Image enhancement and recognition method applied to panoramic detection of box body
CN111462042B (en) Cancer prognosis analysis method and system
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN110188614A (en) It is a kind of based on skin crack segmentation NLM filtering refer to vein denoising method
CN110705565A (en) Lymph node tumor region identification method and device
CN110853009A (en) Retina pathology image analysis system based on machine learning
WO2020066257A1 (en) Classification device, classification method, program, and information recording medium
CN113222062A (en) Method, device and computer readable medium for tobacco leaf classification
CN113781488A (en) Tongue picture image segmentation method, apparatus and medium
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN114266794B (en) Pathological section image cancer region segmentation system based on full convolution neural network
CN110021019B (en) AI-assisted hair thickness distribution analysis method for AGA clinical image
CN111666813B (en) Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN116246174B (en) Sweet potato variety identification method based on image processing
CN114863189B (en) Intelligent image identification method based on big data
CN117115117A (en) Pathological image recognition method based on small sample, electronic equipment and storage medium
CN117275130B (en) Intelligent access control verification system based on face recognition
CN111598144B (en) Training method and device for image recognition model
CN114565626A (en) Lung CT image segmentation algorithm based on PSPNet improvement
CN111259914B (en) Hyperspectral extraction method for characteristic information of tea leaves
CN113420793A (en) Improved convolutional neural network ResNeSt 50-based gastric ring cell carcinoma classification method
CN108154107B (en) Method for determining scene category to which remote sensing image belongs
CN115578400A (en) Image processing method, and training method and device of image segmentation network
CN112634226A (en) Head CT image detection device, method, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant