CN112150692A - Access control method and system based on artificial intelligence - Google Patents

Access control method and system based on artificial intelligence

Info

Publication number
CN112150692A
CN112150692A (application number CN202011098006.9A)
Authority
CN
China
Prior art keywords
image
face
module
access control
point
Prior art date
Legal status
Withdrawn
Application number
CN202011098006.9A
Other languages
Chinese (zh)
Inventor
吴喜庆
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202011098006.9A
Publication of CN112150692A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 - Individual registration on entry or exit
    • G07C 9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to the technical field of access control, in particular to an access control method and system based on artificial intelligence. The method comprises: collecting an RGB image and a depth image; segmenting the RGB image to obtain a second image containing only face information; performing face key point detection on the second image to obtain 2D feature points, and mapping the 2D feature points to the depth image to obtain a 3D feature map containing 3D feature points; outputting the facial expression category of the RGB image through an image classification network; setting the radius of a clustering algorithm according to the facial expression category and the 3D feature points; obtaining face region point sets through a clustering operation and judging whether to continue detection; dividing a plurality of regions of interest according to the region point sets and cropping the RGB image to obtain a third image; calculating the similarity between the third image and the data in the database; and constructing a matching model and performing face recognition according to the matching model. The invention solves the problem that expression changes and partial occlusion impair face recognition at the access control.

Description

Access control method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of access control, in particular to an access control method and system based on artificial intelligence.
Background
Access control systems are widely used in office buildings and residential areas, and an access control system with face recognition is fast and convenient. However, existing face recognition access control systems have difficulty detecting occlusion, and facial expression also strongly influences the detection result, so the face cannot be recognized accurately.
In the prior art, occlusion can be detected, but facial expression still strongly affects the detection result, and the face cannot be recognized effectively and accurately when the expression changes.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide an access control method and system based on artificial intelligence, wherein the adopted technical scheme is as follows:
The invention provides an access control method based on artificial intelligence, which comprises the following steps:
acquiring a first image, wherein the first image comprises an RGB image and a depth image;
segmenting the RGB image to obtain a second image containing face information;
performing face key point detection on the second image to obtain 2D feature points, and mapping the 2D feature points to the depth image to obtain a 3D feature map containing the 3D feature points;
outputting the facial expression categories of the RGB images through an image classification network;
setting the radius of a clustering algorithm according to the facial expression category and the 3D feature points; obtaining face region point sets through a clustering operation; when the number of the region point sets is smaller than a preset first empirical threshold, judging that the occlusion is severe and not opening the access control;
when the number of the region point sets is larger than the first empirical threshold, judging that the face is not occluded, dividing a plurality of regions of interest according to the region point sets, cropping the RGB image with the regions of interest, and interpolating to obtain a third image;
calculating the similarity between the third image and the face features in the database;
carrying out difference analysis on the facial expression categories and expression categories of a database, constructing a matching model, and carrying out face recognition according to the matching model;
and judging whether to open the access control according to the face recognition result.
Further, the setting of the radius of the clustering algorithm specifically includes:
setting d = xmax - xmin, wherein d is the lateral extent of the face in the 3D feature map, xmax is the maximum abscissa of the face key points, and xmin is the minimum abscissa of the face key points;
the radius r1 of the initial clustering algorithm satisfies:
r1 = d/α
wherein α is an empirical value;
when the expression is the normal expression, the radius r of the clustering algorithm satisfies the following condition:
r=r1
when the expression is the happy expression or the sad expression, the radius r of the clustering algorithm satisfies:
r = r1(1 + β)
wherein beta is an expansion coefficient, and specifically satisfies the following conditions:
β=kd
where k is the scaling factor and k > 0.
Further, the obtaining of the face region point set through the clustering operation specifically includes:
setting a second empirical threshold m2 for the number of key points, and traversing each face key point according to the radius r;
counting, with each key point as center, the number of points in the circular area of clustering radius r; key points whose count is not less than the second empirical threshold m2 are center points, and the others are outliers;
calculating the distance between the center points, and connecting center points whose distance is smaller than the clustering radius r to obtain initial region point sets; and assigning the outliers to the nearest initial region point set to finally obtain the region point sets.
Further, the specific method for dividing the multiple regions of interest to crop the RGB image includes:
and generating a minimum circumscribed rectangle according to the central point coordinates of the points in the region point set, and taking the minimum circumscribed rectangle as the region of interest.
Further, the constructing a matching model specifically includes:
dividing each region point set into corresponding face regions according to the central point coordinates;
establishing a matching model:
τ = Σ γg · matchg (summing over g = 1, …, G)
wherein τ is the overall matching degree; G is the number of face regions; g is the g-th face region; γg is the weight of the g-th face region; and matchg is the result of the difference analysis of the g-th face region.
Further, the method for dividing the face region specifically includes:
searching for the nose center point according to the center point coordinates, the nose center point coordinates being the central coordinates of the whole face region;
and obtaining the center points of the other face regions from their coordinate relationship to the nose center point.
Further, when the access control is judged not to open, identity verification is performed; if the verification succeeds, the access control is opened; and if the verification succeeds and the face is judged not to be occluded, the facial features and expression category of the third image are stored in the database.
The invention also provides an access control system based on artificial intelligence, which comprises: an image acquisition module, a semantic segmentation module, a feature point acquisition module, an image classification module, a cluster analysis module, a region-of-interest division module, a similarity detection module, a database, a difference analysis module, a matching model, an access control module and an identity verification module;
the image acquisition module is used for acquiring a first image, and the first image comprises an RGB image and a depth image;
the semantic segmentation module is used for segmenting the RGB image to obtain a second image only containing face information;
the feature point acquisition module is used for acquiring 2D feature points through the second image and obtaining 3D feature points by combining the 2D feature points and the depth image;
the image classification module is used for processing the RGB image and outputting facial expression categories;
the cluster analysis module is used for setting the radius of a clustering algorithm through the 3D feature points, obtaining a face region point set through the clustering algorithm and judging whether the detection can be continued or not;
the interesting region dividing module is used for dividing a plurality of interesting regions according to the region point set, cutting the RGB image, and interpolating to obtain a third image which is as large as the original image;
the similarity detection module is used for calculating the similarity between the third image and the face features stored in the database;
the database stores the facial features and the corresponding expression categories;
the difference analysis module is used for carrying out difference analysis on the facial expression categories and the expression categories stored in the database to obtain required matching model parameters;
the matching model is used for calculating the matching degree of the human face according to the model;
the access control module is used for controlling the opening and closing of an access;
the identity authentication module is used for performing identity authentication when the entrance guard is judged to be unopened.
The invention has the following beneficial effects:
1. To counter the influence of facial expression changes on face recognition, the facial expressions are classified, and the weight and threshold of each face region are adjusted based on the classification result, which reduces the influence of expression changes on the face matching result and improves matching accuracy.
2. The clustering radius is set according to the facial expression category and the facial feature points, which improves the accuracy of the face region point sets and thereby greatly improves the accuracy of the clustering operation.
3. During identity verification, if verification succeeds and the face is judged not to be occluded, the collected facial features and expression category are stored in the database, so the database is kept up to date without requiring the user to provide additional data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of an access control method based on artificial intelligence according to an embodiment of the present invention;
fig. 2 is a block diagram of an access control system based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the predetermined objects and their effects, the following detailed description of the method and system for access control based on artificial intelligence, their structures, features and effects, is given with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of an access control method and system based on artificial intelligence in detail with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of an access control method based on artificial intelligence according to an embodiment of the present invention is shown, where the method includes:
step S1: a first image is acquired, the first image including an RGB image and a depth image.
A face image is collected by an RGB-D camera, yielding an RGB image and a depth image of the face. The two images are the same size and correspond point by point.
Step S2: segmenting the RGB image to obtain a second image containing face information.
In the application scene, occluding objects are usually items such as sunglasses, masks and scarves, which are clearly distinguishable from facial features, so they are classified at pixel level by a semantic segmentation network. The segmentation network can adopt an encoder-decoder structure: the RGB image is the input, the encoder extracts features and outputs a feature map, and the decoder upsamples the feature map and outputs a semantic segmentation map with three pixel categories: face, occlusion, and irrelevant items. A mask is generated from the segmentation map and binarized, and the binarized image is multiplied point by point with the RGB image to obtain a second image containing only the face information. This operation removes irrelevant items from the image and discards the occluded face areas, which benefits the clustering of the subsequent face key point detection results, since fewer discrete points appear during the clustering operation.
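The mask operation above is simple enough to sketch. The following Python fragment is illustrative only: the function name, the label convention (0 = face), and the use of NumPy are assumptions rather than part of the original disclosure.

```python
import numpy as np

def apply_face_mask(rgb, seg_map, face_label=0):
    """Binarize a per-pixel segmentation map and keep only face pixels.

    A minimal sketch: `seg_map` is assumed to be the class map output by
    the semantic segmentation network (face / occlusion / irrelevant items),
    with the same height and width as `rgb`.
    """
    mask = (seg_map == face_label).astype(rgb.dtype)  # binarized mask
    return rgb * mask[..., None]                      # point-by-point product

# Hypothetical usage:
# second_image = apply_face_mask(rgb_image, seg_map)
```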
The semantic segmentation network is trained as follows: the training set consists of RGB images of front-view faces collected by a camera, including faces occluded by common items such as sunglasses, masks and scarves. The images are annotated at pixel level with three categories: face, occlusion, and irrelevant items. The collected RGB images and corresponding annotations are fed into the network for training, and the loss function is a cross-entropy loss.
Step S3: performing face key point detection on the second image to obtain 2D feature points, and mapping the 2D feature points to the depth image to obtain a 3D feature map containing the 3D feature points.
There are many methods for detecting face key points, such as OpenFace or the Deep Alignment Network (DAN). An ordinary face detection task yields the 2D feature points and their coordinates. Using the point-by-point correspondence between the RGB image and the depth image, the depth value at each coordinate is read from the depth image and attached to the corresponding 2D feature point, giving a 3D feature map containing the 3D feature points.
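Under the point-by-point correspondence stated in step S1, the mapping itself reduces to an index lookup. A sketch (array and function names are hypothetical):

```python
import numpy as np

def lift_keypoints_to_3d(points_2d, depth):
    """Attach depth values to 2D face key points.

    Assumes the RGB and depth images are registered point by point, so a
    2D key point (x, y) picks up z = depth[y, x].
    """
    pts = np.asarray(points_2d, dtype=int)      # shape (N, 2), columns (x, y)
    z = depth[pts[:, 1], pts[:, 0]]             # depth value at each key point
    return np.column_stack([pts, z])            # (x, y, z) 3D feature points
```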
Step S4: outputting the facial expression category of the RGB image through an image classification network.
The image classification network takes the RGB image as input: an encoder extracts features and outputs a feature map, the feature map is flattened and fed into a fully connected layer, and the label category is output through a softmax function. In one embodiment of the invention, three expression categories are used: the normal expression, the happy expression and the sad expression, encoded as 1, 2 and 0 respectively. The classification result is used to adaptively set the radius of the subsequent clustering algorithm, to perform the difference analysis, to set the weights and thresholds of the matching model, and to update the database.
The image classification network is trained as follows: the training set consists of RGB images of front-view faces collected by a camera, covering multiple expressions. Each training image is annotated with one of three categories: normal expression, happy expression and sad expression. Training uses a cross-entropy loss function.
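A minimal sketch of such a classifier is given below, written in PyTorch for illustration. Only the overall structure (encoder, flatten, fully connected layer, softmax/cross-entropy over three labels) follows the text; the layer sizes and channel counts are assumptions.

```python
import torch.nn as nn

class ExpressionNet(nn.Module):
    """Encoder + flatten + fully connected head over three expression labels
    (0 = sad, 1 = normal, 2 = happy, per the encoding above)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),            # fixed-size feature map
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, num_classes))

    def forward(self, x):
        return self.head(self.encoder(x))       # logits; softmax applied at inference

# Training, as stated in the text, uses cross-entropy:
# loss = nn.CrossEntropyLoss()(ExpressionNet()(images), labels)
```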
Step S5: setting the radius of a clustering algorithm according to the facial expression category and the 3D feature points; and obtaining a face region point set through clustering operation and judging whether to continue detection.
Because the face is viewed front-on and the starting points of the left and right face boundaries do not move when the expression changes, the following is set:
d = xmax - xmin
wherein d is the lateral extent of the face in the image, xmax is the maximum abscissa of the face key points, and xmin is the minimum abscissa of the face key points.
The initial clustering radius r1 satisfies:
r1 = d/α
where α is an empirical value, set to 4 in one embodiment of the invention.
Adjusting the clustering radius according to the expression category:
when the expression is normal, the clustering radius r satisfies:
r=r1
when the expression is the happy expression or the sad expression, the clustering radius r satisfies:
r=r1(1+β)
wherein beta is an expansion coefficient, and specifically satisfies the following conditions:
β=kd
where k is the scaling factor, k >0, set to 0.01 in one embodiment of the invention.
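The whole radius computation fits in a few lines. In the sketch below, the reconstruction r1 = d/α of the initial-radius formula (which appears only as an image in the original) is an assumption, as is the expression encoding 0 = sad, 1 = normal, 2 = happy carried over from step S4.

```python
def cluster_radius(keypoints, expression, alpha=4.0, k=0.01):
    """Adaptive clustering radius; alpha = 4 and k = 0.01 are the empirical
    values named in the text."""
    xs = [p[0] for p in keypoints]
    d = max(xs) - min(xs)              # lateral extent of the face
    r1 = d / alpha                     # initial radius (assumed form)
    if expression == 1:                # normal expression
        return r1
    beta = k * d                       # expansion coefficient
    return r1 * (1 + beta)             # happy or sad expression
```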
After the clustering radius is obtained, the clustering operation is performed on the 3D feature map. A second empirical threshold m2 is set. Each face key point is traversed with the obtained clustering radius: taking the key point as the center, the number of points whose distance to the center is less than the clustering radius is counted, and key points whose count is not less than m2 are taken as center points, the others as outliers. If the distance between two center points is smaller than the clustering radius, they are connected, yielding the initial region point sets. Each outlier is then assigned to the nearest initial region point set, finally giving the face region point sets. Obtaining the face region point sets divides the face key points into several category clusters, so that the face regions are available for the subsequent occlusion judgment and face matching.
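A sketch of this clustering operation on plain coordinates follows; the union-find linking of center points and all names are illustrative choices, not the original implementation.

```python
import numpy as np

def cluster_region_point_sets(points, r, m2):
    """Center points have at least m2 key points within radius r; centers
    closer than r are linked into initial region point sets; outliers are
    then assigned to the set holding their nearest center point."""
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    is_center = (dist < r).sum(axis=1) >= m2      # neighbourhood count test
    centers = np.flatnonzero(is_center)
    if centers.size == 0:
        return []

    parent = {int(c): int(c) for c in centers}    # union-find over centers
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for i in centers:
        for j in centers:
            if dist[i, j] < r:
                parent[find(int(i))] = find(int(j))

    groups = {}
    for c in centers:
        groups.setdefault(find(int(c)), []).append(int(c))
    region_sets = list(groups.values())

    for p in np.flatnonzero(~is_center):          # assign each outlier
        nearest = int(centers[np.argmin(dist[p, centers])])
        next(s for s in region_sets if nearest in s).append(int(p))
    return region_sets                            # point indices per face region
```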
The number N of the obtained region point sets is counted, and a first empirical threshold m1 is set. If N < m1, the occlusion is severe and detection cannot continue. The first empirical threshold m1 should guarantee that sufficient face information remains; it is set to 5 or more in one embodiment of the invention. Deciding whether to continue detection by judging the face occlusion in this way improves the efficiency and security of the face recognition.
Step S6: dividing a plurality of regions of interest according to the region point sets, cropping the RGB image, and interpolating to obtain a third image.
The regions of interest are divided as follows: for each point set, a minimum circumscribed rectangle is generated from the coordinates of its points and taken as a face region of interest. Taking one set as an example, the extreme values xmax, xmin, ymax and ymin of the two-dimensional coordinates are found, and (xmax, ymax), (xmax, ymin), (xmin, ymax) and (xmin, ymin) are connected as the vertices to obtain the minimum circumscribed rectangle.
Cropping with the minimum circumscribed rectangle followed by an interpolation operation yields a third image as large as the original image.
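A sketch of the crop-and-interpolate step; the use of OpenCV for the interpolation and the axis-aligned rectangle are assumptions consistent with the construction above.

```python
import numpy as np
import cv2  # OpenCV, used here only for the resize/interpolation

def crop_region_of_interest(rgb, region_points):
    """Crop the minimum circumscribed rectangle of one region point set and
    interpolate the crop back to the original image size (the third image)."""
    pts = np.asarray(region_points)
    x_min, y_min = pts[:, 0].min(), pts[:, 1].min()
    x_max, y_max = pts[:, 0].max(), pts[:, 1].max()
    crop = rgb[y_min:y_max + 1, x_min:x_max + 1]
    h, w = rgb.shape[:2]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```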
Step S7: calculating the similarity between the third image and the face data in the database.
In one embodiment of the invention a twin (Siamese) network is used to calculate the similarity between the third image and the face data in the database. In the training phase the twin network has two branches that share weights and biases. The training set consists of front-view face images and includes positive and negative pairs, i.e. pairs in which the two faces belong to the same person and pairs in which they do not. The labeled training set is fed into the network, and training uses a contrastive loss function. At inference time only one branch is needed: the encoder extracts features from the third image, a fully connected layer outputs a one-dimensional feature vector, and the Euclidean distance between this vector and the one stored in the database is computed; the result is the similarity. Comparing the face feature information of the third image with the stored information in this way yields the similarity used by the subsequent matching model.
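At inference time the similarity computation itself reduces to a Euclidean distance between two embedding vectors, as sketched below; the embeddings are assumed to come from the trained single-branch encoder, which is not shown.

```python
import numpy as np

def face_similarity(embedding, stored_embedding):
    """Euclidean distance between the one-dimensional feature vector of the
    third image and a vector stored in the database (the text treats this
    distance as the similarity measure)."""
    e1 = np.asarray(embedding, dtype=float)
    e2 = np.asarray(stored_embedding, dtype=float)
    return float(np.linalg.norm(e1 - e2))
```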
Step S8: performing difference analysis on the facial expression category and the expression categories in the database, constructing a matching model, and performing face recognition according to the matching model.
To avoid face-matching errors caused by differing facial expression categories, different weights and thresholds are set for the similarity of each region, and a matching model is constructed.
The point sets are divided into the corresponding face regions according to the coordinate relationship of the cluster center points. A third empirical threshold m3 is set for locating the nose center point, which lies in the central area of the face: the nose center point is the cluster center whose x coordinate differs by less than m3 from the mean of the maximum and minimum x coordinates of the cluster centers, and whose y coordinate differs by less than m3 from the mean of the maximum and minimum y coordinates. From the coordinate relationship to the nose center point, the center points of the other face regions are then easily obtained: the left eye, right eye, left brow, right brow, left mouth corner, right mouth corner, chin, left cheek and right cheek.
The thresholds of the matching model are set as follows: a fourth empirical threshold m4 represents the matching threshold of each face region; when the similarity of a region exceeds the applicable threshold, the region is judged to match. When the expression difference is 0, the threshold m4 is used unchanged; when the absolute expression difference is 1, a fifth empirical threshold m5 is used; and when the absolute expression difference is 2, a sixth empirical threshold m6 is used, where:
m4 > m5 > m6
The weights are set as follows: let the number of face regions be G, and let the initial weight of each region be γg, satisfying:
γg = 1/G
Because the face regions are affected by expression to different degrees, the weights must be adjusted accordingly. The left cheek, right cheek and nose regions are only weakly affected by expression, so their weights are increased when the expression categories differ; the left eye, right eye, left brow, right brow, left mouth corner, right mouth corner and chin regions are strongly affected, so their weights are reduced when the expression categories differ.
Let the number of weakly affected regions be G1 and the number of strongly affected regions be G2, satisfying:
G = G1 + G2
The weight γ1 of a weakly affected region satisfies:
γ1 > 1/G
and the weight γ2 of a strongly affected region satisfies:
γ2 < 1/G, with the weights normalized so that G1·γ1 + G2·γ2 = 1.
setting a matching model according to the weight:
τ = Σ γg · matchg (summing over g = 1, …, G)
wherein τ is the overall matching degree; g indexes the face regions; γg is the weight of the g-th face region; and matchg is the matching result of the g-th face region, 1 for a match and 0 for a mismatch.
A seventh empirical threshold m7 is set: when τ ≥ m7, the face matching succeeds; otherwise, the face matching fails.
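Putting the thresholds and weights together gives the matching decision sketched below. Every numeric value here is an illustrative assumption (the text fixes only the ordering m4 > m5 > m6), and the weight redistribution is a normalized stand-in for the weight formulas that appear only as images in the original.

```python
def overall_match(similarities, small_influence, expr_diff,
                  m4=0.8, m5=0.7, m6=0.6, m7=0.75, boost=1.2):
    """Overall matching degree tau = sum over regions of gamma_g * match_g.

    `similarities` maps region name -> similarity with the database entry;
    `small_influence` lists the weakly affected regions (nose, left cheek,
    right cheek); `expr_diff` is the expression category difference.
    """
    thr = {0: m4, 1: m5, 2: m6}[abs(expr_diff)]   # per-region match threshold
    G = len(similarities)
    G1 = sum(1 for g in similarities if g in small_influence)

    tau = 0.0
    for g, sim in similarities.items():
        gamma = 1.0 / G                           # initial equal weight
        if expr_diff != 0 and 0 < G1 < G:
            # Raise weakly affected weights, lower the rest, keeping the
            # weights summing to 1 (a sketch of the redistribution).
            gamma = boost / G if g in small_influence \
                else (G - boost * G1) / ((G - G1) * G)
        match_g = 1.0 if sim > thr else 0.0       # region difference-analysis result
        tau += gamma * match_g
    return tau >= m7                              # seventh-threshold decision
```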
Step S9: when the access control is judged not to open, performing identity verification and updating the database.
When the region point sets show that detection cannot continue, the user's facial information is severely occluded. The user must then verify their identity; the access control opens only if verification succeeds, and otherwise stays closed. When the user's face is not occluded but recognition still fails, the user's information is missing or incomplete; identity verification is performed, and if it succeeds, the collected facial features and corresponding expression category are stored in the database and the access control is opened.
In summary, in the embodiments of the present invention, facial expressions are classified and the weights and thresholds of the matching model are adjusted based on the classification result, which overcomes the influence of expression changes on face recognition; and by dividing the face into regions and constructing a matching model, face recognition remains possible when a small area of the face is occluded.
Referring to fig. 2, a block diagram of an access control system based on artificial intelligence according to an embodiment of the present invention is shown, which specifically includes: the system comprises an image acquisition module 101, a semantic segmentation module 102, a feature point acquisition module 103, an image classification module 104, a cluster analysis module 105, a region of interest division module 106, a similarity detection module 107, a database 108, a difference analysis module 109, a matching model 110, an access control module 111 and an identity verification module 112.
The image acquisition module 101 is configured to acquire a first image, where the first image includes an RGB image and a depth image.
The semantic segmentation module 102 is configured to segment the RGB image to obtain a second image only including face information.
The feature point obtaining module 103 is configured to obtain 2D feature points through the second image, and obtain 3D feature points by combining the 2D feature points and the depth image.
The image classification module 104 is configured to process the RGB images and output facial expression categories.
The cluster analysis module 105 is configured to set the radius of the clustering algorithm from the 3D feature points, obtain the face region point sets through the clustering algorithm, and judge from the region point sets whether detection can continue; if it cannot, the access control is not opened.
The region-of-interest dividing module 106 is configured to divide a plurality of regions of interest according to the region point set, crop the RGB image, and interpolate to obtain a third image that is as large as the original image.
The similarity detection module 107 is configured to calculate a similarity between the third image and the facial features stored in the database.
The database 108 stores facial features and corresponding expression categories.
The difference analysis module 109 is configured to perform difference analysis on the collected facial expression categories and expression categories stored in the database, and acquire required matching model parameters.
The matching model 110 is used to calculate the degree of matching of the face according to the model.
The door access control module 111 is used for controlling the opening and closing of the door access.
The authentication module 112 is used for performing authentication when the entrance guard is determined not to be openable.
When identity verification succeeds and the face is judged not to be occluded, the facial features and corresponding expression category of the third image acquired this time are stored in the database 108.
It should be noted that the order of the above embodiments is for description only and does not reflect their relative merits. Specific embodiments have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The embodiments in this specification are described in a progressive manner; for parts that are the same or similar across embodiments, reference may be made from one embodiment to another, and each embodiment focuses on its differences from the others.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. An access control method based on artificial intelligence, characterized in that the method comprises:
acquiring a first image, wherein the first image comprises an RGB image and a depth image;
segmenting the RGB image to obtain a second image containing face information;
performing face key point detection on the second image to obtain 2D feature points, and mapping the 2D feature points to the depth image to obtain a 3D feature map containing the 3D feature points;
outputting the facial expression categories of the RGB images through an image classification network;
setting the radius of a clustering algorithm according to the facial expression category and the 3D feature points; obtaining face region point sets through a clustering operation; when the number of the region point sets is smaller than a preset first empirical threshold, judging that the occlusion is severe and not opening the access control;
when the number of the region point sets is larger than the first empirical threshold, judging that the face is not occluded, dividing a plurality of regions of interest according to the region point sets, cropping the RGB image with the regions of interest, and interpolating to obtain a third image;
calculating the similarity between the third image and the face features in the database;
carrying out difference analysis on the facial expression categories and expression categories of a database, constructing a matching model, and carrying out face recognition according to the matching model;
and judging whether to open the access control according to the face recognition result.
2. The access control method based on artificial intelligence of claim 1, wherein the setting of the radius of the clustering algorithm specifically comprises:
setting d = xmax - xmin, wherein d is the lateral extent of the face in the 3D feature map, xmax is the maximum abscissa of the face key points, and xmin is the minimum abscissa of the face key points;
the radius r1 of the initial clustering algorithm satisfies:
r1 = d/α
wherein α is an empirical value;
when the expression is the normal expression, the radius r of the clustering algorithm satisfies the following condition:
r=r1
when the expression is the happy expression or the sad expression, the radius r of the clustering algorithm satisfies:
r=r1(1+β)
wherein beta is an expansion coefficient, and specifically satisfies the following conditions:
β=kd
where k is the scaling factor and k > 0.
3. The access control method based on artificial intelligence of claim 1, wherein the obtaining of the face region point set through clustering specifically comprises:
setting a second empirical threshold m2 for the number of key points, and traversing each face key point according to the radius r;
counting, with each key point as center, the number of points in the circular area of clustering radius r; key points whose count is not less than the second empirical threshold m2 are center points, and the others are outliers;
calculating the distance between the center points, and connecting center points whose distance is smaller than the clustering radius r to obtain initial region point sets; and assigning the outliers to the nearest initial region point set to finally obtain the region point sets.
4. The artificial intelligence based access control method of claim 3, wherein the specific method for dividing the multiple regions of interest to crop the RGB image comprises:
and generating a minimum circumscribed rectangle according to the central point coordinates of the points in the region point set, and taking the minimum circumscribed rectangle as the region of interest.
5. The access control method based on artificial intelligence of claim 1, wherein the constructing the matching model specifically comprises:
dividing each region point set into corresponding face regions according to the central point coordinates;
establishing a matching model:
τ = Σ γg · matchg (summing over g = 1, …, G)
wherein τ is the overall matching degree; G is the number of face regions; g is the g-th face region; γg is the weight of the g-th face region; and matchg is the result of the difference analysis of the g-th face region.
6. The access control method based on artificial intelligence of claim 5, wherein the method for dividing the face region specifically comprises:
searching for the nose center point according to the center point coordinates, the nose center point coordinates being the central coordinates of the whole face region;
and obtaining the center points of the other face regions from their coordinate relationship to the nose center point.
7. The artificial intelligence based access control method of claim 1, wherein, when the access control is judged not to open, identity verification is performed; if the verification succeeds, the access control is opened; and if the verification succeeds and the face is judged not to be occluded, the facial features and expression category of the third image are stored in the database.
8. An access control system based on artificial intelligence, characterized in that it comprises: an image acquisition module, a semantic segmentation module, a feature point acquisition module, an image classification module, a cluster analysis module, a region-of-interest division module, a similarity detection module, a database, a difference analysis module, a matching model, an access control module and an identity verification module;
the image acquisition module is used for acquiring a first image, and the first image comprises an RGB image and a depth image;
the semantic segmentation module is used for segmenting the RGB image to obtain a second image only containing face information;
the feature point acquisition module is used for acquiring 2D feature points through the second image and obtaining 3D feature points by combining the 2D feature points and the depth image;
the image classification module is used for processing the RGB image and outputting facial expression categories;
the cluster analysis module is used for setting the radius of a clustering algorithm through the 3D feature points, obtaining a face region point set through the clustering algorithm and judging whether the detection can be continued or not;
the interesting region dividing module is used for dividing a plurality of interesting regions according to the region point set, cutting the RGB image, and interpolating to obtain a third image which is as large as the original image;
the similarity detection module is used for calculating the similarity between the third image and the face features stored in the database;
the database stores the facial features and the corresponding expression categories;
the difference analysis module is used for carrying out difference analysis on the facial expression categories and the expression categories stored in the database to obtain required matching model parameters;
the matching model is used for calculating the matching degree of the human face according to the model;
the access control module is used for controlling the opening and closing of an access;
the identity authentication module is used for performing identity authentication when the entrance guard is judged to be unopened.
CN202011098006.9A (filed 2020-10-14, priority date 2020-10-14) Access control method and system based on artificial intelligence - status: Withdrawn - published as CN112150692A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011098006.9A CN112150692A (en) 2020-10-14 2020-10-14 Access control method and system based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011098006.9A CN112150692A (en) 2020-10-14 2020-10-14 Access control method and system based on artificial intelligence

Publications (1)

Publication Number | Publication Date
CN112150692A | 2020-12-29

Family

ID: 73953081

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011098006.9A | Access control method and system based on artificial intelligence | 2020-10-14 | 2020-10-14

Country Status (1)

Country Link
CN (1) CN112150692A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262727A (en) * 2011-06-24 2011-11-30 常州锐驰电子科技有限公司 Method for monitoring face image quality at client acquisition terminal in real time
CN103049736A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Face identification method based on maximum stable extremum area
CN105354563A (en) * 2015-12-14 2016-02-24 南京理工大学 Depth and color image combined human face shielding detection early-warning device and implementation method
JP2019204436A (en) * 2018-05-25 2019-11-28 日本電信電話株式会社 Clustering device, clustering method, program, and data structure
CN109376693A (en) * 2018-11-22 2019-02-22 四川长虹电器股份有限公司 Method for detecting human face and system
CN110110681A (en) * 2019-05-14 2019-08-09 哈尔滨理工大学 It is a kind of for there is the face identification method blocked
CN110570549A (en) * 2019-07-26 2019-12-13 华中科技大学 Intelligent unlocking method and corresponding device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800886A (en) * 2021-01-16 2021-05-14 江苏霆善科技有限公司 Face recognition system and method based on machine vision
CN112800886B (en) * 2021-01-16 2021-09-10 江苏霆善科技有限公司 Face recognition system and method based on machine vision
CN113688768A (en) * 2021-08-31 2021-11-23 中国平安人寿保险股份有限公司 Human face detection method, device, equipment and medium based on artificial intelligence
CN113706638A (en) * 2021-10-28 2021-11-26 迈步医疗科技(江苏)有限公司 Intelligent control method and system for pharmaceutical mixer based on intelligent Internet of things
CN113706638B (en) * 2021-10-28 2022-02-25 迈步医疗科技(江苏)有限公司 Intelligent control method and system for pharmaceutical mixer based on intelligent Internet of things
CN114092743A (en) * 2021-11-24 2022-02-25 开普云信息科技股份有限公司 Compliance detection method and device for sensitive picture, storage medium and equipment
CN117854194A (en) * 2024-03-07 2024-04-09 深圳市开拓者安防科技有限公司 Visual access control method and system based on artificial intelligence
CN117854194B (en) * 2024-03-07 2024-06-07 深圳市开拓者安防科技有限公司 Visual access control method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN108520219B (en) Multi-scale rapid face detection method based on convolutional neural network feature fusion
CN112150692A (en) Access control method and system based on artificial intelligence
CN105069400B (en) Facial image gender identifying system based on the sparse own coding of stack
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
US6661907B2 (en) Face detection in digital images
CN108197525A (en) Face image synthesis method and device
CN109522853B (en) Face datection and searching method towards monitor video
US20170161591A1 (en) System and method for deep-learning based object tracking
CN110084149B (en) Face verification method based on hard sample quadruple dynamic boundary loss function
US20100111375A1 (en) Method for Determining Atributes of Faces in Images
CN113963032A (en) Twin network structure target tracking method fusing target re-identification
CN107622280B (en) Modularized processing mode image saliency detection method based on scene classification
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
CN110232331B (en) Online face clustering method and system
CN113205002B (en) Low-definition face recognition method, device, equipment and medium for unlimited video monitoring
CN110879985B (en) Anti-noise data face recognition model training method
CN111353343A (en) Business hall service standard quality inspection method based on video monitoring
CN112434647A (en) Human face living body detection method
CN111275058A (en) Safety helmet wearing and color identification method and device based on pedestrian re-identification
CN117877085A (en) Psychological analysis method based on micro-expression recognition
Karungaru et al. Face recognition in colour images using neural networks and genetic algorithms
CN113449694B (en) Android-based certificate compliance detection method and system
CN114155273B (en) Video image single-target tracking method combining historical track information
CN112818728B (en) Age identification method and related products
CN114898287A (en) Method and device for dinner plate detection early warning, electronic equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 20201229)