CN113362221A - Face recognition system and face recognition method for entrance guard - Google Patents


Info

Publication number
CN113362221A
CN113362221A (application number CN202110772015.XA)
Authority
CN
China
Prior art keywords
face
image
illumination
face image
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110772015.XA
Other languages
Chinese (zh)
Inventor
杨帆
郝强
潘鑫淼
胡建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhenshi Intelligent Technology Co Ltd
Original Assignee
Nanjing Zhenshi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhenshi Intelligent Technology Co Ltd filed Critical Nanjing Zhenshi Intelligent Technology Co Ltd
Priority to CN202110772015.XA priority Critical patent/CN113362221A/en
Publication of CN113362221A publication Critical patent/CN113362221A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face recognition system and a face recognition method for entrance guard. The system comprises at least one face image acquisition terminal located at the front end and a server located at the cloud end. The face image acquisition terminal is provided with a camera device for acquiring face images and a first model for face recognition; a second model for face recognition is deployed in the server. The face image acquisition terminal is further provided with a face illumination quality judgment module for quickly judging the illumination quality of an acquired face image. The terminal judges the illumination quality of the acquired face image: if the face illumination quality Q is smaller than a preset quality threshold Qm, the face image is sent to the server, where the second model performs face recognition and the recognition result is fed back; if Q is greater than or equal to Qm, face recognition is performed locally at the terminal using the first model, and the recognition result is output.

Description

Face recognition system and face recognition method for entrance guard
The present application is a divisional application of the invention patent application No. 2021104697344, entitled "Face illumination quality assessment method, system, server and computer readable medium", filed by the applicant on April 29, 2021.
Technical Field
The invention relates to the technical field of computer vision, and in particular to a face recognition system and a face recognition method for entrance guard that address the problem of face image quality.
Background
The quality of a face image greatly affects both the training of a face recognition model and the accuracy of real-time face recognition. Image quality is usually reflected in the face illumination quality, which characterizes the illumination at the face position. Existing face illumination quality evaluation algorithms first detect the position of the face and crop the face region along a rectangular detection box to calculate the illumination quality. With this approach, the cropped rectangular face region can contain interference such as hair and background, whose color and brightness differ greatly from the face, causing considerable interference when calculating the facial illumination brightness.
Meanwhile, illumination uniformity is calculated by comparing the left and right halves of the image; in the side-face case, the proportions of the left and right halves of the face in the image differ greatly, so the illumination uniformity is calculated inaccurately.
Disclosure of Invention
The invention aims to provide a human face illumination quality evaluation method and system based on local affine transformation.
According to a first aspect of the present invention, a method for evaluating human face illumination quality based on local affine transformation is provided, including:
acquiring an input original image;
detecting key points of the human face in the original image;
cutting to obtain a first face image according to the key points of the face in the original image;
carrying out coordinate correction on the face key points in the original image, and matching the face key points to the first face image to obtain corrected face key points;
removing the background based on the first face image to obtain a second face image;
calculating the illumination brightness of the face based on the second face image;
affine transformation is carried out on the first face image to a standard posture according to standard front face subdivision processing based on a standard front face with bilateral symmetry and the corrected key points of the face, and a third face image is obtained;
calculating the global illumination uniformity of the face based on the third face image; and
and obtaining an illumination quality evaluation result of the face based on the global illumination uniformity of the face and the illumination brightness of the face.
Preferably, the cutting out the first face image according to the key points of the face in the original image includes:
determining a face boundary box according to coordinates of face key points in an original image;
and cropping the area within the face bounding box, scaling it to L × L pixels and graying it to obtain the first face image.
Preferably, the removing the background based on the first face image to obtain the second face image includes:
obtaining a convex hull M of the corrected key points of the human face, namely a mask of a human face region, wherein M is a binary image with the size of L × L pixels, the pixel value of the human face region is 1, and the pixel values of other regions are 0; and
and obtaining a second face image with the face area according to the mask of the face area and the correction of the key points of the face.
Preferably, the calculating the face illumination brightness based on the second face image includes:
expressing the average illumination brightness by the normalized average pixel value of the face region, thereby obtaining the face illumination brightness excluding non-face regions.
Preferably, the standard front face subdivision processing based on the standard front faces with bilateral symmetry includes:
detecting human face key points in the front face image by adopting the front face image which is symmetrical left and right, and cutting out the human face image of the front face image according to the human face key points;
correcting the face key points in the front face image into the face image of the front face image to obtain corrected face key points;
and dividing the modified face key points into K triangular sub-regions by adopting a triangulation algorithm, and forming a set by three vertexes of each sub-region after division.
Preferably, the affine transformation of the first face image to the standard posture to obtain a third face image includes:
triangulation is carried out on the corrected face key points of the first face image by adopting a triangulation algorithm, and the triangulation is divided into K triangular subregions;
sequentially carrying out affine transformation on the subareas of the modified face key points of the first face image into the shape of the subareas of the modified face key points of the front face image to obtain a subarea image after affine transformation; and
and re-splicing the sub-region images after affine transformation according to the three vertex coordinates of the sub-region of the face key point after correction of the front face image to obtain a third face image.
Preferably, the calculating the global illumination uniformity of the face based on the third face image includes:
dividing the third face image evenly into a left face part and a right face part, each containing half of the face;
recording the left face part as a first image, and recording the right face part as a second image after horizontally turning;
moving pixel by pixel from the upper left corner to the lower right corner of the first image and the second image by adopting a sliding window;
calculating the local illumination uniformity of the sliding window on the basis of the average pixel value in the sliding window area every time; and
and calculating the global illumination uniformity of the human face by adopting a weighted summation mode.
Preferably, the obtaining of the illumination quality evaluation result of the face based on the face global illumination uniformity and the face illumination brightness includes:
the global illumination uniformity of the face and the illumination brightness of the face are combined, and the illumination quality evaluation result of the face is obtained by adopting a product mode.
According to a second aspect of the present invention, there is also provided a computer system comprising:
one or more processors;
a memory storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to perform operations comprising the flow of the local affine transformation based face illumination quality assessment method as described above.
According to a third aspect of the present invention, there is also provided a computer-readable medium storing software, the software including instructions executable by one or more computers, the instructions causing the one or more computers to perform operations by such execution, the operations including a flow of the local affine transformation based face illumination quality assessment method as described above.
According to the fourth aspect of the present invention, a human face illumination quality evaluation apparatus based on local affine transformation is further provided, including:
a module for acquiring an input original image;
a module for detecting face key points in the original image;
a module for cutting out a first face image according to the key points of the face in the original image;
a module for correcting coordinates of the key points of the face in the original image, and matching the coordinates to the first face image to obtain corrected key points of the face;
a module for removing the background based on the first face image to obtain a second face image;
a module for calculating the illumination brightness of the face based on the second face image;
a module for affine transforming the first face image to a standard posture according to standard front face subdivision processing based on a standard front face with bilateral symmetry and the corrected key points of the face to obtain a third face image;
a module for calculating a global illumination uniformity of the face based on the third face image; and
and the module is used for obtaining the illumination quality evaluation result of the face based on the global illumination uniformity of the face and the illumination brightness of the face.
According to a fifth aspect of the invention, a server comprises:
one or more processors;
a memory storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to perform operations comprising the aforementioned flow of the local affine transformation based face illumination quality assessment method.
According to the face illumination quality evaluation method based on local affine transformation, the interference of non-facial areas is effectively eliminated through facial region cropping and affine transformation, and the proportions of the left and right faces are balanced; the accuracy of face image illumination quality evaluation is thus improved, and the real brightness condition of the image can be accurately reflected.
In the scheme of the invention, the convex hull of the face key points is used to extract the face region for calculating the illumination brightness value, eliminating background interference; meanwhile, affine transformation corrects the face to the standard pose, which solves the problem of inaccurate illumination uniformity evaluation in the side-face case, so that the facial illumination brightness, uniformity and comprehensive illumination quality are calculated more accurately.
It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent. In addition, all combinations of claimed subject matter are considered a part of the presently disclosed subject matter.
Drawings
The foregoing and other aspects, embodiments and features of the present teachings can be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of specific embodiments in accordance with the teachings of the present invention.
The drawings are not necessarily all drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing.
Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a method for evaluating human face illumination quality based on local affine transformation according to a first embodiment of the present invention.
Fig. 2 is a test artwork according to an exemplary embodiment of the first embodiment of the present invention.
Fig. 3 is an example of a first face image cropped using face key points according to a first embodiment of the present invention.
Fig. 4 is an example of a second face image obtained after removing a background according to the first embodiment of the present invention.
Fig. 5 is an example of a third face image obtained by correcting a face to a standard pose according to the first embodiment of the present invention.
Fig. 6 is a schematic diagram of a face recognition system according to a first embodiment of the present invention.
Fig. 7 is a schematic diagram of a face feature pre-registration warehousing process of the face recognition system according to the first embodiment of the invention.
Fig. 8 is a flowchart of a face recognition process of the face recognition system according to the first embodiment of the present invention.
Detailed Description
In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. Embodiments of the present disclosure are not necessarily intended to include all aspects of the invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. In addition, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.
The method for evaluating the illumination quality of the human face based on the local affine transformation in the first embodiment of the present invention shown in fig. 1 is implemented by:
s101: acquiring an input original image;
s102: detecting key points of the human face in the original image;
s103: cutting to obtain a first face image according to the key points of the face in the original image;
s104: carrying out coordinate correction on the face key points in the original image, and matching the face key points to the first face image to obtain corrected face key points;
s105: removing the background based on the first face image to obtain a second face image;
s106: calculating the illumination brightness of the face based on the second face image;
s107: affine transformation is carried out on the first face image to a standard posture according to standard front face subdivision processing based on a standard front face with bilateral symmetry and the corrected key points of the face, and a third face image is obtained;
s108: calculating the global illumination uniformity of the face based on the third face image; and
s109: and obtaining an illumination quality evaluation result of the face based on the global illumination uniformity of the face and the illumination brightness of the face.
Therefore, when the illumination quality of the face is evaluated from the face detection result, it is evaluated comprehensively through the illumination brightness and the illumination uniformity at the face position, which eliminates background interference and at the same time solves the problem of inaccurate illumination uniformity evaluation in the side-face case.
Preferably, the cutting out the first face image according to the key points of the face in the original image includes:
determining a face boundary box according to coordinates of face key points in an original image;
and cropping the area within the face bounding box, scaling it to L × L pixels and graying it to obtain the first face image.
Preferably, the removing the background based on the first face image to obtain the second face image includes:
obtaining a convex hull M of the corrected key points of the human face, namely a mask of a human face region, wherein M is a binary image with the size of L × L pixels, the pixel value of the human face region is 1, and the pixel values of other regions are 0; and
and obtaining a second face image with the face area according to the mask of the face area and the correction of the key points of the face.
Preferably, the calculating the face illumination brightness based on the second face image includes:
expressing the average illumination brightness by the normalized average pixel value of the face region, thereby obtaining the face illumination brightness excluding non-face regions.
Preferably, the standard front face subdivision processing based on the standard front faces with bilateral symmetry includes:
detecting human face key points in the front face image by adopting a standard front face image which is symmetrical left and right, and cutting out the human face image of the front face image according to the human face key points;
correcting the face key points in the front face image into the face image of the front face image to obtain corrected face key points;
and dividing the modified face key points into K triangular sub-regions by adopting a triangulation algorithm, and forming a set by three vertexes of each sub-region after division.
Preferably, the affine transformation of the first face image to the standard posture to obtain a third face image includes:
triangulation is carried out on the corrected face key points of the first face image by adopting a triangulation algorithm, and the triangulation is divided into K triangular subregions;
sequentially carrying out affine transformation on the subareas of the modified face key points of the first face image into the shape of the subareas of the modified face key points of the front face image to obtain a subarea image after affine transformation; and
and re-splicing the sub-region images after affine transformation according to the three vertex coordinates of the sub-region of the face key point after correction of the front face image to obtain a third face image.
Preferably, the calculating the global illumination uniformity of the face based on the third face image includes:
dividing the third face image evenly into a left face part and a right face part, each containing half of the face;
recording the left face part as a first image, and recording the right face part as a second image after horizontally turning;
moving pixel by pixel from the upper left corner to the lower right corner of the first image and the second image by adopting a sliding window;
calculating the local illumination uniformity of the sliding window on the basis of the average pixel value in the sliding window area every time; and
and calculating the global illumination uniformity of the human face by adopting a weighted summation mode.
Preferably, the obtaining of the illumination quality evaluation result of the face based on the face global illumination uniformity and the face illumination brightness includes:
the global illumination uniformity of the face and the illumination brightness of the face are combined, and the illumination quality evaluation result of the face is obtained by adopting a product mode.
An exemplary implementation of the foregoing method will now be described in more detail with reference to the accompanying drawings.
Face keypoint detection
Key point detection is performed by a face key point detection model, taking as input an original image obtained from a front end or a server, or an original image extracted frame by frame from a video. The face key points usually include the face contour and the position information of the five sense organs.
In this example, N face key points are detected in the input image I_src using a pre-trained face key point detection model (e.g., the Dlib toolkit), denoted P_src = (p_src,0, p_src,1, …, p_src,N-1);
wherein p_src,n = (x_src,n, y_src,n) is the coordinate of the n-th face key point, n = 0, 1, …, N-1.
Data pre-processing
The data preprocessing comprises cropping the original image to obtain the first face image I, and correcting the coordinates of the face key points of the original image to match the first face image I, obtaining the corrected face key points.
As an example, the process of cropping the original image to obtain the first face image I includes:
determining a face boundary box according to coordinates of face key points in an original image;
and cropping the area within the face bounding box, scaling it to the preset size and graying it to obtain the first face image.
For example, the face bounding box is determined from the highest, lowest, leftmost and rightmost of the face key points; its upper-left corner is (x_left, y_top) and its lower-right corner is (x_right, y_bottom). The area within the face bounding box is cropped, scaled to a preset size of L × L pixels and grayed to obtain the image I. Optionally, L = 64, i.e., a 64 × 64 pixel size.
And then, carrying out coordinate correction on the key points of the human face so as to match the image I.
The corrected key points are denoted P = (p_0, p_1, …, p_N-1), wherein p_n = (x_n, y_n) is the coordinate of the n-th face key point in image I.
As an alternative, the coordinate correction maps each key point from the original image into the cropped and scaled image:

x_n = (x_src,n - x_left) · L / (x_right - x_left)
y_n = (y_src,n - y_top) · L / (y_bottom - y_top)

thus obtaining the corrected coordinate of the n-th face key point.
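The crop-and-scale coordinate correction can be sketched in a few lines of NumPy. `correct_keypoints` and its tight-bounding-box assumption are illustrative, not the patent's implementation:

```python
import numpy as np

def correct_keypoints(P_src, L=64):
    """Map face key points from original-image coordinates into the
    L x L cropped-and-scaled first face image.  Illustrative sketch:
    assumes the bounding box is the tight box around the key points."""
    P_src = np.asarray(P_src, dtype=float)
    x_left, y_top = P_src.min(axis=0)        # leftmost / highest points
    x_right, y_bottom = P_src.max(axis=0)    # rightmost / lowest points
    x = (P_src[:, 0] - x_left) * L / (x_right - x_left)
    y = (P_src[:, 1] - y_top) * L / (y_bottom - y_top)
    return np.stack([x, y], axis=1)          # corrected key points p_n
```

The corrected coordinates always fall in [0, L], so they index directly into the first face image.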
Calculating the face region according to the key points of the face, and removing the background
As an example, the convexHull method of the opencv image processing library is used to calculate the convex hull M of the key points P, i.e., the mask of the face region. M is a binary image of L × L pixels in which the pixel value of the face region is 1 and that of other regions is 0.
Then the face image with the background removed, recorded as the second face image Ĩ, is obtained as:

Ĩ(x, y) = I(x, y) · M(x, y)
calculating the illumination brightness of the face region
As an example, the average pixel value of the face region, normalized, is used to represent the average illumination brightness, giving the face illumination brightness Q_L that excludes non-face regions:

Q_L = Σ_x Σ_y Ĩ(x, y) / (255 · Σ_x Σ_y M(x, y))

wherein Ĩ(x, y) represents the pixel value at row x, column y of the second face image Ĩ, and M(x, y) represents the pixel value at row x, column y of the convex hull (mask) M.
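As a sketch of the mask-and-brightness step, the following pure-NumPy code stands in for opencv's `convexHull` (using Andrew's monotone chain plus a half-plane rasterization); the function names and rasterization details are illustrative assumptions, not the patent's code:

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order
    (a stand-in for opencv's convexHull)."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def face_mask(keypoints, L=64):
    """Binary L x L mask M: 1 inside the convex hull of the key points,
    0 elsewhere (pixels on the hull boundary count as inside)."""
    hull = convex_hull(keypoints)
    ys, xs = np.mgrid[0:L, 0:L]              # row (y) and column (x) grids
    mask = np.ones((L, L), dtype=np.uint8)
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        # keep pixels on the non-negative side of each CCW edge
        inside = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0) >= 0
        mask &= inside.astype(np.uint8)
    return mask

def illumination_brightness(I, M):
    """Q_L: mean pixel value over the face region, normalized to [0, 1]."""
    return float((I * M).sum()) / (255.0 * M.sum())
```

Masking before averaging is what excludes hair and background pixels from the brightness estimate.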
Construction of a Standard frontal face Subdivision
A standard, left-right symmetric frontal face image is obtained, and the face key point detection, face image cropping and coordinate correction of the embodiment of the invention are applied to it to obtain the face image I_f of the standard frontal face image and the corrected face key point coordinates P_f. That is:
face key point detection is performed on the frontal face image to obtain the corresponding face key point coordinates, on the basis of which the frontal face image is cropped according to those coordinates to obtain the face image I_f;
and the face key point coordinates of the frontal face image are corrected to match the face image I_f, obtaining the corrected key point coordinates P_f.
Then the Bowyer-Watson triangulation algorithm is used to divide the corrected face key point coordinates P_f into K triangular sub-regions, the three vertices of each sub-region forming the set T_f:

T_f = [(a_f0, b_f0, c_f0), (a_f1, b_f1, c_f1), …, (a_f(K-1), b_f(K-1), c_f(K-1))]

wherein each tuple (a_fm, b_fm, c_fm) gives the three vertices of the m-th sub-region t_fm of the face image I_f, m = 0, 1, …, K-1.
Affine transformation of human face to standard posture according to key points
Similarly, the Bowyer-Watson triangulation algorithm is used to divide the key points P of the face image I into K triangular sub-regions, recorded as the set T:

T = [(a_0, b_0, c_0), (a_1, b_1, c_1), …, (a_(K-1), b_(K-1), c_(K-1))]

wherein each tuple (a_m, b_m, c_m) gives the three vertices of the m-th sub-region t_m of the face image I.
Finally, the applyAffineTransform function of the opencv image processing library is used to affine-transform each sub-region t_m in turn onto the standard face sub-region t_fm, obtaining the transformed sub-region image t'_m.
The affine-transformed sub-region images t'_m are re-spliced according to the three vertex coordinates of the corresponding sub-regions t_fm and combined into a new face image I', recorded as the third face image.
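The per-triangle transformation can be illustrated by solving for the 2 × 3 affine matrix that maps one triangle's vertices onto another's, which is what OpenCV's `getAffineTransform` computes before the actual pixel warp. This sketch handles only the vertex mapping, not the pixel interpolation:

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """2 x 3 affine matrix A such that A @ [x, y, 1] maps each vertex of
    src_tri onto the corresponding vertex of dst_tri (the matrix that
    cv2.getAffineTransform would return for these triangles)."""
    src = np.asarray(src_tri, dtype=float)      # 3 x 2
    dst = np.asarray(dst_tri, dtype=float)      # 3 x 2
    S = np.hstack([src, np.ones((3, 1))])       # rows [x_i, y_i, 1]
    return np.linalg.solve(S, dst).T            # solve S @ A.T = dst

def apply_affine(A, pts):
    """Apply the 2 x 3 affine matrix A to an n x 2 array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ A[:, :2].T + A[:, 2]
```

Three vertex pairs determine the six affine parameters exactly, which is why triangulating both faces with the same key-point topology lets every sub-region be warped consistently.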
Calculating the global illumination uniformity of the human face
The affine-transformed face image I' is divided evenly into left and right halves, each containing half of the face.
The left face part is taken as image I'_l and recorded as the first image.
The right face part is horizontally flipped to obtain image I'_r, recorded as the second image.
A sliding window (e.g., 8 × 8) is moved pixel by pixel from the upper-left corner to the lower-right corner of the first and second images, J window positions in total, and the local illumination uniformity at the j-th position is recorded as s_j, j = 0, 1, …, J-1.
Then there are:
Figure BDA0003153998220000091
wherein, mul,jAnd mur,jRespectively being the first image and the second image in the jth sliding window region I'l,jAnd l'r,jThe average pixel value of (2).
Calculating the global illumination uniformity Q of the human face by adopting a weighted summation modeS
Q_S = ( Σ_{j=0}^{J-1} w_j · s_j ) / ( Σ_{j=0}^{J-1} w_j )
wherein w_j represents the weight value, w_j = ρ_j + c, and ρ_j is the correlation coefficient of the j-th sliding window regions I'_l,j and I'_r,j; c is a small constant that prevents a weight value of 0.
ρ_j = Σ_{x,y} (I'_l,j(x, y) − μ_l,j)(I'_r,j(x, y) − μ_r,j) / √( Σ_{x,y} (I'_l,j(x, y) − μ_l,j)² · Σ_{x,y} (I'_r,j(x, y) − μ_r,j)² )
wherein I'_l,j(x, y) represents the pixel value of the first image at row x, column y of the j-th sliding window region, and I'_r,j(x, y) represents the pixel value of the second image at row x, column y of the j-th sliding window region.
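The weighted summation can be sketched as follows, assuming ρ_j is the standard Pearson correlation over the window pixels. The function name, the local uniformity score form, and the default value of c are assumptions, not the patent's exact code.

```python
import numpy as np

def global_uniformity(left, right_flipped, win=8, c=1e-6):
    """Q_S as a weighted sum of local uniformity scores s_j, with each
    window weighted by w_j = rho_j + c, where rho_j is the Pearson
    correlation of the two window regions (c keeps weights nonzero)."""
    H, W = left.shape
    num = den = 0.0
    for y in range(H - win + 1):
        for x in range(W - win + 1):
            wl = left[y:y + win, x:x + win].ravel()
            wr = right_flipped[y:y + win, x:x + win].ravel()
            mu_l, mu_r = wl.mean(), wr.mean()
            # Assumed local uniformity score (see the preceding sketch).
            s = 1.0 - abs(mu_l - mu_r) / (mu_l + mu_r) if mu_l + mu_r else 1.0
            dl, dr = wl - mu_l, wr - mu_r
            denom = np.sqrt((dl * dl).sum() * (dr * dr).sum())
            rho = (dl * dr).sum() / denom if denom else 0.0
            w = rho + c
            num += w * s
            den += w
    return num / den
```

When the left half and the flipped right half are identical, every s_j is 1 and every ρ_j is 1, so Q_S evaluates to exactly 1.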
Comprehensively calculating illumination quality
As an example, the illumination quality of the human face is evaluated by comprehensively considering the illumination intensity and the uniformity.
Optionally, the illumination quality Q of the face image is calculated by combining the brightness and the uniformity in a product manner:
Q = Q_L · Q_S
wherein Q_S represents the global illumination uniformity of the face, and Q_L represents the illumination brightness of the face excluding non-face regions.
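A minimal sketch of the product combination (function name hypothetical): multiplying means that either low brightness or poor uniformity drags the overall quality down. Note the values are consistent with the evaluation results reported below (0.75 × 0.89 ≈ 0.67; 0.56 × 0.81 ≈ 0.45).

```python
def illumination_quality(q_l, q_s):
    """Combine face illumination brightness Q_L and global illumination
    uniformity Q_S multiplicatively; both assumed normalized to [0, 1]."""
    return q_l * q_s
```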
Illumination quality test of face image
As shown in figs. 2, 3, 4 and 5, the face illumination quality of the test image in fig. 2 is evaluated.
First, the face area is cut out, as shown in fig. 3. The prior-art evaluation method computes the illumination quality directly from fig. 3. Building on fig. 3, the foregoing embodiment of the invention further crops the face region (fig. 4) to eliminate background interference and computes the brightness of the face region; the face is then corrected to the standard pose (fig. 5) and the global illumination uniformity is computed. The comparison with the existing illumination quality evaluation is shown in the table below. As the results show, even though the tested image has good illumination quality, the existing method scores it low, while the invention reflects the true brightness condition of the image more accurately.
|                         | Illumination brightness | Illumination uniformity | Illumination quality |
|-------------------------|-------------------------|-------------------------|----------------------|
| Existing method         | 0.56                    | 0.81                    | 0.45                 |
| Method of the invention | 0.75                    | 0.89                    | 0.67                 |
With reference to fig. 1 and the implementation of the first embodiment described above, the present invention may also be configured and implemented in the following ways.
Human face illumination quality evaluation device based on local affine transformation
According to an embodiment of the present disclosure, there is also provided a human face illumination quality evaluation device based on local affine transformation, comprising:
a module for acquiring an input original image;
a module for detecting face key points in the original image;
a module for cutting out a first face image according to the key points of the face in the original image;
a module for correcting coordinates of the key points of the face in the original image, and matching the coordinates to the first face image to obtain corrected key points of the face;
a module for removing the background based on the first face image to obtain a second face image;
a module for calculating the illumination brightness of the face based on the second face image;
a module for affine transforming the first face image to a standard posture according to standard front face subdivision processing based on a standard front face with bilateral symmetry and the corrected key points of the face to obtain a third face image;
a module for calculating a global illumination uniformity of the face based on the third face image; and
a module for obtaining the illumination quality evaluation result of the face based on the global illumination uniformity of the face and the illumination brightness of the face.
Server
According to an embodiment of the disclosure, there is also provided a server, including:
one or more processors;
a memory storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to perform operations comprising the flow of the local affine transformation based human face illumination quality assessment method of any of the preceding embodiments, in particular the flow of the method shown in fig. 1.
Computer system
According to an embodiment of the present disclosure, there is also provided a computer system including:
one or more processors;
a memory storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to perform operations comprising the flow of the local affine transformation based human face illumination quality assessment method of any of the preceding embodiments, in particular the flow of the method shown in fig. 1.
Computer readable medium storing software
According to the embodiment of the present disclosure, a computer-readable medium storing software is also provided, where the software includes instructions executable by one or more computers, and the instructions cause the one or more computers to execute operations including the flow of the local affine transformation based human face illumination quality assessment method of any of the foregoing embodiments, especially the flow of the method shown in fig. 1.
Face recognition system
The face illumination quality evaluation method disclosed herein can also be used for updating the data of the face feature database.
In the face recognition system, taking the face recognition system for entrance guard as an example, as shown in fig. 6, the face recognition system includes at least one face image acquisition terminal 100 located at the front end and a server 200 located at the cloud end. The server 200 may be implemented using a single server (e.g., a single blade server) or an array or combination of multiple servers (e.g., multiple blade servers).
The face image acquisition terminal 100, in some embodiments, may be a camera device with a data interface, installed at the gate entrance of the access control system, with its lens facing the subject so as to capture a face image; the face image is transmitted through the data interface to the cloud server 200 for face recognition. In a preferred embodiment, the data interface of the camera device is a network communication interface, which can connect to an intranet, or to the Internet via a base station node, for image transmission and communication.
In another embodiment, the face image acquisition terminal 100 may be an intelligent recognition terminal with an integrated camera device, for example an all-in-one terminal with a display screen that integrates a processor, a memory and a network communication module, such as a recognition PAD running an iOS or Android operating system, installed at the gate entrance of the access control system and configured to capture a face image and transmit it to the cloud server through the network communication module for face recognition processing.
In another embodiment, as mentioned above, the intelligent terminal may also deploy a face recognition model in its memory, so that the face recognition module 110 at the local end performs offline face recognition locally, to cope with face recognition in special situations such as network interruption.
In the foregoing system, the face recognition model deployed in the server 200 is generally a large model with high robustness and high accuracy, while the face recognition model deployed in the intelligent recognition terminal is generally a small model capable of achieving rapid recognition, but the accuracy is relatively reduced compared with the large model. The specific application of the recognition model can be realized based on the existing face recognition algorithm.
In an alternative embodiment, in the facial image capturing terminal 100 of the present embodiment, a facial illumination quality determining module 120 may be configured to execute the process of the embodiment shown in fig. 1 to achieve fast determination of illumination quality of the captured facial image.
Therefore, when the features of the face recognition database (i.e., the face base library) are collected in advance, as shown in fig. 7, the illumination quality of each subject's face image is evaluated, and only an image meeting the quality threshold Qm, that is, Q ≥ Qm, is used as the subject's stored image; on this basis the face feature value is extracted and stored, in association with the subject's identity information, as its recognition data.
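The enrollment gating described above can be sketched as follows; the database layout, function names, and parameters are all hypothetical, and the quality assessor and feature extractor are passed in as callables.

```python
def enroll_subject(db, subject_id, identity, candidates, qm,
                   extract_features, assess_quality):
    """Store the first candidate image whose illumination quality meets the
    threshold Qm, together with its features and the subject's identity."""
    for image in candidates:
        q = assess_quality(image)
        if q >= qm:  # only images meeting Q >= Qm enter the base library
            db[subject_id] = {
                "identity": identity,
                "image": image,
                "quality": q,
                "features": extract_features(image),
            }
            return True
    return False  # no candidate met the quality threshold
```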
As shown in fig. 8, when the gate terminal captures a face image, if an intelligent recognition terminal with an integrated camera device is used for image acquisition, the captured image is input into the face illumination quality judgment module for fast judgment of the illumination quality. If Q < Qm, the image is sent to the server 200 and face recognition is performed with the large model deployed there, which reduces the influence of illumination quality on the recognition result and improves recognition accuracy; the server 200 then feeds the recognition result back to the front-end intelligent recognition terminal. If Q ≥ Qm, face recognition is performed locally at the intelligent recognition terminal and the recognition result is output; recognizing with a high-quality image in this way achieves fast recognition on the one hand while guaranteeing recognition accuracy on the other.
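The quality-based routing can be sketched as follows (function names hypothetical); the two recognition models are passed in as callables:

```python
def recognize(image, q, qm, local_model, server_model):
    """Route recognition by illumination quality: poor-quality captures go to
    the cloud's robust large model, good ones to the fast local small model."""
    if q < qm:
        return server_model(image), "server"
    return local_model(image), "local"
```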
As shown in fig. 7, on the premise that Q ≥ Qm, the illumination quality of the stored image of the corresponding subject in the face recognition database is further checked according to the identity information of the photographed subject (person) matching the recognition result. If the illumination quality Q_realtime of the currently captured face picture is greater than the illumination quality Q_library of the stored image, then Q_library is set to Q_realtime, i.e., the currently captured face picture replaces the image in the database, thereby updating the face recognition database.
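The update rule Q_library ← Q_realtime can be sketched as follows; the gallery entry layout and function name are hypothetical:

```python
def update_gallery(entry, live_image, q_realtime):
    """Replace the stored gallery image when the live capture has higher
    illumination quality, i.e. set Q_library = Q_realtime."""
    if q_realtime > entry["quality"]:
        entry["image"] = live_image
        entry["quality"] = q_realtime
        return True
    return False  # stored image is already at least as good
```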
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.

Claims (10)

1. A face recognition system for entrance guard, characterized by comprising: at least one face image acquisition terminal located at the front end, and a server located in the cloud; the face image acquisition terminal is provided with a camera device for acquiring face images and a first model for face recognition, and a second model for face recognition is deployed in the server, wherein:
the face image acquisition terminal is also provided with a face illumination quality judgment module for quickly judging the illumination quality of the acquired face image and comprehensively determining the illumination quality of the face based on the product of the face global illumination uniformity and the face illumination brightness; the face image acquisition terminal is set to input the acquired face image into the face illumination quality judgment module to quickly judge the illumination quality, if the face illumination quality Q is smaller than a preset quality threshold value Qm, the face image acquisition terminal is sent to the server to perform face identification by using a second model deployed in the server, and the server feeds back an identification result to the face image acquisition terminal at the front end; and if the illumination quality Q of the face is greater than or equal to a preset quality threshold value Qm, carrying out face recognition locally by using a first model at the face image acquisition terminal, and outputting a recognition result.
2. The face recognition system for entrance guard of claim 1, wherein the first model is a small model capable of realizing fast recognition.
3. The face recognition system for entrance guard of claim 1, wherein the second model is a large model with high robustness and high accuracy.
4. The face recognition system for entrance guard of claim 1, wherein a face recognition database used for face recognition is pre-constructed; for the face image of each photographed subject, the illumination quality of the face is comprehensively determined based on the product of the face global illumination uniformity and the face illumination brightness, so that an image meeting the quality threshold Qm requirement is used as the stored image of the subject, on the basis of which the face feature value is extracted and stored, in association with the identity information of the subject, as its recognition data.
5. The face recognition system for entrance guard of claim 1, wherein the face image acquisition terminal, in the case that the face illumination quality Q is greater than or equal to the preset quality threshold Qm, further judges the illumination quality of the stored image of the subject in the face recognition database; if the illumination quality Q_realtime of the currently captured face picture is greater than the illumination quality Q_library of the stored image, Q_library is set to Q_realtime, i.e., the currently captured face picture replaces the image in the database, thereby updating the face recognition database.
6. The face recognition system for entrance guard of claim 1, wherein the face image acquisition terminal is a terminal with a display screen, integrating a processor, a memory and a network communication module.
7. The face recognition system for entrance guard of claim 1, wherein the face image acquisition terminal is installed at an entrance position of a gate of the entrance guard system.
8. A face recognition method executed in a face recognition system deployed with at least one face image acquisition terminal at the front end and a server at the cloud end is characterized in that:
the face image acquisition terminal is provided with a camera device for acquiring a face image and a first model for face recognition; the face image acquisition terminal is also provided with a face illumination quality judgment module for quickly judging the illumination quality of the acquired face image and comprehensively determining the illumination quality of the face based on the product of the face global illumination uniformity and the face illumination brightness; a second model for face recognition is deployed in the server;
when the entrance guard gate terminal collects the face image, the face image acquisition terminal inputs the collected face image into the face illumination quality judgment module to quickly judge the illumination quality; if the face illumination quality Q is smaller than a preset quality threshold Qm, the face image is sent to the server, face recognition is performed with the second model deployed in the server, and the server feeds back the recognition result to the front-end face image acquisition terminal; and if the face illumination quality Q is greater than or equal to the preset quality threshold Qm, face recognition is carried out locally at the face image acquisition terminal by using the first model, and the recognition result is output.
9. The face recognition method of claim 8, wherein: the face recognition database used for face recognition is constructed in advance, for the face image of each shot object, the illumination quality of the face is comprehensively determined based on the product of the global illumination uniformity of the face and the illumination brightness of the face, so that the image meeting the quality threshold Qm requirement is used as a storage image of the shot object, the face characteristic value is extracted on the basis of the storage image, and is associated with the identity information of the shot object and stored as the recognition data.
10. The face recognition method of claim 8, wherein: the face image acquisition terminal further judges the illumination quality of the stored image of the subject in the face recognition database in the case that the face illumination quality Q is greater than or equal to the preset quality threshold Qm; if the illumination quality Q_realtime of the currently captured face picture is greater than the illumination quality Q_library of the stored image, Q_library is set to Q_realtime, i.e., the currently captured face picture replaces the image in the database, thereby updating the face recognition database.
CN202110772015.XA 2021-04-29 2021-04-29 Face recognition system and face recognition method for entrance guard Withdrawn CN113362221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110772015.XA CN113362221A (en) 2021-04-29 2021-04-29 Face recognition system and face recognition method for entrance guard

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110469734.4A CN112991159B (en) 2021-04-29 2021-04-29 Face illumination quality evaluation method, system, server and computer readable medium
CN202110772015.XA CN113362221A (en) 2021-04-29 2021-04-29 Face recognition system and face recognition method for entrance guard

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110469734.4A Division CN112991159B (en) 2021-04-29 2021-04-29 Face illumination quality evaluation method, system, server and computer readable medium

Publications (1)

Publication Number Publication Date
CN113362221A true CN113362221A (en) 2021-09-07

Family

ID=76340616

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110772015.XA Withdrawn CN113362221A (en) 2021-04-29 2021-04-29 Face recognition system and face recognition method for entrance guard
CN202110469734.4A Active CN112991159B (en) 2021-04-29 2021-04-29 Face illumination quality evaluation method, system, server and computer readable medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110469734.4A Active CN112991159B (en) 2021-04-29 2021-04-29 Face illumination quality evaluation method, system, server and computer readable medium

Country Status (1)

Country Link
CN (2) CN113362221A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627603A (en) * 2022-03-16 2022-06-14 北京物资学院 Warehouse safety early warning method and system
CN115346333A (en) * 2022-07-12 2022-11-15 北京声智科技有限公司 Information prompting method and device, AR glasses, cloud server and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106558046B (en) * 2016-10-31 2018-02-06 深圳市飘飘宝贝有限公司 The quality determining method and detection means of a kind of certificate photo
JP7132041B2 (en) * 2018-09-03 2022-09-06 株式会社日立製作所 Color evaluation device and color evaluation method
CN111382618B (en) * 2018-12-28 2021-02-05 广州市百果园信息技术有限公司 Illumination detection method, device, equipment and storage medium for face image
CN110826402B (en) * 2019-09-27 2024-03-29 深圳市华付信息技术有限公司 Face quality estimation method based on multitasking
CN111062272A (en) * 2019-11-29 2020-04-24 南京甄视智能科技有限公司 Image processing and pedestrian identification method and device based on color recovery and readable storage medium
CN111178337B (en) * 2020-01-07 2020-12-29 南京甄视智能科技有限公司 Human face key point data enhancement method, device and system and model training method
CN111860091A (en) * 2020-01-22 2020-10-30 北京嘀嘀无限科技发展有限公司 Face image evaluation method and system, server and computer readable storage medium


Also Published As

Publication number Publication date
CN112991159B (en) 2021-07-30
CN112991159A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN110826519B (en) Face shielding detection method and device, computer equipment and storage medium
JP7094702B2 (en) Image processing device and its method, program
EP2676224B1 (en) Image quality assessment
CN110569731B (en) Face recognition method and device and electronic equipment
CN105740780B (en) Method and device for detecting living human face
US8577099B2 (en) Method, apparatus, and program for detecting facial characteristic points
CN109087261B (en) Face correction method based on unlimited acquisition scene
JP2004078912A (en) Method for positioning face in digital color image
CN111160291B (en) Human eye detection method based on depth information and CNN
CN107248174A (en) A kind of method for tracking target based on TLD algorithms
CN109711268B (en) Face image screening method and device
CN112991159B (en) Face illumination quality evaluation method, system, server and computer readable medium
CN108154491B (en) Image reflection eliminating method
CA3136674C (en) Methods and systems for crack detection using a fully convolutional network
CN109190617B (en) Image rectangle detection method and device and storage medium
US11475707B2 (en) Method for extracting image of face detection and device thereof
WO2019010932A1 (en) Image region selection method and system favorable for fuzzy kernel estimation
WO2023093151A1 (en) Image screening method and apparatus, electronic device, and storage medium
CN111683221B (en) Real-time video monitoring method and system for natural resources embedded with vector red line data
CN111461101A (en) Method, device and equipment for identifying work clothes mark and storage medium
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN110991256A (en) System and method for carrying out age estimation and/or gender identification based on face features
KR101904480B1 (en) Object recognition system and method considering camera distortion
CN116342519A (en) Image processing method based on machine learning
JP7386630B2 (en) Image processing device, control method and program for the image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210907
