CN111738242B - Face recognition method and system based on self-adaption and color normalization - Google Patents

Info

Publication number
CN111738242B
Authority
CN
China
Prior art keywords
face
image
face recognition
inputting
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010848894.5A
Other languages
Chinese (zh)
Other versions
CN111738242A (en)
Inventor
陈晓莉
丁一帆
徐菁
杨世宏
徐云华
林建洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Ponshine Information Technology Co ltd
Original Assignee
Zhejiang Ponshine Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Ponshine Information Technology Co ltd filed Critical Zhejiang Ponshine Information Technology Co ltd
Priority to CN202010848894.5A priority Critical patent/CN111738242B/en
Publication of CN111738242A publication Critical patent/CN111738242A/en
Application granted granted Critical
Publication of CN111738242B publication Critical patent/CN111738242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method based on self-adaptation and color normalization, which comprises the following steps: S1, creating a face image data set; S2, carrying out adaptive scale selection processing on the images in the face data set based on a face detection network to obtain new face images; S3, carrying out face positioning on the output new face image, and carrying out color normalization processing on the periocular region of the positioned face to obtain a processed face image; S4, inputting the processed face image into a face recognition network, calculating the Euclidean distance between the vector of the processed face image and that of a prestored face image, judging whether the Euclidean distance is smaller than a first preset threshold value, if so, obtaining a training sample, and inputting the training sample into the face recognition network for training to obtain a final face recognition network model; and S5, inputting the image to be recognized into the face recognition network model for face recognition to obtain a final recognition result.

Description

Face recognition method and system based on self-adaption and color normalization
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition method and a face recognition system based on self-adaption and color normalization.
Background
When training a face detection network, the face size in a sample image is not fixed because of varying shooting angles and focal lengths. To accurately locate all possible faces in the image, the original image is therefore usually enlarged/reduced to form an image pyramid, i.e., as many scales as possible are selected to form images with different resolutions. However, when the enlargement/reduction scales are very close to each other, this causes redundancy among the detection frames.
For example, patent publication No. CN109684931A discloses a face recognition method based on color channels, which includes the following steps: S1: training a face recognition model and normalizing the frequencies of the different color channels; S2: selecting a face image to be recognized and a face image from the stored information, calculating the similarities xB, xR and xG of the two face images in the different color channels using the weights of the different face features, substituting each similarity into the corresponding probability functions Nr, Ng and Nb, obtaining the maximum probability max[Nr(xR), Ng(xG), Nb(xB)], and judging whether the face image to be recognized and the image in the stored information belong to the same person according to the relation between this maximum probability and a threshold value. The color-channel-based face recognition method of that patent provides a more reliable and stable recognition result for the glasses-reflection problem commonly seen in face recognition. However, although that patent can identify a human face, its detection targets the whole face, is strongly interfered with by light, shadow and the like, and cannot perform face recognition accurately.
To solve this problem, the invention designs an adaptive scale selection mechanism. Experiments further show that color normalization of the periocular region of the detected face image can improve the accuracy of face recognition.
Disclosure of Invention
The invention aims to provide a face recognition method and a face recognition system based on self-adaptation and color normalization that overcome the defects of the prior art. The proposed adaptive scale selection mechanism reduces the redundancy of face detection frames; color normalization of the periocular region, applied before the detected face image is input into the face recognition network, reduces the influence of factors such as light and color channels and improves the accuracy of face recognition.
In order to achieve the purpose, the invention adopts the following technical scheme:
a face recognition method based on self-adaptation and color normalization comprises the following steps:
s1, creating a face image data set;
s2, carrying out self-adaptive scale selection processing on the images in the face data set based on a face detection network to obtain new face images;
s3, carrying out face positioning on the output new face image, and carrying out color normalization processing on the eye circumference area of the positioned face to obtain a processed face image;
s4, inputting the processed face image into a face recognition network, calculating the Euclidean distance between the vector of the processed face image and that of a prestored face image, judging whether the Euclidean distance is smaller than a first preset threshold value, if so, obtaining a training sample, and inputting the training sample into the face recognition network for training to obtain a final face recognition network model;
and S5, inputting the image to be recognized into a face recognition network model for face recognition to obtain a final recognition result.
Further, the step S2 specifically includes:
s21, converting the image P in the face data set according to a preset scale set S = [S1, S2, ..., Sn], wherein the image P of resolution H × W is converted into a new image Pi of resolution Hi × Wi, with Hi = H * Si and Wi = W * Si;
S22, inputting the converted new image Pi into a face detection network of 12 × 12 resolution to obtain a confidence score map Ci of dimension Hci × Wci, and counting all elements of Ci larger than a second preset threshold value to obtain their total number Ni;
S23, calculating the ratio Ti of the total number Ni to the resolution Hi × Wi of the picture Pi, obtaining n ratios Ti; sorting the n ratios Ti and selecting the top three Ti as input data;
s24, inputting the images corresponding to the top three Ti in the sorting into a face detection network of 24 × 24 resolution, obtaining candidate frames whose confidence scores are larger than the second preset threshold and whose IoU is smaller than a third preset threshold, and storing the obtained candidate frames together with the other candidate frames in a candidate list;
and S25, inputting the candidate frames in the candidate list into a face detection network of 48 × 48 resolution to adjust the detection frames, and taking the candidate frames with IoU smaller than the third preset threshold and confidence greater than a fourth preset threshold as the final output face frames.
Further, in the step S24, the detection frames input into the 24 × 24 resolution face detection network are adjusted to a size of 24 × 24 × 3, obtaining a 1 × 2D confidence array indicating whether a face is present and a 1 × 4D boundary information array constraining the boundary of each candidate bounding box.
Further, in the step S25, the detection frames input into the 48 × 48 resolution face detection network are adjusted to a size of 48 × 48 × 3, obtaining a 1 × 2D confidence array indicating whether a face is present and a 1 × 4D boundary information array constraining the boundary of each candidate bounding box.
Further, the second preset threshold is 0.9; the third preset threshold value is 0.7; the fourth preset threshold is 0.95.
Further, the step S3 specifically includes:
s31, cutting the output face frame, and positioning the face in the face frame to obtain a positioned face image;
s32, performing 2d smoothing on the positioned face image, and obtaining the periocular region whose main color part spans from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625];
and S33, calculating three-channel mean values of all the face image colors, and smoothing the values of all the channels of the eye surrounding area of each face image to the mean value to obtain the processed face image.
Further, the step S4 is specifically:
defining a triplet in the face recognition network as <a, p, n>, where a and p belong to face images of the same person and n belongs to a face image of another person; inputting <a, p, n> into the face recognition network, whose outputs O_a, O_p and O_n satisfy
||O_a − O_p||² + α < ||O_a − O_n||²
with the corresponding triplet loss
L = Σ max(||O_a − O_p||² − ||O_a − O_n||² + α, 0);
and performing condition screening on the training samples, screening out suitable triplets, and inputting them into the face recognition network for training to obtain the final face recognition network model.
Further, the condition screening of the training samples is specifically: selecting
argmax_p ||O_a − O_p||²
and the training samples that satisfy the condition
||O_a − O_p||² < ||O_a − O_n||².
Further, in the step S5, the image to be recognized is input into the face recognition network model for face recognition, specifically, the image to be recognized is processed based on the steps S2-S4, so as to obtain a final recognition result.
Correspondingly, a face recognition system based on self-adaptation and color normalization is also provided, which comprises:
the creation module is used for creating a face image data set;
the first processing module is used for carrying out self-adaptive scale selection processing on the images in the face data set based on a face detection network to obtain new face images;
the second processing module is used for carrying out face positioning on the output new face image and carrying out color normalization processing on the periocular region of the positioned face to obtain a processed face image;
the training module is used for inputting the processed face image into a face recognition network, calculating the Euclidean distance between the vector of the processed face image and that of a prestored face image, judging whether the Euclidean distance is smaller than a first preset threshold value, if so, obtaining a training sample, and inputting the training sample into the face recognition network for training to obtain a final face recognition network model;
and the recognition module is used for inputting the image to be recognized into the face recognition network model for face recognition to obtain a final recognition result.
Compared with the prior art, the invention reduces the redundancy of face detection frames through the proposed adaptive scale selection mechanism before a face image is input into the face detection network, and reduces the influence of factors such as light and color channels through color normalization of the periocular region before the detected face image is input into the face recognition network, thereby improving the accuracy of face recognition.
Drawings
FIG. 1 is a flow chart of a face recognition method based on self-adaptation and color normalization according to an embodiment;
FIG. 2 is a schematic structural diagram of a training phase according to an embodiment;
FIG. 3 is a schematic structural diagram of an identification phase according to an embodiment;
FIG. 4 is a diagram illustrating the color normalization result of the eye region according to an embodiment;
FIG. 5 is a schematic view of a cut-away area of a face after positioning according to an embodiment;
fig. 6 is a structural diagram of a face recognition system based on adaptation and color normalization according to a third embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The invention aims to provide a face recognition method and system based on self-adaption and color normalization aiming at the defects of the prior art.
Example one
The embodiment provides a face recognition method based on self-adaptation and color normalization, as shown in fig. 1, including the steps of:
s1, creating a face image data set;
s2, carrying out self-adaptive scale selection processing on the images in the face data set based on a face detection network to obtain new face images;
s3, carrying out face positioning on the output new face image, and carrying out color normalization processing on the eye circumference area of the positioned face to obtain a processed face image;
s4, inputting the processed face image into a face recognition network, calculating the Euclidean distance between the vector of the processed face image and that of a prestored face image, judging whether the Euclidean distance is smaller than a first preset threshold value, if so, obtaining a training sample, and inputting the training sample into the face recognition network for training to obtain a final face recognition network model;
and S5, inputting the image to be recognized into a face recognition network model for face recognition to obtain a final recognition result.
The method for face recognition based on self-adaptation and color normalization of the embodiment comprises a training phase and a recognition phase.
In the training stage, before the image undergoes face detection, an adaptive scale selection mechanism is designed to reduce the redundant number of candidate face detection frames before network convolution. The candidate face detection frames are then sent to a face detection stage that uses multi-resolution screening; in this stage the five facial features of the detected face are positioned, and the image is 2d-smoothed using these five points. Color normalization of the periocular region is applied to the face image to be recognized, which is then sent to a face recognition stage that uses triplet loss. In the face recognition stage, the convolutional features are converted into L2-normalized vectors, and the training of face recognition is completed by comparing whether the Euclidean distance between the vector of the image to be recognized and a known face vector is smaller than a certain threshold, as shown in Fig. 2.
In the recognition stage, the face image to be recognized is mapped into Euclidean space through a convolutional network, and face recognition is completed according to the principle that faces of the same person have a small distance while faces of different persons have a large distance, as shown in Fig. 3.
Steps S1-S4 of the present embodiment constitute the training phase, and step S5 is the recognition phase.
In step S1, a face image data set is created.
Creation of the data set: the ASIA-FaceV5 Asian face data set is used, containing 500 persons with 5 photos each, for a total of 2500 photos. All face images are 16-bit color BMP files with a resolution of 640 × 480; 80% of them are selected as the training set and 20% as the test set.
In step S2, an adaptive scale selection process is performed on the image in the face data set based on the face detection network, and a new face image is obtained.
The method specifically comprises the following steps: image adaptive scale selection and face detection training.
Image adaptive scale selection: because the size of the face possibly existing in the image is unknown, an adaptive scale selection mechanism is provided to meet the detection of different face sizes.
S21, converting the image P in the face data set according to a preset scale set S = [S1, S2, ..., Sn], wherein the image P is converted into a new image Pi of resolution Hi × Wi, with Hi = H * Si and Wi = W * Si.
Setting the default scale set to S = [S1, S2, ..., Sn], each scale Si in the set is used to enlarge/reduce the pictures in the data set by Si; in this way the image P in the data set, of resolution H × W, is converted into a new image Pi of resolution Hi × Wi, where Hi = H * Si and Wi = W * Si.
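The scale conversion of step S21 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the nearest-neighbour resampling (chosen to keep it dependency-free) and the example scale values are illustrative, not the patent's actual implementation.

```python
import numpy as np

def build_scale_pyramid(image, scales):
    """Convert image P (H x W x C) into new images P_i of resolution
    H_i x W_i with H_i = H * S_i and W_i = W * S_i, one per scale S_i.
    Nearest-neighbour resampling keeps this sketch dependency-free."""
    h, w = image.shape[:2]
    pyramid = []
    for s in scales:
        hi, wi = int(h * s), int(w * s)
        # Map each output pixel back to its nearest source pixel.
        rows = np.clip((np.arange(hi) / s).astype(int), 0, h - 1)
        cols = np.clip((np.arange(wi) / s).astype(int), 0, w - 1)
        pyramid.append(image[rows][:, cols])
    return pyramid
```

A real pipeline would typically use bilinear interpolation (e.g. an image library's resize) instead of the nearest-neighbour indexing above.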
The face detection training comprises the following steps:
s22, inputting the converted new image Pi into a face detection network of 12 × 12 resolution to obtain a confidence score map Ci of dimension Hci × Wci, and counting all elements of Ci larger than a second preset threshold value to obtain their total number Ni.
Once the new image Pi is determined, it is sent into the 12 × 12 resolution network (a convolutional network) to obtain a confidence score map Ci of dimension Hci × Wci; counting all elements of Ci larger than the second preset threshold 0.9 gives their total number Ni.
In the present embodiment, all elements refer to the points in the generated ROI region, and the total number is the number of all such points.
In a 12 × 12 resolution convolutional network, the input image size is 12 × 12 × 3 (height × width × channel), the final output of the network is a 1 × 1 × 2 confidence map that gives whether a face is present in the 12 × 12 image and a 1 × 1 × 4 boundary regression map that gives the corresponding ROI boundary if a face is detected.
S23, calculating the ratio Ti of the total number Ni to the resolution Hi × Wi of the picture Pi, obtaining n ratios Ti; sorting the n ratios Ti and selecting the top three Ti as input data.
The ratio Ti is calculated as:
Ti = Ni / (Hi × Wi)
In step S21 there are n preset scales, so there are n totals Ni and n pictures Pi; the above formula therefore yields n ratios Ti. The n ratios Ti are then sorted, and the top three Ti are selected and input into the 24 × 24 network. The aim is to reduce detection frame redundancy, solving the redundancy caused by the enlarging/reducing approach.
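The ratio computation and top-three selection of step S23 amount to the following. A minimal sketch; the function name and the default k = 3 are illustrative labels for what the text describes.

```python
def select_top_scales(counts, resolutions, k=3):
    """Compute T_i = N_i / (H_i * W_i) for each scaled image and return the
    indices of the k largest ratios (the scales kept for the 24x24 stage),
    together with all the ratios."""
    ratios = [n / float(h * w) for n, (h, w) in zip(counts, resolutions)]
    # Sort scale indices by ratio, largest first, and keep the top k.
    order = sorted(range(len(ratios)), key=lambda i: ratios[i], reverse=True)
    return order[:k], ratios
```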
S24, inputting the images corresponding to the top three Ti in the sorting into a face detection network of 24 × 24 resolution, obtaining candidate frames whose confidence scores are larger than the second preset threshold and whose IoU is smaller than the third preset threshold, and storing the obtained candidate frames together with the other candidate frames in a candidate list.
Only the images corresponding to the top three entries in the confidence score map Ci ranking are input into the 24 × 24 resolution face detection network; the low-confidence images are eliminated and not input into the face detection network.
In a 24 × 24 resolution network, the detection box is adjusted to a size of 24 × 24 × 3, a 1 × 2D confidence array giving whether a face exists and a 1 × 4D boundary information array restricting the boundary of each candidate boundary box are obtained, and the candidate box with a confidence score greater than 0.9 and IoU smaller than a third preset threshold of 0.7 is obtained and kept in the candidate list together with all other candidate boxes.
And S25, inputting the candidate frames in the candidate list into a face detection network with 48 × 48 resolution to adjust the detection frames, and taking the candidate frames with IoU smaller than a third preset threshold and confidence greater than a fourth preset threshold as final output face frames.
The candidate boxes in the candidate list are input to a face detection network of 48 x 48 resolution.
In the 48 × 48 resolution network, the detection frame is adjusted to a size of 48 × 48 × 3, yielding a 1 × 2D confidence array indicating whether a face exists and a 1 × 4D boundary information array. At this stage IoU is compared against the third preset threshold 0.7: candidate boxes with confidence greater than the fourth preset threshold 0.95 and IoU smaller than 0.7 are output as the final result.
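The final screening in the 48 × 48 stage combines a confidence cut with IoU-based suppression. A minimal sketch under stated assumptions: the greedy highest-score-first suppression order and the (x1, y1, x2, y2) box format are conventions chosen here, not details given in the text.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def filter_final_boxes(boxes, scores, iou_thresh=0.7, conf_thresh=0.95):
    """Keep boxes with confidence > conf_thresh whose IoU with every
    already-kept (higher-scoring) box stays below iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if scores[i] <= conf_thresh:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```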
In the embodiment, a group of scaled pictures is set, the scaled pictures are input into a 12 × 12 convolutional network, the score of each scaled picture is calculated, top3 (the first three) is selected and input into a 24 × 24 convolutional network, the load of a face detection network is reduced, and the detection time is shortened.
It should be noted that the face detection network in this embodiment is a concatenated convolutional network.
In step S3, face localization is performed on the output new face image, and color normalization processing is performed on the periocular region of the located face, so as to obtain a processed face image.
The method specifically comprises the following steps:
s31, cutting the output face frame, and positioning the face in the face frame to obtain a positioned face image;
cutting and positioning: the face detection network cuts the detected face frame, the cutting size is 224 multiplied by 224, the facial features in the image are positioned, and the positioned face position is centered.
S32, performing 2d smoothing on the positioned face image, and obtaining the periocular region whose main color part spans from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625].
Obtaining the face interval: after the image is cut and positioned, 2d smoothing is performed on the image using the five facial points, and the interval of the main color part from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625] is obtained.
Here the coordinates are expressed as fractions of the image: after face detection, the positioned face region is cropped at the ratios 0.375 and 0.625, i.e., the interval from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625].
Fig. 5 is a schematic view of the cut-out region of the face after positioning. The image in the left box of Fig. 5 is the face finally obtained after positioning the original image; the image in the right box of Fig. 5 is the face obtained by cropping the positioned face at the ratios 0.375 and 0.625, where the top-left vertex of the box is at [0.375, 0.375] and the bottom-right vertex at [0.625, 0.625]. The color processing operation is then performed on this region.
In this embodiment, multiple experiments prove that the face can be completely covered by the cropping ratios 0.375 and 0.625 selected after positioning.
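The cropping of the [0.375, 0.375] to [0.625, 0.625] interval can be sketched as follows. A minimal sketch assuming the face is an aligned H × W × C NumPy array (the 224 × 224 size follows the text; the function name is illustrative).

```python
import numpy as np

def crop_periocular(face, tl=0.375, br=0.625):
    """Crop the interval from upper-left [tl, tl] to lower-right [br, br]
    of an aligned face image given as an H x W x C array."""
    h, w = face.shape[:2]
    # Fractional corners are converted to pixel indices.
    return face[int(h * tl):int(h * br), int(w * tl):int(w * br)]
```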
And S33, calculating three-channel mean values of all the face image colors, and smoothing the values of all the channels of the eye surrounding area of each face image to the mean value to obtain the processed face image.
Calculating the channel means: 500 face images are randomly sampled and their three-channel color mean values are calculated, giving R: 122.15, G: 95.98 and B: 80.90.
Color normalization of the periocular region: in the detection calculation process, this embodiment takes the B channel as an example.
Smoothing: the value of each channel is smoothed toward the mean using B − np.mean(B[left_margin:right_margin, left_margin:right_margin]) + B_mean, where np.mean denotes the averaging operation, B_mean is the mean value of the B channel, and left_margin and right_margin delimit the interval from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625] of the image to be detected. The result of color normalization of the periocular region is shown in Fig. 4, with the image before processing on the left and the image after color normalization of the periocular region on the right.
This embodiment processes this face range because, in face images detected by the face detection network, the face generally lies in the interval from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625]. The periocular region is chosen because this part is close to the facial skin color and only slightly disturbed by light and shadow; normalizing the colors in this interval with the mean values therefore improves the accuracy of face recognition.
In step S4, the processed face image is input into a face recognition network, and an euclidean distance between a vector of the processed face image and a prestored face image is calculated, and it is determined whether the euclidean distance is smaller than a first preset threshold value, if so, a training sample is obtained, and the training sample is input into the face recognition network for training, so as to obtain a final face recognition network model.
Face recognition training: the face recognition network is a convolutional network using a triplet loss function. A triplet is defined as <a, p, n>, where a and p correspond to the same id (face images of the same person in the face recognition network) and n corresponds to a different id (a face image of another person). The purpose is to separate each positive pair from the negative by a distance margin, making the distance between all faces of the same person very small and the distance between a pair of face images from different persons very large; formally, the outputs O_a, O_p and O_n satisfy
||O_a − O_p||² + α < ||O_a − O_n||²
Selecting
argmax_p ||O_a − O_p||²
and the training samples that satisfy the condition
||O_a − O_p||² < ||O_a − O_n||²,
the training samples are condition-screened using online learning, and suitable triplets are selected from them for training, yielding the final face recognition network model.
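The margin constraint and the screening condition above can be checked per triplet as follows. A minimal NumPy sketch; the margin value α = 0.2 and the function names are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def triplet_distances(o_a, o_p, o_n):
    """Squared Euclidean distances ||O_a - O_p||^2 and ||O_a - O_n||^2."""
    return np.sum((o_a - o_p) ** 2), np.sum((o_a - o_n) ** 2)

def satisfies_margin(o_a, o_p, o_n, alpha=0.2):
    """The target constraint ||O_a - O_p||^2 + alpha < ||O_a - O_n||^2."""
    d_ap, d_an = triplet_distances(o_a, o_p, o_n)
    return d_ap + alpha < d_an

def passes_screening(o_a, o_p, o_n, alpha=0.2):
    """Screening condition: ||O_a - O_p||^2 < ||O_a - O_n||^2 while the
    margin is still violated -- the triplets worth training on."""
    d_ap, d_an = triplet_distances(o_a, o_p, o_n)
    return d_ap < d_an < d_ap + alpha
```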
In step S5, the image to be recognized is input to the face recognition network model for face recognition, and the final recognition result is obtained.
The processing of steps S2-S4 described above is performed on the pictures in the test data set (the images to be recognized). Step S2 reduces the number of detection frames of the image input into the face detection network, and face detection is performed by the concatenated convolutional network (the face detection network). In step S3, the interval from the upper-left corner [0.375, 0.375] to the lower-right corner [0.625, 0.625] of the detected face image is processed and its color is normalized. The processed image is then fed into the face recognition network model trained in step S4, which calculates whether the Euclidean distance between the image to be recognized and an image in the face library is smaller than the set threshold, and thereby judges whether they are the same identity, giving the final recognition result.
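The final identity decision of step S5 compares an embedding distance against the set threshold; it can be sketched as follows. A minimal sketch under stated assumptions: the threshold default and the L2-normalized embedding inputs are illustrative, since the patent does not give the threshold value.

```python
import numpy as np

def same_identity(vec_query, vec_gallery, threshold=1.0):
    """Judge whether two face embeddings belong to the same identity by
    comparing their Euclidean distance with a preset threshold."""
    dist = float(np.linalg.norm(np.asarray(vec_query) - np.asarray(vec_gallery)))
    return dist < threshold, dist
```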
The face recognition method provided by the embodiment comprises 4 stages, namely, an image adaptive scale selection stage before face detection, a face detection stage, a color normalization processing stage of a periocular region before face recognition and a face recognition stage. In the first stage, because the size of a face possibly existing in an image is not fixed, the face is quickly found by setting different scaling ratios; in the second stage, 12 × 12, 24 × 24 and 48 × 48 multi-resolution convolutional networks are adopted for step-by-step screening, and a large number of non-face detection frames are eliminated while a high recall rate is kept; in the third stage, even the photos of the same person are greatly different due to the influence of factors such as illumination, expression, angle, shielding and the like in the images, the colors of the eye surrounding areas of the images to be recognized are normalized, and the richness of face feature extraction can be kept; in the fourth stage, a triple loss face recognition network is utilized, so that the difference between good face feature vectors can be learned, and a recognition result can be given quickly.
The embodiment aims to reduce the redundancy of a face detection frame through a proposed adaptive scale selection mechanism before a face image is input into a face detection network, and reduce the influence caused by factors such as light, color channels and the like through color normalization of an eye surrounding area before the detected face image is input into the face recognition network, so that the accuracy of face recognition is improved.
Example two
The difference between the face recognition method based on self-adaptation and color normalization provided by the embodiment and the embodiment one is that:
This example uses the CASIA-FaceV5 Asian face dataset, with 2000 images as the training set and 500 images as the test set. The images were processed using the method of Example one; Table 1 compares speed and accuracy with and without the proposed method.
TABLE 1 (reproduced as an image in the original publication; the data are not shown here)
As Table 1 shows, the adaptive mechanism improves detection speed, and color normalization of the periocular region improves the accuracy of face recognition.
Example three
The present embodiment provides a face recognition system based on adaptation and color normalization, as shown in fig. 6, which comprises the following modules:
a creation module 11 for creating a face image dataset;
the first processing module 12 is configured to perform adaptive scale selection processing on an image in a face data set based on a face detection network to obtain a new face image;
the second processing module 13 is configured to perform face positioning on the output new face image, and perform color normalization processing on the periocular region of the positioned face to obtain a processed face image;
the training module 14 is configured to input the processed face image into a face recognition network, calculate the Euclidean distance between the vector of the processed face image and that of a prestored face image, and judge whether the Euclidean distance is smaller than a first preset threshold; if so, a training sample is obtained and input into the face recognition network for training, yielding the final face recognition network model;
and the recognition module 15 is configured to input the image to be recognized into a face recognition network model for face recognition, so as to obtain a final recognition result.
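The triplet-based objective used by the training module can be illustrated with the standard FaceNet-style triplet loss. The margin alpha = 0.2 and the screening rule (keep only triplets with non-zero loss) are assumptions of this sketch; the patent states only that suitable triplets are screened out.

```python
import numpy as np

def triplet_loss(o_a, o_p, o_n, alpha=0.2):
    """Hinge loss on squared Euclidean distances for one triplet <a, p, n>:
    a and p are embeddings of the same person, n of another person.
    The margin alpha is an illustrative value, not taken from the patent."""
    d_ap = np.sum((np.asarray(o_a) - np.asarray(o_p)) ** 2)
    d_an = np.sum((np.asarray(o_a) - np.asarray(o_n)) ** 2)
    return max(d_ap - d_an + alpha, 0.0)

def is_useful_triplet(o_a, o_p, o_n, alpha=0.2):
    """Screening condition (assumed): keep triplets the network has not
    yet solved, i.e. those that still produce a non-zero loss."""
    return triplet_loss(o_a, o_p, o_n, alpha) > 0.0
```

Training then minimizes the summed loss over the screened triplets, pulling same-identity embeddings together and pushing different identities apart by at least the margin.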
It should be noted that the face recognition system based on self-adaptation and color normalization provided in this embodiment mirrors the method embodiment described above, so the details are not repeated here.
Compared with the prior art, the invention aims to reduce the redundancy of face detection frames through the proposed adaptive scale selection mechanism before a face image is input into the face detection network, and to reduce the influence of factors such as lighting and color channels through color normalization of the periocular region before the detected face image is input into the face recognition network, thereby improving the accuracy of face recognition.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A face recognition method based on self-adaptation and color normalization is characterized by comprising the following steps:
s1, creating a face image data set;
s2, carrying out self-adaptive scale selection processing on the images in the face data set based on a face detection network to obtain new face images;
s3, carrying out face positioning on the output new face image, and carrying out color normalization processing on the eye circumference area of the positioned face to obtain a processed face image;
s4, inputting the processed face image into a face recognition network, calculating the Euclidean distance between the vector of the processed face image and that of a prestored face image, judging whether the Euclidean distance is smaller than a first preset threshold, and if so, obtaining a training sample and inputting the training sample into the face recognition network for training to obtain a final face recognition network model;
s5, inputting the image to be recognized into a face recognition network model for face recognition to obtain a final recognition result;
the step S2 specifically includes:
s21, scaling the image P in the face data set according to a preset set of ratios S = [S1, S2, ..., Sn], converting the image P into new images Pi of resolution Hi × Wi, where Hi = H * Si and Wi = W * Si;
s22, inputting each converted new image Pi into a face detection network of 12 × 12 resolution to obtain a confidence score map Ci of dimension Hci × Wci, and counting all elements of Ci that are greater than a second preset threshold to obtain their total number Ni;
s23, calculating the ratio Ti of the total number Ni to the resolution Hi × Wi of picture Pi, obtaining n ratios Ti; sorting the n ratios Ti and selecting the top three Ti as input data;
s24, inputting the images corresponding to the selected top three Ti into a face detection network of 24 × 24 resolution and adjusting the detection frames; the candidate frames whose confidence score is greater than the second preset threshold and whose IoU is smaller than a third preset threshold are stored in a candidate list;
and s25, inputting the candidate frames in the candidate list into a face detection network of 48 × 48 resolution to adjust the detection frames, and taking the candidate frames whose IoU is smaller than the third preset threshold and whose confidence is greater than a fourth preset threshold as the final output face frames.
2. The adaptive and color-normalization based face recognition method according to claim 1, wherein adjusting the detection box in the 24 × 24 resolution face detection network in step S24 comprises resizing the input to 24 × 24 × 3 and obtaining a 1 × 2 dimensional confidence array indicating whether a face is present and a 1 × 4 dimensional boundary information array constraining the boundary of each candidate bounding box.
3. The adaptive and color-normalization based face recognition method according to claim 1, wherein adjusting the detection box input into the 48 × 48 resolution face detection network in step S25 comprises resizing the input to 48 × 48 × 3 and obtaining a 1 × 2 dimensional confidence array indicating whether a face is present and a 1 × 4 dimensional boundary information array constraining the boundary of each candidate bounding box.
4. The adaptive and color-normalization-based face recognition method according to claim 1, wherein the second preset threshold is 0.9; the third preset threshold value is 0.7; the fourth preset threshold is 0.95.
5. The method for face recognition based on self-adaptation and color normalization according to claim 1, wherein the step S3 specifically includes:
s31, cropping the output face frame and positioning the face within it to obtain a positioned face image;
s32, performing 2D smoothing on the positioned face image to obtain the periocular region, whose main color part spans from the upper left corner [0.375, 0.375] to the lower right corner [0.625, 0.625];
and s33, calculating the three-channel mean values of the colors of all face images, and smoothing the values of every channel of the periocular region of each face image toward the corresponding mean value to obtain the processed face image.
6. The method for face recognition based on self-adaptation and color normalization according to claim 1, wherein the step S4 specifically comprises:
defining a triplet in the face recognition network as <a, p, n>, where a and p are face images of the same person and n is a face image of another person; inputting <a, p, n> into the face recognition network yields outputs O_a, O_p and O_n, which are required to satisfy

||O_a - O_p||_2^2 + α < ||O_a - O_n||_2^2,

with the corresponding triplet loss

L = max(||O_a - O_p||_2^2 - ||O_a - O_n||_2^2 + α, 0);

and performing conditional screening on the training samples, screening out suitable triplets and inputting them into the face recognition network for training to obtain the final face recognition network model.
7. The adaptive and color normalization-based face recognition method according to claim 6, wherein the conditional screening of the training samples specifically comprises: selecting, for each anchor a, the hardest positive

argmax_p ||O_a - O_p||_2^2,

and keeping only the training samples that satisfy

||O_a - O_p||_2^2 < ||O_a - O_n||_2^2.
8. The adaptive and color-normalization based face recognition method according to claim 1, wherein inputting the image to be recognized into the face recognition network model for face recognition in step S5 specifically comprises processing the image to be recognized according to steps S2-S4 to obtain the final recognition result.
9. A face recognition system based on adaptation and color normalization, comprising:
the creation module is used for creating a face image data set;
the first processing module is used for carrying out self-adaptive scale selection processing on the images in the face data set based on a face detection network to obtain new face images;
the second processing module is used for carrying out face positioning on the output new face image and carrying out color normalization processing on the periocular region of the positioned face to obtain a processed face image;
the training module is used for inputting the processed face image into a face recognition network, calculating Euclidean distance between a vector of the processed face image and a prestored face image, judging whether the Euclidean distance is smaller than a first preset threshold value, if so, obtaining a training sample, and inputting the training sample into the face recognition network for training to obtain a final face recognition network model;
the recognition module is used for inputting the image to be recognized into a face recognition network model for face recognition to obtain a final recognition result;
the first processing module specifically includes:
the image P in the face data set is set as [ S ] according to a preset proportion1,S2...Sn]Performing image conversion, wherein the image P is converted into the resolution Hi×WiNew image P ofi(ii) a Wherein Hi=H*Si,Wi=W*Si
inputting each converted new image Pi into a face detection network of 12 × 12 resolution to obtain a confidence score map Ci of dimension Hci × Wci, and counting all elements of Ci that are greater than a second preset threshold to obtain their total number Ni;
calculating the ratio Ti of the total number Ni to the resolution Hi × Wi of picture Pi, obtaining n ratios Ti; sorting the n ratios Ti and selecting the top three Ti as input data;
will select T of the top threeiThe obtained candidate frames and other candidate frames are stored in a candidate list;
and inputting the candidate frames in the candidate list into a face detection network of 48 × 48 resolution to adjust the detection frames, and taking the candidate frames whose IoU is smaller than the third preset threshold and whose confidence is greater than a fourth preset threshold as the final output face frames.
CN202010848894.5A 2020-08-21 2020-08-21 Face recognition method and system based on self-adaption and color normalization Active CN111738242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010848894.5A CN111738242B (en) 2020-08-21 2020-08-21 Face recognition method and system based on self-adaption and color normalization


Publications (2)

Publication Number Publication Date
CN111738242A CN111738242A (en) 2020-10-02
CN111738242B true CN111738242B (en) 2020-12-25

Family

ID=72658720






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant