CN111723755B - Optimization method and system of face recognition base - Google Patents

Optimization method and system of face recognition base

Info

Publication number
CN111723755B
CN111723755B (application CN202010586533.8A)
Authority
CN
China
Prior art keywords
face
glasses
base
face image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010586533.8A
Other languages
Chinese (zh)
Other versions
CN111723755A (en)
Inventor
杨帆 (Yang Fan)
朱莹 (Zhu Ying)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaoshi Technology Jiangsu Co ltd
Original Assignee
Nanjing Zhenshi Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhenshi Intelligent Technology Co Ltd filed Critical Nanjing Zhenshi Intelligent Technology Co Ltd
Priority to CN202010586533.8A
Publication of CN111723755A
Application granted
Publication of CN111723755B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an optimization method and system for a face recognition base library, comprising the following steps: acquiring a face image from the face recognition base library; detecting whether the face in the face image is wearing glasses; in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images; extracting features from the original face image and the corresponding synthesized glasses-wearing face images to generate multi-dimensional base-library features; and applying a PCA algorithm to reduce the dimensionality of the multi-dimensional base-library features to obtain the final base-library feature. The invention optimizes the face recognition base library by synthesizing glasses-wearing images: several pictures wearing different glasses are synthesized from each original base-library image to form a multi-dimensional base library, the multi-dimensional base-library features are reduced in dimension by PCA, and the influence of wearing glasses, and of different types of glasses, on masked face recognition is reduced.

Description

Optimization method and system of face recognition base
Technical Field
The invention relates to the technical field of face recognition, and in particular to a method and system for optimizing a face recognition base library.
Background
Face recognition is a biometric technology that identifies a person from facial feature information: the features of an input face image are compared one by one with the features of the base-library face images to find the base-library image whose features are most similar to the input. If the similarity exceeds a preset threshold, the base-library image and the input image are judged to be the same person; otherwise, the identity of the input image cannot be determined.
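As a simple illustration of this one-to-N comparison, the sketch below matches an input feature against every base-library feature and accepts the best match only above a preset threshold; the cosine-similarity metric, the threshold value and all names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def identify(input_feat, base_feats, base_ids, threshold=0.6):
    """1:N matching: base_feats is an (N, d) array of base-library features."""
    a = input_feat / np.linalg.norm(input_feat)
    B = base_feats / np.linalg.norm(base_feats, axis=1, keepdims=True)
    sims = B @ a                              # similarity of the input to every base entry
    best = int(np.argmax(sims))               # base-library image with the highest similarity
    return base_ids[best] if sims[best] > threshold else None   # None: identity undetermined
```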
The quality of the base-library face images directly affects the face recognition result. In a face recognition scenario without masks, high-quality face photographs are used as the base library. In a masked face recognition scenario, the mask covers important information such as the mouth and nose, so recognition focuses on the eye region, which favours recognition algorithms based on local attention enhancement (particularly on eye features); however, whether and what kind of glasses are worn can strongly interfere with recognition. Because the face base library is built by extracting features from high-quality face pictures, typically taken without glasses and without a mask, mis-recognition or recognition failure easily occurs in a masked face recognition scenario when the glasses information of the captured face image does not match that of the base-library image.
Disclosure of Invention
In order to achieve the above object, a first aspect of the present invention provides a method for optimizing a face recognition base library, comprising the following steps:
acquiring a face image from the face recognition base library;
detecting whether the face in the face image is wearing glasses;
in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images;
extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature.
Preferably, the synthesis of the face image with a glasses image comprises: first extracting the eye key points of the face, aligning the glasses with the eyes according to the eye key points, and then compositing the glasses onto the face according to the mask image of the glasses to form a glasses-wearing face image, thereby generating the multi-dimensional base-library face images.
Preferably, the dimension reduction process includes:
for the multidimensional base features corresponding to one face feature extracted from the face recognition base, the multidimensional base features comprise (m-1) face image features with glasses and 1 original face image feature, and the multidimensional base features are X e to R m×n Each base library feature
Figure BDA0002554031060000021
The covariance matrix of X is calculated as shown below:
Figure BDA0002554031060000022
wherein the content of the first and second substances,
Figure BDA0002554031060000023
performing characteristic decomposition on the covariance matrix C to obtain m eigenvalues lambda 12 ...λ m And m corresponding feature vectors u 1 ,u 2 ...u m At the maximum eigenvalue λ max Corresponding feature vector u ∈ R 1×m Is the first main component, Y ═ uX is the single dimension character Y ∈ R after dimension reduction 1×n
According to a second aspect of the present invention, there is provided an optimization system for a face recognition base library, comprising:
an image acquisition module for acquiring a face image from the face recognition base library;
a detection module for detecting whether the face in the face image is wearing glasses;
a synthesis module for synthesizing the face image with a plurality of randomly selected glasses images, in response to the face in the extracted face image not wearing glasses, to generate corresponding multi-dimensional base-library face images;
a feature extraction module for extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
a dimensionality reduction module for performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature.
According to a third aspect of the present invention, there is provided an optimization system for a face recognition base library, comprising:
one or more processors;
a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
acquiring a face image from the face recognition base library;
detecting whether the face in the face image is wearing glasses;
in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images;
extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature.
It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent. Additionally, all combinations of claimed subject matter are considered a part of the presently disclosed subject matter.
The foregoing and other aspects, embodiments and features of the present teachings can be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of specific embodiments in accordance with the teachings of the present invention.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is an exemplary flow of the method for optimizing a face recognition base library in an exemplary embodiment of the invention;
FIG. 2 is a block diagram of an optimization system for a face recognition base library according to an exemplary embodiment of the present invention;
FIG. 3 is a block diagram of a computer system on which the optimization method for a face recognition base library is implemented, according to an exemplary embodiment of the present invention.
Fig. 4a is a schematic view of the location of the eye-nose key points, and fig. 4b is a schematic view of a selected nasal bridge frame region.
Fig. 5a is a schematic diagram of a bridge frame region, fig. 5b is a schematic diagram of edge binarization, and fig. 5c is a schematic diagram of edge detection.
Detailed Description
In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. Embodiments of the present disclosure are not necessarily intended to include all aspects of the invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. Additionally, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.
The optimization method of the invention aims to optimize the face recognition base library by synthesizing glasses-wearing images: several pictures wearing different glasses are synthesized from each original base-library image to form a multi-dimensional base library, and the multi-dimensional base-library features are reduced in dimension by PCA, which lessens the influence of wearing glasses, and of different glasses types, on masked face recognition.
With reference to fig. 1 to 3, the method for optimizing the face recognition base library comprises the following steps: acquiring a face image from the face recognition base library; detecting whether the face in the face image is wearing glasses; in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images; extracting features from the original face image and the corresponding synthesized glasses-wearing face images to generate multi-dimensional base-library features; and applying a PCA algorithm to reduce the dimensionality of the multi-dimensional base-library features to obtain the final base-library feature.
Therefore, in the optimization process of the embodiment of the invention, the final face base-library features are obtained by synthesizing several glasses-wearing variants of each base-library picture and applying PCA dimensionality reduction, which greatly reduces the influence of glasses on masked face recognition.
As shown in fig. 1, processing branches on whether the face in the face image is detected to be wearing glasses: if the face in the image taken from the base library already wears glasses, face features are extracted directly from that image and used as the final base-library feature; if no glasses are detected, the face is synthesized with glasses for multi-dimensional image generation and feature extraction.
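A minimal sketch of this branch is given below. The helper callables (glasses detection, glasses synthesis, feature extraction, PCA reduction) stand in for the steps detailed later in this description, and the number of synthesized glasses variants is an illustrative assumption.

```python
import numpy as np

def optimize_base_entry(face_img, glasses_bank,
                        detect_glasses, synthesize_glasses,
                        extract_feature, pca_reduce, num_glasses=4):
    """Return the final base-library feature for one base-library face image."""
    if detect_glasses(face_img):
        # Glasses already worn: extract and keep the original face feature.
        return extract_feature(face_img)
    # No glasses: synthesize several randomly selected glasses-wearing variants.
    idx = np.random.choice(len(glasses_bank), size=num_glasses, replace=False)
    variants = [synthesize_glasses(face_img, glasses_bank[i]) for i in idx]
    feats = [extract_feature(face_img)] + [extract_feature(v) for v in variants]
    X = np.stack(feats)      # multi-dimensional base-library features, shape (m, n)
    return pca_reduce(X)     # projection onto the first principal component
```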
To ensure the accuracy of the base-library optimization, and because lenses vary in size and the lens region is difficult to delimit, the embodiment of the invention avoids the conventional binary-classification approach to glasses detection (such as a glasses classifier built on a MobileNetV3 two-class network) and instead recognizes glasses by judging whether a spectacle frame is present at the bridge of the nose. In the preferred embodiment, the glasses-wearing state is detected from the edge information of the spectacle frame; the detection and recognition process is as follows:
The nose-bridge frame region (x, y, w, h) is determined from the left-eye keypoint coordinates (x_1, y_1), the right-eye keypoint coordinates (x_2, y_2) and the nose keypoint coordinates (x_3, y_3). Let the midpoint of the left-eye and right-eye keypoints be (x_c, y_c), calculated as:
x_c = (x_1 + x_2) / 2
y_c = (y_1 + y_2) / 2
The width w is determined from the horizontal distance between the two eye keypoints, the height h from the vertical distance between the eye midpoint and the nose keypoint, and finally the start coordinates (x, y) of the nose-bridge frame region are calculated:
w = (x_2 - x_c) / 2
h = y_3 - y_c
x = x_c - w / 2
y = y_c - h / 3
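The region formulas above translate directly into the following sketch (pixel coordinates assumed; function and variable names are illustrative):

```python
def nose_bridge_region(left_eye, right_eye, nose):
    """Return the nose-bridge frame region (x, y, w, h) from three facial keypoints."""
    x1, y1 = left_eye
    x2, y2 = right_eye
    x3, y3 = nose
    xc = (x1 + x2) / 2.0          # midpoint of the two eye keypoints
    yc = (y1 + y2) / 2.0
    w = (x2 - xc) / 2.0           # width from the horizontal eye distance
    h = y3 - yc                   # height from the eye-midpoint-to-nose distance
    x = xc - w / 2.0              # start point of the region
    y = yc - h / 3.0
    return x, y, w, h
```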
and (3) detecting the edge information of the nose bridge frame region by using a canny edge detection method by combining the positions of the eye-nose key points in the graph of fig. 4a and the selected nose bridge frame region shown in fig. 4b to obtain an edge binary image of the region. As shown in fig. 5a-5c, wherein 5a is a nose bridge frame region schematic, fig. 5b is an edge binarization schematic, and fig. 5c is an edge detection schematic.
In these figures, the areas containing the spectacle frame appear white and the rest black. To strengthen the spectacle-frame edge information, the embodiment of the invention applies morphological dilation to the edge binary image to obtain the final edge detection image.
A sliding window of width w and height h/10 then scans the nose-bridge frame region from top to bottom, and the proportion of white pixels within the window is calculated. If the proportion exceeds one half, the face is judged to be wearing glasses; otherwise it is judged not to be wearing glasses, and the picture must undergo face-glasses synthesis.
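A sketch of this check using OpenCV is shown below; the Canny thresholds (50, 150) and the 3x3 dilation kernel are illustrative assumptions, while the window size (w by h/10) and the one-half white-pixel criterion follow the description above.

```python
import cv2
import numpy as np

def has_glasses(gray_face, region):
    """region: nose-bridge frame region (x, y, w, h) on a grayscale face image."""
    x, y, w, h = [int(round(v)) for v in region]
    crop = gray_face[y:y + h, x:x + w]
    edges = cv2.Canny(crop, 50, 150)                      # edge binary image of the region
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # dilation strengthens frame edges
    win_h = max(1, h // 10)                               # sliding window: width w, height h/10
    for top in range(0, h - win_h + 1):                   # scan from top to bottom
        window = edges[top:top + win_h, :]
        if np.count_nonzero(window) / float(window.size) > 0.5:
            return True                                   # more than half white: frame present
    return False
```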
Preferably, the synthesis of the face image with a glasses image comprises: first extracting the eye key points of the face, aligning the glasses with the eyes according to the eye key points, and then compositing the glasses onto the face according to the mask image of the glasses to form a glasses-wearing face image, thereby generating the multi-dimensional base-library face images.
The eye key points of the face image are extracted with a multi-scale face detection model trained by multi-task learning. An alignment matrix is then determined by matching the eye key points with the corresponding key points of the glasses, and the two are composited based on the mask map of the glasses.
Preferably, the alignment matrix may be determined based on an affine or perspective transformation.
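The sketch below illustrates one way to realize this alignment and compositing, assuming the glasses template carries two eye anchor points and a single-channel mask of the same size; OpenCV's estimateAffinePartial2D stands in for the alignment-matrix estimation, and the blending step follows the mask-based compositing described above.

```python
import cv2
import numpy as np

def overlay_glasses(face_bgr, glasses_bgr, glasses_mask, glasses_eye_pts, face_eye_pts):
    """Warp the glasses template onto the face and blend it in via its mask image."""
    h, w = face_bgr.shape[:2]
    # Alignment matrix mapping the template's eye anchor points onto the face eye keypoints.
    M, _ = cv2.estimateAffinePartial2D(np.float32(glasses_eye_pts),
                                       np.float32(face_eye_pts))
    warped_glasses = cv2.warpAffine(glasses_bgr, M, (w, h))
    warped_mask = cv2.warpAffine(glasses_mask, M, (w, h))
    # Keep the face where the mask is 0 and the glasses where the mask is 255.
    alpha = (warped_mask.astype(np.float32) / 255.0)[..., None]
    blended = face_bgr.astype(np.float32) * (1.0 - alpha) + warped_glasses.astype(np.float32) * alpha
    return blended.astype(np.uint8)
```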
Preferably, the dimension reduction process comprises:
for the multidimensional base features corresponding to one face feature extracted from the face recognition base, the multidimensional base features comprise (m-1) face image features with glasses and 1 original face image feature, and the multidimensional base features are X e to R m×n Each base library feature
Figure BDA0002554031060000051
The covariance matrix of X is calculated as follows:
Figure BDA0002554031060000052
wherein the content of the first and second substances,
Figure BDA0002554031060000053
performing characteristic decomposition on the covariance matrix C to obtain m eigenvalues lambda 12 ...λ m And m corresponding feature vectors u 1 ,u 2 ...u m At the maximum eigenvalue λ max Corresponding feature vector u ∈ R 1×m Is the first main component, Y ═ uX is the single dimension character Y ∈ R after dimension reduction 1×n
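A NumPy sketch of this reduction is given below. Because the patent's covariance formula is only available as an image, the row-wise centering used here is an assumption chosen to be consistent with the m-by-m covariance matrix and the projection Y = uX described above.

```python
import numpy as np

def pca_reduce(X):
    """X: (m, n) multi-dimensional base-library features -> (n,) final base-library feature."""
    m, n = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)   # center each base-library feature (assumed centering)
    C = (Xc @ Xc.T) / n                      # m x m covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # eigen-decomposition, eigenvalues in ascending order
    u = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue (first principal component)
    return u @ X                             # Y = uX, the dimension-reduced single base-library feature
```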
With reference to fig. 2, according to an improvement of the present invention, there is provided an optimization system for a face recognition base library, comprising:
an image acquisition module for acquiring a face image from the face recognition base library;
a detection module for detecting whether the face in the face image is wearing glasses;
a synthesis module for synthesizing the face image with a plurality of randomly selected glasses images, in response to the face in the extracted face image not wearing glasses, to generate corresponding multi-dimensional base-library face images;
a feature extraction module for extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
a dimensionality reduction module for performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature.
Preferably, the synthesis module comprises:
a module for extracting the eye key points of the face;
a module for aligning the glasses with the eyes according to the eye key points; and
a module for compositing the glasses onto the face according to the mask image of the glasses to form a glasses-wearing face image, thereby generating the multi-dimensional base-library face images.
Preferably, the dimension reduction module is configured to perform the dimension reduction process in the following manner:
for the multidimensional base characteristics corresponding to one face characteristic extracted from the face recognition base, wherein the multidimensional base characteristics comprise (m-1) face image characteristics with glasses and 1 original face image characteristic, and the multidimensional base characteristics belong to X e R m×n Each base library feature
Figure BDA0002554031060000061
The covariance matrix of X is calculated as follows:
Figure BDA0002554031060000062
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0002554031060000063
performing characteristic decomposition on the covariance matrix C to obtain m eigenvalues lambda 12 ...λ m And m corresponding feature vectors u 1 ,u 2 ...u m At maximum eigenvalue λ max Corresponding feature vector u ∈ R 1×m Is the first main component, Y ═ uX is the single dimension character Y ∈ R after dimension reduction 1×n
FIG. 3 shows an exemplary computer system on which the optimization scheme for the face recognition base library is implemented. With reference to figs. 1, 2 and 3, according to an embodiment of the disclosure, there is further provided an optimization system for a face recognition base library, comprising:
one or more processors;
a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
acquiring a face image from the face recognition base library;
detecting whether the face in the face image is wearing glasses;
in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images;
extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature.
Although the invention has been described with reference to preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.

Claims (7)

1. A method for optimizing a face recognition base library, characterized by comprising the following steps:
acquiring a face image from the face recognition base library;
detecting whether the face in the face image is wearing glasses;
in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images;
extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature;
the multi-dimensional base features corresponding to one face feature extracted from the face recognition base comprise (m-1) face image features with glasses and 1 original face image featureThe multidimensional bottom library is characterized in that X belongs to R m×n Each base library feature
Figure FDA0003765488940000011
The covariance matrix of X is calculated as follows:
Figure FDA0003765488940000012
wherein, the first and the second end of the pipe are connected with each other,
Figure FDA0003765488940000013
performing characteristic decomposition on the covariance matrix C to obtain m eigenvalues lambda 12 ...λ m And m corresponding feature vectors u 1 ,u 2 ...u m At the maximum eigenvalue λ max Corresponding feature vector u ∈ R 1×m Is the first main component, Y ═ uX is the single dimension character Y ∈ R after dimension reduction 1×n
2. The method for optimizing a face recognition base library according to claim 1, wherein, in response to the face in the extracted face image wearing glasses, face features are extracted directly from the face image extracted from the base library and used as the final base-library feature.
3. The method for optimizing a face recognition base library according to claim 1, wherein the synthesis of the face image with a glasses image comprises: first extracting the eye key points of the face, aligning the glasses with the eyes according to the eye key points, and then compositing the glasses onto the face according to the mask image of the glasses to form a glasses-wearing face image, thereby generating the multi-dimensional base-library face images.
4. An optimization system for a face recognition base library, comprising:
an image acquisition module for acquiring a face image from the face recognition base library;
a detection module for detecting whether the face in the face image is wearing glasses;
a synthesis module for synthesizing the face image with a plurality of randomly selected glasses images, in response to the face in the extracted face image not wearing glasses, to generate corresponding multi-dimensional base-library face images;
a feature extraction module for extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
a dimensionality reduction module for performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature;
wherein the dimensionality reduction module is configured to perform the dimensionality reduction as follows:
for the multidimensional base characteristics corresponding to one face characteristic extracted from the face recognition base, wherein the multidimensional base characteristics comprise (m-1) face image characteristics with glasses and 1 original face image characteristic, and the multidimensional base characteristics belong to X e R m×n Each base library feature
Figure FDA0003765488940000021
The covariance matrix of X is calculated as follows:
Figure FDA0003765488940000022
wherein the content of the first and second substances,
Figure FDA0003765488940000023
performing characteristic decomposition on the covariance matrix C to obtain m eigenvalues lambda 12 ...λ m And m corresponding feature vectors u 1 ,u 2 ...u m At the maximum eigenvalue λ max Corresponding feature vector u ∈ R 1×m Is the first main component, Y ═ uX is the single dimension character Y ∈ R after dimension reduction 1×n
5. The optimization system for a face recognition base library according to claim 4, wherein the synthesis module comprises:
a module for extracting the eye key points of the face;
a module for aligning the glasses with the eyes according to the eye key points; and
a module for compositing the glasses onto the face according to the mask image of the glasses to form a glasses-wearing face image, thereby generating the multi-dimensional base-library face images.
6. An optimization system for a face recognition base library, comprising:
one or more processors;
a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
acquiring a face image from the face recognition base library;
detecting whether the face in the face image is wearing glasses;
in response to the face in the extracted face image not wearing glasses, synthesizing the face image with a plurality of randomly selected glasses images to generate corresponding multi-dimensional base-library face images;
extracting features from the original face image and the corresponding synthesized glasses-wearing face images, respectively, to generate multi-dimensional base-library features; and
performing dimensionality reduction on the multi-dimensional base-library features by a PCA algorithm to obtain the final base-library feature;
wherein the dimensionality reduction comprises:
for the multidimensional base features corresponding to one face feature extracted from the face recognition base, the multidimensional base features comprise (m-1) face image features with glasses and 1 original face image feature, and the multidimensional base features are X e to R m×n Each base library feature
Figure FDA0003765488940000031
The covariance matrix of X is calculated as follows:
Figure FDA0003765488940000032
wherein the content of the first and second substances,
Figure FDA0003765488940000033
performing characteristic decomposition on the covariance matrix C to obtain m eigenvalues lambda 12 ...λ m And m corresponding feature vectors u 1 ,u 2 ...u m At the maximum eigenvalue λ max Corresponding feature vector u ∈ R 1×m Is the first main component, Y ═ uX is the single dimension character Y ∈ R after dimension reduction 1×n
7. The optimization system for a face recognition base library according to claim 6, wherein the operation of synthesizing the face image with a glasses image comprises:
first extracting the eye key points of the face, aligning the glasses with the eyes according to the eye key points, and then compositing the glasses onto the face according to the mask image of the glasses to form a glasses-wearing face image, thereby generating the multi-dimensional base-library face images.
CN202010586533.8A 2020-07-19 2020-07-19 Optimization method and system of face recognition base Active CN111723755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010586533.8A CN111723755B (en) 2020-07-19 2020-07-19 Optimization method and system of face recognition base

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010586533.8A CN111723755B (en) 2020-07-19 2020-07-19 Optimization method and system of face recognition base

Publications (2)

Publication Number Publication Date
CN111723755A CN111723755A (en) 2020-09-29
CN111723755B true CN111723755B (en) 2022-09-06

Family

ID=72568618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010586533.8A Active CN111723755B (en) 2020-07-19 2020-07-19 Optimization method and system of face recognition base

Country Status (1)

Country Link
CN (1) CN111723755B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435248A (en) * 2021-05-18 2021-09-24 武汉天喻信息产业股份有限公司 Mask face recognition base enhancement method, device, equipment and readable storage medium
CN114429663B (en) * 2022-01-28 2023-10-20 北京百度网讯科技有限公司 Updating method of face base, face recognition method, device and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184253B (en) * 2015-09-01 2020-04-24 北京旷视科技有限公司 Face recognition method and face recognition system
CN108319943B (en) * 2018-04-25 2021-10-12 北京优创新港科技股份有限公司 Method for improving face recognition model performance under wearing condition
CN111062328B (en) * 2019-12-18 2023-10-03 中新智擎科技有限公司 Image processing method and device and intelligent robot

Also Published As

Publication number Publication date
CN111723755A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
US10198623B2 (en) Three-dimensional facial recognition method and system
US8818034B2 (en) Face recognition apparatus and methods
CN109740572B (en) Human face living body detection method based on local color texture features
CN111144366A (en) Strange face clustering method based on joint face quality assessment
US10204284B2 (en) Object recognition utilizing feature alignment
CN114359998B (en) Identification method of face mask in wearing state
CN111723755B (en) Optimization method and system of face recognition base
Ryu et al. Coarse-to-fine classification for image-based face detection
CN110991258B (en) Face fusion feature extraction method and system
JP2013239211A (en) Image recognition system, recognition method thereof and program
Paul et al. Extraction of facial feature points using cumulative histogram
Marčetić et al. Deformable part-based robust face detection under occlusion by using face decomposition into face components
CN109145875B (en) Method and device for removing black frame glasses in face image
Işikdoğan et al. Automatic recognition of Turkish fingerspelling
Kwon Face recognition using depth and infrared pictures
Paul et al. Automatic adaptive facial feature extraction using CDF analysis
Thomas et al. Real Time Face Mask Detection and Recognition using Python
JP4061405B2 (en) Face image classification registration device
CN111723612A (en) Face recognition and face recognition network training method and device, and storage medium
CN112418085A (en) Facial expression recognition method under partial shielding working condition
Paul et al. Extraction of facial feature points using cumulative distribution function by varying single threshold group
Abboud et al. Quality based approach for adaptive face recognition
CN112069989B (en) Face information acquisition and recognition system and method based on SVD algorithm correction
Lin Face detection in non-uniform illumination conditions by using color and triangle-based approach
Campadelli et al. Eye localization for face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: No.568 longmian Avenue, gaoxinyuan, Jiangning District, Nanjing City, Jiangsu Province, 211000

Patentee after: Xiaoshi Technology (Jiangsu) Co.,Ltd.

Address before: No.568 longmian Avenue, gaoxinyuan, Jiangning District, Nanjing City, Jiangsu Province, 211000

Patentee before: NANJING ZHENSHI INTELLIGENT TECHNOLOGY Co.,Ltd.