CN118038515B - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN118038515B
CN118038515B (application CN202311850597.4A)
Authority
CN
China
Prior art keywords
face
image
gray
pixel
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311850597.4A
Other languages
Chinese (zh)
Other versions
CN118038515A (en)
Inventor
陈静
潘荣才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Fantai Digital Technology Research Institute Co ltd
Original Assignee
Nanjing Fantai Digital Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Fantai Digital Technology Research Institute Co ltd filed Critical Nanjing Fantai Digital Technology Research Institute Co ltd
Priority to CN202311850597.4A priority Critical patent/CN118038515B/en
Publication of CN118038515A publication Critical patent/CN118038515A/en
Application granted granted Critical
Publication of CN118038515B publication Critical patent/CN118038515B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face recognition method in which the effect of ambient light on a face is quantified by computing the face roughness, a face illumination overexposure image is generated and used to derive a face illumination texture map, and the gray image and the face illumination texture map are finally subtracted pixel by pixel to obtain a highlight-removed face image, on which the face recognition operation is then performed. The method can effectively remove the highlight parts of the face image used for recognition, thereby improving the accuracy and efficiency of face recognition. Because the face illumination overexposure image is obtained from the face roughness, the influence of illumination on the face is amplified accurately and an accurate illumination texture map is generated. The method for removing the influence of illumination on the face requires only a single face image, needs no training images or prior knowledge of illumination, and has good robustness and adaptability.

Description

Face recognition method
Technical Field
The invention relates to the field of face recognition, and in particular to a face recognition method.
Background
Face images are captured under some illumination environment, and highlight regions often appear in the captured image. Highlight regions make parts of the face appear excessively bright white because of strong light reflection, covering the local shape, color, texture and other features of the face. This greatly interferes with face image detection and recognition and reduces the accuracy and reliability of the system.
Therefore, to address the problem of highlight regions in face images, highlight removal is an important means of improving the face recognition rate. The goal of highlight removal is to reduce the excessive bright-white effect of the highlight regions while restoring the face color under non-highlight conditions, so that the face images become more balanced and consistent and the covered local features are recovered.
Common highlight removal methods include conventional image processing methods and deep-learning-based methods. Conventional methods generally rely on statistical properties of the image and prior knowledge, such as the gray-level distribution and background prediction, and attenuate the effect of highlights by filtering the image or adjusting brightness and contrast. Deep-learning-based methods train a deep neural network to learn the mapping between highlight and non-highlight regions in the image and use it to repair or restore the highlight regions.
Highlight removal can improve the quality of the face image and reduce the interference of highlights on the face recognition system, so that the system can extract local face features more accurately and achieve more stable and reliable face image detection and recognition.
For example, in the Chinese patent with publication No. CN116664422A, a face highlight image is constructed and subtracted from the face image pixel by pixel to obtain a highlight-removed face image.
In summary, the face recognition field faces the following problems:
1. Highlight interference: highlight regions in a face image cover local facial features, which reduces the accuracy of face detection and recognition; the presence of highlight regions also affects the quality and consistency of the face image, making facial features difficult to extract and match;
2. Current common highlight removal techniques require a large amount of prior data, have high computational complexity, and increase the processing time and computing resource requirements of the system.
Disclosure of Invention
The invention aims to provide a face recognition method that solves the problems described in the background. The specific idea is to quantify the effect of ambient light on the face by performing a roughness calculation on the face, to generate a face illumination overexposure image from which a face illumination texture map is obtained, and finally to subtract the gray image and the face illumination texture map pixel by pixel to obtain a highlight-removed face image, on which the face recognition operation is then performed.
The invention provides a face recognition method, which specifically comprises the following steps:
S1: acquiring an image to be processed, and processing the image to be processed into a gray image;
S2: carrying out roughness calculation on the face in the gray image to obtain the face roughness;
S3: obtaining a face illumination overexposure image according to the face roughness and the gray image;
S4: obtaining a dark-part gray average value of the face in the gray image, and obtaining a face illumination texture map according to the face illumination overexposure image and the dark-part gray average value;
S5: subtracting the face illumination texture map from the gray image pixel by pixel to obtain a highlight-removed face image;
S6: converting the highlight-removed face image into a numerical feature vector through a feature extraction algorithm;
S7: matching the numerical feature vector against a standard numerical feature vector in the database to determine whether they belong to the same person.
The invention has the following beneficial effects:
1. The method can effectively remove the highlight parts of the face image used for recognition and effectively restore the shape, color, texture and other features of the face regions covered by the highlight, thereby improving the accuracy and efficiency of face recognition;
2. The invention obtains a smoothed face image by Gaussian filtering, so that the rough points of the face can be extracted effectively and the face roughness can then be determined. Gaussian filtering smooths the gray image, removing high-frequency noise and detail information and yielding a smoothed gray image; the original gray image and the smoothed gray image are then subtracted pixel by pixel to obtain the rough point map. This highlights the high-frequency detail information in the image, namely its edges, textures and similar structures; the pixel-by-pixel subtraction yields the difference between the two images, i.e. the high-frequency details, which typically appear as edges, noise or other fine changes. The resulting rough point map can therefore be used to analyze detail features such as edges and textures in the image;
3. The invention obtains the face illumination overexposure image from the face roughness, which accurately amplifies the influence of illumination on the face: the larger the face roughness, the rougher the face, the weaker its ability to reflect light, and the smaller the influence of light on it; conversely, the smaller the face roughness, the smoother the face and the stronger its ability to reflect light;
4. The method for removing the influence of illumination on the face requires only a single face image, needs no training images or prior knowledge of illumination, and therefore has good robustness and adaptability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a schematic diagram of the overall process of the present invention;
FIG. 2 is the main flow chart of the present invention;
FIG. 3 is a schematic diagram of the overall image processing procedure of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Referring to fig. 1 to 3, several embodiments of the present invention are provided.
Example 1
The embodiment provides a face recognition method, which specifically comprises the following steps:
S1: acquiring an image to be processed, and processing the image to be processed into a gray image; the specific step is as follows: the values of the RGB channels of the color image to be processed are averaged to obtain the gray image: gray value = (red channel value of the pixel in the image to be processed + green channel value of the pixel in the image to be processed + blue channel value of the pixel in the image to be processed) / 3;
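As a minimal sketch of S1 (the function name and the use of NumPy are illustrative, not taken from the patent), the channel average can be computed as follows, assuming the input is an H×W×3 RGB array:

```python
import numpy as np

def to_gray(rgb):
    """S1: gray value = (R + G + B) / 3, computed per pixel."""
    return (rgb.astype(np.float64).sum(axis=2) / 3.0).round().astype(np.uint8)
```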
S2: the method comprises the steps of performing roughness calculation on a face in a gray level image to obtain the face roughness, and specifically comprises the following steps:
S201: smoothing a gray image by adopting Gaussian filtering to obtain a smooth gray image, wherein the specific steps are that a sigma=10 and Gaussian kernels of windows 17×17 are adopted to sequentially carry out convolution operation with pixel points in a specified range in the gray image; the specified range is: the abscissa range is [9,M-8]; the ordinate range is [9,N-8]; wherein M represents the length of the gray image, namely the number of pixels of the gray image in the horizontal direction; n represents the width of the gray image, that is, the number of pixels of the gray image in the vertical direction;
S202: subtract the smoothed gray image from the gray image pixel by pixel to obtain the rough point map; the specific calculation formula is I_C = I_H − I_P, wherein I_H denotes the gray image, I_P denotes the smoothed gray image, and I_C denotes the rough point map;
S203: calculate the face roughness from the rough point map, wherein N_C denotes the number of pixels with gray value 0 within the face contour of the rough point map, and N_RL denotes the total number of pixels within the face contour of the rough point map; the pixels within the face contour are obtained by a face detection algorithm, such as the Viola-Jones algorithm or a deep-learning-based detector (e.g., R-CNN, SSD, YOLO);
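The following Python sketch, assuming OpenCV and NumPy, illustrates S201–S203. The patent's exact roughness formula is not reproduced in this text, so the value returned here — the fraction of non-zero (rough) pixels within the face contour, 1 − N_C/N_RL — is an assumption consistent with the statement that a larger value corresponds to a rougher face; `face_mask` is an assumed boolean mask of the face contour obtained from a face detector.

```python
import cv2
import numpy as np

def face_roughness(gray, face_mask):
    """Sketch of S201-S203; the final roughness formula is an assumption."""
    # S201: Gaussian smoothing with sigma = 10 and a 17x17 window.
    smooth = cv2.GaussianBlur(gray, (17, 17), 10)
    # S202: rough point map I_C = I_H - I_P (saturating subtraction keeps values >= 0).
    rough = cv2.subtract(gray, smooth)
    # S203: N_C = zero-valued pixels inside the face contour, N_RL = all pixels inside it.
    n_rl = max(int(face_mask.sum()), 1)
    n_c = int(((rough == 0) & face_mask).sum())
    return 1.0 - n_c / n_rl  # assumed roughness: fraction of rough pixels
```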
S3: according to the face roughness and the gray level image, obtaining a face illumination overexposure image, which specifically comprises the following steps:
S301: calculating a pixel point with a gray value of > H 0 in the background of the gray image as a light source point; the background of the gray level image is a face of the gray level image which is segmented by an image segmentation method, and the rest part of the gray level image of the face of the gray level image is the background of the gray level image; h 0 is a light source gray threshold, which is the minimum gray of the light source, and is a constant value, and when the gray of the pixel is greater than the light source gray threshold, determining that the pixel is the pixel of the light source; preferably, H 0 is 220;
s302: calculating the gray value of a pixel point in the face illumination overexposure image:
wherein H_M(i,j) denotes the gray value of the pixel with coordinates (i,j) in the face illumination overexposure image, and H_Z(i,j) denotes the gray value of the pixel with coordinates (i,j) in the gray image; H_G(u,v) denotes the gray value of the pixel with coordinates (u,v) in the gray image, (u,v) ∈ C, where C denotes the set of coordinates of the light source points; α denotes the face roughness and N_G denotes the total number of light source points; the set of face coordinates is obtained by a face detection algorithm, such as the Viola-Jones algorithm or a deep-learning-based detector (e.g., R-CNN, SSD, YOLO), which locates the position of the face in the image. The term H_G(u,v)/255 normalizes the light source gray value, 255 being the maximum gray value, so that the normalized term lies in the interval [0,1]. The distance term denotes the distance between the light source point with coordinates (u,v) and the pixel with coordinates (i,j) in the gray image; in essence it is the number of pixels between the light source point and the pixel: the smaller the distance, the stronger the effect of the light source point on the pixel, and the larger the distance, the weaker the effect. Likewise, the smaller the face roughness α, the smoother the face and the stronger the effect of the light source point on the pixel; the larger α, the rougher the face and the weaker the effect.
The coordinates used in the invention are defined as follows: the pixel with coordinates (x, y) in image A is the pixel in row x and column y of image A; for example, the pixel with coordinates (i, j) in the face illumination overexposure image is the pixel in row i and column j of that image.
The image to be processed, the gray image, the face illumination overexposure image, the face illumination texture map and the highlight-removed face image all have the same size, that is, the same total number of pixels horizontally and the same total number of pixels vertically. The coordinate system is therefore established once the image to be processed is entered.
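Since the overexposure formula itself is not reproduced in this text, the sketch below only mirrors the behaviour described above: light source points are background pixels brighter than H_0, each contributes its normalized gray value H_G/255, the contribution weakens with the pixel distance to the light source and with a larger roughness α, and the contributions of all N_G light sources are averaged. The exact weighting is a hypothetical stand-in, not the patent's formula.

```python
import numpy as np

def overexposure_image(gray, alpha, background_mask, h0=220):
    """Hypothetical sketch of S301-S302 (the decay term is assumed)."""
    alpha = max(float(alpha), 1e-6)                       # guard against division by zero
    h, w = gray.shape
    # S301: light source points are background pixels with gray value > H_0.
    src_rows, src_cols = np.nonzero(background_mask & (gray > h0))
    n_g = max(len(src_rows), 1)                           # total number of light sources N_G
    rows, cols = np.indices((h, w))
    out = gray.astype(np.float64)
    for u, v in zip(src_rows, src_cols):
        d = np.hypot(rows - u, cols - v) + 1.0            # pixel distance to the source
        # assumed contribution: stronger for bright sources, small distance, small alpha
        out += 255.0 * (gray[u, v] / 255.0) / (alpha * d * n_g)
    return np.clip(out, 0, 255).astype(np.uint8)          # S302: overexposed gray image
```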
S4: the method comprises the steps of obtaining a dark part gray level average value of a human face of a gray level image, obtaining a human face illumination texture map according to the human face illumination overexposure image and the dark part gray level average value of the human face, and specifically comprises the following steps:
s401: the gray average value H avg of the dark part of the gray image is calculated by the following steps:
wherein H A (p, q) represents the gray value of the pixel with the coordinates of (p, q) in the gray image, and (p, q) is the coordinates of the pixel with the brightness < H 1 in the face area in the gray image, and (p, q) ∈t represents the set of the coordinates of the pixel with the brightness < H 1 in the face area in the gray image; g A denotes the number of dark coordinate points of the grayscale image; h 1 is a bright portion threshold, which is a fixed value, and when the gray level of a pixel is above the bright portion threshold, the pixel is determined to be located at the bright portion of the face, and since the bright portion of the face affects the calculation of the whole gray level of the face, the pixel of the bright portion needs to be removed when the gray level average value of the dark portion is obtained; preferably, H 1 is 200; the bright part of the human face is the part of the human face contour which is greatly influenced by light;
S402: the face illumination texture map is obtained, and the calculation formula is as follows:
HW(i,j)=HM(i,j)-Havg
wherein, H W (i, j) represents the gray value of the pixel with the coordinate (i, j) in the face illumination texture map, H M (i, j) represents the gray value of the pixel with the coordinate (i, j) in the face illumination exposure image, and H avg represents the dark part gray average value of the gray image;
S5: the gray level image and the face illumination texture image are subtracted pixel by pixel to obtain a face image with highlight removed, and the specific method is H final(i,j)=HZ(i,j)-HW (i, j); wherein, H final (i, j) represents the gray value of the pixel with the coordinate (i, j) in the face image after the highlight is removed, H Z (i, j) represents the gray value of the pixel with the coordinate (i, j) in the gray image, and H W (i, j) represents the gray value of the pixel with the coordinate (i, j) in the face illumination texture map;
s6: converting the face image with highlight removed into a numerical feature vector through a feature extraction algorithm;
s7: and carrying out matching judgment on the numerical value characteristic vector and a standard numerical value characteristic vector in the database, and determining whether the numerical value characteristic vector and the standard numerical value characteristic vector are the same person.
The standard numerical feature vector is the numerical feature vector of a person's face stored in the database; it is obtained by processing the face enrolled into the system with the feature extraction algorithm.
It should be noted that the feature extraction algorithm of S6 may include:
1. Local Binary Pattern (LBP): LBP is a feature extraction method widely used for face recognition. The gray value of each pixel is compared with its surrounding neighborhood pixels to generate a binary code, and the binary codes of all pixels are combined into a feature vector;
2. Principal Component Analysis (PCA): PCA is a statistics-based feature extraction method. The face image matrix is decomposed with dimensionality reduction to obtain a set of principal components, and each face image is then projected onto the principal components to obtain a feature vector representing the face;
3. Face key points (landmarks): key points in the face image, such as the positions of the eyes, nose and mouth, are detected, and the coordinates of the key points are used as elements of the feature vector. The position information of the key points provides the shape and structural characteristics of the face;
4. Deep learning models: a Convolutional Neural Network (CNN) or another deep learning architecture is trained on a large-scale face data set to extract high-level features from the face image, which serve as the numerical feature vector representation.
These methods are all mature prior art and can convert the highlight-removed face image into a numerical feature vector. A suitable feature extraction method is selected so that discriminative features can be extracted from the face image for the subsequent face recognition task.
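As one possible realisation of S6, the sketch below uses the Local Binary Pattern option with scikit-image; the neighbourhood size, radius and cell grid are common defaults rather than values fixed by the patent.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature_vector(face_img, points=8, radius=1, grid=(8, 8)):
    """Sketch of S6: per-cell histograms of uniform LBP codes, concatenated."""
    lbp = local_binary_pattern(face_img, points, radius, method="uniform")
    n_bins = points + 2                                   # number of uniform LBP codes
    features = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            features.append(hist / max(cell.size, 1))     # normalised cell histogram
    return np.concatenate(features)                       # numerical feature vector
```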
It should be noted that S7 may be implemented by the following method:
1. Euclidean distance (Euclidean Distance): calculate the Euclidean distance between the face feature vector to be identified and the existing face feature vector. The smaller the Euclidean distance, the more similar the two feature vectors are and the more likely they belong to the same person. A threshold may be set, and when the Euclidean distance is smaller than the threshold, the vectors are judged to belong to the same person;
2. Cosine similarity (Cosine Similarity): calculate the cosine similarity between the face feature vector to be identified and the existing face feature vector. The cosine similarity lies in the range [−1, 1]; the closer the value is to 1, the more similar the two feature vectors are and the more likely they belong to the same person. A threshold may likewise be set, and when the cosine similarity is greater than the threshold, the vectors are judged to belong to the same person;
3. Support Vector Machine (SVM): construct a support vector machine classifier with the existing face feature vectors, and input the face feature vector to be identified into the classifier for classification. This method can learn a good decision boundary that distinguishes the feature vectors of different people;
4. Deep learning models (e.g., Siamese Network or Triplet Network): use a deep learning model to match the face features. Such models can learn more complex feature representations and thereby improve recognition accuracy.
These methods are mature prior art and can be used to match the face features to be identified against the existing face features to determine whether they belong to the same person.
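The two distance-based tests (options 1 and 2 above) can be sketched as follows; the threshold values are placeholders, since the patent only states that a threshold is set.

```python
import numpy as np

def euclidean_match(query, standard, threshold=0.6):
    """Option 1: same person if the Euclidean distance falls below the threshold."""
    query, standard = np.asarray(query, float), np.asarray(standard, float)
    return float(np.linalg.norm(query - standard)) < threshold

def cosine_match(query, standard, threshold=0.5):
    """Option 2: same person if the cosine similarity (range [-1, 1]) exceeds the threshold."""
    query, standard = np.asarray(query, float), np.asarray(standard, float)
    sim = query @ standard / (np.linalg.norm(query) * np.linalg.norm(standard))
    return float(sim) > threshold
```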
The standard numerical feature vector in the database in S7 refers to a numerical vector representing a face feature extracted from a set of pre-acquired face images by a specific feature extraction algorithm.
The standard numerical feature vectors in the database are obtained by training and learning a set of face images by a special face recognition system or algorithm. This process includes the following steps:
Step one: data acquisition: collect a group of representative face image samples of the target person through a camera or other equipment;
Step two: face detection and alignment: preprocess the acquired face images and ensure, through face detection and alignment algorithms, that the face in each image is in a standard position and posture;
Step three: feature extraction: extract a set of numerical vectors representing the facial features from the aligned face images using a feature extraction algorithm;
Step four: database construction: store the standard numerical feature vectors obtained by the feature extraction together with the corresponding face identity information (such as name and ID) in a database to build the face recognition database.
When implementing S7, the standard numerical feature vector of the person to be identified is retrieved from the database according to that person's ID, and the numerical feature vector is matched against this standard numerical feature vector to determine whether they belong to the same person.
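A minimal sketch of the enrolment and look-up flow described above; the function and parameter names are illustrative, and `extract`/`match` stand for any of the S6 and S7 options (e.g. `lbp_feature_vector` and `euclidean_match` from the sketches above).

```python
def enroll(database, person_id, face_image, extract):
    """Steps one to four: store the standard numerical feature vector under the person's ID."""
    database[person_id] = extract(face_image)

def verify(database, person_id, face_image, extract, match):
    """S7: look up the standard vector by the claimed ID and match it against
    the vector extracted from the highlight-removed face image."""
    if person_id not in database:
        return False
    return match(extract(face_image), database[person_id])
```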

Claims (3)

1. A face recognition method, characterized by comprising the following specific steps:
s1: acquiring a face image to be processed, and processing the face image into a gray image;
s2: carrying out roughness calculation on the gray level image to obtain the roughness of the human face;
S3: obtaining a face illumination overexposure image according to the face roughness and the gray level image; the specific process is as follows:
S301: calculating a pixel point with a gray value of > H 0 in the background of the gray image, wherein H 0 is a light source gray threshold value and is a fixed value;
s302: calculating the gray value of a pixel point in the face illumination overexposure image:
wherein H_M(i, j) represents the gray value of the pixel with coordinates (i, j) in the face illumination overexposure image, and H_Z(i, j) represents the gray value of the pixel with coordinates (i, j) in the gray image; H_G(u, v) represents the gray value of the pixel with coordinates (u, v) in the gray image, (u, v) ∈ C, and C represents the set of coordinates of the light source points; α represents the face roughness, and N_G represents the total number of light source points;
S4: obtaining a dark part gray average value of the gray level image, and obtaining a face illumination texture map according to the dark part gray level average value of the face illumination overexposure image and the gray level image; the specific process is as follows:
S401: calculating the dark-part gray average value H_avg of the gray image:
wherein H_A(p, q) represents the gray value of the pixel with coordinates (p, q) in the gray image, (p, q) ∈ T, T represents the set of coordinates of the pixels whose brightness is less than H_1 in the face region of the gray image, and H_1 is a bright-part threshold and a fixed value; G_A represents the number of dark-part coordinate points of the gray image;
S402: the face illumination texture map is obtained, and the calculation formula is as follows:
H_W(i,j) = H_M(i,j) − H_avg
wherein H_W(i, j) represents the gray value of the pixel with coordinates (i, j) in the face illumination texture map, H_M(i, j) represents the gray value of the pixel with coordinates (i, j) in the face illumination overexposure image, and H_avg represents the dark-part gray average value of the gray image;
S5: subtracting the face illumination texture map from the gray image pixel by pixel to obtain a highlight-removed face image;
S6: converting the highlight-removed face image into a numerical feature vector through a feature extraction algorithm;
S7: matching the numerical feature vector against a standard numerical feature vector in the database to determine whether they belong to the same person.
2. The face recognition method according to claim 1, wherein the specific step of S2 includes:
S201: smoothing the gray level image by Gaussian filtering to obtain a smooth gray level image;
S202: subtracting the smoothed gray image from the gray image pixel by pixel to obtain a rough point map, the specific calculation formula being I_C = I_H − I_P, wherein I_H represents the gray image, I_P represents the smoothed gray image, and I_C represents the rough point map;
s203: according to the rough point diagram, the face roughness is calculated by the following specific calculation method:
wherein N_C represents the number of pixels with gray value 0 within the face contour of the rough point map, and N_RL represents the total number of pixels within the face contour of the rough point map.
3. The face recognition method according to claim 2, wherein the specific calculation method of S5 is: H_final(i, j) = H_Z(i, j) − H_W(i, j), wherein H_final(i, j) represents the gray value of the pixel with coordinates (i, j) in the highlight-removed face image, H_Z(i, j) represents the gray value of the pixel with coordinates (i, j) in the gray image, and H_W(i, j) represents the gray value of the pixel with coordinates (i, j) in the face illumination texture map.
CN202311850597.4A 2023-12-28 2023-12-28 Face recognition method Active CN118038515B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311850597.4A CN118038515B (en) 2023-12-28 2023-12-28 Face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311850597.4A CN118038515B (en) 2023-12-28 2023-12-28 Face recognition method

Publications (2)

Publication Number Publication Date
CN118038515A CN118038515A (en) 2024-05-14
CN118038515B (en) 2024-08-02

Family

ID=90997857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311850597.4A Active CN118038515B (en) 2023-12-28 2023-12-28 Face recognition method

Country Status (1)

Country Link
CN (1) CN118038515B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118430046A (en) * 2024-05-15 2024-08-02 广东银讯信息服务有限公司 Face recognition data processing method and system before payment
CN118429316A (en) * 2024-05-15 2024-08-02 北京国信新源细胞生物科技有限公司 In-vitro induced pluripotent stem cell defect morphology detection and recognition processing method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120422A (en) * 2021-12-01 2022-03-01 重庆第二师范学院 Expression recognition method and device based on local image data fusion
CN116664422A (en) * 2023-05-19 2023-08-29 网易(杭州)网络有限公司 Image highlight processing method and device, electronic equipment and readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI226020B (en) * 2003-05-16 2005-01-01 Benq Corp Device and method to determine exposure setting for image of scene with human-face area
CN105956582B (en) * 2016-06-24 2019-07-30 深圳市唯特视科技有限公司 A kind of face identification system based on three-dimensional data
CN107392866B (en) * 2017-07-07 2019-09-17 武汉科技大学 A kind of facial image local grain Enhancement Method of illumination robust
US11436704B2 (en) * 2019-01-14 2022-09-06 Nvidia Corporation Weighted normalized automatic white balancing
CN115170832A (en) * 2022-07-25 2022-10-11 江南大学 Weak texture surface microstructure feature extraction method based on visible light single image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120422A (en) * 2021-12-01 2022-03-01 重庆第二师范学院 Expression recognition method and device based on local image data fusion
CN116664422A (en) * 2023-05-19 2023-08-29 网易(杭州)网络有限公司 Image highlight processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN118038515A (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN111401372B (en) Method for extracting and identifying image-text information of scanned document
CN118038515B (en) Face recognition method
CN115082419B (en) Blow-molded luggage production defect detection method
Shen et al. Improving OCR performance with background image elimination
CN111915704A (en) Apple hierarchical identification method based on deep learning
US20080193020A1 (en) Method for Facial Features Detection
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN114359998B (en) Identification method of face mask in wearing state
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
CN110689003A (en) Low-illumination imaging license plate recognition method and system, computer equipment and storage medium
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN111709305A (en) Face age identification method based on local image block
KR20080079798A (en) Method of face detection and recognition
CN112818983B (en) Method for judging character inversion by using picture acquaintance
CN108154116A (en) A kind of image-recognizing method and system
KR100703528B1 (en) Apparatus and method for recognizing an image
CN109145875B (en) Method and device for removing black frame glasses in face image
US20060088212A1 (en) Data analysis device and data recognition device
CN111626150A (en) Commodity identification method
CN114758139B (en) Method for detecting accumulated water in foundation pit
EP0632404B1 (en) Pattern recognition by generating and using zonal features and anti-features
KR100893086B1 (en) Method for detecting face robust to illumination change
CN114820707A (en) Calculation method for camera target automatic tracking
CN113505784A (en) Automatic nail annotation analysis method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant