CN108875623B - Face recognition method based on image feature fusion contrast technology - Google Patents
- Publication number
- CN108875623B (application CN201810593767.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- gray
- feature
- eye
- calculating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Facsimile Image Signal Circuits (AREA)
- Bioinformatics & Computational Biology (AREA)
- Image Processing (AREA)
Abstract
The invention provides a face recognition method based on an image feature fusion and comparison technology, which comprises the following steps: step one, acquiring a real-time image sample with an electronic device; step two, calculating the pixel gray values of the image sample to obtain a gray image, performing threshold segmentation on the gray image, then performing histogram equalization, and finally removing isolated noise with a filtering method to obtain a preprocessed image sample; step three, performing portrait analysis and feature extraction on the image sample, and calculating the area of the portrait in the image to obtain the eye proportions of the portrait and correct the eye feature vector; and step four, comparing the similarity between the original image of the target person and the comparison head portrait, and identifying the target person.
Description
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition method based on an image feature fusion and comparison technology.
Background
Face recognition is a biometric technology that identifies a person from facial feature information. It plays an especially important role in police work, such as apprehending criminals and searching for missing persons. However, current face recognition systems must cover a wide acquisition range and a large number of subjects, which makes it difficult to reliably single out a particular person such as a suspect.
Disclosure of Invention
The invention provides a face recognition method based on an image feature fusion and comparison technology that extracts facial features and corrects the important eye features, thereby improving image quality and achieving higher matching accuracy.
The invention also designs and develops a face recognition method based on the image feature fusion contrast technology, which comprises the following steps:
step one, acquiring a real-time image sample with an electronic device;
step two, calculating the pixel gray values of the image sample to obtain a gray image, performing threshold segmentation on the gray image, then performing histogram equalization, and finally removing isolated noise with a filtering method to obtain a preprocessed image sample;
step three, performing portrait analysis and feature extraction on the image sample, and calculating the area of the portrait in the image to obtain the eye proportions of the portrait and correct the eye feature vector;
and step four, comparing the similarity of the original image of the target person and the comparison head portrait, and identifying the target person.
Preferably, the image sample is video or picture information.
Preferably, the pixel gray value in step two is calculated as a weighted combination of the three color components,
where R is the red component contained in the image, G is the green component, and B is the blue component.
Preferably, the threshold-segmented binary image in step two is:
g(x, y) = 255 if f(x, y) ≥ t, and g(x, y) = 0 if f(x, y) < t,
where f(x, y) is the original grayscale image, g(x, y) is the binary image after threshold segmentation, and t is the gray segmentation threshold.
Preferably, the histogram equalization process comprises:
step a, listing the gray levels f_k (k = 0, 1, 2, ..., L-1) of the original and transformed images, where L is the total number of gray levels;
step b, counting the occurrences of each gray level of the histogram:
P_f(f_k) = n_k / n,
where n_k (k = 0, 1, 2, ..., L-1) is the number of pixels at each gray level of the original image, n is the total number of pixels in the original image, L is the total number of gray levels, and P_f(f_k) is the frequency of occurrence of the gray level;
step c, calculating the cumulative distribution function:
C(f_k) = Σ_{j=0..k} P_f(f_j) = (1/n) Σ_{j=0..k} n_j,
where n_k, n, and L are as defined above;
step d, calculating the gray level g_i of the image after histogram equalization:
g_i = INT[(g_max - g_min) C(f) + g_min + 0.5],
where g_i (i = 0, 1, 2, ..., 255) is the gray level of the image after histogram equalization, INT is the rounding operation, g_max is the maximum gray value, and g_min is the minimum gray value;
step e, calculating the gray-level frequency of the output image:
P_g(g_i) = n_i / n,
where n_i (i = 0, 1, 2, ..., 255) is the number of pixels at each gray level; the histogram-equalized image is obtained by mapping each original gray level f_k to g_i.
Preferably, the filtering method employs a median filtering algorithm.
Preferably, the third step comprises:
step A, constructing a mathematical model with a principal component analysis (PCA) algorithm and obtaining a feature set for each part of the human face with the K-L transform; the features form a coordinate system in which each coordinate axis is a feature image, and the feature set comprises at least: eyes, nose, mouth, eye distance, eyebrows;
step B, extracting the regions corresponding to the eye features, and calculating the proportion of the eyes relative to the face;
step C, comprehensively analyzing the other features in the feature set according to the area ratio of the two eyes to obtain the face angle, and correcting the eye feature vector.
Preferably, the correction calculation formula in step C is:
where ω_i(i, m) is the corrected eye feature vector, e_i is the area ratio of the two eyes, D_i is the eye distance, β is the eye angle, s is the larger of the two eye areas, π is the circular constant, and the remaining term is the feature scale factor,
which is defined in terms of n, the number of face features in the feature set, z_j, a face feature vector, f_j, the eye feature vector, and λ_j, an equalization coefficient.
Preferably, the similarity determination between the original image and the comparison head portrait in step four comprises:
calculating the Euclidean distance between the original image and the comparison image:
Φ(Y, D) = √( Σ_{i=1..n} (y_i - d_i)² ),
where Y is the feature vector set of the original image, D is the feature vector set of the comparison image, y_i is a single feature vector of the original image, d_i is the corresponding single feature vector of the comparison image, and n is the number of face features in the feature set;
when Φ(Y, D) ≤ σ, the match is considered successful and recognition is complete,
where σ is a set feature threshold.
Advantages of the invention
The face recognition method based on the image feature fusion and comparison technology extracts facial features and corrects the important eye features, which improves image quality and yields higher matching accuracy.
Drawings
Fig. 1 is a flowchart of a face recognition method based on an image feature fusion and comparison technique according to the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it by referring to the description.
As shown in fig. 1, the present invention provides a face recognition method based on image feature fusion and comparison technology, which is implemented according to the following steps:
step S110: in the invention, firstly, an image is collected, the image is collected in a certain range of the position of a specific place by using an electronic eye, and the collected image comprises video or picture information.
Step S120: the image preprocessing is used for processing the acquired original image according to the following processes:
step S121: the method comprises the steps of graying an image, namely firstly inputting collected image data to obtain values of three components of RGB of an original image, then calculating a pixel gray value through a formula, and finally obtaining a gray image through the pixel gray value.
Wherein, R is a red component contained in the image, G is a green component, and B is a blue component;
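For reference, a minimal NumPy sketch of this graying step is given below; the 0.299/0.587/0.114 luminance weights are an assumption, since the patent's own formula is rendered as an image and is not reproduced in the text.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to a uint8 gray image.

    The common luminance weights Gray = 0.299R + 0.587G + 0.114B are an
    assumption; the patent's exact formula is not reproduced in the text.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```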
step S122: and (4) binarization, namely changing the gray value of the gray image obtained in the step 2.1 into a black-and-white image with 0 and 255 left by a dynamic threshold method.
Wherein f (x, y) is an original grayscale image; g (x, y) is a binary image after threshold segmentation, and t is a gray value, namely a segmentation threshold
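A sketch of this threshold segmentation follows; a fixed threshold t stands in for the unspecified dynamic threshold method (Otsu's method would be a typical dynamic choice).

```python
import numpy as np

def binarize(gray: np.ndarray, t: int = 128) -> np.ndarray:
    """Threshold segmentation: g(x, y) = 255 where f(x, y) >= t, else 0.

    The patent calls for a dynamic threshold method without naming one,
    so the fixed default t used here is only a placeholder.
    """
    return np.where(gray >= t, 255, 0).astype(np.uint8)
```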
Step S123: histogram equalization. The histogram equalization process comprises:
step a, listing the gray levels f_k (k = 0, 1, 2, ..., L-1) of the original and transformed images, where L is the total number of gray levels;
step b, counting the occurrences of each gray level of the histogram:
P_f(f_k) = n_k / n,
where n_k (k = 0, 1, 2, ..., L-1) is the number of pixels at each gray level of the original image, n is the total number of pixels in the original image, L is the total number of gray levels, and P_f(f_k) is the frequency of occurrence of the gray level;
step c, calculating the cumulative distribution function:
C(f_k) = Σ_{j=0..k} P_f(f_j) = (1/n) Σ_{j=0..k} n_j,
where n_k, n, and L are as defined above;
step d, calculating the gray level g_i of the image after histogram equalization:
g_i = INT[(g_max - g_min) C(f) + g_min + 0.5],
where g_i (i = 0, 1, 2, ..., 255) is the gray level of the image after histogram equalization, INT is the rounding operation, g_max is the maximum gray value, and g_min is the minimum gray value;
step e, calculating the gray-level frequency of the output image:
P_g(g_i) = n_i / n,
where n_i (i = 0, 1, 2, ..., 255) is the number of pixels at each gray level; the histogram-equalized image is obtained by mapping each original gray level f_k to g_i.
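Steps a through e translate directly into a few lines of NumPy; the sketch below assumes the output range g_min = 0, g_max = 255, which the text leaves implicit.

```python
import numpy as np

def equalize_hist(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization following steps a-e above."""
    n = gray.size
    # Step b: frequency of each gray level, P_f(f_k) = n_k / n
    p_f = np.bincount(gray.ravel(), minlength=levels) / n
    # Step c: cumulative distribution function C(f_k)
    c = np.cumsum(p_f)
    # Step d: g_i = INT[(g_max - g_min) * C(f) + g_min + 0.5]
    g_min, g_max = 0, levels - 1  # assumed output range
    mapping = np.floor((g_max - g_min) * c + g_min + 0.5).astype(np.uint8)
    # Step e: map each original level f_k to g_i to obtain the equalized image
    return mapping[gray]
```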
Step S124: median filtering. Isolated noise is removed from the image obtained in step S123. The implementation slides a template over the image: the center of the template is aligned with a pixel position in the image; the gray values of the pixels under the template are read and sorted from small to large; the middle value is taken; and that value is assigned to the pixel at the template's center position.
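The described template procedure is an ordinary k × k median filter; a direct, unoptimized sketch:

```python
import numpy as np

def median_filter(gray: np.ndarray, k: int = 3) -> np.ndarray:
    """Slide a k x k template over the image, sort the gray values under
    it from small to large, and assign the middle value to the pixel at
    the template's center position."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")  # replicate the borders
    out = np.empty_like(gray)
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + k, x:x + k].ravel()
            out[y, x] = np.sort(window)[window.size // 2]  # middle value
    return out
```

Chained together using the sketches above, in the order stated in step two (graying, threshold segmentation, histogram equalization, filtering), the preprocessing of step S120 reads, on a synthetic stand-in capture:

```python
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in capture
sample = median_filter(equalize_hist(binarize(to_gray(frame), t=128)))
```

Equalizing after binarization follows the order the patent states, though in practice equalization is more often applied to the gray image before thresholding.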
Step S130: face feature extraction.
Step S131: principal component analysis (PCA) is applied to the preprocessed image for feature extraction. The K-L transform yields the principal components of each part of the human face; the principal components form a coordinate system in which each coordinate axis is a characteristic face image. At recognition time, the image to be recognized only needs to be projected into this space to obtain a set of projection vectors, which are then matched against the images in the face library. The feature set comprises at least: eyes, nose, mouth, eye distance, eyebrows.
assuming Y is a random variable of dimension n, then Y can be expressed as:
Y = Σ_{i=1..n} a_i Φ_i,
or, converted to matrix form,
Y = Φ a.
Taking the basis vectors Φ_i to be orthogonal, Φ is an orthogonal matrix:
Φ^T Φ = I.
Multiplying both sides by Φ^T gives
a = Φ^T Y, with components a_i = Φ_i^T Y.
For the components of a to be mutually uncorrelated, consider the autocorrelation matrix of the random vector,
R = E[Y Y^T],
from which
R = Φ E[a a^T] Φ^T.
Uncorrelated components of a require E[a_i a_j] = λ_j δ_ij, i.e. E[a a^T] = Λ = diag(λ_1, ..., λ_n); transforming the relation gives
R Φ = Φ Λ,
that is,
R Φ_j = λ_j Φ_j (j = 1, 2, ..., n),
where λ_j is an eigenvalue of R and Φ_j is the corresponding eigenvector.
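A minimal sketch of this K-L construction follows, with faces supplied as rows of a sample matrix. The mean-centering step and the use of np.linalg.eigh are conventional choices not spelled out in the text, and for large images the equivalent m × m "snapshot" trick would replace the full n × n matrix.

```python
import numpy as np

def kl_transform(samples: np.ndarray, num_components: int):
    """K-L transform (PCA) over m flattened face images (rows of `samples`).

    Estimates R from the mean-centered samples and returns the leading
    eigenvectors Phi (the feature images) and the projections a = Phi^T Y.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    r = centered.T @ centered / len(samples)  # sample estimate of R (n x n)
    eigvals, eigvecs = np.linalg.eigh(r)      # solves R Phi_j = lambda_j Phi_j
    order = np.argsort(eigvals)[::-1][:num_components]
    phi = eigvecs[:, order]                   # columns are the coordinate axes
    a = centered @ phi                        # a_i = Phi_i^T Y for each sample
    return phi, mean, a
```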
Step S132: the regions corresponding to the eye features are extracted, and the proportion of the eyes relative to the face is calculated.
Step S133: the other features in the feature set are comprehensively analyzed according to the area ratio of the two eyes to obtain the face angle, and the eye feature vector is corrected. The correction calculation formula is:
where ω_i(i, m) is the corrected eye feature vector, e_i is the area ratio of the two eyes, D_i is the eye distance, β is the eye angle, s is the larger of the two eye areas, π is the circular constant, and the remaining term is the feature scale factor,
which is defined in terms of n, the number of face features in the feature set, z_j, a face feature vector, f_j, the eye feature vector, and λ_j, the equalization coefficient, whose value is 0.813.
Step S140: face recognition. The Euclidean distance between the original image and each comparison image is calculated:
Φ(Y, D) = √( Σ_{i=1..n} (y_i - d_i)² ),
where Y is the feature vector set of the original image, D is the feature vector set of the comparison image, y_i is a single feature vector of the original image, d_i is the corresponding single feature vector of the comparison image, and n is the number of face features in the feature set.
When Φ(Y, D) ≤ σ, the match is considered successful and recognition is complete,
where σ is a set feature threshold whose value is determined by the screening requirement; in general it is taken as the mean of all Euclidean distances computed over the comparison image library.
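A sketch of this matching rule, with σ taken as the gallery-wide mean distance as described above; feature extraction is assumed already done, and the row-per-image gallery layout is an assumption.

```python
import numpy as np

def recognize(y: np.ndarray, gallery: np.ndarray):
    """Match the feature vector set y (length n) against each row of
    `gallery`, the feature vector sets of the comparison image library.

    Phi(Y, D) is the Euclidean distance; sigma is the mean of all the
    distances over the library, following the text above.
    """
    dists = np.sqrt(((gallery - y) ** 2).sum(axis=1))  # Phi(Y, D) per image
    sigma = dists.mean()                               # threshold sigma
    best = int(np.argmin(dists))
    if dists[best] <= sigma:
        return best, float(dists[best])  # successful match: index and distance
    return None, None                    # no match within the threshold
```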
When a target person disappears from a specific place, the image S of the target person, the position L of the place, and the place name are recorded. Images within that range are then acquired continuously, the acquisition information comprising the images, their positions, and the position names. Once the target person re-enters a certain range around the specific place, the target person is identified by comparing the face features of the newly acquired images with the stored image of the target person.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in the various fields to which it pertains, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the details shown and described herein, provided they do not depart from the general concept defined by the appended claims and their equivalents.
Claims (7)
1. A face recognition method based on image feature fusion contrast technology is characterized by comprising the following steps:
step one, acquiring a real-time image sample with an electronic device;
step two, calculating the pixel gray values of the image sample to obtain a gray image, performing threshold segmentation on the gray image, then performing histogram equalization, and finally removing isolated noise with a filtering method to obtain a preprocessed image sample;
step three, performing portrait analysis and feature extraction on the image sample, and calculating the area of the portrait in the image to obtain the eye proportions of the portrait and correct the eye feature vector;
step four, comparing the similarity of the original image of the target person and the comparison head portrait, and identifying the target person;
the third step comprises:
step A, constructing a mathematical model with a principal component analysis (PCA) algorithm and obtaining a feature set for each part of the human face with the K-L transform; the features form a coordinate system in which each coordinate axis is a feature image, and the feature set comprises at least: eyes, nose, mouth, eye distance, eyebrows;
step B, extracting the regions corresponding to the eye features, and calculating the proportion of the eyes relative to the face;
step C, comprehensively analyzing the other features in the feature set according to the area ratio of the two eyes to obtain the face angle, and correcting the eye feature vector;
the correction calculation formula in step C is:
where ω_i(i, m) is the corrected eye feature vector, e_i is the area ratio of the two eyes, D_i is the eye distance, β is the eye angle, s is the larger of the two eye areas, π is the circular constant, and the remaining term is its feature scale factor.
2. The method for recognizing the human face based on the image feature fusion and comparison technology as claimed in claim 1, wherein the image sample is video or picture information.
4. The face recognition method based on the image feature fusion and comparison technology according to claim 2, wherein the threshold-segmented binary image in step two is:
g(x, y) = 255 if f(x, y) ≥ t, and g(x, y) = 0 if f(x, y) < t,
where f(x, y) is the original grayscale image, g(x, y) is the binary image after threshold segmentation, and t is the gray segmentation threshold.
5. The face recognition method based on image feature fusion contrast technology according to claim 2, wherein the histogram equalization process comprises:
step a, listing the gray levels f_k (k = 0, 1, 2, ..., L-1) of the original and transformed images, where L is the total number of gray levels;
step b, counting the occurrences of each gray level of the histogram:
P_f(f_k) = n_k / n,
where n_k (k = 0, 1, 2, ..., L-1) is the number of pixels at each gray level of the original image, n is the total number of pixels in the original image, L is the total number of gray levels, and P_f(f_k) is the frequency of occurrence of the gray level;
step c, calculating the cumulative distribution function:
C(f_k) = Σ_{j=0..k} P_f(f_j) = (1/n) Σ_{j=0..k} n_j,
where n_k, n, and L are as defined above;
step d, calculating the gray level g_i of the image after histogram equalization:
g_i = INT[(g_max - g_min) C(f) + g_min + 0.5],
where g_i (i = 0, 1, 2, ..., 255) is the gray level of the image after histogram equalization, INT is the rounding operation, g_max is the maximum gray value, and g_min is the minimum gray value;
step e, calculating the gray-level frequency of the output image:
P_g(g_i) = n_i / n,
where n_i (i = 0, 1, 2, ..., 255) is the number of pixels at each gray level.
6. The face recognition method based on the image feature fusion contrast technology according to claim 1, wherein the filtering method adopts a median filtering algorithm.
7. The face recognition method based on image feature fusion and comparison technology according to claim 1, wherein the similarity determination between the original image and the comparison head portrait in the fourth step comprises:
calculating the Euclidean distance between the original image and the comparison image:
Φ(Y, D) = √( Σ_{i=1..n} (y_i - d_i)² ),
where Y is the feature vector set of the original image, D is the feature vector set of the comparison image, y_i is a single feature vector of the original image, d_i is the corresponding single feature vector of the comparison image, and n is the number of face features in the feature set;
when Φ(Y, D) ≤ σ, the match is considered successful and recognition is complete,
where σ is a set feature threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810593767.8A CN108875623B (en) | 2018-06-11 | 2018-06-11 | Face recognition method based on image feature fusion contrast technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875623A CN108875623A (en) | 2018-11-23 |
CN108875623B true CN108875623B (en) | 2020-11-10 |
Family
ID=64337944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810593767.8A Expired - Fee Related CN108875623B (en) | 2018-06-11 | 2018-06-11 | Face recognition method based on image feature fusion contrast technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875623B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110533609B (en) * | 2019-08-16 | 2022-05-27 | 域鑫科技(惠州)有限公司 | Image enhancement method, device and storage medium suitable for endoscope |
CN111914632B (en) * | 2020-06-19 | 2024-01-05 | 广州杰赛科技股份有限公司 | Face recognition method, device and storage medium |
CN113052497A (en) * | 2021-02-02 | 2021-06-29 | 浙江工业大学 | Criminal worker risk prediction method based on dynamic and static feature fusion learning |
CN114155480A (en) * | 2022-02-10 | 2022-03-08 | 北京智视数策科技发展有限公司 | Vulgar action recognition method |
CN114821712A (en) * | 2022-04-07 | 2022-07-29 | 上海应用技术大学 | Face recognition image fusion method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750526A (en) * | 2012-06-25 | 2012-10-24 | 黑龙江科技学院 | Identity verification and recognition method based on face image |
CN107742094A (en) * | 2017-09-22 | 2018-02-27 | 江苏航天大为科技股份有限公司 | Improve the image processing method of testimony of a witness comparison result |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100724932B1 (en) * | 2005-08-02 | 2007-06-04 | 삼성전자주식회사 | apparatus and method for extracting human face in a image |
2018-06-11: application CN201810593767.8A filed; granted as patent CN108875623B; status: not active, Expired - Fee Related.
Also Published As
Publication number | Publication date |
---|---|
CN108875623A (en) | 2018-11-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 2020-11-10; Termination date: 2021-06-11