CN115862121B - Face quick matching method based on multimedia resource library - Google Patents
- Publication number
- CN115862121B CN115862121B CN202310152207.XA CN202310152207A CN115862121B CN 115862121 B CN115862121 B CN 115862121B CN 202310152207 A CN202310152207 A CN 202310152207A CN 115862121 B CN115862121 B CN 115862121B
- Authority
- CN
- China
- Prior art keywords
- gray level
- gray
- image
- region
- levels
- Prior art date: 2023-02-23
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y02D — Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of image data processing, in particular to a face quick matching method based on a multimedia resource library, which comprises the following steps: intercepting a static image in a multimedia resource library, acquiring the corresponding gray image, and acquiring the gray histogram of the gray image; marking a head region in the gray image as a region of interest; defining an adjustment coefficient for each gray level according to whether the gray level belongs to the region of interest, constructing a mapping function from the adjustment coefficients of the different gray levels and the gray histogram, and obtaining the non-key gray levels in the gray histogram based on the mapping function; acquiring the interval width of each gray level in the region of interest from the number of non-key gray levels and the gray level differences of adjacent gray levels in the region of interest, and enhancing the gray image based on the interval widths to obtain an enhanced image; and extracting facial features from the enhanced image to perform face matching. The method can improve the accuracy of face matching.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a face quick matching method based on a multimedia resource library.
Background
Face matching is a biometric recognition technology that performs identity recognition based on facial feature information. It is widely applied in fields such as community access control, secure payment, employee attendance and intelligent ticket checking. Existing face matching takes various forms but can basically be divided into three steps: recognizing the face image and framing the face position; extracting facial features; and finally matching against a resource library, where face matching succeeds if the matching result meets a certain threshold.
However, existing face matching is real-time face matching, while face recognition in practice can also be applied to person tracking and recognition in surveillance videos or other multimedia images. In real-time face matching, the distance and angle between the lens and the face, and even the illumination environment, can be changed flexibly; a recorded multimedia image, by contrast, only permits face recognition at the shooting distance and sharpness fixed at capture time, so recognition accuracy drops greatly.
Multimedia images are therefore usually preprocessed by a preprocessing module, but conventional preprocessing algorithms invariably lose image detail. With histogram equalization, the most commonly used contrast enhancement, a blurred face does go from unrecognizable to recognizable, but the enhanced facial features may be incomplete and deformed, which causes errors in face matching.
Disclosure of Invention
In order to solve the problem of face matching errors caused by the poor enhancement effect of histogram equalization, the invention aims to provide a face quick matching method based on a multimedia resource library. The adopted technical scheme is as follows:
the invention provides a face quick matching method based on a multimedia resource library, which comprises the following steps:
intercepting a static image in a multimedia resource library, acquiring a corresponding gray level image, and acquiring a gray level histogram of the gray level image;
marking a head region in the grayscale image as a region of interest; defining an adjustment coefficient for each gray level according to whether the gray level belongs to the region of interest, constructing a mapping function from the adjustment coefficients of the different gray levels and the gray histogram, and obtaining non-key gray levels in the gray histogram based on the mapping function;
acquiring the interval width of each gray level in the region of interest from the number of non-key gray levels and the gray level differences of adjacent gray levels in the region of interest, and enhancing the gray image based on the interval widths to obtain an enhanced image;
and extracting facial features based on the enhanced image to perform face matching.
Preferably, the step of defining the adjustment coefficient of each gray level according to whether the gray level belongs to the region of interest includes:
obtaining the occurrence probability of each gray level from the gray histogram and taking any gray level as a first gray level; if the first gray level is a gray level in the region of interest, the adjustment coefficient of the first gray level is the reciprocal of the product of its occurrence probability and a preset constant;
if the first gray level is not a gray level in the region of interest, acquiring a polynomial fitting curve of the gray histogram and calculating the derivative of the curve at the first gray level: if the derivative is less than zero, the adjustment coefficient of the first gray level is the ratio of the occurrence probability of the first gray level to that of the previous gray level; if the derivative is not less than zero, the adjustment coefficient of the first gray level is 1.
Preferably, the expression of the mapping function is:
$$S_j = N \sum_{i=1}^{j} \alpha_i p_i$$
where $S_j$ denotes the mapping function of the $j$-th gray level; $p_i$ denotes the occurrence probability of the $i$-th gray level among the gray levels up to and including the $j$-th; $N$ is a preset constant; and $\alpha_i$ denotes the adjustment coefficient of the $i$-th gray level.
Preferably, the step of obtaining the non-key gray levels in the gray histogram based on the mapping function includes:
taking any gray level as a target gray level, rounding the mapping result of the target gray level obtained based on the mapping function to obtain a first result, and rounding the mapping result of the previous adjacent gray level of the target gray level to obtain a second result; if the first result is the same as the second result, the target gray level corresponding to the first result is a non-key gray level;
the gray level with zero occurrence probability in the gray histogram is a non-key gray level.
Preferably, the step of acquiring the interval width of each gray level in the region of interest based on the number of non-key gray levels and the gray level difference of adjacent gray levels in the region of interest includes:
counting the maximum gray level and the minimum gray level other than the non-key gray levels in the gray histogram; constructing a calculation formula of the interval width from the difference between the maximum and minimum gray levels, the number of non-key gray levels, and the gray level difference of adjacent gray levels in the region of interest, wherein the calculation formula of the interval width is:
$$w_k = M \left( 1 - \frac{x_k - x_{k-1}}{L_{\max} - L_{\min}} \right)$$
where $w_k$ denotes the interval width of the $k$-th gray level; $M$ denotes the number of all non-key gray levels in the gray histogram; $x_k$ denotes the gray value of the $k$-th gray level in the region of interest; $x_{k-1}$ denotes the gray value of the $(k-1)$-th gray level in the region of interest; $L_{\max}$ denotes the maximum gray level in the gray histogram other than the non-key gray levels; and $L_{\min}$ denotes the minimum gray level in the gray histogram other than the non-key gray levels.
Preferably, the step of enhancing the gray image based on the interval width to obtain an enhanced image includes:
rounding down the interval width of each gray level in the region of interest; for the first gray level in the region of interest, shifting the gray level to the left by a translation scale equal to its rounded-down interval width;
for any gray level in the region of interest other than the first, recording it as a marked gray level and calculating the difference between the marked gray level and the adjacent previous gray level in the region of interest: if the difference is greater than the rounded-down interval width of the marked gray level, shifting the marked gray level to the left by that interval width; if the difference is not greater, shifting the marked gray level to the right by that interval width. Each gray level in the region of interest is translated and stretched in this way, yielding the enhanced image of the gray image.
Preferably, the step of extracting facial features based on the enhanced image for face matching includes:
filtering out isolated points in the enhanced image, and acquiring feature points in the enhanced image after isolated-point removal based on the LSD algorithm; extracting image feature points of the object to be identified and matching them with the feature points in the enhanced image, the face matching being successful when the proportion of matched feature points reaches a preset value.
Preferably, the step of marking the head region in the grayscale image as the region of interest includes:
segmenting the gray image with a trained classifier to obtain a head region, and convolving the head region with a convolution kernel of a preset size to obtain the convolution value of each pixel point in the head region; calculating the mean square error of the convolution values of all pixel points in the head region, and taking the head region as the region of interest if the normalized mean square error is not smaller than a preset threshold;
the convolution value of each pixel point is acquired as follows: with the pixel point as the center of the convolution kernel, the mean gray value of all pixel points within the kernel range is the convolution value of that pixel point.
The invention has the following beneficial effects. The embodiment of the invention improves the enhancement effect of histogram equalization by optimizing and modifying the existing procedure, and performing subsequent face matching on the enhanced image effectively improves the success rate and accuracy of face matching. In the optimized histogram equalization, the region of interest in the gray image is acquired and the adjustment coefficients of the gray levels are defined according to whether each gray level belongs to the region of interest, so that the analysis of the image focuses on the region of interest and retains as much effective information as possible. A mapping function is constructed from the adjustment coefficients of the different gray levels and the gray histogram of the gray image, the gray levels in the gray histogram are screened by the mapping function to obtain the non-key gray levels, and the adjustment coefficients thus steer the mapping results so that effective information is not lost during equalization. The interval width of each gray level in the region of interest is then obtained from the number of non-key gray levels and the gray level differences of adjacent gray levels in the region of interest, and the gray image is enhanced with these interval widths to obtain the enhanced image.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a face quick matching method based on a multimedia resource library according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of a specific implementation, structure, characteristics and effects of a face rapid matching method based on a multimedia resource library according to the invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the face quick matching method based on the multimedia resource library provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a face quick matching method based on a multimedia resource library according to an embodiment of the present invention is shown, and the method includes the following steps:
step S100, a static image is intercepted in a multimedia resource library, a corresponding gray level image is obtained, and a gray level histogram of the gray level image is obtained.
Face recognition can be applied to person tracking and recognition in surveillance videos or other multimedia images. In real-time face matching, the distance and angle between the lens and the face, and even the illumination environment, can be changed flexibly; a recorded multimedia image, however, only permits face recognition at a fixed shooting distance and sharpness, so recognition accuracy is poor. Multimedia images are therefore usually enhanced. But when ordinary histogram equalization is used for image enhancement, image detail is lost while contrast is enhanced: although the enhanced image becomes recognizable, the enhancement effect is poor and the enhanced facial features may be incomplete and deformed, which is unfavorable for subsequent face matching.
This embodiment adjusts the histogram equalization process. First, a still image of the relevant person is intercepted from the multimedia resource library. For convenience of analysis and processing, the still image is converted to grayscale to obtain the corresponding gray image, which reduces the amount of calculation, suppresses redundant color information, and facilitates the subsequent extraction of facial features. Then the gray histogram of each gray image is acquired; its abscissa is the gray levels and its ordinate is the occurrence probability of each gray level in the gray image. The acquisition of a gray histogram is a known means and is not detailed in this embodiment. Subsequent analysis is performed on the gray histogram of each gray image.
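A minimal sketch of this step in Python, assuming OpenCV and NumPy are available; the function and variable names are illustrative, not from the embodiment:

```python
import cv2
import numpy as np

def gray_histogram(image_path: str):
    """Load a still image, convert it to grayscale, and return the gray
    image together with the occurrence probability of each gray level."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # grayscale conversion
    counts = np.bincount(gray.ravel(), minlength=256)     # pixel count per gray level
    probs = counts / counts.sum()                         # occurrence probabilities
    return gray, probs
```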
Step S200, marking a head region in the grayscale image as a region of interest; defining an adjustment coefficient for each gray level according to whether the gray level belongs to the region of interest, constructing a mapping function from the adjustment coefficients of the different gray levels and the gray histogram, and obtaining the non-key gray levels in the gray histogram based on the mapping function.
To identify face information accurately, image preprocessing is an indispensable link. Although face recognition algorithms are by now well developed, various potential problems remain because of their differing operating conditions: for clear images, face matching almost never errs or misidentifies, and matching failures occur only when image quality is low and the environment is complex. Optimizing face matching therefore requires an emphasis on image preprocessing to guarantee the operating environment of the face matching algorithm.
A conventional image enhancement algorithm stretches or equalizes gray levels based on the gray histogram to enhance image contrast. Histogram equalization eliminates gray levels with fewer pixels according to a mapping function and accumulates them onto gray levels with more pixels, so the intervals between gray levels grow and contrast is enhanced. However, the gray levels to be eliminated may carry detailed information of the person's face; although the image becomes relatively clear after elimination, the face is distorted. Therefore, when optimizing face matching, the gray image is first divided to obtain the face region.
The gray image is segmented with a trained classifier to obtain a head region, and the head region is convolved with a convolution kernel of a preset size to obtain the convolution value of each pixel point in the head region. The mean square error of the convolution values of all pixel points in the head region is calculated, and the head region is taken as the region of interest if the normalized mean square error is not smaller than a preset threshold. The convolution value of each pixel point is acquired as follows: with the pixel point as the center of the convolution kernel, the mean gray value of all pixel points within the kernel range is the convolution value of that pixel point.
Specifically, a classifier is set up to divide the face region in the gray image; it consists of a trained person-contour recognition network and a face determination module. The person-contour recognition network is trained on Human Pose Evaluator human-contour image data and recognizes the contour of a person in the image; the contour is represented by six parts, namely the head, the torso, the left and right upper arms and the left and right forearms, each part represented by a line segment. The person-contour recognition network is in fact a convolutional neural network (CNN) whose loss function is the cross-entropy loss; a large number of person images annotated in the Human Pose Evaluator manner are fed into the network for training, with contour pixel points labeled 1 and non-contour pixel points labeled 0. The detailed training process is a known technique and is not repeated here. The contour of the person in each gray image can thereby be obtained.
A head region is then framed within the person contour of the gray image; the head region is obtained together with the contour. The contour information differs with the person's facing angle; for example, the back of a person's head carries no face information. The embodiment of the invention therefore traverses the framed head region with a 5×5 convolution kernel. The kernel carries no element values: the convolution simply sums and averages the gray values of the pixel points within the 5×5 neighborhood of each pixel point and takes the mean as the convolution value of the central pixel point, thereby obtaining the convolution values of all pixel points in the head region. The mean square error of these convolution values is then calculated and normalized to give the convolution mean square error of the head region.
Since the back of a head region carries no face information, its gray information is uniform. Whether the head region is a face region is therefore judged by comparing its convolution mean square error with an empirical threshold, set to 0.3 in this embodiment (implementers may adjust it in other embodiments). When the convolution mean square error of the head region is less than 0.3, the head region is judged to contain no face information; when it is not less than 0.3, the head region is judged to contain face information, and a head region containing face information is recorded as the region of interest.
The purpose of determining face information from the convolution values is only to acquire the region of interest where face information exists; detailed face information need not be determined, so the gray image may be divided roughly, as long as it is not distorted to the point of being completely unrecognizable.
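A sketch of this region-of-interest decision, assuming SciPy's `uniform_filter` for the 5×5 mean convolution; the normalization scheme (standard deviation of the convolution values divided by half the gray range) is an assumption, since the embodiment does not specify how the mean square error is normalized:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def is_region_of_interest(head_region: np.ndarray, threshold: float = 0.3) -> bool:
    """Convolution value of each pixel = mean gray value of its 5x5
    neighborhood; the head region counts as a region of interest when
    the normalized mean square error of these values reaches 0.3."""
    conv = uniform_filter(head_region.astype(np.float64), size=5)  # 5x5 mean "convolution"
    mse = np.mean((conv - conv.mean()) ** 2)   # mean square error of the convolution values
    normalized = np.sqrt(mse) / 127.5          # assumed normalization to roughly [0, 1]
    return normalized >= threshold
```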
Further, adjustment coefficients are set to control which gray levels the histogram equalization mapping function retains and which it eliminates. Because the gray histogram carries no position information, the gray levels contained in the region of interest are marked in the gray histogram and recorded as gray levels of interest. The essence of histogram equalization's quality enhancement is to enlarge the differences between gray levels, and an image equalized directly loses detail information; the embodiment of the invention therefore makes the gray level differences as large as possible on the premise that the gray levels of interest are retained, and controls the mapping function's elimination and retention of gray levels accordingly. The mapping function is specifically as follows:
$$S_j = N \sum_{i=1}^{j} \alpha_i p_i$$
where $S_j$ denotes the mapping result of the $j$-th gray level; $p_i$ denotes the occurrence probability of the $i$-th gray level in the gray image; $N$ is the preset constant, taken in the embodiment of the invention as the number of distinct gray levels appearing in the gray image, at most 256; and $\alpha_i$ denotes the adjustment coefficient of the $i$-th gray level.
The mapping function $S_j$ is in essence an increasing function, accumulating the term $N \alpha_i p_i$ one gray level at a time, which yields the mapping result of every gray level in the gray image. When the rounded mapping result of the $j$-th gray level differs from the rounded mapping result of the $(j-1)$-th gray level, the $j$-th gray level is retained; conversely, when the two rounded mapping results are equal, the $j$-th gray level is eliminated. In this way part of the gray levels in the gray histogram are retained through the mapping results while part are sacrificed, leaving gaps in the histogram that facilitate the subsequent gray stretching operation. The eliminated gray levels are recorded as non-key gray levels.
Occurrence probabilities of the different gray levels are obtained from the gray histogram, and any gray level is taken as a first gray level. If the first gray level is a gray level in the region of interest, its adjustment coefficient is the reciprocal of the product of its occurrence probability and a preset constant. If the first gray level is not a gray level in the region of interest, a polynomial fitting curve of the gray histogram is acquired and its derivative at the first gray level is calculated: if the derivative is less than zero, the adjustment coefficient of the first gray level is the ratio of the occurrence probability of the first gray level to that of the previous gray level; if the derivative is not less than zero, the adjustment coefficient of the first gray level is 1.
The adjustment coefficients are specifically:
$$\alpha_i = \begin{cases} \dfrac{1}{N p_i}, & x_i = x_g \\ \dfrac{p_i}{p_{i-1}}, & x_i \neq x_g \ \text{and} \ f'(x_i) < 0 \\ 1, & x_i \neq x_g \ \text{and} \ f'(x_i) \geq 0 \end{cases}$$
where $x_i$ denotes the gray value of the $i$-th gray level; $x_g$ denotes the gray value of the $g$-th gray level of interest, so that $x_i = x_g$ means the $i$-th gray level is a gray level of interest; $p_i$ denotes the occurrence probability of the $i$-th gray level; $p_{i-1}$ denotes the occurrence probability of the $(i-1)$-th gray level; and $f'(x_i)$ denotes the derivative, at the $i$-th gray level, of the polynomial curve fitted to the gray histogram.
When $x_i = x_g$, i.e., when the gray value of the $i$-th gray level equals that of the $g$-th gray level of interest, the $i$-th gray level is a gray level of interest and must be retained; retention requires that its mapping result be enlarged, but one cannot simply enlarge the mapping result more than twofold to guarantee it. The most stable way is therefore to set the adjustment coefficient of the $i$-th gray level to $\frac{1}{N p_i}$, which makes its contribution to the mapping result equal to 1 in every case. When $x_i \neq x_g$ and $f'(x_i) < 0$, the occurrence probability of the $i$-th gray level in the gray histogram is smaller than that of the adjacent previous gray level (the ordinate of the gray histogram being the occurrence probability of each gray level), and the $i$-th gray level is not a gray level of interest; it therefore need not be retained, and since fewer pixels are distributed on it than on its predecessor, it is a gray level that can be considered for sacrifice to facilitate the subsequent stretching of image contrast. Its adjustment coefficient is accordingly set to $\frac{p_i}{p_{i-1}}$; because the derivative of the fitted curve at the $i$-th gray level is negative, $p_i < p_{i-1}$, so when the probability of the $i$-th gray level is very small compared with its predecessor, the adjustment coefficient takes a small value and the $i$-th gray level is treated as one that can be eliminated to achieve gray stretching without affecting the region-of-interest information; when the adjustment coefficient is close to 1, the mapping result changes little and a gray level that would originally be retained remains retained. Finally, when $x_i \neq x_g$ and $f'(x_i) \geq 0$, the adjustment coefficient of the $i$-th gray level is 1.
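The adjustment coefficients might be computed as below; the polynomial degree used to fit the gray histogram is an assumption (the embodiment does not specify it), and `roi_levels` is an illustrative name for the set of gray values of interest:

```python
import numpy as np

def adjustment_coefficients(probs: np.ndarray, roi_levels: set, degree: int = 8) -> np.ndarray:
    """alpha_i per gray level: 1/(N*p_i) for levels of interest;
    p_i/p_{i-1} for non-interest levels on a falling histogram slope;
    1 otherwise."""
    N = np.count_nonzero(probs)                         # distinct gray levels present
    fit = np.polynomial.Polynomial.fit(np.arange(256), probs, degree)
    deriv = fit.deriv()                                 # derivative of the fitted curve
    alpha = np.ones(256)
    for i in range(256):
        if probs[i] == 0:
            continue                                    # absent level, non-key anyway
        if i in roi_levels:
            alpha[i] = 1.0 / (N * probs[i])             # contribution to S_j becomes exactly 1
        elif i > 0 and probs[i - 1] > 0 and deriv(i) < 0:
            alpha[i] = probs[i] / probs[i - 1]          # candidate for sacrifice
    return alpha
```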
As an example, suppose the mapping result of the $(j-1)$-th gray level obtained from the mapping function is 49.8, i.e., the accumulation up to the $(j-1)$-th gray level is 49.8, which rounds to 50. If the mapping result of the $j$-th gray level is 50.2, its rounded value is also 50; the rounded mapping results of the $j$-th and $(j-1)$-th gray levels are equal, so the $j$-th gray level should be eliminated. But if the $j$-th gray level is a gray level of interest, its adjustment coefficient forces its contribution to be 1, so its mapping result becomes $49.8 + 1 = 50.8$, whose rounded value 51 differs from the rounded result 50 of the $(j-1)$-th gray level, and the $j$-th gray level is retained.
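A sketch of the accumulation and screening of non-key gray levels, following the rounding comparison just described; `alpha` is the coefficient array from the previous sketch:

```python
import numpy as np

def non_key_levels(probs: np.ndarray, alpha: np.ndarray) -> set:
    """Accumulate S_j = N * sum_i alpha_i * p_i over the occurring gray
    levels; a level is non-key if its rounded mapping result equals the
    previous level's, or if its occurrence probability is zero."""
    N = np.count_nonzero(probs)
    non_key, s, prev_rounded = set(), 0.0, None
    for j in range(256):
        if probs[j] == 0:
            non_key.add(j)                              # zero-probability levels are non-key
            continue
        s += N * alpha[j] * probs[j]                    # increasing accumulation
        r = round(s)
        if prev_rounded is not None and r == prev_rounded:
            non_key.add(j)                              # merged with predecessor: eliminated
        prev_rounded = r
    return non_key
```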
Step S300, the interval width of each gray level in the region of interest is obtained according to the number of non-key gray levels and the gray level differences of adjacent gray levels in the region of interest, and the gray image is enhanced based on the interval widths to obtain an enhanced image.
The gray levels to be eliminated from the gray histogram are acquired in step S200, and the number of all non-key gray levels in the gray histogram is counted. In the embodiment of the invention the non-key gray levels include, besides the eliminated gray levels, the gray levels with zero occurrence probability in the gray histogram, i.e., gray levels whose ordinate value is 0. An interval width is then allocated to each gray level of interest based on the number of non-key gray levels and the gray level differences between adjacent gray levels in the region of interest. The maximum and minimum gray levels other than the non-key gray levels in the gray histogram are counted, and the calculation formula of the interval width is constructed from the difference between the maximum and minimum gray levels, the number of non-key gray levels, and the gray level differences of adjacent gray levels in the region of interest. Specifically, the interval width is calculated as:
$$w_k = M \left( 1 - \frac{x_k - x_{k-1}}{L_{\max} - L_{\min}} \right)$$
where $w_k$ denotes the interval width of the $k$-th gray level in the region of interest, i.e., the interval width of the $k$-th gray level of interest; $M$ denotes the number of all non-key gray levels in the gray histogram; $x_k$ denotes the gray value of the $k$-th gray level in the region of interest; $x_{k-1}$ denotes the gray value of the $(k-1)$-th gray level in the region of interest; $L_{\max}$ denotes the maximum gray level in the gray histogram other than the non-key gray levels; and $L_{\min}$ denotes the minimum gray level in the gray histogram other than the non-key gray levels.
$(x_k - x_{k-1})$ denotes the gray level difference between the $k$-th gray level in the region of interest and its adjacent $(k-1)$-th gray level; since the gray levels in the gray histogram are arranged in increasing order, $x_k$ is necessarily greater than $x_{k-1}$. $(L_{\max} - L_{\min})$ is the difference between the maximum and minimum gray levels other than the non-key gray levels, i.e., the maximum span of gray levels in the gray histogram, and is used to normalize the gray level difference between adjacent gray levels; normalizing by the maximum span accounts for the limit on how far the gray image can be stretched. The smaller the value of $(x_k - x_{k-1})$, the smaller the interval between the adjacent gray levels and the more that interval needs to be stretched; the larger the value, the less stretching the interval requires. Therefore $1 - \frac{x_k - x_{k-1}}{L_{\max} - L_{\min}}$ serves as the stretching degree, the number $M$ of non-key gray levels is the total amount that can be distributed for stretching, and the product of the two gives the specific stretching amount, i.e., the interval width $w_k$ of the $k$-th gray level.
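Illustratively, the interval widths could be computed as follows; the treatment of the first gray level of interest (taking $x_{k-1} = L_{\min}$ for $k = 1$) is an assumption, since the embodiment only states that the first gray level is shifted left by its interval width:

```python
import numpy as np

def interval_widths(roi_levels_sorted: list, non_key: set) -> dict:
    """w_k = M * (1 - (x_k - x_{k-1}) / (L_max - L_min)) for each gray
    level of interest; M is the count of non-key gray levels."""
    M = len(non_key)                                    # total stretchable amount
    kept = [v for v in range(256) if v not in non_key]
    L_max, L_min = max(kept), min(kept)                 # extremes excluding non-key levels
    span = max(L_max - L_min, 1)                        # guard against a degenerate histogram
    widths = {}
    for k, x_k in enumerate(roi_levels_sorted):
        # Assumption: for the first level of interest, x_{k-1} = L_min.
        x_prev = roi_levels_sorted[k - 1] if k > 0 else L_min
        widths[x_k] = M * (1 - (x_k - x_prev) / span)
    return widths
```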
The interval width of each gray level in the region of interest is rounded down. For the first gray level in the region of interest, the gray level is shifted left by a translation scale equal to its rounded-down interval width. For any other gray level in the region of interest, recorded as a marked gray level, the difference between the marked gray level and the adjacent previous gray level in the region of interest is calculated: if the difference is greater than the rounded-down interval width of the marked gray level, the marked gray level is shifted left by that width; if not, it is shifted right by that width. All gray levels in the region of interest are translated and stretched in this way to obtain the enhanced image of the gray image.
Specifically, the interval width of each gray level of interest is rounded down, giving the rounded interval width of each gray level of interest in the region of interest. If the gray levels of a gray image are distributed between 0 and 150, there are 105 gray levels of room for stretching horizontally to the right; a stretched image does gain contrast, but the original gray image would be severely distorted and overexposed. Likewise, shifting gray levels too far to the left causes severe distortion and excessive darkness. In this embodiment, stretching and shifting are therefore performed within the original gray distribution interval of the gray image.
Proceeding sequentially from the first gray level in the region of interest: when $k = 1$, the corresponding interval width $w_1$ is obtained and the first gray level of interest is shifted left by $w_1$ gray levels. When $k > 1$, it is judged whether the interval between the $k$-th and the $(k-1)$-th gray levels is larger than the corresponding interval width $w_k$: if the interval is larger than $w_k$, the $k$-th gray level is shifted left by $w_k$ gray levels; if the interval is not larger than $w_k$, the $k$-th gray level is shifted right by $w_k$ gray levels. When the interval width $w_k$ equals 0, the contrast at the $k$-th gray level is already sufficient and no stretching is needed. Each gray level in the region of interest is analyzed and processed in this way until stretching is completed for all of them; the gray image is thereby stretch-enhanced, and the stretched image is recorded as the enhanced image.
The interval widths $w_k$ used in the stretching operation here are the values obtained after rounding down.
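A sketch of the translation stretching as a lookup table, assuming the shift decisions operate on the original gray values; collision handling between shifted levels is left out for brevity:

```python
import numpy as np

def stretch_enhance(gray: np.ndarray, roi_levels_sorted: list, widths: dict) -> np.ndarray:
    """Shift each gray level of interest by its floored interval width
    (left for the first level or when the gap to the previous level of
    interest exceeds the width, right otherwise) and remap the image."""
    lut = np.arange(256, dtype=np.int32)                # identity for untouched levels
    prev = None
    for k, x in enumerate(roi_levels_sorted):
        w = int(np.floor(widths[x]))                    # floored interval width
        if w == 0:
            new_x = x                                   # contrast already sufficient
        elif k == 0 or x - prev > w:
            new_x = x - w                               # shift left by w gray levels
        else:
            new_x = x + w                               # shift right by w gray levels
        lut[x] = int(np.clip(new_x, 0, 255))
        prev = x
    return lut[gray].astype(np.uint8)                   # apply as a lookup table
```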
Step S400, face matching is performed based on the face features extracted from the enhanced image.
The enhanced image obtained by histogram equalization and stretching retains all gray levels of interest, and the gaps between the gray levels of interest in the region of interest have been adaptively stretched within the allowable range, achieving the purpose of enhancing the contrast of the region of interest. Isolated points, however, inevitably exist in the enhanced gray image, and stretching makes the original isolated points even more prominent, so all isolated points in the enhanced image are handled first: the isolated points are filtered out, and feature points are acquired in the isolated-point-free enhanced image based on the LSD algorithm. Image feature points of the object to be identified are then extracted and matched with the feature points in the enhanced image, and the face matching succeeds when the proportion of matched feature points reaches a preset value.
Specifically, in the embodiment of the invention a 3×3 filter is applied to each isolated point: the gray values of all pixel points within the 3×3 neighborhood of the isolated point are summed and averaged, and the mean is assigned to the isolated point to update its gray value. An isolated point is judged as follows: for any pixel point, if none of its eight neighboring pixel points has the same gray value as the pixel point itself, the pixel point is judged to be an isolated point.
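A direct (unoptimized) sketch of this isolated-point rule, skipping border pixels for simplicity:

```python
import numpy as np

def remove_isolated_points(img: np.ndarray) -> np.ndarray:
    """A pixel is isolated when none of its eight neighbors shares its
    gray value; replace it with the mean of its 3x3 neighborhood."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            if np.count_nonzero(patch == img[y, x]) == 1:  # only the center matches itself
                out[y, x] = int(patch.mean())              # 3x3 mean updates the gray value
    return out
```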
Further, face matching is performed on the enhanced image after isolated-point elimination. For the region of interest in the enhanced image, the embodiment of the invention detects the facial contour and the edges of the facial features with the Canny operator, then extracts at least 68 feature points from the face with the LSD algorithm. The essence of the LSD algorithm is to detect local contours in the image; the feature points are the endpoints on both sides of a stable contour and are unaffected by small changes in facial expression or by the person's facing angle. Taking at least 68 basic feature points follows the existing Dlib face detection library and gives wider applicability and higher reliability. During face matching, the facial image feature points of the object to be identified are obtained in advance, and these feature points are matched in the region of interest of each multimedia image of the multimedia resource library by means of an image pyramid with translation and rotation; the face matching succeeds when the proportion of matched feature points reaches a preset value, set to 95% in this embodiment (implementers may adjust it in other embodiments). The Canny operator and the LSD algorithm are known means and are not described in detail.
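A sketch of the final matching criterion only; the `tol` pixel tolerance is an assumption, and the feature points are assumed to have been aligned beforehand by the image-pyramid, translation and rotation search described above:

```python
import numpy as np

def faces_match(query_pts: np.ndarray, candidate_pts: np.ndarray,
                ratio: float = 0.95, tol: float = 3.0) -> bool:
    """Declare a successful match when the fraction of query feature
    points with a candidate point within `tol` pixels reaches `ratio`
    (95% in this embodiment)."""
    matched = sum(
        np.linalg.norm(candidate_pts - p, axis=1).min() <= tol
        for p in query_pts
    )
    return matched / len(query_pts) >= ratio
```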
In summary, in the embodiment of the invention, a still image is intercepted from the multimedia resource library, the corresponding gray image is obtained, and the gray histogram of the gray image is acquired; a head region in the gray image is marked as the region of interest; an adjustment coefficient is defined for each gray level according to whether it belongs to the region of interest, a mapping function is constructed from the adjustment coefficients of the different gray levels and the gray histogram, and the non-key gray levels in the gray histogram are obtained from the mapping function; the interval width of each gray level in the region of interest is acquired from the number of non-key gray levels and the gray level differences of adjacent gray levels in the region of interest, and the gray image is enhanced with these interval widths to obtain the enhanced image; facial features are extracted from the enhanced image for face matching. The reliability and accuracy of face matching are thereby effectively improved.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing description of the preferred embodiments of the present invention is not intended to be limiting, but rather, any modifications, equivalents, improvements, etc. that fall within the principles of the present invention are intended to be included within the scope of the present invention.
Claims (5)
1. A face quick matching method based on a multimedia resource library is characterized by comprising the following steps:
intercepting a static image in a multimedia resource library, acquiring a corresponding gray level image, and acquiring a gray level histogram of the gray level image;
marking a head region in the grayscale image as a region of interest; defining the adjustment coefficient of each gray level according to whether the gray level belongs to the region of interest, constructing a mapping function according to the adjustment coefficients of different gray levels and the gray histogram, and obtaining non-key gray levels in the gray histogram based on the mapping function;
acquiring the interval width of each gray level in the region of interest according to the number of the non-key gray levels and the gray level difference of adjacent gray levels in the region of interest, and enhancing the gray image based on the interval width to obtain an enhanced image;
extracting facial features based on the enhanced image to perform face matching;
the step of obtaining the non-key gray levels in the gray histogram based on the mapping function includes:
taking any gray level as a target gray level, rounding the mapping result of the target gray level obtained based on the mapping function to obtain a first result, and rounding the mapping result of the previous adjacent gray level of the target gray level to obtain a second result; if the first result is the same as the second result, the target gray level corresponding to the first result is a non-key gray level;
the gray level with zero occurrence probability in the gray histogram is a non-key gray level;
the step of acquiring the interval width of each gray level in the region of interest based on the number of non-key gray levels and the gray level difference of adjacent gray levels in the region of interest includes:
counting the maximum gray level and the minimum gray level other than the non-key gray levels in the gray histogram; constructing a calculation formula of the interval width from the difference between the maximum and minimum gray levels, the number of non-key gray levels, and the gray level difference of adjacent gray levels in the region of interest, wherein the calculation formula of the interval width is:
$$w_k = M \left( 1 - \frac{x_k - x_{k-1}}{L_{\max} - L_{\min}} \right)$$
where $w_k$ denotes the interval width of the $k$-th gray level; $M$ denotes the number of all non-key gray levels in the gray histogram; $x_k$ denotes the gray value of the $k$-th gray level in the region of interest; $x_{k-1}$ denotes the gray value of the $(k-1)$-th gray level in the region of interest; $L_{\max}$ denotes the maximum gray level in the gray histogram other than the non-key gray levels; and $L_{\min}$ denotes the minimum gray level in the gray histogram other than the non-key gray levels;
the step of enhancing the gray image based on the interval width to obtain an enhanced image comprises the following steps:
rounding down the interval width of each gray level in the region of interest; for the first gray level in the region of interest, shifting the gray level to the left by a translation scale equal to its rounded-down interval width;
for any gray level in the region of interest other than the first, recording it as a marked gray level and calculating the difference between the marked gray level and the adjacent previous gray level in the region of interest: if the difference is greater than the rounded-down interval width of the marked gray level, shifting the marked gray level to the left by that interval width; if the difference is not greater, shifting the marked gray level to the right by that interval width. Each gray level in the region of interest is translated and stretched in this way, yielding the enhanced image of the gray image.
2. The method for fast face matching based on a multimedia resource library according to claim 1, wherein the step of defining the adjustment coefficient of each gray level according to whether the gray level belongs to the region of interest comprises:
obtaining the occurrence probability of each gray level from the gray histogram and taking any gray level as a first gray level; if the first gray level is a gray level in the region of interest, the adjustment coefficient of the first gray level is the reciprocal of the product of its occurrence probability and a preset constant;
if the first gray level is not a gray level in the region of interest, acquiring a polynomial fitting curve of the gray histogram and calculating the derivative of the curve at the first gray level: if the derivative is less than zero, the adjustment coefficient of the first gray level is the ratio of the occurrence probability of the first gray level to that of the previous gray level; if the derivative is not less than zero, the adjustment coefficient of the first gray level is 1.
3. The method for fast matching a face based on a multimedia resource library according to claim 2, wherein the expression of the mapping function is:
$$S_j = N \sum_{i=1}^{j} \alpha_i p_i$$
where $S_j$ denotes the mapping function of the $j$-th gray level; $p_i$ denotes the occurrence probability of the $i$-th gray level among the gray levels up to and including the $j$-th; $N$ is a preset constant; and $\alpha_i$ denotes the adjustment coefficient of the $i$-th gray level.
4. The method for fast face matching based on a multimedia resource library according to claim 1, wherein the step of extracting facial features based on the enhanced image for face matching comprises:
filtering out isolated points in the enhanced image and obtaining feature points in the enhanced image after isolated-point removal; extracting image feature points of an object to be identified and matching them with the feature points in the enhanced image, the face matching being successful when the proportion of matched feature points reaches a preset value.
5. The method for fast face matching based on a multimedia resource library according to claim 1, wherein said step of marking a head region in said gray scale image as a region of interest comprises:
segmenting the gray image with a trained classifier to obtain a head region, and convolving the head region with a convolution kernel of a preset size to obtain the convolution value of each pixel point in the head region; calculating the mean square error of the convolution values of all pixel points in the head region, and taking the head region as the region of interest if the normalized mean square error is not smaller than a preset threshold;
the convolution value of each pixel point is acquired as follows: with the pixel point as the center of the convolution kernel, the mean gray value of all pixel points within the kernel range is the convolution value of that pixel point.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202310152207.XA (CN115862121B) | 2023-02-23 | 2023-02-23 | Face quick matching method based on multimedia resource library |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| CN115862121A | 2023-03-28 |
| CN115862121B | 2023-05-09 |
Family
ID=85658646
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202310152207.XA (CN115862121B, Active) | Face quick matching method based on multimedia resource library | 2023-02-23 | 2023-02-23 |
Country Status (1)
| Country | Link |
| --- | --- |
| CN | CN115862121B (en) |
Families Citing this family (1)
| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN116718353B | 2023-06-01 | 2024-05-28 | 信利光电股份有限公司 | Automatic optical detection method and device for display module |
Citations (2)
| Publication Number | Priority Date | Publication Date | Title |
| --- | --- | --- | --- |
| CN102867176A | 2012-09-11 | 2013-01-09 | Face image normalizing method |
| CN106778613A | 2016-12-16 | 2017-05-31 | Identity authentication method and device based on face segmentation region matching |
Family Cites Families (6)
| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US8254674B2 | 2004-10-28 | 2012-08-28 | DigitalOptics Corporation Europe Limited | Analyzing partial face regions for red-eye detection in acquired digital images |
| US7840066B1 | 2005-11-15 | 2010-11-23 | University Of Tennessee Research Foundation | Method of enhancing a digital image by gray-level grouping |
| CN101561874B | 2008-07-17 | 2011-10-26 | 清华大学 | Method for recognizing face images |
| US20100278423A1 | 2009-04-30 | 2010-11-04 | Yuji Itoh | Methods and systems for contrast enhancement |
| CN104809450B | 2015-05-14 | 2018-01-26 | 郑州大学 | Wrist vena identification system based on online extreme learning machine |
| CN111832405A | 2020-06-05 | 2020-10-27 | 天津大学 | Face recognition method based on HOG and depth residual error network |
Legal Events
| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |