CN112883759A - Method for detecting image noise of biological characteristic part - Google Patents

Method for detecting image noise of biological characteristic part

Info

Publication number
CN112883759A
CN112883759A (application CN201911197942.2A)
Authority
CN
China
Prior art keywords
image
effective
area
region
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911197942.2A
Other languages
Chinese (zh)
Other versions
CN112883759B (en)
Inventor
华丛一
任志浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201911197942.2A
Publication of CN112883759A
Application granted
Publication of CN112883759B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G06T5/94
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The application discloses a method for detecting image noise in an image of a biometric part. Image data within an effective region is extracted from the image containing the biometric part to obtain effective-region image data, where the effective region is the biometric part image excluding the regions that interfere with the judgment of noise; the pixel values of the pixels in the effective-region image data are convolved with a convolution kernel to obtain convolution values, and the average of these convolution values yields a noise degree representing the image noise of the biometric part. If the noise degree is greater than a preset noise threshold, the biometric part image is judged to be a noise image. The method quickly computes the noise degree of the effective region from a single frame; the processing flow is simple and efficient, and the scene applicability is very wide.

Description

Method for detecting image noise of biological characteristic part
Technical Field
The invention relates to the field of digital image processing, and in particular to a method for detecting noise in images of biometric parts.
Background
Image noise refers to unnecessary or redundant interference information in image data, such as the "snow" noise typically produced under insufficient light or similar conditions. The presence of noise seriously degrades image quality and must be corrected before image enhancement and classification.
Faces, palm prints, fingerprints, etc. have been widely used as biometric features. Taking the face image as an example, its quality directly affects the practical performance of face detection, face recognition and face liveness detection algorithms, and also bears on auxiliary modules such as exposure control, gain control and wide-dynamic-range settings; an important criterion of face image quality is noise.
In the prior art, image noise detection and elimination operate on the global image, i.e., on all data information in the image. Such methods are general-purpose, but they perform poorly when applied to biometric part images, and the detection schemes with excellent results tend to be highly complex and time-consuming.
Disclosure of Invention
In view of the above, the present invention provides a method for detecting noise in an image of a biometric part, so as to quickly detect noise in images containing such parts.
The invention provides a method for detecting image noise of a biological characteristic part, which comprises the following steps,
extracting image data within an effective region from the image containing the biometric part to obtain effective-region image data, where the effective region is the biometric part image excluding the regions that interfere with the judgment of noise;
convolving the pixel values of the pixels in the effective-region image data with a convolution kernel to obtain convolution values;
and calculating the average of all the convolution values to obtain a noise degree representing the image noise of the biometric part, and if the noise degree is greater than a preset noise threshold, judging the biometric part image to be a noise image.
Wherein the image containing the biometric part is a single-frame RGB image, the method further comprises
performing gray-scale processing on the biometric part image, or on the effective-region image, to obtain brightness image data.
Preferably, the method further includes performing equalization processing on the brightness image data.
Preferably, the image containing the biometric part is a face image containing facial features, and the effective region is the face image excluding the eyes.
Preferably, the extracting image data within the effective region includes,
acquiring the left-eye pupil coordinates, right-eye pupil coordinates, left mouth corner coordinates and right mouth corner coordinates in the face image, and calculating the mean of the 4 coordinates to obtain the face center position,
determining the width of an effective rectangular region according to the inter-pupil distance, and determining the height of the region from its width, to obtain an effective rectangular region that at least excludes the area from the eyes upward,
determining the position of the effective rectangular region based on the face center position and the extent of the region, so as to maximize the number of face image pixels the region encloses,
image data within the effective rectangular region is extracted.
Preferably, determining the width of the effective rectangular region according to the inter-pupil distance comprises taking the product of the inter-pupil distance and a first coefficient as the width of the region,
and determining the height of the region from its width comprises taking the product of a second coefficient and the width as the height of the region,
wherein the first coefficient is greater than 1 and the second coefficient is determined by the height from below the eyes to part of or the entire chin;
determining the position of the effective rectangular region based on the face center position and the extent of the region, so as to maximize the number of face image pixels it encloses, comprises
determining the position such that the height from the face-center ordinate to the chin occupies more than a first threshold of the region's height, the face center lies on the region's width-direction center line or deviates from it by less than a second threshold, and the image pixels enclosed by the region include the cheek area from below the eyes to part of or the entire chin.
Preferably, the extracting image data within the effective region includes,
extracting the face image contour and the lower eyelid image contour,
a first curve segment intersecting the left face image contour at a first intersection point and the right face image contour at a second intersection point is formed below the lower eyelid image contour,
and forming a closed curve from the first curve segment together with the portion of the face image contour between the first and second intersection points that includes the lower jaw, the closed area bounded by this curve being used as the effective region.
Preferably, convolving the pixel values of the pixels in the effective-region image data with the convolution kernel comprises
establishing a mask region, in which the pixel values of one or more contiguous pixels are set to 0, the mask region covering the nose and mouth area,
and, over the extracted effective-region image, convolving the pixel values of the pixels outside the mask region with the convolution kernel to obtain the convolution value of each convolution calculation, wherein the convolution kernel is a 3 × 3 matrix.
Preferably, the mask region is a rectangular mask region whose height is the distance from the nose tip to the valley of the lower mouth edge and whose width is the distance between the left and right mouth corners,
alternatively,
the mask region is a closed region bounded by an irregular polygon formed by connecting, in sequence, the left nose wing, the right mouth corner, the valley of the lower mouth edge, and the left mouth corner,
wherein the valley is the lowest point of the lower mouth edge.
Preferably, the extracting image data within the effective region includes,
removing a region including at least the eyes, nose and mouth from the face image to obtain the remaining face image, taking the remaining face image as the effective region, and extracting the image data within it;
wherein the convolution of pixel values in the effective-region image data with the convolution kernel is performed on the pixels of the extracted effective-region image, yielding the convolution value of each convolution calculation.
Preferably, removing the region including at least the eyes, the nose and the mouth from the face image comprises
removing a transverse strip-shaped region crossing the eyes and a longitudinal strip-shaped region perpendicular to it covering the nose and mouth,
wherein,
the width of the transverse strip-shaped region is at least the distance between the left and right outer canthi, and its height is at least the average of the vertical distances between the upper and lower eyelids of the two eyes;
the longitudinal strip-shaped region is a trapezoidal region formed by the outer envelope of the nose and mouth, or,
the width of the longitudinal strip-shaped region is the distance between the left and right mouth corners, and its height is the distance from the mean of the two lower-eyelid ordinates to the ordinate of the valley of the lower mouth edge, or the distance from the mean of the two pupil ordinates to that valley ordinate; the valley is the lowest point of the lower mouth edge.
The invention provides a device for detecting image noise of a biometric part, comprising a memory and a processor, wherein
the memory stores an application program,
and the processor executes the application program to implement the above steps of detecting image noise of the biometric part.
The invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above steps of detecting image noise of the biometric part.
For an image containing a biometric part, the method removes the image regions that interfere with the judgment of noise, extracts the effective image data, and performs detection through a custom-defined noise degree.
Drawings
Fig. 1 is a schematic flow chart of a method for detecting facial image noise according to an embodiment of the present disclosure.
Fig. 2a to 2d are schematic diagrams illustrating a relationship between the effective rectangular area and the face position.
Fig. 3 is a schematic diagram of an irregular effective area.
Fig. 4a and 4b are schematic views of a mask region.
FIG. 5 is a schematic diagram of convolution calculation.
Fig. 6 is a flowchart illustrating a method for detecting facial image noise according to another embodiment of the present application.
Fig. 7a and 7b are schematic diagrams of the remaining effective region after removing the region based on the face image.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The applicant has found that, in engineering applications of image recognition and detection based on biometric parts, existing image noise detection methods lack specificity to biometric part images: the influence of particular parts or regions of such an image on noise detection is ignored, so even noise detection schemes that perform well in general can hardly be expected to do so on biometric part images.
The present application designs noise detection for biometric part images around the influence of specific parts or regions on noise detection: the image data of specific regions that disturb noise detection is removed, and image noise detection is performed via a custom-defined noise degree computed over the image data of the retained region.
The following description takes noise detection of the face image as an example. It should be understood that the present application is not limited to face images; it also covers other biometric parts, including but not limited to fingerprints and palm prints, as well as equivalent and similar adaptations for noise detection on images of such parts.
Embodiment one:
referring to fig. 1, fig. 1 is a schematic flow chart of a method for detecting facial image noise according to an embodiment of the present application.
Step 101: acquire brightness image data of the image containing a face as first image data.
in general, a visible light (VIS) image is an RGB image including R, G, B components, and in order to avoid judgment of interference noise caused by R, G, B components in RGB data, conversion of an RGB image or a color image into a grayscale image is realized by eliminating hue and saturation information of the image while preserving brightness, that is, a graying process is specifically converted into:
Y=0.299*R+0.587*G+0.114*B
where Y is the image gray scale value and R, G, B is the component of the RGB image.
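As an illustrative sketch (not code from the patent; the function name and channel order are assumptions), the graying step can be written in Python with NumPy:

```python
import numpy as np

def rgb_to_luma(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to the gray value Y using the
    weights quoted above, discarding hue and saturation."""
    rgb = rgb.astype(np.float32)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(y, 0.0, 255.0).astype(np.uint8)
```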
Near-infrared (NIR) images are typically 8-bit single-channel data, i.e., Y data, and require no conversion.
Step 102: remove at least the eye region from the face image to extract the effective-region image of the face, and take the extracted image data as second image data.
The applicant has found that in noise detection of a face image, an eye region interferes with judgment of noise, and therefore, at least the eye region is removed.
In one implementation, the left-eye pupil coordinates, right-eye pupil coordinates, left mouth corner coordinates and right mouth corner coordinates in the face image are acquired, and the mean of the 4 coordinates is calculated to obtain the face center position,
expressed mathematically as:
fc_x=(eyel_x+eyer_x+mouthl_x+mouthr_x)/4
fc_y=(eyel_y+eyer_y+mouthl_y+mouthr_y)/4
the coordinates of the face center position are (fc _ x, fc _ y), the left eye pupil coordinates are (eye _ x, eye _ y), the right eye pupil coordinates (eye _ x, eye _ y), the left mouth angle coordinates (mouthl _ x, mouthl _ y), and the right mouth angle coordinates (mouthl _ x, mouthl _ y).
An effective rectangular region for extracting image data is formed by taking 1.6 to 2 times the inter-pupil distance as its width and at least 70% of that width as its height; expressed mathematically:
facewidth=w*(eyer_x-eyel_x)
faceheight=h*facewidth,
where facewidth is the width of the effective rectangular region, w is the first coefficient, faceheight is the height of the region, and h is the second coefficient; the first coefficient is greater than 1 and the second coefficient is determined by the height from below the eyes to part of or the entire chin.
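A minimal sketch of the two computations so far, face center from the four landmarks and the rectangle dimensions, assuming (x, y) pixel-coordinate tuples; the default coefficients are example values inside the stated ranges (w in 1.6-2, h ≥ 0.7), not values fixed by the patent:

```python
def face_center(eyel, eyer, mouthl, mouthr):
    # Mean of the four landmark coordinates: two pupils, two mouth corners.
    fc_x = (eyel[0] + eyer[0] + mouthl[0] + mouthr[0]) / 4.0
    fc_y = (eyel[1] + eyer[1] + mouthl[1] + mouthr[1]) / 4.0
    return fc_x, fc_y

def effective_rect_size(eyel_x, eyer_x, w=1.8, h=0.7):
    # facewidth = w * (inter-pupil distance); faceheight = h * facewidth.
    facewidth = w * (eyer_x - eyel_x)
    faceheight = h * facewidth
    return facewidth, faceheight
```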
Since the eye region interferes with the judgment of noise, the image within the effective rectangular region must at least exclude the eyes. To obtain data favorable to noise detection, the position of the effective rectangular region on the face can be determined from the face center position together with the region's height and width, with the goal of maximizing the number of face image pixels the region encloses. Preferably, the image within the region contains the cheek area and excludes everything from the eyes upward. A first positioning criterion is that the height from the face-center ordinate to the chin occupies more than a first threshold of the region's height; a second criterion is that the face center lies on the region's width-direction center line or deviates from it by less than a second threshold; the image data within the region is then extracted. For example, with the first threshold set so that the face-center-to-chin height occupies at least 50% of the region's height, and the deviation from the center line bounded by 20%, the effective rectangular region contains the cheek area from below the eyes to part of or the entire chin.
Referring to fig. 2a to 2d, which show the relationship between the effective rectangular region and the face position, i.e., the positioning of the region. Fig. 2a shows one case in which the height from the face-center ordinate to the chin occupies less than 50% of the region's height, and fig. 2b shows another; in both, the face image data in the height direction within the region is limited, so the extracted data cannot serve as effective data. Fig. 2c shows the face center deviating substantially from the region's width-direction center line; here the face image data in the width direction within the region is limited, so again the extracted data cannot serve as effective data. Fig. 2d shows a proper relative position, in which the region contains the cheek area from below the eyes to part of the chin.
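The two positioning criteria can be expressed as a predicate on a candidate rectangle; the 50% and 20% values follow the example thresholds in the text, and all names are illustrative:

```python
def rect_position_ok(rect_x, rect_y, facewidth, faceheight,
                     fc_x, fc_y, chin_y,
                     first_threshold=0.5, second_threshold=0.2):
    """(rect_x, rect_y) is the rectangle's top-left corner.
    Criterion 1: the height from the face-center ordinate to the chin
    occupies more than first_threshold of the rectangle height.
    Criterion 2: the face center deviates from the rectangle's
    width-direction center line by less than second_threshold of the width."""
    crit1 = (chin_y - fc_y) > first_threshold * faceheight
    center_line_x = rect_x + facewidth / 2.0
    crit2 = abs(fc_x - center_line_x) < second_threshold * facewidth
    return crit1 and crit2
```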
In another implementation, the effective region may be a closed irregular polygon formed by connecting several curve segments end to end, distinguishing the image data to be extracted within the face image. For example, referring to fig. 3, the face image contour and the lower-eyelid image contour are extracted; a first curve segment is formed below the lower-eyelid contour, intersecting the left face contour at a first intersection point and the right face contour at a second intersection point; a closed curve is then formed from the first curve segment together with the portion of the face image contour between the two intersection points that includes the lower jaw, and the area bounded by this closed curve is used as the effective region.
Step 103: equalize the image data in the extracted effective region (the second image data) to increase image contrast, which benefits the selection of the noise threshold. The equalized second image data is taken as third image data.
In this step, the second image data may be equalized in a histogram equalization manner, so that the local contrast is enhanced without affecting the overall contrast.
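With OpenCV, for example, this step is a single call on the 8-bit single-channel second image data (a sketch; the function and variable names are assumptions):

```python
import cv2
import numpy as np

def equalize(second: np.ndarray) -> np.ndarray:
    # Third image data = histogram-equalized second image data.
    # cv2.equalizeHist expects an 8-bit single-channel (grayscale) image
    # and enhances local contrast.
    return cv2.equalizeHist(second)
```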
Step 104: set the pixel values of the nose and mouth regions to 0 in the third image, to prevent the nose and mouth from interfering with the noise calculation.
The applicant has found that, in noise detection of a face image, the nose and mouth severely interfere with the noise calculation, so their influence must be removed by setting the pixel values of the nose and mouth region to 0. The region formed by these contiguous pixels appears black in the image and is referred to in this application as the mask region.
In one implementation, referring to fig. 4a and 4b, which are schematic views of the mask region: fig. 4a shows a rectangular mask region, whose height is the distance from the nose tip to the valley of the lower mouth edge and whose width is the distance between the left and right mouth corners. The valley is the lowest point of the lower mouth edge.
In another implementation, as shown in fig. 4b, a closed irregular polygon is formed by connecting the left nose wing, the right mouth corner, the valley of the lower mouth edge, and the left mouth corner in sequence, and the enclosed region is used as the mask region.
After the mask region is established, the pixel value of each pixel point in the mask region is set to 0.
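Zeroing the rectangular mask of fig. 4a then reduces to one slice assignment; the landmark coordinates (nose-tip ordinate, lower-mouth-edge valley ordinate, mouth-corner abscissas) are assumed to come from a landmark detector:

```python
import numpy as np

def apply_mask(third: np.ndarray, nose_tip_y: int, mouth_valley_y: int,
               mouthl_x: int, mouthr_x: int) -> np.ndarray:
    # Zero all pixels of the rectangular nose-and-mouth mask (fig. 4a):
    # rows from the nose tip down to the lower-mouth-edge valley,
    # columns between the two mouth corners. Zeroed pixels render black.
    third[nose_tip_y:mouth_valley_y, mouthl_x:mouthr_x] = 0
    return third
```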
Step 105: set a convolution kernel and perform convolution over the third image data with a set stride.
Referring to fig. 5, fig. 5 is a schematic diagram of the convolution calculation. The convolution kernel N is set to a 3 × 3 matrix, for example, the convolution kernel N is set to:
[The specific 3 × 3 kernel matrix is given as a figure in the original document.]
For any patch of the third image outside the mask region with the same size as the convolution kernel, the pixel values I(x, y) in the patch are convolved with the kernel N to obtain a convolution value (target pixel). For example, in fig. 5 the patch corresponding to the kernel is 3 × 3: each of the 3 × 3 pixel values is multiplied by the corresponding kernel entry, and the products are summed to give one convolution value.
In this step, the convolution may be computed for all pixels of the third image outside the mask region, or only for a subset of them; pixels inside the mask region require no convolution.
Preferably, the sliding stride is 1 pixel.
Step 106: calculate the noise degree from the obtained convolution values.
In this step, the absolute values of the convolution results are summed and their average is taken as the noise degree. Expressed mathematically:
Noise=∑|I(x,y)*N|/n
where Noise is the noise degree, n is the number of convolution calculations (i.e., the number of accumulated convolution values), N is the convolution kernel, I(x, y) is the pixel value at pixel (x, y), and I(x, y)*N denotes the convolution of the patch at (x, y) with the kernel.
Taking fig. 5 as an example, convolving the 8 × 8 third image data with a 3 × 3 kernel yields at most 6 × 6 convolution values, and the average of these 36 values is taken as the noise degree.
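Putting steps 105 and 106 together, the sketch below computes the noise degree with a stride of 1, skipping windows that touch the mask. The 3 × 3 kernel shown is an assumed Laplacian-style noise-estimation kernel; the patent's actual matrix appears only in its original figure:

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed kernel (NOT taken from the patent's figure): a common
# Laplacian-like high-pass kernel used for noise estimation.
N = np.array([[ 1, -2,  1],
              [-2,  4, -2],
              [ 1, -2,  1]], dtype=np.float32)

def noise_degree(img: np.ndarray, mask: np.ndarray) -> float:
    """Mean of |I(x, y) * N| over all stride-1 windows that contain
    no masked pixel; mask is True where pixels were zeroed out."""
    resp = convolve2d(img.astype(np.float32), N, mode="valid")
    touched = convolve2d(mask.astype(np.float32), np.ones_like(N), mode="valid")
    keep = touched == 0  # windows entirely outside the mask region
    return float(np.abs(resp[keep]).mean())
```

The step-107 decision is then `noise_degree(third, mask) > noise_threshold`, with the threshold chosen per camera model.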
The noise degree computed this way amounts to averaging the cheek-region image data a second time after a moving-average-like filtering; it represents the actual image noise accurately and is simple to implement in engineering.
Step 107: compare the noise degree with the set noise threshold. If the noise degree is greater, the current image is judged to be a noise image, which can trigger subsequent optimization such as modifying the gain or increasing the fill-light output power; otherwise the image is judged normal and the original flow continues.
Because different cameras behave differently in practice, the noise thresholds of different camera models differ; they can be set reasonably in a specific application according to the actual image quality and the detection requirements.
In the foregoing embodiment, the graying and equalization need not follow the order of embodiment one. For example, the equalization may instead be performed before step 102, i.e., on the first image; equalizing the second image data, however, is beneficial because the second image data is only the effective data extracted from the first image data, which improves processing efficiency and reduces memory usage. Similarly, the graying may be performed after the effective-region image data has been extracted, together with the equalization; the graying and equalization have no strict order of precedence.
This embodiment quickly computes the noise degree of a face image from a single frame, with an excellent face detection effect and a concise processing flow, achieving fast detection of face image noise.
Embodiment two:
referring to fig. 6, fig. 6 is a schematic flow chart of a method for detecting facial image noise according to another embodiment of the present application.
Step 601: remove the region including at least the eyes, nose and mouth from the face image, to extract the effective-region image of the face.
the region consisting of the eyes, nose, and mouth is removed in view of the severe interference of the eye, nose, and mouth regions to the noise calculation. Referring to fig. 7a and 7b, fig. 7a and 7b are schematic views of the effective region remaining after removing the region based on the face image. As shown in fig. 7a, the removal region comprises a transverse strip-shaped region penetrating through the eyes and a longitudinal strip-shaped region perpendicular to the transverse strip-shaped region and covering the nose and the mouth, wherein the width of the transverse strip-shaped region is at least the distance between the left outer corner of the eye and the right outer corner of the eye, and the height of the transverse strip-shaped region is at least the average value of the longitudinal distances between the upper eyelid and the lower eyelid in the two eyes; the distance between the center of the transverse strip-shaped region and the centers of the eyes is smaller than a set third threshold, preferably, the center of the transverse strip-shaped region coincides with the center of the eyes, wherein the center of the eyes is an average value of the coordinates of the two pupils.
The width of the longitudinal strip is the distance between the left and right mouth corners. Its height is the distance from the mean of the two lower-eyelid ordinates to the ordinate of the valley of the lower mouth edge; to make the transverse and longitudinal strips partially overlap and avoid incomplete removal, the height is preferably the distance from the mean of the two pupil ordinates to that valley ordinate. The distance between the center of the longitudinal strip and the face center is less than a set fourth threshold, and preferably the two coincide.
Expressed mathematically as:
For the transverse strip-shaped region,
Hx=|eyelo_x-eyero_x|
Hy=(|eyelu_y-eyeld_y|+|eyeru_y-eyerd_y|)/2
Hc_x=(eyel_x+eyer_x)/2
Hc_y=(eyel_y+eyer_y)/2
where Hx is the width of the transverse strip-shaped region, eyelo_x is the abscissa of the left outer canthus, eyero_x is the abscissa of the right outer canthus,
Hy is the height of the transverse strip-shaped region, eyelu_y is the ordinate of the left upper eyelid, eyeld_y is the ordinate of the left lower eyelid, eyeru_y is the ordinate of the right upper eyelid, and eyerd_y is the ordinate of the right lower eyelid;
the center coordinates of the transverse strip-shaped region are (Hc_x, Hc_y), the left-eye pupil coordinates are (eyel_x, eyel_y), and the right-eye pupil coordinates are (eyer_x, eyer_y).
For the longitudinal strip-shaped areas,
Vx=|mouthl_x-mouthr_x|
Vy=|(eyel_y+eyer_y)/2-mouthd_y|
Vc_x=fc_x
Vc_y=fc_y
where Vx is the width of the longitudinal strip-shaped region, mouthl_x is the abscissa of the left mouth corner, mouthr_x is the abscissa of the right mouth corner,
Vy is the height of the longitudinal strip-shaped region, eyel_y is the ordinate of the left pupil, eyer_y is the ordinate of the right pupil, and mouthd_y is the ordinate of the valley of the lower mouth edge.
The center coordinates of the longitudinal strip-shaped region are (Vc_x, Vc_y), and the face center coordinates are (fc_x, fc_y). To retain as much cheek-region image data as possible, the longitudinal strip may instead be a trapezoidal region formed by the outer envelope of the nose and mouth, as shown in fig. 7b.
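A sketch of this embodiment's removal geometry, built directly from the formulas above; the landmark names follow the text, axis-aligned rectangles are assumed, and the longitudinal strip is centered on the face center as stated:

```python
import numpy as np

def removal_mask(shape,
                 eyelo_x, eyero_x,                    # outer-canthus abscissas
                 eyelu_y, eyeld_y, eyeru_y, eyerd_y,  # eyelid ordinates
                 eyel, eyer,                          # pupil coordinates (x, y)
                 mouthl_x, mouthr_x, mouthd_y,        # mouth corners and valley
                 fc_x):                               # face-center abscissa
    """Boolean mask (True = removed): the transverse strip across the
    eyes plus the longitudinal strip covering the nose and mouth."""
    mask = np.zeros(shape, dtype=bool)
    # Transverse strip: Hx wide, Hy high, centred on the eye centre.
    Hx = abs(eyelo_x - eyero_x)
    Hy = (abs(eyelu_y - eyeld_y) + abs(eyeru_y - eyerd_y)) / 2.0
    Hc_x, Hc_y = (eyel[0] + eyer[0]) / 2.0, (eyel[1] + eyer[1]) / 2.0
    mask[int(Hc_y - Hy / 2):int(Hc_y + Hy / 2),
         int(Hc_x - Hx / 2):int(Hc_x + Hx / 2)] = True
    # Longitudinal strip: Vx wide, from the mean pupil ordinate down to
    # the lower-mouth-edge valley, centred on the face-center abscissa.
    Vx = abs(mouthl_x - mouthr_x)
    top_y = (eyel[1] + eyer[1]) / 2.0
    mask[int(top_y):int(mouthd_y),
         int(fc_x - Vx / 2):int(fc_x + Vx / 2)] = True
    return mask
```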
To reduce the amount of subsequent data processing, the height of the transverse strip-shaped region may also be extended up to the forehead hairline, while the cheek region is preserved.
Based on the face image, the region remaining after this removal is taken as the effective region, giving the effective-region image data.
Step 602: perform gray-scale processing on the effective-region image data to obtain brightness image data of the effective region.
The gray-scale processing in this step is the same as in step 102.
Step 603: equalize the brightness image data of the effective region to increase local image contrast.
Steps 602 and 603 may also be performed in the opposite order; that is, the effective-region image data may be equalized first and the gray-scale processing then performed on the equalized data.
Step 604: perform convolution with a set stride over the equalized brightness image data of the effective region.
For the pixels in the equalized effective-region brightness image, the pixel values I(x, y) are convolved with the convolution kernel N to obtain convolution values.
In this step, the convolution may be computed for all pixels in the effective region, or a subset of them may be selected. This step is otherwise the same as step 105.
Step 605: calculate the noise degree from the obtained convolution values. This step is the same as step 106.
Step 606: compare the noise degree with the set noise threshold; if it is greater, the current image is judged to be a noise image, otherwise the image is judged normal. This step is the same as step 107.
In this embodiment, the image data that interferes with the noise calculation is removed; the processing flow is even more concise while the face detection effect remains excellent, achieving fast detection of face image noise.
The invention provides a device for detecting image noise of a biometric part, comprising a memory and a processor; the memory stores an application program, and the processor executes the application program to implement the steps of detecting image noise of the biometric part according to the embodiments of the invention.
The memory may include random access memory (RAM) or non-volatile memory (NVM), e.g., at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps:
extracting image data within an effective region from the image containing the biometric part to obtain effective-region image data, where the effective region is the biometric part image excluding the regions that interfere with the judgment of noise;
convolving the pixel values of the pixels in the effective-region image data with a convolution kernel to obtain convolution values;
and calculating the average of all the convolution values to obtain a noise degree representing the image noise of the biometric part, and if the noise degree is greater than a preset noise threshold, judging the biometric part image to be a noise image.
Since the device/network-side device/storage-medium embodiments are basically similar to the method embodiment, their description is relatively brief; for the relevant points, refer to the corresponding parts of the method embodiment description.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (13)

1. A method for detecting noise in an image of a biometric part, the method comprising,
extracting image data within an effective region from the image containing the biometric part to obtain effective-region image data, where the effective region is the biometric part image excluding the regions that interfere with the judgment of noise;
convolving the pixel values of the pixels in the effective-region image data with a convolution kernel to obtain convolution values;
and calculating the average of all the convolution values to obtain a noise degree representing the image noise of the biometric part, and if the noise degree is greater than a preset noise threshold, judging the biometric part image to be a noise image.
2. The detection method according to claim 1, wherein the image containing the biometric part is a single-frame RGB image, the method further comprising
performing gray-scale processing on the biometric part image, or on the effective-region image, to obtain brightness image data.
3. The detection method according to claim 2, further comprising subjecting the brightness image data to an equalization process.
4. The detection method according to any one of claims 1 to 3, wherein the image containing the biometric part is a face image containing facial features, and the effective region is the face image excluding the eyes.
5. The detection method according to claim 4, wherein said extracting image data within the effective region includes,
determining the width of an effective rectangular region according to the inter-pupil distance, and determining the height of the region from its width, to obtain an effective rectangular region that at least excludes the area from the eyes upward,
acquiring the left-eye pupil coordinates, right-eye pupil coordinates, left mouth corner coordinates and right mouth corner coordinates in the face image, and calculating the mean of the 4 coordinates to obtain the face center position,
determining the position of the effective rectangular region based on the face center position and the extent of the region, so as to maximize the number of face image pixels the region encloses,
image data within the effective rectangular region is extracted.
6. The detection method as claimed in claim 5, wherein determining the width of the effective rectangular region according to the inter-pupil distance comprises taking the product of the inter-pupil distance and a first coefficient as the width of the region,
determining the height of the region from its width comprises taking the product of a second coefficient and the width as the height of the region,
wherein the first coefficient is greater than 1 and the second coefficient is determined by the height from below the eyes to part of or the entire chin;
determining the position of the effective rectangular region based on the face center position and the extent of the region, so as to maximize the number of face image pixels it encloses, comprises
determining the position such that the height from the face-center ordinate to the chin occupies more than a first threshold of the region's height, the face center lies on the region's width-direction center line or deviates from it by less than a second threshold, and the image pixels enclosed by the region include the cheek area from below the eyes to part of or the entire chin.
7. The detection method according to claim 4, wherein said extracting image data within the effective region includes,
extracting the face image contour and the lower eyelid image contour,
a first curve segment intersecting the left face image contour at a first intersection point and the right face image contour at a second intersection point is formed below the lower eyelid image contour,
forming a closed curve from the first curve segment together with the portion of the face image contour between the first and second intersection points that includes the lower jaw, the closed area bounded by this curve being used as the effective region;
image data within the effective area is extracted.
8. The detection method according to claim 4, wherein convolving the pixel values of the pixels in the effective-region image data with the convolution kernel comprises
establishing a mask region, in which the pixel values of one or more contiguous pixels are set to 0, the mask region covering the nose and mouth area,
and, over the extracted effective-region image, convolving the pixel values of the pixels outside the mask region with the convolution kernel to obtain the convolution value of each convolution calculation, wherein the convolution kernel is a 3 × 3 matrix.
9. The detection method according to claim 8, wherein the mask region is a rectangular mask region whose height is the distance from the nose tip to the valley of the lower mouth edge and whose width is the distance between the left and right mouth corners,
alternatively,
the mask region is a closed region bounded by an irregular polygon formed by connecting, in sequence, the left nose wing, the right mouth corner, the valley of the lower mouth edge, and the left mouth corner,
wherein the valley is the lowest point of the lower mouth edge.
10. The detection method according to claim 9, wherein said extracting image data within the effective region includes,
removing a region including at least the eyes, nose and mouth from the face image to obtain the remaining face image, taking the remaining face image as the effective region, and extracting the image data within it;
wherein the convolution of pixel values in the effective-region image data with the convolution kernel is performed on the pixels of the extracted effective-region image, yielding the convolution value of each convolution calculation.
11. The detection method according to claim 10, wherein removing the region including at least the eyes, the nose and the mouth from the face image comprises
removing a transverse strip-shaped region crossing the eyes and a longitudinal strip-shaped region perpendicular to it covering the nose and mouth,
wherein,
the width of the transverse strip-shaped region is at least the distance between the left and right outer canthi, and its height is at least the average of the vertical distances between the upper and lower eyelids of the two eyes;
the longitudinal strip-shaped region is a trapezoidal region formed by the outer envelope of the nose and mouth, or,
the width of the longitudinal strip-shaped region is the distance between the left and right mouth corners, and its height is the distance from the mean of the two lower-eyelid ordinates to the ordinate of the valley of the lower mouth edge, or the distance from the mean of the two pupil ordinates to that valley ordinate; the valley is the lowest point of the lower mouth edge.
12. An apparatus for detecting image noise of a biometric feature location, comprising a memory and a processor, wherein,
the memory stores an application program that is executed by the application,
the processor executes the application program to implement the steps of detecting image noise of the biometric part according to any one of claims 1 to 11.
13. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of detecting image noise of the biometric part according to any one of claims 1 to 11.
CN201911197942.2A 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part Active CN112883759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197942.2A CN112883759B (en) 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911197942.2A CN112883759B (en) 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part

Publications (2)

Publication Number Publication Date
CN112883759A (en) 2021-06-01
CN112883759B CN112883759B (en) 2023-09-26

Family

ID=76039606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197942.2A Active CN112883759B (en) 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part

Country Status (1)

Country Link
CN (1) CN112883759B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933527A (en) * 1995-06-22 1999-08-03 Seiko Epson Corporation Facial image processing method and apparatus
US20070154096A1 (en) * 2005-12-31 2007-07-05 Jiangen Cao Facial feature detection on mobile devices
CN101305913A (en) * 2008-07-11 2008-11-19 华南理工大学 Face beauty assessment method based on video
CN104794693A (en) * 2015-04-17 2015-07-22 浙江大学 Human image optimization method capable of automatically detecting mask in human face key areas
CN106778676A (en) * 2016-12-31 2017-05-31 中南大学 An attention assessment method based on face recognition and image processing
CN107220623A (en) * 2017-05-27 2017-09-29 湖南德康慧眼控制技术股份有限公司 A kind of face identification method and system
CN107346408A (en) * 2016-05-05 2017-11-14 鸿富锦精密电子(天津)有限公司 Age recognition methods based on face feature
CN107734264A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN107734267A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN107784678A (en) * 2017-11-08 2018-03-09 北京奇虎科技有限公司 Generation method, device and the terminal of cartoon human face image
CN110503608A (en) * 2019-07-13 2019-11-26 贵州大学 The image de-noising method of convolutional neural networks based on multi-angle of view

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杜平 et al.: "Face Recognition under Illumination and Noise Conditions" (光照和噪声条件下的人脸识别), Journal of Shanghai Jiao Tong University, vol. 09

Also Published As

Publication number Publication date
CN112883759B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN108491784B (en) Single person close-up real-time identification and automatic screenshot method for large live broadcast scene
CN114418957B (en) Global and local binary pattern image crack segmentation method based on robot vision
US7643659B2 (en) Facial feature detection on mobile devices
CN110569756B (en) Face recognition model construction method, recognition method, device and storage medium
WO2020000908A1 (en) Method and device for face liveness detection
US20070154095A1 (en) Face detection on mobile devices
CN111209845A (en) Face recognition method and device, computer equipment and storage medium
WO2020253062A1 (en) Method and apparatus for detecting image border
CN111368758B (en) Face ambiguity detection method, face ambiguity detection device, computer equipment and storage medium
CN104318262A (en) Method and system for replacing skin through human face photos
CN109725721B (en) Human eye positioning method and system for naked eye 3D display system
CN109859217B (en) Segmentation method and computing device for pore region in face image
CN110717372A (en) Identity verification method and device based on finger vein recognition
US11475707B2 (en) Method for extracting image of face detection and device thereof
Arandjelović Making the most of the self-quotient image in face recognition
CN108710837A (en) Cigarette smoking recognition methods, device, computer equipment and storage medium
CN111145086A (en) Image processing method and device and electronic equipment
Cheng et al. A pre-saliency map based blind image quality assessment via convolutional neural networks
CN114240925A (en) Method and system for detecting document image definition
CN111178276A (en) Image processing method, image processing apparatus, and computer-readable storage medium
CN110363103B (en) Insect pest identification method and device, computer equipment and storage medium
CN111382745A (en) Nail image segmentation method, device, equipment and storage medium
CN107516083A (en) A kind of remote facial image Enhancement Method towards identification
CN112991159B (en) Face illumination quality evaluation method, system, server and computer readable medium
WO2019223066A1 (en) Global enhancement method, device and equipment for iris image, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant