CN112883759B - Method for detecting image noise of biological feature part - Google Patents

Method for detecting image noise of biological feature part

Info

Publication number
CN112883759B
Authority
CN
China
Prior art keywords
area
image
effective
noise
mouth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911197942.2A
Other languages
Chinese (zh)
Other versions
CN112883759A (en)
Inventor
华丛一
任志浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201911197942.2A priority Critical patent/CN112883759B/en
Publication of CN112883759A publication Critical patent/CN112883759A/en
Application granted granted Critical
Publication of CN112883759B publication Critical patent/CN112883759B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G06T5/94
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The application discloses a method for detecting image noise of a biological feature part. The method comprises: extracting, based on the image of the biological feature part, the image data within an effective area to obtain effective area image data, the effective area comprising the biological feature part image excluding the regions that interfere with the noise judgment; performing convolution calculation between the pixel values of the pixels in the effective area image data and a convolution kernel to obtain convolution values; and calculating the average of all convolution values to obtain a noise degree representing the image noise of the biological feature part, the image being judged a noise image if the noise degree is greater than a preset noise threshold. The method can quickly compute the noise degree of the effective area based on a single frame image; the processing flow is simple and efficient, and the scene applicability is very wide.

Description

Method for detecting image noise of biological feature part
Technical Field
The application relates to the field of digital image processing, in particular to a method for detecting image noise of a biological feature part.
Background
Image noise refers to unnecessary or redundant disturbance information in image data, for example the "snowflake" noise that often appears under insufficient illumination and the like. The presence of noise severely degrades image quality and must be corrected before image enhancement and classification processes.
Faces, palm prints, fingerprints and the like are widely used for biometric identification. Taking the face image as an example, its quality directly affects the practical performance of face detection, face recognition and face liveness detection algorithms, and also bears on the operation of auxiliary modules such as exposure control, gain control and wide-dynamic setting; a particularly important criterion of face image quality is noise.
In the prior art, image noise detection and noise elimination are based on the global image, that is, all data information in the image is processed. Such methods are universal, but perform poorly when applied to images of biological feature parts, and the detection schemes with excellent results are highly complex and seriously time-consuming.
Disclosure of Invention
In view of the above, the present application provides a method for detecting noise in an image of a biological feature, so as to rapidly detect noise in an image including the biological feature.
The application provides a method for detecting image noise of a biological feature part, which comprises the following steps of,
extracting image data within an effective area based on the image including the biological feature part to obtain effective area image data, wherein the effective area comprises the biological feature part image excluding the regions that interfere with the noise judgment;
performing convolution calculation on the pixel value of the pixel in the effective area image data and the convolution kernel to obtain a convolution value,
and calculating the average value of all convolution values to obtain the noise degree representing the image noise of the biological feature part, and judging the biological feature part image as a noise image if the noise degree is larger than a preset noise threshold value.
Wherein the image including the biological feature is a single frame RGB image, the method further comprises,
and carrying out gray scale processing on the biological characteristic part image or carrying out gray scale processing on the effective area image to obtain brightness image data.
Preferably, the method further comprises equalizing the brightness image data.
Preferably, the image including the biological feature is a facial image including facial features; the effective area includes facial images other than eyes.
Preferably, the extracting the image data in the effective area includes,
acquiring left eye pupil coordinates, right eye pupil coordinates, left mouth corner coordinates and right mouth corner coordinates in the face image, calculating the average value of the 4 coordinates to obtain the center position of the face,
determining the width of an effective rectangular area according to the inter-pupil distance, and determining the height of the effective rectangular area according to the width, to obtain an effective rectangular area that at least excludes the area above the eyes,
determining the position of the effective rectangular area according to the center position of the face and the range defined by the effective rectangular area, with the aim of increasing the number of image pixels defined by the effective rectangular area,
image data within the effective rectangular area is extracted.
Preferably, the determining the width of an effective rectangular area according to the inter-pupil distance includes taking the product of the inter-pupil distance and the first coefficient as the width of the effective rectangular area,
the determining the height of the effective rectangular area according to the width comprises taking the product of the second coefficient and the width of the effective rectangular area as the height of the effective rectangular area,
wherein the first coefficient is greater than 1 and the second coefficient is determined from the height spanning from below the eyes to part of or the entire chin;
the determining the position of the effective rectangular area according to the center position of the face and the range defined by the effective rectangular area, with the aim of increasing the number of image pixels defined by the effective rectangular area, includes,
the first position is determined according to the fact that the height from the ordinate of the center of the face to the chin occupies the effective rectangular area and is larger than a first threshold value, the second position is determined according to the fact that the center of the face is on the central line of the width direction of the effective rectangular area or is deviated from the central line and smaller than a second threshold value, the image pixels defined by the effective rectangular area comprise cheek areas from the lower part of eyes to part of or the whole chin, and the position of the effective rectangular area is determined.
Preferably, the extracting the image data in the effective area includes,
extracting facial image contour and eye lower eyelid image contour,
a first curve segment intersecting the left face image contour at a first intersection point and intersecting the right face image contour at a second intersection point is formed below the lower eyelid image contour,
and forming a closed curve with the first facial image contour including the lower jaw between the first intersection point and the second intersection point and the first curve segment, wherein a closed area formed by the closed curve is taken as an effective area.
Preferably, the convolving the pixel values of the pixels in the active area image data with a convolution kernel, including,
establishing a mask area for setting the pixel value of more than one continuous pixel in the area to 0; the mask region includes a nose and mouth region,
and carrying out convolution calculation on pixel values of pixel points except for the mask area and convolution kernels based on the extracted effective area image to obtain convolution values of each convolution calculation, wherein the convolution kernels are 3 multiplied by 3 matrixes.
Preferably, the mask region is a rectangular mask region having a distance from a nose tip to a valley at a lower edge of a mouth as a height of the mask region, a distance between a left mouth corner and a right mouth corner as a width of the mask region,
or,
the mask area is a closed irregular polygon formed by connecting the left nasal wing, the right mouth corner, the edge valley of the lower edge of the mouth, and the left mouth corner in sequence,
wherein, the edge valley is located at the lowest position of the lower edge of the mouth part.
Preferably, the extracting the image data in the effective area includes,
removing areas at least comprising eyes, nose and mouth in the face image to obtain a residual face image, taking the residual face image as an effective area, and extracting image data in the effective area;
the step of carrying out convolution calculation on the pixel values of the pixels in the effective area image data and the convolution kernel comprises the step of carrying out convolution calculation on the pixel values of the pixel points and the convolution kernel of the extracted pixel points in the effective area image to obtain convolution values of each convolution calculation.
Preferably, the removing of the regions of the face image including at least the eyes, nose and mouth comprises,
removing a transverse strip-shaped area penetrating through the eyes and a longitudinal strip-shaped area perpendicular to the transverse strip-shaped area and covering the nose and the mouth,
wherein,
the width of the transverse strip-shaped area is at least the distance between the left and right outer eye corners, and the height of the transverse strip-shaped area is at least the average value of the longitudinal distances between the upper and lower eyelids of the two eyes;
the longitudinal strip-shaped region is a trapezoidal region formed by the nose and mouth outer envelopes, or,
the width of the longitudinal strip-shaped area is the distance between the left mouth corner and the right mouth corner, the height of the longitudinal strip-shaped area is the distance between the average value of the longitudinal coordinates of the two lower eyelids and the longitudinal coordinate of the edge valley of the lower edge of the mouth, or the height of the longitudinal strip-shaped area is the distance between the average value of the longitudinal coordinates of the two pupils and the longitudinal coordinate of the edge valley of the lower edge of the mouth; the valley is located at the lowest position of the lower edge of the mouth.
The application provides a detection device for image noise of a biological feature part, which comprises a memory and a processor, wherein,
the memory is stored with an application program,
the processor executes the application program to realize the detection step of the image noise of the biological feature.
The present application provides a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the above-described biometric feature image noise detection step.
The method has a simple, efficient processing flow and wide scene applicability. It processes the face image in a targeted manner, taking into account the influence of specific parts of the face area on noise detection, so that excellent noise-detection performance and short detection time are both achieved and image quality can be evaluated rapidly.
Drawings
Fig. 1 is a flowchart of a face image noise detection method according to an embodiment of the application.
Fig. 2a to 2d are schematic diagrams showing the relationship between the effective rectangular area and the face position.
Fig. 3 is a schematic view of an irregular effective region.
Fig. 4a and 4b are schematic diagrams of mask areas.
Fig. 5 is a schematic illustration of convolution calculation.
Fig. 6 is a flowchart of a face image noise detection method according to another embodiment of the application.
Fig. 7a and 7b are schematic views of the effective area remaining after the area is removed based on the face image.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings, in order to make the objects, technical means and advantages of the present application more apparent.
The applicant found that, in engineering applications that recognize and detect a biological feature part from images, existing image noise detection methods lack pertinence to biometric images: they ignore the influence of specific parts or regions of the biometric image on noise detection, so that even an otherwise excellent noise detection scheme falls short of expectations when applied to images of biological feature parts.
Addressing the influence of specific parts or regions of the biometric image on noise detection, the application designs a method for detecting image noise of a biological feature part: the image data of the specific regions that interfere with noise detection is removed, and the image noise is judged from a purpose-defined noise degree computed over the image data of the retained region.
In the following, facial image noise detection is described as an example. It should be understood that the application is not limited to facial images; other biometric parts, including but not limited to fingerprints and palm prints, can be handled with equivalent or similar modifications to achieve noise detection on images containing those parts.
Embodiment one:
referring to fig. 1, fig. 1 is a flowchart illustrating a face image noise detection method according to an embodiment of the application.
Step 101, acquiring an image including a face as first image data.
In general, a visible light (VIS) image is an RGB image containing R, G and B components. To prevent the R, G and B components in the RGB data from interfering with the noise judgment, the RGB image (color image) is converted into a gray-scale image, that is, subjected to graying, specifically:
Y=0.299*R+0.587*G+0.114*B
where Y is the image gray value and R, G, B is the component of the RGB image.
For Near Infrared (NIR) images no conversion is required: the single-channel data, typically 8 bits, is already Y data.
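
For illustration, the graying step can be sketched in Python as follows (a minimal sketch; the function name and the clipping to 8 bits are our assumptions, not part of the disclosure):

import numpy as np

def to_gray(rgb):
    # Graying per Y = 0.299*R + 0.587*G + 0.114*B;
    # rgb is an HxWx3 uint8 array in R, G, B channel order.
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(y, 0, 255).astype(np.uint8)  # 8-bit Y data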
Step 102, removing at least the eye regions in the face image to extract the effective area image of the face, and taking the extracted image data as second image data.
The applicant has found that in the noise detection of facial images, the eye regions interfere with the judgment of noise, and therefore, at least the eye regions are removed.
In one embodiment, the left eye pupil coordinates, the right eye pupil coordinates, the left mouth corner coordinates and the right mouth corner coordinates in the face image are obtained, and the average of these 4 coordinates is calculated to obtain the face center position,
expressed by the mathematical formula:
fc_x=(eyel_x+eyer_x+mouthl_x+mouthr_x)/4
fc_y=(eyel_y+eyer_y+mouthl_y+mouthr_y)/4
wherein the coordinates of the center position of the face are (fc_x, fc_y), the left eye pupil coordinates are (eyel_x, eyel_y), the right eye pupil coordinates (eyer_x, eyer_y), the left mouth corner coordinates (mouthl_x, mouthl_y), and the right mouth corner coordinates (mouthr_x, mouthr_y).
An effective rectangular area for extracting image data is formed with 1.6 to 2 times the inter-pupil distance as its width and at least 70% of that width as its height; mathematically, the dimensions of the effective rectangular area are:
facewidth=w*(eyer_x-eyel_x)
faceheight=h*facewidth,
wherein, facewidth is the width of the effective rectangular area, w is a first coefficient, faceheight is the height of the effective rectangular area, and h is a second coefficient; wherein the first coefficient is greater than 1 and the second coefficient is determined from the height below the eye to a portion or the entire chin.
Because the eye area interferes with the noise judgment, the image within the effective rectangular area must at least exclude the eyes. To acquire data favorable for noise detection, the position of the effective rectangular area on the face is determined from the face center position and the height and width of the area, with the aim of increasing the number of image pixels the area covers. Preferably, the image within the effective rectangular area comprises the cheek area while excluding everything above the eyes: the first (vertical) position is determined so that the height from the face-center ordinate to the chin occupies more than a first threshold of the effective rectangular area, and the second (horizontal) position so that the face center lies on the center line of the area in the width direction, or deviates from it by less than a second threshold. For example, the height from the face-center ordinate to the chin occupies at least 50% of the height of the effective rectangular area, and the face center lies on, or deviates by less than 20% from, the center line in the width direction. The image data within the effective rectangular area is then extracted, the area covering the cheeks from below the eyes down to part of or the whole chin.
Referring to figs. 2a to 2d, which show the relationship between the effective rectangular area and the face position, i.e., the positioning of the effective rectangular area: figs. 2a and 2b show two cases in which the height from the face-center ordinate to the chin occupies less than 50% of the effective rectangular area; in both, the face image data in the height direction within the area is too limited, so the extracted image data cannot serve as effective data. Fig. 2c shows a case where the face center deviates too far from the center line of the area in the width direction, so that the face image data in the width direction is too limited and again cannot serve as effective data. Fig. 2d shows an ideal placement: the effective rectangular area covers the cheek area from below the eyes down to part of the chin.
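
The computation of the face center and the placement of the effective rectangle can be sketched as follows (illustrative only; the coefficient values and the small eye_margin offset used to start the rectangle below the eyes are assumptions within the ranges stated above):

def effective_rect(eyel, eyer, mouthl, mouthr, w=1.8, h=0.8, eye_margin=0.1):
    # eyel/eyer: pupil (x, y) coordinates; mouthl/mouthr: mouth-corner (x, y) coordinates.
    fc_x = (eyel[0] + eyer[0] + mouthl[0] + mouthr[0]) / 4.0  # face-center abscissa
    facewidth = w * (eyer[0] - eyel[0])   # width from the inter-pupil distance
    faceheight = h * facewidth            # height from the width
    eye_y = (eyel[1] + eyer[1]) / 2.0
    left = int(round(fc_x - facewidth / 2.0))          # centred on the face centre
    top = int(round(eye_y + eye_margin * faceheight))  # start just below the eyes
    return left, top, int(round(facewidth)), int(round(faceheight))

x, y, w_px, h_px = effective_rect((120, 140), (200, 140), (135, 230), (185, 230))
second_image = gray[y : y + h_px, x : x + w_px]  # gray: output of the graying step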
In a second embodiment, the effective area may be a closed irregular polygon formed by connecting a plurality of curve segments end to end, delimiting the image data to be extracted from the face image. For example, referring to fig. 3, the facial image contour and the lower-eyelid image contours are extracted; a first curve segment is drawn below the lower-eyelid contours, intersecting the left facial contour at a first intersection point and the right facial contour at a second intersection point; the portion of the facial contour between the two intersection points that includes the lower jaw, together with the first curve segment, forms a closed curve, and the region it encloses is taken as the effective area.
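
A sketch of extracting such an irregular effective area with OpenCV (the helper name and the sampling of the closed curve into polygon vertices are our assumptions):

import cv2
import numpy as np

def polygon_effective_mask(image_shape, curve_points):
    # curve_points: Nx2 (x, y) vertices sampled along the closed curve of fig. 3
    # (the first curve segment under the lower eyelids joined to the facial
    # contour through the lower jaw); pixels inside the curve are marked 255.
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(curve_points, dtype=np.int32)], 255)
    return mask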
And 103, equalizing the image data (second image data) in the extracted effective area to increase the contrast of the image, thereby being beneficial to the selection of the noise threshold value. And taking the equalized second image data as third image data.
In this step, the second image data may be equalized in a histogram equalization manner such that the local contrast is enhanced without affecting the overall contrast.
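
A one-call sketch of this step (OpenCV's global histogram equalization is assumed as the concrete equalization method):

import cv2

def equalize(second_image):
    # second_image: single-channel uint8 gray image of the effective area;
    # returns the third image data with enhanced contrast.
    return cv2.equalizeHist(second_image)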
Step 104, setting the pixel values of the nose and mouth regions to 0 based on the third image to avoid the interference of the nose and mouth on the noise calculation.
The applicant found that for noise detection of facial images, the nose and mouth can cause serious interference to noise calculation, so that the influence caused by the nose and mouth needs to be removed, namely, the pixel values of the nose and mouth areas are set to be 0; the region formed by these plural continuous pixel points appears black in the image, and is referred to as a mask region in the present application.
In one embodiment, referring to fig. 4a and 4b, which are schematic diagrams of mask areas: fig. 4a shows a rectangular mask area, with the distance from the nose tip to the edge valley of the lower edge of the mouth as the height of the mask area and the distance between the left and right mouth corners as its width. The edge valley is located at the lowest position of the lower edge of the mouth.
In the second embodiment, as shown in fig. 4b, the left nasal wing, the right mouth corner, the lower edge valley of the mouth, and the left mouth corner are sequentially connected to form a closed irregular polygon, and the closed area is used as a mask area.
After the mask region is established, the pixel value of each pixel point within the mask region is set to 0.
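
A sketch of the rectangular mask of fig. 4a (the landmark arguments are (x, y) coordinates; the function and argument names are ours):

def apply_mouth_nose_mask(img, nose_tip, mouthl, mouthr, mouthd):
    # Height: nose tip down to the edge valley of the lower edge of the mouth;
    # width: distance between the left and right mouth corners. Pixels inside
    # the rectangle are set to 0 (the mask area); img is modified in place.
    x0, x1 = sorted((int(mouthl[0]), int(mouthr[0])))
    y0, y1 = sorted((int(nose_tip[1]), int(mouthd[1])))
    img[y0:y1, x0:x1] = 0
    return img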
Step 105, a convolution kernel is set, and convolution calculation is performed according to a set step length based on the third image data.
Referring to fig. 5, fig. 5 is a schematic diagram of the convolution calculation. The convolution kernel N is set to a 3×3 matrix; the specific kernel values are given in fig. 5 and are not reproduced here.
For any window of the same size as the convolution kernel in the third image, outside the mask area, the pixel values I(x,y) in the window are convolved with the kernel N to obtain a convolution value (target pixel). For example, in fig. 5 the window corresponding to the kernel covers 3×3 pixels; each of the 3×3 pixel values is multiplied by the corresponding value in the convolution kernel, and the products are summed to obtain the convolution value.
In this step, convolution calculation may be performed on all the pixel points except the mask region in the third image, or convolution calculation may be performed by selecting a portion of the pixel points except the mask region; for pixel points in the mask area, convolution calculation is not needed.
Preferably, the sliding step is 1 pixel.
Step 106, calculating the noise degree based on the obtained convolution values.
In this step, the respective convolution values are summed, and then an average value of the convolution values is calculated as the noise level. Expressed mathematically as:
Noise=∑|I(x,y)*N|/n
where Noise is the noise degree, n is the number of convolution calculations performed, that is, the number of accumulated convolution values, N is the convolution kernel, I(x,y) is the pixel value at pixel point (x,y), and I(x,y)*N denotes the convolution of the pixel window with the kernel.
Taking fig. 5 as an example, the third image data of 8×8 is convolved with a convolution kernel of 3×3, and a maximum of 6×6 convolution values are obtained, and the average value of the 36 convolution values is taken as the noise level.
The noise degree calculated in this way is equivalent to applying a moving-average step to the cheek-region image data and then averaging the result again; it accurately reflects the actual image noise and is simple to implement in engineering.
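
Steps 105 and 106 can be sketched together as follows. The specific 3×3 kernel values appear only in fig. 5, so a Laplacian-style kernel of the kind commonly used for single-image noise estimation is assumed here; the mask handling, also an assumption of this sketch, skips every window that touches the mask area:

import numpy as np
from scipy.signal import convolve2d

N = np.array([[ 1, -2,  1],
              [-2,  4, -2],
              [ 1, -2,  1]], dtype=np.float32)  # assumed kernel, not from the patent

def noise_degree(third_image, mask=None):
    # Convolve every valid 3x3 window with N (stride 1) and average the
    # absolute convolution values: Noise = sum(|I(x,y)*N|) / n.
    conv = convolve2d(third_image.astype(np.float32), N, mode="valid")
    vals = np.abs(conv)
    if mask is not None:  # mask: boolean array, True at zeroed mask pixels
        clear = convolve2d((~mask).astype(np.float32),
                           np.ones((3, 3), np.float32), mode="valid")
        vals = vals[clear == 9.0]  # keep only windows fully outside the mask
    return float(vals.mean())

# Step 107: is_noise_image = noise_degree(third_image, mask) > noise_threshold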
Step 107, comparing the noise degree with a set noise threshold. If the noise degree is greater than the threshold, the current image is judged to be a noise image, and subsequent optimization processes are triggered, such as modifying the gain or increasing the fill-light output power; otherwise, the image is judged normal and the original flow continues.
Because different cameras produce different actual image effects, the noise thresholds of different camera models differ; in a specific application the threshold can be set reasonably according to the actual image effect and the detection requirement.
In the above embodiment, it should be understood that the graying and equalization of the image are not limited to the order of embodiment one. For example, the equalization may also be performed before step 102, that is, on the first image; performing it on the second image data, however, operates only on the effective data extracted from the first image, which improves processing efficiency and reduces memory usage. Similarly, the graying may also be performed on the image data extracted from the effective area, and the graying and equalization have no strict order between them.
According to this embodiment, the calculation of the noise degree of the face image can be realized quickly based on a single frame image; the noise detection effect on face images is excellent, the processing flow is concise, and rapid detection of face image noise is achieved.
Embodiment two:
referring to fig. 6, fig. 6 is a flowchart illustrating a face image noise detection method according to another embodiment of the application.
Step 601, removing the regions of the face image including at least the eyes, nose and mouth, to extract the effective area image of the face.
Since the eye, nose and mouth regions severely interfere with the noise calculation, the regions consisting of the eyes, nose and mouth are removed. Referring to figs. 7a and 7b, which show the effective area remaining after region removal from the face image: as in fig. 7a, the removed region comprises a transverse strip-shaped area passing through the eyes and a longitudinal strip-shaped area perpendicular to it and covering the nose and mouth. The width of the transverse strip-shaped area is at least the distance between the left and right outer eye corners, and its height is at least the average of the longitudinal distances between the upper and lower eyelids of the two eyes; the distance between the center of the transverse strip-shaped area and the eye center of the two eyes is smaller than a set third threshold, and preferably the center of the transverse strip-shaped area coincides with the eye center, where the eye center of the two eyes is the average of the two pupil coordinates.
The width of the longitudinal strip-shaped area is the distance between the left and right mouth corners. Its height is the distance from the average of the ordinates of the two lower eyelids to the ordinate of the edge valley of the lower edge of the mouth, so that the transverse and longitudinal strip-shaped areas partially overlap and no region escapes removal; preferably, the height of the longitudinal strip-shaped area is the distance from the average of the ordinates of the two pupils to the ordinate of the edge valley of the lower edge of the mouth. The distance between the center of the longitudinal strip-shaped area and the face center is smaller than a set fourth threshold, and preferably the center of the longitudinal strip-shaped area coincides with the face center position.
Expressed by the mathematical formula:
for the lateral strip-shaped areas,
Hx=|eyelo_x-eyero_x|
Hy=(|eyelu_y-eyeld_y|+|eyeru_y-eyerd_y|)/2
Hc_x=(eyel_x+eyer_x)/2
Hc_y=(eyel_y+eyer_y)/2
where Hx is the width of the transverse strip-shaped area, eyelo_x is the abscissa of the left outer eye corner, and eyero_x is the abscissa of the right outer eye corner;
Hy is the height of the transverse strip-shaped area, eyelu_y is the ordinate of the upper left eyelid, eyeld_y is the ordinate of the lower left eyelid, eyeru_y is the ordinate of the upper right eyelid, and eyerd_y is the ordinate of the lower right eyelid;
the center coordinates of the transverse strip-shaped area are (Hc_x, Hc_y), the left eye pupil coordinates are (eyel_x, eyel_y), and the right eye pupil coordinates are (eyer_x, eyer_y).
For the longitudinal strip-shaped areas,
Vx=|mouthl_x-mouthr_x|
Vy=|(eyel_y+eyer_y)/2-mouthd_y|
Vc_x=fc_x
Vc_y=fc_y
where Vx is the width of the longitudinal strip-shaped area, mouthl_x is the abscissa of the left mouth corner, and mouthr_x is the abscissa of the right mouth corner;
Vy is the height of the longitudinal strip-shaped area, eyel_y is the ordinate of the left pupil, eyer_y is the ordinate of the right pupil, and mouthd_y is the ordinate of the edge valley of the lower edge of the mouth.
The center coordinates of the longitudinal strip-shaped area are (Vc_x, Vc_y), and the face center coordinates are (fc_x, fc_y). In order to preserve the image data of the cheeks as much as possible, as shown in fig. 7b, the longitudinal strip-shaped area may instead be a trapezoidal area formed by the outer envelope of the nose and mouth.
In order to reduce the amount of subsequent data processing, the transverse strip-shaped area may also be extended upward to the hairline of the forehead, so that essentially only the cheek areas are preserved.
Based on the face image, the area remaining after the removal is taken as the effective area, yielding the effective area image data.
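
A sketch of step 601 for the two rectangular strips of fig. 7a (the landmark arguments follow the variable names in the formulas above; the pupil-based variant of the longitudinal-strip height is used, and the clamping to the image border is an assumption of this sketch):

import numpy as np

def strip_removal_mask(shape, eyelo_x, eyero_x, eyel, eyer,
                       eyelu_y, eyeld_y, eyeru_y, eyerd_y,
                       mouthl, mouthr, mouthd_y):
    # Returns a boolean mask, True on the removed strips;
    # the remaining False pixels form the effective area.
    removed = np.zeros(shape[:2], dtype=bool)
    # Transverse strip through the eyes, centred on the eye centre.
    hx = abs(eyero_x - eyelo_x)
    hy = (abs(eyelu_y - eyeld_y) + abs(eyeru_y - eyerd_y)) / 2.0
    hc_x = (eyel[0] + eyer[0]) / 2.0
    hc_y = (eyel[1] + eyer[1]) / 2.0
    removed[max(int(hc_y - hy / 2), 0): int(hc_y + hy / 2),
            max(int(hc_x - hx / 2), 0): int(hc_x + hx / 2)] = True
    # Longitudinal strip over nose and mouth, centred on the face centre.
    vx = abs(mouthl[0] - mouthr[0])
    vy = abs((eyel[1] + eyer[1]) / 2.0 - mouthd_y)
    fc_x = (eyel[0] + eyer[0] + mouthl[0] + mouthr[0]) / 4.0
    fc_y = (eyel[1] + eyer[1] + mouthl[1] + mouthr[1]) / 4.0
    removed[max(int(fc_y - vy / 2), 0): int(fc_y + vy / 2),
            max(int(fc_x - vx / 2), 0): int(fc_x + vx / 2)] = True
    return removed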
Step 602, gray scale processing is performed on the image data of the effective area, so as to obtain brightness image data of the effective area.
The gray scale process of this step is the same as step 102.
Step 603 equalizes the brightness image data of the active area to increase local image contrast.
The order of steps 603 and 602 may be interchanged, that is, the effective area image data may be subjected to equalization processing first, and then gray scale processing may be performed based on the image data after the equalization processing.
Step 604, performing convolution calculation according to a set step length based on the brightness image data of the equalized effective area.
For the pixel points in the equalized brightness image of the effective area, the pixel values I(x,y) are convolved with the convolution kernel N to obtain convolution values.
In this step, convolution calculation may be performed on all the pixel points in the effective area, or convolution calculation may be performed by selecting a portion of the pixel points in the effective area. This step is the same as step 105.
Step 605, noise level calculation is performed based on the acquired convolution value. This step is the same as step 106.
Step 606, comparing the noise level with a set noise level threshold, if the noise level is greater than the set noise level threshold, determining that the current image belongs to the noise image, otherwise, determining that the image is normal. This step is the same as step 107.
According to this embodiment, the image data that interferes with the noise calculation is removed; the noise detection effect on face images is excellent, the processing flow is even simpler, and rapid detection of face image noise is achieved.
The application provides a device for detecting image noise of a biological feature part, comprising a memory and a processor, wherein the memory stores an application program, and the processor executes the application program to implement the steps of detecting image noise of a biological feature part described in the above embodiments.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The embodiment of the application also provides a computer readable storage medium, wherein the storage medium stores a computer program, and the computer program realizes the following steps when being executed by a processor:
extracting image data within an effective area based on the image including the biological feature part to obtain effective area image data, wherein the effective area comprises the biological feature part image excluding the regions that interfere with the noise judgment;
performing convolution calculation on the pixel value of the pixel in the effective area image data and the convolution kernel to obtain a convolution value,
and calculating the average value of all convolution values to obtain the noise degree representing the image noise of the biological feature part, and judging the biological feature part image as a noise image if the noise degree is larger than a preset noise threshold value.
For the apparatus/network side device/storage medium embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points are referred to in the description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a description of preferred embodiments of the application and is not intended to limit it; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the application shall fall within its scope of protection.

Claims (12)

1. A method for detecting image noise of a biological feature is characterized in that the method comprises the following steps of,
extracting image data within an effective area based on the image including the biological feature part to obtain effective area image data, wherein the effective area comprises the biological feature part image excluding the regions that interfere with the noise judgment;
performing convolution calculation on the pixel value of the pixel in the effective area image data and the convolution kernel to obtain a convolution value,
calculating the average value of all convolution values to obtain the noise degree representing the image noise of the biological feature part, and judging the biological feature part image as a noise image if the noise degree is larger than a preset noise threshold value;
wherein,
the image including the biological feature is a facial image including facial features,
the region of the interference noise judgment is an eye region,
the effective area comprises the facial image excluding the eyes; the width of an effective rectangular area is determined according to the inter-pupil distance, and the height of the effective rectangular area is determined according to the width, so that the effective rectangular area at least excludes the area above the eyes.
2. The method of claim 1, wherein the image including the biological feature is a single frame RGB image, the method further comprising,
and carrying out gray scale processing on the biological characteristic part image or carrying out gray scale processing on the effective area image to obtain brightness image data.
3. The method of detecting according to claim 2, further comprising equalizing the brightness image data.
4. The detection method according to claim 1, wherein the extracting the image data in the effective area includes,
acquiring left eye pupil coordinates, right eye pupil coordinates, left mouth corner coordinates and right mouth corner coordinates in the face image, calculating the average value of the 4 coordinates to obtain the center position of the face,
determining the position of the effective rectangular area according to the center position of the face and the range defined by the effective rectangular area, with the aim of increasing the number of image pixels defined by the effective rectangular area,
image data within the effective rectangular area is extracted.
5. The method of claim 4, wherein determining the width of an effective rectangular area based on the inter-pupil distance comprises taking the product of the inter-pupil distance and the first coefficient as the width of the effective rectangular area,
the determining the height of the effective rectangular area according to the width comprises taking the product of the second coefficient and the width of the effective rectangular area as the height of the effective rectangular area,
wherein the first coefficient is greater than 1 and the second coefficient is determined from the height spanning from below the eyes to part of or the entire chin;
the determining the position of the effective rectangular area according to the center position of the face and the range defined by the effective rectangular area, with the aim of increasing the number of image pixels defined by the effective rectangular area, includes,
the first position is determined according to the fact that the height from the ordinate of the center of the face to the chin occupies the effective rectangular area and is larger than a first threshold value, the second position is determined according to the fact that the center of the face is on the central line of the width direction of the effective rectangular area or is deviated from the central line and smaller than a second threshold value, the image pixels defined by the effective rectangular area comprise cheek areas from the lower part of eyes to part of or the whole chin, and the position of the effective rectangular area is determined.
6. The detection method according to claim 1, wherein the extracting the image data in the effective area includes,
extracting facial image contour and eye lower eyelid image contour,
a first curve segment intersecting the left face image contour at a first intersection point and intersecting the right face image contour at a second intersection point is formed below the lower eyelid image contour,
forming a closed curve by a first facial image contour including a mandible and the first curve section between the first intersection point and the second intersection point, wherein a closed area formed by the closed curve is taken as an effective area;
image data within the effective area is extracted.
7. The detection method of claim 1, wherein convolving the pixel values of the pixels in the active area image data with a convolution kernel comprises,
establishing a mask area for setting the pixel value of more than one continuous pixel in the area to 0; the mask region includes a nose and mouth region,
and carrying out convolution calculation on pixel values of pixel points except for the mask area and convolution kernels based on the extracted effective area image to obtain convolution values of each convolution calculation, wherein the convolution kernels are 3 multiplied by 3 matrixes.
8. The detecting method according to claim 7, wherein the mask area is a rectangular mask area having a distance between a nose tip and a valley of a lower edge of the mouth as a height of the mask area and a distance between a left mouth corner and a right mouth corner as a width of the mask area,
or,
the mask area is a closed irregular polygon formed by connecting the left nasal wing, the right mouth corner, the edge valley of the lower edge of the mouth, and the left mouth corner in sequence,
wherein, the edge valley is located at the lowest position of the lower edge of the mouth part.
9. The detection method of claim 8, wherein extracting image data within the active area includes,
removing areas at least comprising eyes, nose and mouth in the face image to obtain a residual face image, taking the residual face image as an effective area, and extracting image data in the effective area;
the step of carrying out convolution calculation on the pixel values of the pixels in the effective area image data and the convolution kernel comprises the step of carrying out convolution calculation on the pixel values of the pixel points and the convolution kernel of the extracted pixel points in the effective area image to obtain convolution values of each convolution calculation.
10. The detection method of claim 9, wherein the removing of the region of the facial image that includes at least eyes, nose, and mouth comprises,
removing a transverse strip-shaped area penetrating through the eyes and a longitudinal strip-shaped area perpendicular to the transverse strip-shaped area and covering the nose and the mouth,
wherein,
the width of the transverse strip-shaped area is at least the distance between the left outer eye corner and the right outer eye corner, and the height of the transverse strip-shaped area is at least the average value of the longitudinal distances between the upper eyelid and the lower eyelid in two eyes;
the longitudinal strip-shaped region is a trapezoidal region formed by the nose and mouth outer envelopes, or,
the width of the longitudinal strip-shaped area is the distance between the left mouth corner and the right mouth corner, the height of the longitudinal strip-shaped area is the distance between the average value of the longitudinal coordinates of the two lower eyelids and the longitudinal coordinate of the edge valley of the lower edge of the mouth, or the height of the longitudinal strip-shaped area is the distance between the average value of the longitudinal coordinates of the two pupils and the longitudinal coordinate of the edge valley of the lower edge of the mouth; the valley is located at the lowest position of the lower edge of the mouth.
11. A device for detecting image noise of a biological feature is characterized by comprising a memory and a processor, wherein,
the memory is stored with an application program,
a processor executes the application program to implement the biometric feature image noise detection steps of any one of claims 1 to 10.
12. A computer-readable storage medium, wherein a computer program is stored in the storage medium, which, when executed by a processor, carries out the step of detecting image noise of a biological feature according to any one of claims 1 to 10.
CN201911197942.2A 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part Active CN112883759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197942.2A CN112883759B (en) 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911197942.2A CN112883759B (en) 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part

Publications (2)

Publication Number Publication Date
CN112883759A CN112883759A (en) 2021-06-01
CN112883759B (en) 2023-09-26

Family

ID=76039606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197942.2A Active CN112883759B (en) 2019-11-29 2019-11-29 Method for detecting image noise of biological feature part

Country Status (1)

Country Link
CN (1) CN112883759B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933527A (en) * 1995-06-22 1999-08-03 Seiko Epson Corporation Facial image processing method and apparatus
CN101305913A (en) * 2008-07-11 2008-11-19 华南理工大学 Face beauty assessment method based on video
CN104794693A (en) * 2015-04-17 2015-07-22 浙江大学 Human image optimization method capable of automatically detecting mask in human face key areas
CN106778676A (en) * 2016-12-31 2017-05-31 中南大学 A kind of notice appraisal procedure based on recognition of face and image procossing
CN107220623A (en) * 2017-05-27 2017-09-29 湖南德康慧眼控制技术股份有限公司 A kind of face identification method and system
CN107346408A (en) * 2016-05-05 2017-11-14 鸿富锦精密电子(天津)有限公司 Age recognition methods based on face feature
CN107734264A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN107734267A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN107784678A (en) * 2017-11-08 2018-03-09 北京奇虎科技有限公司 Generation method, device and the terminal of cartoon human face image
CN110503608A (en) * 2019-07-13 2019-11-26 贵州大学 The image de-noising method of convolutional neural networks based on multi-angle of view

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7643659B2 (en) * 2005-12-31 2010-01-05 Arcsoft, Inc. Facial feature detection on mobile devices

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933527A (en) * 1995-06-22 1999-08-03 Seiko Epson Corporation Facial image processing method and apparatus
CN101305913A (en) * 2008-07-11 2008-11-19 华南理工大学 Face beauty assessment method based on video
CN104794693A (en) * 2015-04-17 2015-07-22 浙江大学 Human image optimization method capable of automatically detecting mask in human face key areas
CN107346408A (en) * 2016-05-05 2017-11-14 鸿富锦精密电子(天津)有限公司 Age recognition methods based on face feature
CN106778676A (en) * 2016-12-31 2017-05-31 中南大学 A kind of notice appraisal procedure based on recognition of face and image procossing
CN107220623A (en) * 2017-05-27 2017-09-29 湖南德康慧眼控制技术股份有限公司 A kind of face identification method and system
CN107734264A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN107734267A (en) * 2017-09-11 2018-02-23 广东欧珀移动通信有限公司 Image processing method and device
CN107784678A (en) * 2017-11-08 2018-03-09 北京奇虎科技有限公司 Generation method, device and the terminal of cartoon human face image
CN110503608A (en) * 2019-07-13 2019-11-26 贵州大学 The image de-noising method of convolutional neural networks based on multi-angle of view

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face recognition under illumination and noise conditions; Du Ping et al.; Journal of Shanghai Jiaotong University; Vol. 09; full text *

Also Published As

Publication number Publication date
CN112883759A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN114418957B (en) Global and local binary pattern image crack segmentation method based on robot vision
CN110569756B (en) Face recognition model construction method, recognition method, device and storage medium
US7643659B2 (en) Facial feature detection on mobile devices
EP3047426B1 (en) Feature extraction and matching and template update for biometric authentication
US20210150194A1 (en) Image feature extraction method for person re-identification
CN108491784B (en) Single person close-up real-time identification and automatic screenshot method for large live broadcast scene
KR20180109665A (en) A method and apparatus of image processing for object detection
CN111209845A (en) Face recognition method and device, computer equipment and storage medium
CN111368758B (en) Face ambiguity detection method, face ambiguity detection device, computer equipment and storage medium
CN110717372A (en) Identity verification method and device based on finger vein recognition
CN109241973B (en) Full-automatic soft segmentation method for characters under texture background
CN110634116B (en) Facial image scoring method and camera
WO2021139167A1 (en) Method and apparatus for facial recognition, electronic device, and computer readable storage medium
US11475707B2 (en) Method for extracting image of face detection and device thereof
Du et al. A new approach to iris pattern recognition
CN111382745A (en) Nail image segmentation method, device, equipment and storage medium
CN112214773A (en) Image processing method and device based on privacy protection and electronic equipment
CN114240925A (en) Method and system for detecting document image definition
JP4082203B2 (en) Open / close eye determination device
CN112883759B (en) Method for detecting image noise of biological feature part
CN116434071B (en) Determination method, determination device, equipment and medium for normalized building mask
CN101447026B (en) Pinkeye detecting device and detection method
CN104156720A (en) Face image denoising method on basis of noise evaluation model
CN115862121A (en) Face rapid matching method based on multimedia resource library
CN113159037B (en) Picture correction method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant