CN111754413A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium Download PDF

Info

Publication number
CN111754413A
Authority
CN
China
Prior art keywords
image
preset
filtering
determining
direction information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910251458.7A
Other languages
Chinese (zh)
Inventor
范玲珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910251458.7A priority Critical patent/CN111754413A/en
Publication of CN111754413A publication Critical patent/CN111754413A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an image processing method, an image processing device, image processing equipment and a storage medium, wherein the method comprises the following steps: after the gray level of the first image is corrected, filtering the second image after the gray level correction by adopting a self-adaptive directional filtering method; further, detail features in the adaptively filtered third image are detected according to a maximum inter-class variance method. Therefore, by adopting the self-adaptive directional filtering method, the direction information of the detail features in the second image can be considered in the filtering process, so that the noise in the second image can be removed on the basis of protecting the detail features in the second image, the detail features in the image can be accurately detected, and the requirement of a user on detection data can be met.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
In the image acquisition process, a lot of interferences (such as image textures, watermarks, brightness differences, shadows, impurities, oil stains, uneven illumination, image acquisition equipment shaking and the like) generally exist, so that a large amount of noise inevitably exists in the acquired image, and the difficulty in detecting detailed features (such as cracks or fingerprints) of the acquired image is increased.
In the related art, the collected image can be subjected to processing such as gray level correction, image denoising, edge detection and the like, so that the detail features in the collected image can be detected. However, because a uniform low-pass filtering algorithm is adopted in the existing image denoising processing process, part of detail features of an image can be removed in the image denoising process, so that the detail features in the image cannot be accurately detected, and the required detection data cannot be acquired.
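As a concrete numerical illustration of this loss (a hypothetical example added here, not part of the original disclosure), a uniform 3 × 3 box filter applied to a one-pixel-wide crack reduces its contrast to roughly one third:

import numpy as np

# A thin, one-pixel-wide vertical "crack" of gray value 100 on a flat background.
img = np.zeros((9, 9))
img[:, 4] = 100.0

# Uniform (direction-agnostic) 3x3 box low-pass filter applied by hand.
pad = np.pad(img, 1, mode="edge")
blurred = np.zeros_like(img)
for i in range(9):
    for j in range(9):
        blurred[i, j] = pad[i:i + 3, j:j + 3].mean()

print(img[4, 4], round(blurred[4, 4], 1))  # 100.0 33.3 -> the crack is strongly attenuated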
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an image processing apparatus and a storage medium, which solve the technical problem that required detection data cannot be acquired because detailed features in an image cannot be accurately detected in the related art.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing gray correction on the collected first image to obtain a second image;
filtering the second image according to a self-adaptive directional filtering method to obtain a third image;
detecting detail features in the third image according to a maximum inter-class variance method; wherein the detail features comprise cracks or fingerprints.
In a possible implementation manner, the filtering the second image according to an adaptive directional filtering method to obtain a third image includes:
determining direction information of detail features in the second image;
determining a target preset filtering template corresponding to the direction information according to the direction information of the detail characteristics;
and filtering the second image according to the target preset filtering template to obtain the third image.
In one possible implementation manner, the determining the direction information of the detail feature in the second image includes:
determining the average gray value of the pixels in the second image in each preset direction in a preset direction selection template;
determining at least one reference direction according to the average gray value in each preset direction;
according to the at least one reference direction, respectively determining direction information of each pixel point in the second image;
and determining the direction information of the detail features in the second image according to the direction information of each pixel point in the second image.
In a possible implementation manner, the determining, according to the direction information of the detail feature, a target preset filtering template corresponding to the direction information includes:
and rotating the initial preset filtering template according to the direction information of the detail characteristics to obtain the target preset filtering template.
In a possible implementation manner, the filtering the second image according to the target preset filtering template to obtain the third image includes:
and multiplying the gray value of any pixel point in the second image by the corresponding weight value in the target preset filtering template.
In one possible implementation, the detecting the detail feature in the third image according to the maximum inter-class variance method includes:
determining a target threshold;
and detecting detail features in the third image according to the target threshold.
In one possible implementation, the determining the target threshold includes:
determining a preset threshold value according to the gray level histogram of the third image;
dividing the pixel points in the third image into a first type pixel point set and a second type pixel point set according to the preset threshold value;
determining intra-class variance and inter-class variance between the first class pixel point set and the second class pixel point set;
and determining the target threshold according to the intra-class variance and the inter-class variance.
In a possible implementation manner, the performing gray scale correction on the acquired first image to obtain a second image includes:
and carrying out gray scale correction on the first image according to a self-adaptive homomorphic logarithmic gray scale correction method to obtain the second image.
In a possible implementation manner, the performing gray scale correction on the first image according to an adaptive homomorphic log gray scale correction method to obtain the second image includes:
and performing gray correction on each pixel point in the first image according to a self-adaptive homomorphic logarithmic gray correction method until the average gray value of each pixel point in the first image is greater than or equal to a first preset gray value and less than or equal to a second preset gray value to obtain the second image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the correction module is used for carrying out gray correction on the collected first image to obtain a second image;
the filtering module is used for filtering the second image according to a self-adaptive directional filtering method to obtain a third image;
the detection module is used for detecting the detail features in the third image according to the maximum inter-class variance method; wherein the detail features comprise cracks or fingerprints.
In one possible implementation, the filtering module includes:
a first determination unit, configured to determine direction information of a detail feature in the second image;
the second determining unit is used for determining a target preset filtering template corresponding to the direction information according to the direction information of the detail characteristics;
and the filtering unit is used for filtering the second image according to the target preset filtering template to obtain the third image.
In a possible implementation manner, the first determining unit is specifically configured to:
determining the average gray value of the pixels in the second image in each preset direction in a preset direction selection template;
determining at least one reference direction according to the average gray value in each preset direction;
according to the at least one reference direction, respectively determining direction information of each pixel point in the second image;
and determining the direction information of the detail features in the second image according to the direction information of each pixel point in the second image.
In a possible implementation manner, the second determining unit is specifically configured to:
and rotating the initial preset filtering template according to the direction information of the detail characteristics to obtain the target preset filtering template.
In a possible implementation manner, the filtering unit is specifically configured to:
and multiplying the gray value of any pixel point in the second image by the corresponding weight value in the target preset filtering template.
In one possible implementation manner, the detection module includes:
a third determination unit configured to determine a target threshold;
and the detection unit is used for detecting detail features in the third image according to the target threshold.
In a possible implementation manner, the third determining unit is specifically configured to:
determining a preset threshold value according to the gray level histogram of the third image;
dividing the pixel points in the third image into a first type pixel point set and a second type pixel point set according to the preset threshold value;
determining intra-class variance and inter-class variance between the first class pixel point set and the second class pixel point set;
and determining the target threshold according to the intra-class variance and the inter-class variance.
In one possible implementation manner, the correction module includes:
and the correcting unit is used for carrying out gray correction on the first image according to a self-adaptive homomorphic logarithmic gray correction method to obtain the second image.
In a possible implementation manner, the correcting unit is specifically configured to:
and performing gray correction on each pixel point in the first image according to a self-adaptive homomorphic logarithmic gray correction method until the average gray value of each pixel point in the first image is greater than or equal to a first preset gray value and less than or equal to a second preset gray value to obtain the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory;
wherein the memory is to store program instructions;
the processor is configured to call and execute the program instructions stored in the memory, and when the processor executes the program instructions stored in the memory, the electronic device is configured to execute the method according to any implementation manner of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform the method according to any implementation manner of the first aspect.
According to the image processing method, the image processing device, the image processing equipment and the storage medium, after the gray level of the first image is corrected, the second image after the gray level correction is filtered by adopting a self-adaptive directional filtering method; further, detail features in the adaptively filtered third image are detected according to a maximum inter-class variance method. Therefore, by adopting the self-adaptive directional filtering method, the direction information of the detail features in the second image can be considered in the filtering process, so that the noise in the second image can be removed on the basis of protecting the detail features in the second image, the detail features in the image can be accurately detected, and the requirement of a user on detection data can be met.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic diagram of a preset direction selection template provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an image processing method according to another embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an initial preset filtering template according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an image processing method according to another embodiment of the present application;
FIG. 6 is a schematic flow chart of the maximum inter-class variance method according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a method for adaptive homomorphic log gray scale correction according to an embodiment of the present application;
FIG. 8 is a first schematic diagram illustrating comparison between before and after image processing according to an embodiment of the present disclosure;
FIG. 9 is a second schematic diagram illustrating comparison between before and after image processing according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, an application scenario and a part of vocabulary related to the embodiments of the present application will be described.
The image processing method, apparatus, device and storage medium provided by the embodiment of the application can be applied to application scenarios for detecting image detail features (for example, pavement crack detection or fingerprint detection), and can accurately identify the detail features in the collected images, thereby solving problems in the related art such as the detail features in the images not being accurately detected, the required detection data not being acquired, road sections in need of repair being missed, and fingerprint detection errors.
Of course, the image processing method, the apparatus, the device and the storage medium provided in the embodiment of the present application may also be applied to other application scenarios, which are not limited in the embodiment of the present application.
In the embodiment of the present application, an execution subject for executing the image processing method may be an electronic device, or may be an image processing apparatus provided in the electronic device. Illustratively, the image processing device provided by the embodiment of the present application can be realized by software and/or hardware.
The electronic device involved in the embodiments of the present application may include, but is not limited to, any of the following: a mobile phone, a computer, a video monitoring device, or another device with an image processing function.
The detail features referred to in the embodiments of the present application may include, but are not limited to: cracks or fingerprints.
The direction information of the detail feature in any image referred to in the embodiments of the present application is used to indicate the direction of the detail feature in the image.
Any preset filtering template referred to in the embodiments of the present application includes at least one weight value (each weight value corresponds to one sub-region in the preset filtering template). Illustratively, any preset filtering template may be applicable to a certain preset direction. For example, the preset filter template 1 may be adapted to the horizontal direction, the preset filter template 2 may be adapted to the vertical direction, and the like.
In the preset direction selection template related in the embodiment of the application, the preset image area is divided into at least two preset directions, so that the direction information of the detail features in any image can be determined by referring to the preset direction selection template.
Fig. 1 is a schematic diagram of a preset direction selection template according to an embodiment of the present disclosure. As shown in fig. 1, in the preset direction selection template provided in the embodiment of the present application, a preset image area may be divided into 8 preset directions, for example, a preset direction 1-a preset direction 8.
The numbers "first" and "second" in the embodiments of the present application are used for distinguishing similar objects, and are not necessarily used for describing a specific order or sequence order, and should not constitute any limitation to the embodiments of the present application.
According to the image processing method, the image processing device, the image processing equipment and the storage medium, after the gray level of the first image is corrected, the second image after the gray level correction is filtered by adopting a self-adaptive directional filtering method; further, detail features in the adaptively filtered third image are detected according to a maximum inter-class variance method. Therefore, by adopting the self-adaptive directional filtering method, the direction information of the detail features in the second image can be considered in the filtering process, so that the noise in the second image can be removed on the basis of protecting the detail features in the second image, and the technical problem that the required detection data cannot be acquired because the detail features in the image cannot be accurately detected in the related technology is solved.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 2, the method of the embodiment of the present application may include:
step S201, carrying out gray correction on the collected first image to obtain a second image.
In the step, the collected first image is subjected to gray level correction to perform illumination compensation processing to obtain a second image, so that illumination of each detail feature in the second image is uniform, and subsequent detection of the detail features is facilitated.
In a possible implementation manner, the gray scale correction is performed on the first image according to an adaptive homomorphic logarithmic gray scale correction method to effectively compensate for a portion of the first image that is too dark or too bright (i.e., does not meet a standard illumination range), so as to obtain the second image.
Of course, other gray scale correction methods can be used for gray scale correction on the acquired first image, which is not limited in the embodiment of the present application.
And S202, filtering the second image according to a self-adaptive directional filtering method to obtain a third image.
Because the detail features (such as cracks in a pavement crack image) have obvious high-frequency characteristics, belong to obvious edge signals, and have no fixed morphological features, the common low-pass filtering method can cause the detail features to be greatly lost, and cannot be applied to the preprocessing of the crack image (or the fingerprint image and the like).
Considering that the detail features usually have a certain directivity, adaptive filtering based on directivity is a method for processing the image in the spatial domain, and can better retain the detail features. The core idea of the adaptive directional filtering method is to eliminate or weaken high-frequency components in an image and keep low-frequency components so as to reduce the degree of gray scale change of the image.
In this step, the second image is filtered according to an adaptive directional filtering method, so that noise in the second image can be removed on the basis of considering direction information of the detail features in the second image, and therefore, the detail features in the second image cannot be damaged.
And S203, detecting detail features in the third image according to the maximum inter-class variance method.
In this step, for example, a target threshold is determined according to a maximum inter-class variance method, so that the detail feature in the third image is detected according to the target threshold. For example, the detail features in the third image are detected by judging the size relationship between the gray value of each pixel point in the third image and the target threshold.
In the embodiment of the application, after the gray level of the first image is corrected, the second image after the gray level correction is filtered by adopting an adaptive directional filtering method; further, detail features in the adaptively filtered third image are detected according to a maximum inter-class variance method. Therefore, by adopting the self-adaptive directional filtering method, the direction information of the detail features in the second image can be considered in the filtering process, so that the noise in the second image can be removed on the basis of protecting the detail features in the second image, the detail features in the image can be accurately detected, and the requirement of a user on detection data can be met.
Fig. 3 is a schematic flowchart of an image processing method according to another embodiment of the present application. On the basis of the above embodiments, the present application embodiment describes an implementation manner of the above step S202. As shown in fig. 3, the method of the embodiment of the present application may include:
step S202A, determining direction information of the detail feature in the second image.
Considering that the detail features have obvious linear features and have certain directivity in a certain range, the directivity of the detail features can be represented by the direction of each pixel point in the region in the embodiment of the application.
Illustratively, the average gray value of the pixels in the second image in each preset direction in the preset direction selection template is determined.
For example, as shown in fig. 1, the preset direction selection template provided in the embodiment of the present application may include 8 preset directions (e.g., preset direction 1 to preset direction 8). Taking the central pixel point as (x0, y0), the average gray value Mj in each preset direction j is calculated from the neighborhood gray values F(x0+w, y0+h), where w ∈ [−4, 4] and h ∈ [−4, 4].
Further, at least one reference direction is determined according to the average gray value in each preset direction.
For example, according to formula (1) (reproduced only as an image in the original publication), the index j (j = 1, 2, 3, 4) for which Mj is equal to the maximum value Mjmax is determined, and the at least one reference direction comprises: the preset direction j and the preset direction j + 4.
Of course, at least one reference direction may also be determined according to other equivalent or modified formulas of the above formula (1), which is not limited in the embodiments of the present application.
Of course, at least one reference direction may also be determined in other ways according to the average gray scale value in each preset direction, which is not limited in the embodiment of the present application.
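A minimal sketch of this step is given below. Because Fig. 1 and formula (1) are available only as images, the assignment of neighborhood pixels to the eight preset directions (RAYS) and the selection rule (largest Mj among j = 1 to 4) are assumptions made for illustration; the function name reference_directions is likewise hypothetical.

import numpy as np

# Assumed ray offsets (dx, dy) for the 8 preset directions of a 9x9 neighborhood;
# directions j and j + 4 are opposite rays of the same line.
RAYS = {
    1: [(0,  s) for s in range(1, 5)],    # right
    2: [(-s,  s) for s in range(1, 5)],   # upper-right
    3: [(-s,  0) for s in range(1, 5)],   # up
    4: [(-s, -s) for s in range(1, 5)],   # upper-left
    5: [(0, -s) for s in range(1, 5)],    # left
    6: [(s, -s) for s in range(1, 5)],    # lower-left
    7: [(s,  0) for s in range(1, 5)],    # down
    8: [(s,  s) for s in range(1, 5)],    # lower-right
}

def reference_directions(image: np.ndarray, x0: int, y0: int) -> tuple:
    """Return a reference direction pair (j, j + 4) for the pixel (x0, y0).

    (x0, y0) must lie at least 4 pixels away from the image border.
    """
    means = {j: float(np.mean([image[x0 + dx, y0 + dy] for dx, dy in offsets]))
             for j, offsets in RAYS.items()}
    # Stand-in for formula (1): pick the j in {1, 2, 3, 4} whose average gray value Mj is largest.
    j_star = max(range(1, 5), key=lambda j: means[j])
    return j_star, j_star + 4

For a pixel well inside the image, e.g. reference_directions(second_image, 20, 20), the returned pair identifies the line along which the neighborhood gray values are, on average, largest.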
Further, according to the at least one reference direction, direction information of each pixel point in the second image is respectively determined.
For example, the direction information of each pixel point in the second image is respectively determined according to the at least one reference direction (e.g., a preset direction j and a preset direction j +4) by using formula (2).
Formula (2) is reproduced only as an image in the original publication. In formula (2), (xk, yk) represents the k-th pixel point in the second image, Dk represents the direction information of the k-th pixel point, F(xk, yk) represents the gray value of the k-th pixel point, and the value of k is greater than 0 and less than or equal to the total number of pixel points in the second image.
Of course, the direction information of each pixel point in the second image may also be determined according to other equivalent or deformation formulas of the above formula (2), which is not limited in this embodiment of the application.
Of course, according to the at least one reference direction, the direction information of each pixel point in the second image may also be determined in other manners, which is not limited in this embodiment of the application.
Further, according to the direction information of each pixel point in the second image, the direction information of the detail features in the second image is determined.
To ensure accuracy of the directional information of the detail features, the second image is illustratively divided into sub-regions of a preset size (e.g., 9 × 9 pixels); further, for each sub-region, determining a histogram of the sub-region according to direction information of pixel points in the sub-region; further, for each sub-region, the direction corresponding to the peak in the histogram of the sub-region is taken as the direction information of the sub-region, so as to form the direction information of the detail feature in the second image.
Of course, according to the direction information of each pixel point in the second image, the direction information of the detail feature in the second image may also be determined in other ways, which is not limited in this embodiment of the application.
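The sub-region (block) step described above can be sketched as follows, assuming the per-pixel direction labels produced by formula (2) are already available as an integer array; the array and function names are illustrative.

import numpy as np

def block_directions(pixel_dirs: np.ndarray, block: int = 9, n_dirs: int = 8) -> np.ndarray:
    """For each block, take the histogram peak of the per-pixel direction labels (1..n_dirs)."""
    rows, cols = pixel_dirs.shape
    out = np.zeros((rows // block, cols // block), dtype=np.int32)
    for bi in range(rows // block):
        for bj in range(cols // block):
            patch = pixel_dirs[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            hist = np.bincount(patch.ravel().astype(np.int64), minlength=n_dirs + 1)[1:]  # counts for labels 1..n_dirs
            out[bi, bj] = int(np.argmax(hist)) + 1                                        # direction at the histogram peak
    return out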
Step S202B, according to the direction information of the detail features, determining a target preset filtering template corresponding to the direction information.
In order to furthest retain the detail feature information, the embodiment of the application provides that the direction information of different detail features corresponds to different preset filtering templates, and any preset filtering template can be suitable for a certain preset direction.
In a possible implementation manner, an initial preset filtering template is rotated according to the direction information of the detail features to obtain the target preset filtering template.
In this implementation manner, assuming that an initial preset filtering template corresponding to a certain preset direction is preset, the initial preset filtering template is rotated according to the direction information of the detail features, so as to obtain a target preset filtering template corresponding to the direction information of the detail features. For example, fig. 4 is a schematic diagram of an initial preset filtering template provided in the embodiment of the present application, and assuming that the initial preset filtering template shown in fig. 4 corresponds to a horizontal direction and the direction information of the detail feature indicates a direction of 90 degrees, the initial preset filtering template shown in fig. 4 is rotated by 90 degrees to obtain a target preset filtering template corresponding to the direction information of the detail feature.
It should be noted that, in order to ensure that the total gray value of the pixels in the region is not changed, the sum of all the weight values in any preset filtering template is equal to zero.
In another possible implementation manner, according to the direction information of the detail feature, a target preset filtering template corresponding to the direction information of the detail feature is determined from a plurality of preset filtering templates.
In this implementation manner, assuming that a plurality of preset filtering templates are preset and different preset filtering templates correspond to different preset directions, the target preset filtering template corresponding to the direction information of the detail feature is determined from the plurality of preset filtering templates according to the direction information of the detail feature. For example, assuming that a preset filtering template 1 corresponding to a preset direction 1, a preset filtering template 2 corresponding to a preset direction 2, and direction information of the detail feature indicate the preset direction 2 are preset, a target preset filtering template (e.g., the preset filtering template 2) corresponding to the direction information of the detail feature is determined according to the direction information of the detail feature.
Of course, according to the direction information of the detail features, a target preset filtering template corresponding to the direction information may also be determined in other ways, which is not limited in the embodiment of the present application.
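The rotation-based variant can be sketched as follows. The weights of the initial preset filtering template of Fig. 4 are not reproduced in the text, so the 3 × 3 template below is a placeholder chosen only to satisfy the zero-sum requirement stated above, and only rotations by multiples of 90 degrees are shown.

import numpy as np

# Placeholder for the initial preset filtering template (horizontal direction);
# as required above, all weight values sum to zero.
INITIAL_TEMPLATE = np.array([[-1.0, -1.0, -1.0],
                             [ 2.0,  2.0,  2.0],
                             [-1.0, -1.0, -1.0]])

def target_template(direction_deg: float) -> np.ndarray:
    """Rotate the initial template so that it matches the detail-feature direction.

    Only multiples of 90 degrees are handled here; the remaining preset directions
    would use either a resampled rotation or their own precomputed templates.
    """
    quarter_turns = int(round(direction_deg / 90.0)) % 4
    template = np.rot90(INITIAL_TEMPLATE, k=quarter_turns)
    assert abs(template.sum()) < 1e-9  # the zero-sum property is preserved by rotation
    return template

# Example from the description: a 90-degree detail direction yields the vertical template.
print(target_template(90.0))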
Step S202C, filtering the second image according to the target preset filtering template to obtain the third image.
In this step, the pixel points in the second image are filtered through a target preset filtering template corresponding to the direction information of the detail features, so that the detail feature information in the second image can be retained to the greatest extent.
Illustratively, for any pixel point in the second image, the gray value of the pixel point is multiplied by the corresponding weight value in the target preset filtering template, so as to filter the second image and obtain the third image. For example, for a pixel point (xk, yk) in the second image, the gray value of the pixel point (xk, yk) is multiplied by the weight value of the sub-region at the coordinates (xk, yk) in the target preset filtering template.
Furthermore, the result of multiplying the gray value of each pixel point in the second image by the corresponding weight value in the target preset filtering template can be normalized. For example, the result of multiplying the gray value of each pixel point in the second image by the corresponding weight value in the target preset filtering template is normalized according to formula (3).
F'(x, y) = Round(F(x, y) × 255 / (Fmax(x, y) − Fmin(x, y)))    formula (3)
wherein F'(x, y) represents the normalized gray value corresponding to the pixel (x, y), F(x, y) represents the gray value corresponding to the pixel (x, y) in the second image, Fmax(x, y) represents the maximum gray value, Fmin(x, y) represents the minimum gray value, and Round() represents the round-half-to-even function.
Of course, other equivalent or modified formulas of the above formula (3) may also be used for normalization, which is not limited in the embodiment of the present application.
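A sketch of the normalization in formula (3) is given below, under the assumption that the lost parenthesisation reads F' = Round(F × 255 / (Fmax − Fmin)); the function name is illustrative.

import numpy as np

def normalize_formula_3(filtered: np.ndarray) -> np.ndarray:
    """Stretch the filtered response back to the 0-255 gray range (formula (3), as reconstructed)."""
    f_min, f_max = float(filtered.min()), float(filtered.max())
    if f_max == f_min:                                   # flat response: nothing to stretch
        return np.zeros_like(filtered, dtype=np.uint8)
    scaled = np.clip(filtered * 255.0 / (f_max - f_min), 0, 255)
    return np.rint(scaled).astype(np.uint8)              # np.rint rounds halves to even, matching Round()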
In the embodiment of the application, the direction information of the detail feature in the second image is determined; further, according to the direction information of the detail features, a target preset filtering template corresponding to the direction information is determined; and further, filtering the second image according to the target preset filtering template to obtain the third image. Therefore, the target preset filtering template corresponding to the direction information in the second image is used for filtering, so that the noise removing process in the second image is ensured not to remove the detail features in the second image, the detail features in the image can be detected accurately, and the requirement of a user on detection data can be met.
Fig. 5 is a flowchart illustrating an image processing method according to another embodiment of the present application. On the basis of the above embodiments, the present application embodiment describes an implementation manner of the above step S203. As shown in fig. 5, the method of the embodiment of the present application may include:
step S203A, determine the target threshold.
In the step, a target threshold is determined according to the maximum inter-class variance method, so that the detail features in the third image can be detected according to the target threshold subsequently.
Exemplarily, fig. 6 is a schematic flowchart of a maximum inter-class variance method provided in the embodiment of the present application, and as shown in fig. 6, a preset threshold is determined according to a gray histogram of the third image. Further, the pixels in the third image are divided into a first type pixel set and a second type pixel set according to the preset threshold, for example, the pixels in the third image with the gray scale value greater than the preset threshold are divided into the first type pixel set C0, and the pixels in the third image with the gray scale value not greater than the preset threshold are divided into the second type pixel set C1.
Further, intra-class variance and inter-class variance between the first class pixel point set C0 and the second class pixel point set C1 are determined.
For example, assume that the preset threshold is t (a positive number greater than 1), that the maximum gray level of the third image is L (a positive number greater than t), and that the pixels in the third image are divided according to the preset threshold t into the first-type pixel point set C0 = {0, 1, …, t−1, t} and the second-type pixel point set C1 = {t+1, t+2, …, L−1}. The intra-class variance is determined according to formula (4):
σw²(t) = ω0(t)·σ0²(t) + ω1(t)·σ1²(t)    formula (4)
and the inter-class variance is determined according to formula (5):
σb²(t) = ω0(t)·(μ0(t) − μ(t))² + ω1(t)·(μ1(t) − μ(t))²    formula (5)
wherein the variance of C0 is σ0²(t) = Σ_{i=0}^{t} (i − μ0(t))²·ni / (N·ω0(t)), the variance of C1 is σ1²(t) = Σ_{i=t+1}^{L−1} (i − μ1(t))²·ni / (N·ω1(t)), the probability of C0 is ω0(t) = Σ_{i=0}^{t} ni / N, the probability of C1 is ω1(t) = Σ_{i=t+1}^{L−1} ni / N, the mean value of C0 is μ0(t) = Σ_{i=0}^{t} i·ni / (N·ω0(t)), the mean value of C1 is μ1(t) = Σ_{i=t+1}^{L−1} i·ni / (N·ω1(t)), μ(t) represents the mean value (expectation) of the gray levels of the third image, ni represents the number of pixel points with gray level i in the third image, and N represents the total number of pixel points in the third image.
Of course, the intra-class variance may also be determined according to other equivalent or variant formulas of the above formula (4), which is not limited in the embodiment of the present application.
Of course, the inter-class variance may also be determined according to other equivalent or modified formulas of the above formula (5), which is not limited in the embodiment of the present application.
Further, the target threshold is determined according to the intra-class variance and the inter-class variance.
Illustratively, the target threshold is the value of t at which the quotient of the inter-class variance σb²(t) and the intra-class variance σw²(t), i.e., σb²(t)/σw²(t), takes its maximum value.
Of course, the target threshold may also be determined by other ways, which is not limited in the embodiment of the present application.
Step S203B, detecting detail features in the third image according to the target threshold.
Illustratively, the detail features in the third image are detected by judging the size relationship between the gray value of each pixel point in the third image and the target threshold. For example, the pixel points in the third image with the gray scale value larger than the target threshold are set as a first color, and the pixel points in the third image with the gray scale value not larger than the target threshold are set as a second color, so that the detail features in the third image can be accurately identified.
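Steps S203A and S203B can be sketched together as follows for an 8-bit gray image; the function names are illustrative. Maximizing the quotient σb²/σw², as described above, selects the same threshold as the classical criterion of maximizing σb² alone, because their sum (the total variance) does not depend on t.

import numpy as np

def target_threshold(image: np.ndarray, levels: int = 256) -> int:
    """Step S203A: threshold t maximizing inter-class / intra-class variance (image assumed uint8)."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(np.float64)
    prob = hist / hist.sum()
    gray = np.arange(levels, dtype=np.float64)
    mu_total = float((gray * prob).sum())

    best_t, best_ratio = 0, -np.inf
    for t in range(levels - 1):
        w0 = prob[:t + 1].sum()                       # probability of C0 = {0, ..., t}
        w1 = 1.0 - w0                                 # probability of C1 = {t+1, ..., L-1}
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (gray[:t + 1] * prob[:t + 1]).sum() / w0
        mu1 = (gray[t + 1:] * prob[t + 1:]).sum() / w1
        var0 = (((gray[:t + 1] - mu0) ** 2) * prob[:t + 1]).sum() / w0
        var1 = (((gray[t + 1:] - mu1) ** 2) * prob[t + 1:]).sum() / w1
        sigma_w = w0 * var0 + w1 * var1               # intra-class variance, formula (4)
        sigma_b = w0 * (mu0 - mu_total) ** 2 + w1 * (mu1 - mu_total) ** 2  # inter-class variance, formula (5)
        if sigma_w > 0.0 and sigma_b / sigma_w > best_ratio:
            best_ratio, best_t = sigma_b / sigma_w, t
    return best_t

def detect_detail_features(image: np.ndarray) -> np.ndarray:
    """Step S203B: pixels above the target threshold are marked as detail features."""
    return image > target_threshold(image)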
Fig. 7 is a schematic flow chart of a method for adaptive homomorphic log gray scale correction according to an embodiment of the present application. On the basis of the foregoing embodiments, an implementation manner of "performing gray scale correction on the first image according to the adaptive homomorphic logarithmic gray scale correction method" is described in the embodiments of the present application.
Illustratively, performing gray correction on each pixel point in the first image according to an adaptive homomorphic logarithmic gray correction method to effectively compensate for a part which is too dark or too bright (i.e., does not meet a standard illumination range) in the first image until a gray average value of each pixel point in the first image is greater than or equal to a first preset gray value (e.g., 91) and less than or equal to a second preset gray value (e.g., 193), so as to obtain the second image.
As shown in fig. 7, the adaptive homomorphic log gray scale correction method of this embodiment may include:
and S1, initializing the acquired first image.
And S2, judging whether the average gray value of each pixel point in the first image is larger than or equal to a first preset gray value.
Exemplarily, if the average gray value of the pixel points in the first image is less than the first preset gray value, S3 is performed; if the average gray value of the pixel points in the first image is greater than or equal to the first preset gray value, S5 is performed.
And S3, performing gray correction on each pixel point in the first image.
Illustratively, formula (6) may be used to perform gray correction on the pixel points.
Formula (6) and its auxiliary expression are reproduced only as images in the original publication. In formula (6), x represents the x coordinate of the pixel, y represents the y coordinate of the pixel, F(x, y) represents the gray value of the pixel (x, y) (i.e., the pixel with coordinates (x, y)), s(x, y) represents the incident component of F(x, y), h(x, y) represents the reflection component of F(x, y), D represents the neighborhood of the pixel (x, y), F'(x, y) represents the gray value of the pixel (x, y) after gray correction, d represents the control coefficient, Gmin represents the minimum gray value, Gmax represents the maximum gray value, and GM represents the average gray value.
For example, the value range of d may be greater than or equal to 0 and less than or equal to a preset control coefficient (e.g., 10000).
Of course, the gray scale correction may also be performed on the pixel points by using other equivalent or deformation formulas of the above formula (6), which is not limited in the embodiment of the present application.
And S4, adjusting the control coefficient d according to the preset step length d 0.
Illustratively, a preset step d0 (e.g., 50) is added on the basis of the control coefficient before adjustment to obtain the adjusted control coefficient, and the process returns to execute S2.
And S5, judging whether the average gray value of each pixel point in the first image is less than or equal to a second preset gray value.
Exemplarily, if the average gray value of the pixel points in the first image is greater than the second preset gray value, S3 is performed; if the average gray value of the pixel points in the first image is less than or equal to the second preset gray value (that is, the gray value meets the standard illumination range), the gray correction is stopped.
Of course, according to the adaptive homomorphic logarithmic gray scale correction method, other realizable manners may also be adopted to perform gray scale correction on each pixel point in the first image, which is not limited in the embodiment of the present application.
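Since formula (6) itself is not reproduced, the sketch below uses a generic logarithmic brightening as a stand-in for the per-pixel correction; it is meant only to illustrate the control loop of Fig. 7 (S1 to S5), in which the control coefficient d grows in steps of d0 = 50 until the average gray value falls inside [91, 193]. All names and the correction function are assumptions, not the application's formula.

import numpy as np

def log_correct(image: np.ndarray, d: float) -> np.ndarray:
    """Stand-in for formula (6): a logarithmic brightening whose strength grows with d."""
    gain = 1.0 + d / 10000.0
    return 255.0 * np.log1p(gain * image / 255.0) / np.log1p(gain)

def adaptive_gray_correction(first_image: np.ndarray,
                             g_low: float = 91.0,    # first preset gray value
                             g_high: float = 193.0,  # second preset gray value
                             d0: float = 50.0,       # preset step for the control coefficient
                             d_max: float = 10000.0) -> np.ndarray:
    corrected = first_image.astype(np.float64)        # S1: initialization
    d = 0.0
    while not (g_low <= corrected.mean() <= g_high):  # S2 / S5: check the standard illumination range
        if d > d_max:
            break                                     # stop once d exceeds the preset control coefficient
        corrected = log_correct(first_image.astype(np.float64), d)  # S3: gray correction with current d
        d += d0                                       # S4: adjust d by the preset step d0
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)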
In the embodiment of the application, the gray correction is performed on each pixel point in the first image according to a self-adaptive homomorphic logarithmic gray correction method so as to effectively compensate the part which does not accord with the standard illumination range in the first image until the gray average value of each pixel point in the first image accords with the standard illumination range, and the noise in the image can be accurately removed on the basis of protecting the detail characteristics in the image.
Fig. 8 is a first schematic diagram illustrating comparison before and after image processing according to an embodiment of the present application, and fig. 9 is a second schematic diagram illustrating comparison before and after image processing according to an embodiment of the present application. As shown in fig. 8 and 9, after the image processing method provided by the embodiment of the present application performs gray level correction, filtering, threshold segmentation and other processing on an image with clear detail features and an image with low contrast, the background noise of the image is significantly attenuated, and complete detail features can be detected. Therefore, the image processing method provided by the embodiment of the application can effectively retain the detail features while smoothing the image, and plays a role in selectively retaining the detail features.
Fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 10, an image processing apparatus 100 provided in an embodiment of the present application may include: a correction module 1001, a filtering module 1002 and a detection module 1003.
The correction module 1001 is configured to perform gray correction on the acquired first image to obtain a second image;
a filtering module 1002, configured to filter the second image according to a self-adaptive directional filtering method to obtain a third image;
a detecting module 1003, configured to detect a detail feature in the third image according to a maximum inter-class variance method; wherein the detail features comprise cracks or fingerprints.
In a possible implementation manner, the filtering module 1002 includes:
a first determination unit, configured to determine direction information of a detail feature in the second image;
the second determining unit is used for determining a target preset filtering template corresponding to the direction information according to the direction information of the detail characteristics;
and the filtering unit is used for filtering the second image according to the target preset filtering template to obtain the third image.
In a possible implementation manner, the first determining unit is specifically configured to:
determining the average gray value of the pixels in the second image in each preset direction in a preset direction selection template;
determining at least one reference direction according to the average gray value in each preset direction;
according to the at least one reference direction, respectively determining direction information of each pixel point in the second image;
and determining the direction information of the detail features in the second image according to the direction information of each pixel point in the second image.
In a possible implementation manner, the second determining unit is specifically configured to:
and rotating the initial preset filtering template according to the direction information of the detail characteristics to obtain the target preset filtering template.
In a possible implementation manner, the filtering unit is specifically configured to:
and multiplying the gray value of any pixel point in the second image by the corresponding weight value in the target preset filtering template.
In a possible implementation manner, the detecting module 1003 includes:
a third determination unit configured to determine a target threshold;
and the detection unit is used for detecting detail features in the third image according to the target threshold.
In a possible implementation manner, the third determining unit is specifically configured to:
determining a preset threshold value according to the gray level histogram of the third image;
dividing the pixel points in the third image into a first type pixel point set and a second type pixel point set according to the preset threshold value;
determining intra-class variance and inter-class variance between the first class pixel point set and the second class pixel point set;
and determining the target threshold according to the intra-class variance and the inter-class variance.
In a possible implementation manner, the correction module 1001 includes:
and the correcting unit is used for carrying out gray correction on the first image according to a self-adaptive homomorphic logarithmic gray correction method to obtain the second image.
In a possible implementation manner, the correcting unit is specifically configured to:
and performing gray correction on each pixel point in the first image according to a self-adaptive homomorphic logarithmic gray correction method until the average gray value of each pixel point in the first image is greater than or equal to a first preset gray value and less than or equal to a second preset gray value to obtain the second image.
The image processing apparatus provided in the embodiment of the present application may be configured to execute the technical solution of the embodiment of the image processing method of the present application, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 11, an electronic device 110 provided in an embodiment of the present application may include: a processor 1101 and a memory 1102;
wherein the memory 1102 is configured to store program instructions;
the processor 1101 is configured to call and execute the program instruction stored in the memory 1102, and when the processor 1101 executes the program instruction stored in the memory 1102, the electronic device 110 is configured to execute the technical solution of the above-mentioned embodiment of the image processing method in the present application, and the implementation principle and the technical effect are similar, and are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a computer, the instructions enable the computer to execute the technical solution of the embodiment of the image processing method in the present application, and the implementation principle and the technical effect are similar, and are not described herein again.
It should be understood by those of ordinary skill in the art that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic, and should not limit the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. An image processing method, comprising:
performing gray correction on the collected first image to obtain a second image;
filtering the second image according to a self-adaptive directional filtering method to obtain a third image;
detecting detail features in the third image according to a maximum inter-class variance method; wherein the detail features comprise cracks or fingerprints.
2. The method of claim 1, wherein filtering the second image according to an adaptive directional filtering method to obtain a third image comprises:
determining direction information of detail features in the second image;
determining a target preset filtering template corresponding to the direction information according to the direction information of the detail characteristics;
and filtering the second image according to the target preset filtering template to obtain the third image.
3. The method of claim 2, wherein determining the direction information of the detail feature in the second image comprises:
determining the average gray value of the pixels in the second image in each preset direction in a preset direction selection template;
determining at least one reference direction according to the average gray value in each preset direction;
according to the at least one reference direction, respectively determining direction information of each pixel point in the second image;
and determining the direction information of the detail features in the second image according to the direction information of each pixel point in the second image.
4. The method according to claim 2, wherein the determining, according to the direction information of the detail feature, a target preset filtering template corresponding to the direction information comprises:
and rotating the initial preset filtering template according to the direction information of the detail characteristics to obtain the target preset filtering template.
5. The method according to claim 2, wherein the filtering the second image according to the target preset filtering template to obtain the third image comprises:
and multiplying the gray value of any pixel point in the second image by the corresponding weight value in the target preset filtering template.
6. The method according to any one of claims 1-5, wherein the detecting detail features in the third image according to a maximum between class variance method comprises:
determining a target threshold;
and detecting detail features in the third image according to the target threshold.
7. The method of claim 6, wherein determining the target threshold comprises:
determining a preset threshold value according to the gray level histogram of the third image;
dividing the pixel points in the third image into a first type pixel point set and a second type pixel point set according to the preset threshold value;
determining intra-class variance and inter-class variance between the first class pixel point set and the second class pixel point set;
and determining the target threshold according to the intra-class variance and the inter-class variance.
8. The method according to any one of claims 1-5, wherein performing a gray scale correction on the acquired first image to obtain a second image comprises:
and carrying out gray scale correction on the first image according to a self-adaptive homomorphic logarithmic gray scale correction method to obtain the second image.
9. The method of claim 8, wherein the performing the gamma correction on the first image according to the adaptive homomorphic log gamma correction method to obtain the second image comprises:
and performing gray correction on each pixel point in the first image according to a self-adaptive homomorphic logarithmic gray correction method until the average gray value of each pixel point in the first image is greater than or equal to a first preset gray value and less than or equal to a second preset gray value to obtain the second image.
10. An image processing apparatus characterized by comprising:
the correction module is used for carrying out gray correction on the collected first image to obtain a second image;
the filtering module is used for filtering the second image according to a self-adaptive directional filtering method to obtain a third image;
the detection module is used for detecting the detail features in the third image according to the maximum inter-class variance method; wherein the detail features comprise cracks or fingerprints.
11. The apparatus of claim 10, wherein the filtering module comprises:
a first determination unit, configured to determine direction information of a detail feature in the second image;
the second determining unit is used for determining a target preset filtering template corresponding to the direction information according to the direction information of the detail characteristics;
and the filtering unit is used for filtering the second image according to the target preset filtering template to obtain the third image.
12. An electronic device, comprising: a processor and a memory;
wherein the memory is to store program instructions;
the processor to invoke and execute program instructions stored in the memory, the electronic device to perform the method of any of claims 1-9 when the processor executes the program instructions stored in the memory.
13. A computer-readable storage medium having stored therein instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-9.
CN201910251458.7A 2019-03-29 2019-03-29 Image processing method, device, equipment and storage medium Pending CN111754413A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910251458.7A CN111754413A (en) 2019-03-29 2019-03-29 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910251458.7A CN111754413A (en) 2019-03-29 2019-03-29 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111754413A (en) 2020-10-09

Family

ID=72671760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910251458.7A Pending CN111754413A (en) 2019-03-29 2019-03-29 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111754413A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784741A (en) * 2021-01-21 2021-05-11 宠爱王国(北京)网络科技有限公司 Pet identity recognition method and device and nonvolatile storage medium
CN115330767A (en) * 2022-10-12 2022-11-11 南通南辉电子材料股份有限公司 Method for identifying production abnormity of corrosion foil

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018126484A1 (en) * 2017-01-09 2018-07-12 中国科学院自动化研究所 Reconfigurable parallel image detail enhancing method and apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018126484A1 (en) * 2017-01-09 2018-07-12 中国科学院自动化研究所 Reconfigurable parallel image detail enhancing method and apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
祁亚萍: "An improved fingerprint image enhancement algorithm based on the orientation field", 科技信息 (Science and Technology Information), no. 21, 25 July 2011 (2011-07-25), pages 0 - 4 *
邢谦谦 et al.: "Spiculation detection and quantitative evaluation of pulmonary nodules in CT images", 计算机应用 (Journal of Computer Applications), vol. 34, no. 12, 10 December 2014 (2014-12-10), pages 0 - 3 *
郑伟华 et al.: "Adaptive homomorphic logarithmic illumination compensation", 中国图象图形学报 (Journal of Image and Graphics), vol. 16, no. 8, 16 August 2011 (2011-08-16), pages 0 - 5 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784741A (en) * 2021-01-21 2021-05-11 宠爱王国(北京)网络科技有限公司 Pet identity recognition method and device and nonvolatile storage medium
CN115330767A (en) * 2022-10-12 2022-11-11 南通南辉电子材料股份有限公司 Method for identifying production abnormity of corrosion foil

Similar Documents

Publication Publication Date Title
CN110766679B (en) Lens contamination detection method and device and terminal equipment
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN111080661B (en) Image-based straight line detection method and device and electronic equipment
CN110390643B (en) License plate enhancement method and device and electronic equipment
CN108038833B (en) Image self-adaptive sharpening method for gradient correlation detection and storage medium
JP2015225665A (en) Image noise removal method and image noise removal device
CN109584198B (en) Method and device for evaluating quality of face image and computer readable storage medium
CN109903294B (en) Image processing method and device, electronic equipment and readable storage medium
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN111210395B (en) Retinex underwater image enhancement method based on gray value mapping
CN113781406B (en) Scratch detection method and device for electronic component and computer equipment
CN109214996B (en) Image processing method and device
WO2014070273A1 (en) Recursive conditional means image denoising
CN116542982B (en) Departure judgment device defect detection method and device based on machine vision
CN112634301A (en) Equipment area image extraction method and device
CN114022383A (en) Moire pattern removing method and device for character image and electronic equipment
CN113436162A (en) Method and device for identifying weld defects on surface of hydraulic oil pipeline of underwater robot
CN112634288A (en) Equipment area image segmentation method and device
CN109102466A (en) Image smear determination method and device
CN113344801A (en) Image enhancement method, system, terminal and storage medium applied to gas metering facility environment
CN111754413A (en) Image processing method, device, equipment and storage medium
CN112926695A (en) Image recognition method and system based on template matching
CN114298985B (en) Defect detection method, device, equipment and storage medium
CN117853510A (en) Canny edge detection method based on bilateral filtering and self-adaptive threshold
CN111311610A (en) Image segmentation method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination