Disclosure of Invention
In order to solve the problems in the prior art, a main object of the embodiments of the present specification is to provide a method, an apparatus, and a storage medium for detecting Mura in a display screen, so as to address the technical problem in the prior art that Mura cannot be reliably detected.
The technical solutions of one or more embodiments of this specification are realized as follows:
a method of detecting Mura of a display screen, comprising: taking a collected image of the display screen as an image to be processed; sampling the image to be processed through sampling lines; acquiring a gray curve of the pixel points covered by each sampling line; extracting target features from the image to be processed according to the gray curve and taking the target features as training samples; training a neural network model with the labeled training samples to obtain a detection model; and detecting, through the detection model, an image to be detected containing the display screen, so as to detect whether the image to be detected contains Mura.
Preferably, extracting the target features in the image to be processed according to the gray curve comprises: traversing each pixel point covered by the sampling lines; acquiring the gray value of each pixel point in the gray curve; determining the positions of peaks and/or troughs in the gray curve according to the gray value of each pixel point; and extracting the target features according to the positions of the peaks and/or troughs.
Preferably, determining the positions of the peaks and/or troughs in the gray curve according to the gray value of each pixel point includes: setting the position of a pixel point as (x, y), and when the position of the pixel point meets a first condition, taking the position of the pixel point as the position of a peak in the gray curve, wherein the first condition is represented by a first inequality group:

I(x, y-w) < I(x, y-w+1) < ... < I(x, y-1) < I(x, y)
I(x, y) > I(x, y+1) > ... > I(x, y+w-1) > I(x, y+w)

and when the position of the pixel point meets a second condition, taking the position of the pixel point as the position of a trough in the gray curve, wherein the second condition is represented by a second inequality group obtained by reversing the inequality signs of the first:

I(x, y-w) > I(x, y-w+1) > ... > I(x, y-1) > I(x, y)
I(x, y) < I(x, y+1) < ... < I(x, y+w-1) < I(x, y+w)

In both conditions, w is the preset width over which the gray curve keeps monotonicity on the two sides of the peak or trough, and I(x, y) represents the gray value of the pixel point (x, y) in the gray curve.
Preferably, extracting the target feature according to the positions of the peaks and/or troughs comprises: obtaining the pole of the peak according to the position of the peak; determining a first position and a second position where the absolute value of the gray-scale change rate on the two sides of the pole of the peak is maximum, and a first gray-scale change rate corresponding to the first position and a second gray-scale change rate corresponding to the second position; and extracting the target feature according to the first and second gray-scale change rates; and/or obtaining the pole of the trough according to the position of the trough; determining a third position and a fourth position where the absolute value of the gray-scale change rate on the two sides of the pole of the trough is maximum, and a third gray-scale change rate corresponding to the third position and a fourth gray-scale change rate corresponding to the fourth position; and extracting the target feature according to the third and fourth gray-scale change rates.
Preferably, extracting the target feature according to the first and second gray-scale change rates comprises: taking the position where the gray-scale change rate is 1/N of the first gray-scale change rate as a first target position, and the position where the gray-scale change rate is 1/N of the second gray-scale change rate as a second target position, N being a preset value; taking the sum of the absolute value of the difference between the abscissa of the first target position and that of the pole and the absolute value of the difference between the abscissa of the second target position and that of the pole as a first feature; taking the sum of the absolute value of the difference between the gray values of the pole and the first target position and the absolute value of the difference between the gray values of the pole and the second target position as a second feature; and taking the first feature and the second feature as the target features.
Preferably, extracting the target feature according to the third and fourth gray-scale change rates comprises: taking the position where the gray-scale change rate is 1/N of the third gray-scale change rate as a third target position, and the position where the gray-scale change rate is 1/N of the fourth gray-scale change rate as a fourth target position, N being a preset value; taking the sum of the absolute value of the difference between the abscissa of the third target position and that of the pole and the absolute value of the difference between the abscissa of the fourth target position and that of the pole as a first feature; taking the sum of the absolute value of the difference between the gray values of the pole and the third target position and the absolute value of the difference between the gray values of the pole and the fourth target position as a second feature; and taking the first feature and the second feature as the target features.
Preferably, after the first feature and the second feature are taken as the target features, the method further includes: presetting a first label of the target feature, wherein the first label represents that the image to be processed comprises a Mura area; presetting a second label of the target feature, wherein the second label represents that the Mura area is not included in the image to be processed.
Preferably, after determining the positions of the peaks and/or troughs in the gray curve according to the gray value of each pixel point, the method further includes optimizing the positions of the peaks and/or troughs: determining peaks or troughs that are adjacent or coincident within a first pixel-point preset value; taking the average of the coordinates of the positions of those adjacent or coincident peaks or troughs as their de-duplication position; and traversing the peaks or troughs and the de-duplication positions, and deleting any peak, trough and/or de-duplication position whose count within a preset range is smaller than a threshold, wherein the preset range is a range centered on the peak, trough or de-duplication position, whose width is a second pixel-point preset value and whose length is the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines.
Preferably, sampling the image to be processed through the sampling lines includes: determining the interval between the equally-spaced sampling lines according to the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines and the number of equally-spaced sampling lines; and sampling the image to be processed at equal intervals through the equally-spaced sampling lines.
Preferably, the interval between the equally-spaced sampling lines is determined according to the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines and the number of equally-spaced sampling lines, with the expression:
d=row/(n+1)
wherein d is the interval between the equally-spaced sampling lines, row is the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines, and n is the number of equally-spaced sampling lines.
Preferably, before sampling the image to be processed through the sampling lines, the method further includes preprocessing the image to be processed, which includes: performing mean filtering on the image to be processed; and down-sampling the mean-filtered image.
An apparatus for detecting Mura of a display screen, comprising: at least one processor; a memory storing program instructions that, when executed by the at least one processor, cause the apparatus to perform any of the methods described above.
A computer-readable storage medium storing a program for detecting Mura of a display screen, which when executed by a processor, performs any of the methods described above.
Compared with the prior art, the technical solutions adopted by the embodiments of the present application can achieve at least the following beneficial effects:
In the above technical solutions, the collected image of the display screen is taken as the image to be processed, the image to be processed is sampled by sampling lines, and the gray curve of the pixel points covered by each sampling line is obtained. Target features are extracted from the image to be processed according to the gray curves and used as training samples, and a neural network model is trained with the labeled training samples to obtain a detection model. An image to be detected containing a display screen is then examined by the detection model to determine whether it contains Mura. The technical solution obtains the target features of the image to be processed from the information in the gray curves and, combined with a neural network, yields a method for automatically detecting Mura, effectively realizing the detection of Mura in display screens.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that the terms "first," "second," "third," and "fourth," etc. in the claims, description, and drawings of the present disclosure are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
As shown in figs. 1a and 1b, which are collected images of a display screen, taking a mobile-phone display screen as an example, the horizontal rectangular frame in fig. 1a and the vertical rectangular frame in fig. 1b mark regions where the overall brightness of the phone screen is not uniform; these regions are Mura. Because the position of Mura is not fixed and Mura differs little from the surrounding background, its contrast with the background is low and it is not easy to find. Therefore, Mura detection needs to be performed on the display screen to judge whether Mura exists, so as to ensure the quality of the display screen. Taking line Mura as an example, line Mura exists within the rectangular frames in figs. 1a and 1b (the figures are only illustrative, since Mura is inconspicuous by nature).
In the prior art there are various methods for detecting Mura. Because the contrast between Mura and the surrounding background is low and there is no obvious edge, conventional methods based on edge detection and threshold segmentation have difficulty detecting Mura stably. Considering the wide variety of Mura, a detection algorithm with universality is also difficult to achieve.
For example, one method divides the screen area into a number of non-overlapping pixel blocks, adaptively enhances the image, and locates the Mura region according to the gray distribution characteristics of each block. As another example, a fuzzy pattern-recognition technique that simulates human thinking and logic enables the detection system to recognize and classify Mura defects in a human-like way. As yet another example, an improvement on the original Spatial Standard Observer (SSO) method uses a variogram to determine the texture period of the image and then performs frequency-domain filtering according to that period to detect Mura defects. Another approach suppresses image noise with real-valued Gabor wavelet filtering, addresses uneven brightness with a homomorphic transformation, and refines the located Mura position with a Chan-Vese active contour model. These are all prior-art methods for locating or determining Mura, but none of them detects Mura well.
The technical scheme of the application provides a novel method for detecting Mura in a display screen, and the Mura can be better detected.
As shown in fig. 2, a schematic flow chart of a method for detecting Mura of a display screen according to the present disclosure is provided. The method mainly comprises the following steps:
Step S100: taking the collected image of the display screen as the image to be processed. Since the display screen is the object of detection, an image of the display screen must be collected before the technical solution is implemented. The specific collection method is not limited here: a camera may capture an image of the display screen showing a given picture in an environment with a certain light level, or another image-collection device with an image-capture function may capture the image while the display screen shows a certain picture. The collected image of the display screen is then taken as the image to be processed.
Step S200: sampling the image to be processed through sampling lines. After the image to be processed is obtained, it is processed, which includes sampling it through sampling lines. The image to be processed contains a large number of pixels in the horizontal or vertical direction, and Mura does not span only one or two pixels in those directions but covers a certain range; line Mura, for example, has a certain length and width. Therefore, to avoid processing every pixel of the image in the horizontal or vertical direction, the image may be sampled through sampling lines, so that Mura detection is performed on the image via the sampling lines; the sampling lines may run horizontally or vertically. The number of sampling lines can be set according to actual requirements and is not limited here. Of course, the number of sampling lines can also be increased as required: if the number of sampling lines equals the number of pixels of the image in the horizontal or vertical direction and the length of each sampling line equals the length of the parallel side of the image, the sampling lines cover every pixel of the image to be processed.
Step S300: obtaining the gray curve of the pixel points covered by each sampling line. After the image to be processed is sampled by the sampling lines, each sampling line corresponds to certain pixel points in the image, referred to as the pixel points covered by that sampling line, and each such pixel point has a corresponding gray value (the specific determination of gray values is not the focus of this solution and is not detailed here). The gray values of the pixel points covered by each sampling line are acquired and form a corresponding curve, giving the gray curve of the pixel points covered by that sampling line. This gray curve represents how the gray level changes along the pixel points covered by the sampling line, and Mura can be further detected from this variation.
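For illustration only, a minimal Python sketch of this step is given below. It assumes the image is already loaded as a 2-D grayscale NumPy array and the sampling lines are vertical; the function name is an assumption of the illustration, not part of the claimed method.

```python
import numpy as np

def gray_curves(image: np.ndarray, line_cols: list[int]) -> list[np.ndarray]:
    """For each vertical sampling line at column x, return the gray values
    of the pixel points that line covers (one value per image row)."""
    return [image[:, x].astype(np.float32) for x in line_cols]
```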
And step S400, extracting target characteristics in the image to be processed according to the gray curve and taking the target characteristics as training samples. After a gray curve of a pixel point covered by each sampling line is obtained, target features in the image to be processed are extracted according to the gray curve and serve as training samples of a training neural network, and the target features are used for further detecting whether Mura is included in the image to be processed.
Step S500: training a neural network model with the labeled training samples to obtain a detection model. After the training samples are obtained, they are used to train the neural network. The training samples carry labels, which may be two preset types: one type indicates that the image to be detected includes Mura, and the other indicates that it does not. Of course, the labels may also be only those indicating that Mura is included. The neural network is trained with the labeled training samples to obtain a trained neural network, called the detection model.
Step S600, after the detection model is obtained, the image to be detected including the display screen can be detected through the detection model so as to detect whether the image to be detected includes Mura.
The above technical solution provides a new method for detecting whether an image of a display screen includes Mura. The image to be processed is sampled with sampling lines, and the gray curve of the pixel points covered by each sampling line is obtained. Target features are extracted from the image according to the gray curves, the extracted target features are used as training samples, and a neural network model is trained with the labeled training samples to obtain a detection model. The image to be detected containing the display screen is examined by the detection model to determine whether it contains Mura, so that Mura in the display screen can be detected automatically and more reliably. The method is particularly suitable for line Mura, although it may also be applied to other types of Mura.
The method in the embodiment realizes automatic detection of Mura in the display screen, and as an optimization or supplement to the method, the technical scheme of the application also provides another embodiment.
Before step S200, i.e. before the image to be processed is sampled through the sampling lines, the image to be processed is also preprocessed. The preprocessing comprises:
Preprocessing step one: filtering the image to be processed.
Preprocessing step two: down-sampling the filtered image.
Preprocessing step one is executed as follows:
Fig. 3 is a schematic diagram of the gray curves of the pixel points covered by the sampling lines before the image to be processed is preprocessed. To illustrate the noise present in an unpreprocessed image, this embodiment uses an image to which sampling lines have already been added; the image is not actually sampled by sampling lines until step S200. During collection of the display-screen image, the intensity of the one-sided light source illuminating the whole screen is uneven, natural light in the darkroom is refracted and diffused, and the camera lens is subject to interference factors such as error and vibration, so noise is inevitably introduced into the collected image, causing the gray curves of the pixel points covered by the sampling lines to oscillate severely. To reduce the burrs caused by noise, or to reduce the influence of noise, the image to be processed is filtered, achieving image noise reduction and smoothing and improving the effectiveness and reliability of subsequent processing. This embodiment adopts mean filtering, but other filtering methods achieving the same effect may also be used. The specific mean-filtering procedure is not the focus of this embodiment; for example, for an image of 2600 x 4800 pixels, a filtering kernel of size 127 x 127 may be used. After mean filtering, the situation shown in fig. 3 no longer arises in step S300 and therefore cannot interfere with the operations of that step.
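A sketch of this filtering step using OpenCV is shown below; the 127 x 127 kernel follows the example above for a 2600 x 4800-pixel image and is not a prescribed value.

```python
import cv2

def mean_filter(image):
    # Box (mean) filter: each pixel becomes the mean of its 127x127 neighbourhood.
    return cv2.blur(image, (127, 127))
```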
As shown in fig. 4, compared with the gray curves before preprocessing step one, the gray curves of the pixel points covered by the sampling lines after preprocessing step one contain far less noise and far fewer burrs and no longer oscillate severely, which facilitates subsequent processing of the image. It should be noted that figs. 3 and 4 take the gray curves of 12 sampling lines as an example; the sampling lines run horizontally or vertically, the horizontal axis represents the position of the pixel points covered by the sampling lines (for convenience of presentation, the position coordinates in the figures are scaled down by a certain ratio), and the vertical axis represents the gray value of the corresponding pixel point. For example, if the sampling lines sample the image vertically, the gray curves in figs. 3 and 4 are curves of the gray values of the pixel points covered by the sampling lines in the vertical direction; likewise, horizontal sampling lines give gray curves along the horizontal direction.
After the image to be processed is filtered, the method further includes preprocessing step two: down-sampling the filtered image. The image to be processed is large, so processing it takes considerable time and is impractical; its size therefore needs to be reduced. Steps S100-S600 are then performed.
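A sketch of the down-sampling step follows; the scale factor is an assumption of this illustration, chosen only to show the size reduction.

```python
import cv2

def downsample(image, scale=0.1):
    # Shrink the filtered image to cut processing cost; INTER_AREA suits shrinking.
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w * scale), int(h * scale)),
                      interpolation=cv2.INTER_AREA)
```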
This specification also provides another embodiment, which further limits step S200; steps S100 and S300 to S600 may remain unchanged or may be combined with other embodiments. Step S200, sampling the image to be processed through the sampling lines, comprises the following steps:
determining the interval between the equally-spaced sampling lines according to the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines and the number of equally-spaced sampling lines, with the expression:
d=row/(n+1) (1)
wherein d is the interval between the equally-spaced sampling lines, row is the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines, and n is the number of equally-spaced sampling lines.
The image to be processed is then sampled at equal intervals through the equally-spaced sampling lines.
For example, suppose the equally-spaced sampling lines are vertical. The length of the image to be processed in the direction perpendicular to the sampling lines is then its horizontal length, i.e. row is the horizontal length of the image. Assuming row is 260 pixels and n is 12, formula (1) gives d = 20 pixels, so the interval between neighbouring sampling lines is 20 pixels.
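A short worked example of formula (1) with these numbers:

```python
# Worked example of formula (1): d = row / (n + 1).
row, n = 260, 12                 # length perpendicular to the lines, line count
d = row // (n + 1)               # d = 20 pixels between neighbouring lines
line_cols = [d * (i + 1) for i in range(n)]   # columns of the 12 vertical lines
```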
As shown in fig. 5, this specification also provides another embodiment. In this embodiment, step S400, extracting a target feature in an image to be processed according to a gray curve, includes:
step S401, traversing each pixel point covered by the sampling line. After the to-be-processed image is sampled by the sampling lines, pixel points of all the sampling lines are traversed, namely pixel points covered by all the sampling lines are traversed.
Step S402, obtaining the gray value of each pixel point in the gray curve. The gray value of each pixel point is obtained from the gray curve corresponding to the sampling line that covers it, so that the target features can be extracted according to the gray values of the pixel points covered by the sampling lines.
Step S403, determining the positions of peaks and/or troughs in the gray curve according to the gray value of each pixel point. Each pixel point has a corresponding position in the image to be processed, expressed as coordinates. Suppose the sampling lines are vertical and a coordinate system is established on the image to be processed, with the horizontal direction as the x-axis and the vertical direction as the y-axis. Taking one pixel point as an example, the method comprises the following steps:
Setting the position of the pixel point as (x, y): when the position of the pixel point meets the first condition, the position of the pixel point is taken as the position of a peak in the gray curve. The first condition is represented by the first inequality group:

I(x, y-w) < I(x, y-w+1) < ... < I(x, y-1) < I(x, y)
I(x, y) > I(x, y+1) > ... > I(x, y+w-1) > I(x, y+w)

where w is the preset width over which the gray curve keeps monotonicity on both sides of the peak (set to 20 pixel points in this embodiment), and I(x, y) represents the gray value of the pixel point (x, y) in the gray curve. In this embodiment the first inequality group is chosen as the determination condition because the sampling line is vertical: the pixel points covered by the sampling line share the same horizontal coordinate (the same x), and only the vertical coordinate (the y value) changes. If the gray value of the adjacent pixel point (x, y-1) in the negative y-direction is smaller than that of (x, y), and the gray value of the adjacent pixel point (x, y+1) in the positive y-direction is also smaller than that of (x, y), then the gray value of (x, y) is the largest of the three points. However, a peak is defined relative to a surrounding interval, so these three points alone cannot establish that (x, y) is a peak. The gray values over the intervals [y-w, y] and [y, y+w] are therefore used together as the determination condition: on the interval [y-w, y] the gray values of the pixel points (x, y-w), ..., (x, y-1), (x, y) increase monotonically, and on the interval [y, y+w] the gray values of (x, y), (x, y+1), ..., (x, y+w) decrease monotonically. This shows that the gray value of (x, y) is the maximum within the interval [y-w, y+w] and that the gray curve is monotonic over a certain interval on each side of (x, y), so (x, y) is taken as a peak of the gray curve. Of course, the width w of the interval can be adjusted as required.
When the position of the pixel point meets a second condition, the position of the pixel point is taken as the position of a trough in the gray curve, the second condition is expressed by a second equation set, and the expression of the second equation set is as follows:
the gray scale curves representing two sides of the peak keep a monotonicity preset width, and I (x, y) represents the gray scale value of the pixel point (x, y) in the gray scale curve, and is set to 20 pixel points in this embodiment. Similarly, the process of determining the trough is the same as the process of determining the peak according to the first condition, except that the signs of the corresponding inequalities in the second equation set are opposite.
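For illustration, a minimal sketch of both tests is given below, assuming g is a 1-D float array of the gray values covered by one sampling line and w = 20 following the embodiment; the function names are illustrative.

```python
import numpy as np

def is_peak(g: np.ndarray, y: int, w: int = 20) -> bool:
    """First condition: g increases monotonically over the w points before y
    and decreases monotonically over the w points after y."""
    if y - w < 0 or y + w >= len(g):
        return False
    return bool(np.all(np.diff(g[y - w:y + 1]) > 0) and
                np.all(np.diff(g[y:y + w + 1]) < 0))

def is_trough(g: np.ndarray, y: int, w: int = 20) -> bool:
    """Second condition: the same test with the inequality signs reversed."""
    return is_peak(-g, y, w)   # requires a float (signed) array

def find_extrema(g: np.ndarray, w: int = 20):
    peaks = [y for y in range(len(g)) if is_peak(g, y, w)]
    troughs = [y for y in range(len(g)) if is_trough(g, y, w)]
    return peaks, troughs
```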
Of course, the sampling lines may also be horizontal, sampling the image to be processed in the horizontal direction. Peaks and troughs may exist in the gray curve singly or simultaneously, and the target features are extracted according to the actual situation.
Step S404, extracting the target features according to the positions of the peaks and/or troughs. Since peaks and troughs may exist in the gray curve singly or simultaneously, the target features must be extracted according to the actual situation. Referring to figs. 6 and 7, this step includes:
step S4041, obtaining the pole (x) of the peak according to the position of the peak0,y0). The specific process of obtaining the corresponding pole according to the peak and/or the trough is not the focus of the present application, and is not described in detail herein.
Step S4042, after determining the peak in the gray curve, determining the first position and the second position where the absolute value of the gray change rate at the two sides of the pole of the peak is maximum, and changing the gray at one side of the pole of the peakThe position where the absolute value of the rate is the largest is taken as the first position, and the position where the absolute value of the gray change rate on the other side of the pole of the peak is the largest is taken as the second position. A first rate of change of the gray scale corresponding to the first location and a second rate of change of the gray scale corresponding to the second location are then determined. The specific process of determining the first and second positions is not central to this step and will not be explained in detail. The first position is noted as (x) in this step
0+a,y
0) And the second position is noted as (x)
0+b,y
0) The first gray scale change rate is recorded as
The second rate of change of the gray scale is recorded as
Step S4043, extracting the target feature according to the first and second gray-scale change rates, which includes:
taking the position where the gray-scale change rate is 1/N of the first gray-scale change rate as the first target position, and the position where the gray-scale change rate is 1/N of the second gray-scale change rate as the second target position, where N is a preset value; the value of N in this step is 3. In this step the change rates k1/N and k2/N are used, the first target position is denoted (x0+c, y0), and the second target position is denoted (x0+d, y0).
The sum of the absolute value of the difference between the abscissa of the first target position and that of the pole and the absolute value of the difference between the abscissa of the second target position and that of the pole is taken as the first feature, expressed as width = |c| + |d|. The sum of the absolute value of the difference between the gray values of the pole and the first target position and the absolute value of the difference between the gray values of the pole and the second target position is taken as the second feature, expressed as diffvalue = |I(x0, y0) - I(x0+c, y0)| + |I(x0, y0) - I(x0+d, y0)|.
The first feature and the second feature are taken as target features.
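A minimal Python sketch of the peak-side feature computation is given below. It assumes the gray curve g is a 1-D float array indexed by position along the sampling line, treats the pole index y0 as given, and uses np.gradient as a stand-in for the gray-scale change rate; all names are assumptions of this illustration. The trough case is symmetric.

```python
import numpy as np

def peak_features(g: np.ndarray, y0: int, n_div: int = 3):
    """Compute the width and diffvalue features around the pole at index y0."""
    rate = np.gradient(g)                        # gray-scale change rate
    p1 = int(np.argmax(np.abs(rate[:y0])))       # max |rate| left of the pole
    p2 = y0 + int(np.argmax(np.abs(rate[y0:])))  # max |rate| right of the pole
    k1, k2 = rate[p1], rate[p2]
    # target positions: points whose rate is closest to k1/N and k2/N per side
    t1 = int(np.argmin(np.abs(rate[:y0] - k1 / n_div)))
    t2 = y0 + int(np.argmin(np.abs(rate[y0:] - k2 / n_div)))
    width = abs(t1 - y0) + abs(t2 - y0)                  # first feature
    diffvalue = abs(g[y0] - g[t1]) + abs(g[y0] - g[t2])  # second feature
    return width, diffvalue
```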
Steps S4041 to S4043 are performed when a peak exists in the gray curve: the pole of the peak is determined from the peak, and the target features are extracted accordingly. Steps S4044 to S4046 below are performed when a trough exists in the gray curve: the pole of the trough is determined from the trough, and the target features are extracted accordingly. Either group of steps may be performed first.
And/or
Step S4044, obtaining the pole (x1, y1) of the trough according to the position of the trough.
Step S4045, determining the third position and the fourth position where the absolute value of the gray-scale change rate on the two sides of the pole of the trough is maximum, and determining the third gray-scale change rate corresponding to the third position and the fourth gray-scale change rate corresponding to the fourth position. The position where the absolute value of the gray-scale change rate is largest on one side of the pole of the trough is taken as the third position, and the position where it is largest on the other side is taken as the fourth position. The specific process of determining the third and fourth positions is not the focus of this step and is not explained in detail. In this step the third position is denoted (x1+a, y1), the fourth position is denoted (x1+b, y1), the third gray-scale change rate is denoted k3, and the fourth gray-scale change rate is denoted k4.
Step S4046, extracting the target feature according to the third and fourth gray-scale change rates, which includes:
taking the position where the gray-scale change rate is 1/N of the third gray-scale change rate as the third target position, and the position where the gray-scale change rate is 1/N of the fourth gray-scale change rate as the fourth target position, where N is a preset value; the value of N in this step is 3. In this step the change rates k3/N and k4/N are used, the third target position is denoted (x1+c, y1), and the fourth target position is denoted (x1+d, y1).
The sum of the absolute value of the difference between the abscissa of the third target position and that of the pole and the absolute value of the difference between the abscissa of the fourth target position and that of the pole is taken as the first feature, expressed as width = |c| + |d|. The sum of the absolute value of the difference between the gray values of the pole and the third target position and the absolute value of the difference between the gray values of the pole and the fourth target position is taken as the second feature, expressed as diffvalue = |I(x1, y1) - I(x1+c, y1)| + |I(x1, y1) - I(x1+d, y1)|.
The first feature and the second feature are taken as the target features. Through step S404, the target features are extracted according to the positions of the peaks and/or troughs, in preparation for training the neural network model and further detecting Mura.
It should be noted (refer to figs. 8, 9 and 10) that the gray-scale change rates corresponding to the first, second, third and fourth target positions are determined as follows:
In a gray-scale image, the brightest region corresponds to the peak of the gray curve, and the boundary of that region corresponds to the area where black and white meet; there the change in gray level is largest, i.e. the absolute value of the slope of the gray curve (the gray-scale change rate) is maximal. However, the gray curves of some images look the same as, or similar to, fig. 8. When the gray curve is as shown in fig. 8, the sampling region of the corresponding sampling line exhibits not the common black-white-black luminance change but only a black-white change. A monotonic segment therefore appears on the left side of the peak point in fig. 8, and the absolute value of its slope (gray-scale change rate) is the maximum. Obviously, this does not correspond to Mura. Therefore, in this embodiment the value of N may be set to 3, and the positions where the change rate is one third of the first or second gray-scale change rate (and likewise of the third or fourth) are taken as the target positions. Of course, N may also take the value 2 or other values.
If the positions where the absolute value of the gray-scale change rate is maximal on the two sides of the pole of the peak or trough, i.e. (x0+a, y0) and (x0+b, y0) or (x1+a, y1) and (x1+b, y1), were used directly, |a| + |b| would be taken as the first feature. However, in view of the case shown in fig. 8, a position where the absolute value of the gray-scale change rate is maximal may lie at the edge of the gray curve. Therefore width = |c| + |d| is taken as the first feature instead.
Referring to figs. 9a and 9b, the rectangular frame in fig. 9a is a region of white-black-white luminance variation, and the trough in the gray curve within the rectangular frame in fig. 9b reflects the gray-value variation of that white-black-white portion. Since a uniform coordinate system would make the gray curves corresponding to the pixel points covered by the sampling lines overlap heavily and hinder analysis, the gray curves in fig. 9b are displayed offset from one another; the gray values shown are not true values and are only for illustration.
Referring to fig. 10, a diagram of the first and second features corresponding to the pole of a trough in the gray curve is shown. In the figure, the double arrow A is the first feature determined from the pole of the trough, and the single arrow B is the second feature determined from the pole of the trough.
After step S400, that is, after the first feature and the second feature are taken as target features, the method further includes:
A first label of the target features is preset, indicating that the image to be processed includes a Mura region; it may be, for example, [1 0]. A second label of the target features is preset, indicating that the image to be processed does not include a Mura region; it may be, for example, [0 1]. The neural network model is trained with the labeled target features to obtain the detection model, which then performs Mura detection on the image to be detected.
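A small sketch of assembling the labeled training set follows; the one-hot labels follow the [1 0] / [0 1] convention above, and the helper name is illustrative.

```python
import numpy as np

MURA = np.array([1, 0], dtype=np.float32)      # first label: Mura present
NO_MURA = np.array([0, 1], dtype=np.float32)   # second label: no Mura

def make_sample(width, diffvalue, has_mura):
    features = np.array([width, diffvalue], dtype=np.float32)
    return features, (MURA if has_mura else NO_MURA)
```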
Steps S100, S200, S300, S500, and S600 may remain unchanged in this embodiment, or may be combined with other embodiments.
In another embodiment, the embodiment may be combined with step S403, and in step S403, after determining the positions of the peaks and/or the troughs in the gray scale curve according to the gray scale value of each pixel point, the method further includes a step of optimizing the positions of the peaks and/or the troughs. Referring to fig. 11 and 12, the step includes:
and determining adjacent or coincident wave crests or wave troughs in the preset value of the first pixel point. When adjacent or coincident peaks exist within a range, those peaks within the range are determined for further manipulation. The same operation is performed for the valleys. In this embodiment, the preset value of the first pixel point is set to 5.
After the adjacent or coincident peaks or troughs within the first pixel-point preset value are determined, the average of the coordinates of their positions is taken as their de-duplication position. That is, the average of the coordinates of the positions of the adjacent or coincident peaks within the first pixel-point preset value is used as the de-duplication position of those peaks, and the same holds for troughs. For example, if the first pixel-point preset value is 5 pixels and, centered on a certain peak, another peak exists within 5 pixels of it, that peak and the corresponding other peak are de-duplicated (refer to the circled portion in fig. 11). Since each peak has a position expressed in coordinate form, the coordinates of these peaks are averaged and the average is used as the coordinate of the de-duplication position: the x-axis coordinates are summed and averaged to give the x-axis coordinate of the de-duplication position, and the y-axis coordinates are summed and averaged to give its y-axis coordinate. The de-duplication position then replaces those peaks.
When the wave crest does not have an adjacent or coincident wave crest in the first pixel point preset value or the wave trough does not have an adjacent or coincident wave trough in the first pixel point preset value, the wave crest or the wave trough is directly processed in the following mode.
The peaks and their corresponding de-duplication positions, and/or the troughs and their corresponding de-duplication positions, are traversed, and any peak, trough and/or de-duplication position whose count within the preset range is smaller than the threshold is deleted (refer to the rectangular frame in fig. 11). The preset range is a range centered on a peak, trough or de-duplication position, whose width is the second pixel-point preset value and whose length is the length of the image to be processed in the direction perpendicular to the equally-spaced sampling lines. The second pixel-point preset value is the same as the preset width in step S403, the threshold is set to 5, and the equally-spaced sampling lines are vertical.
Taking peaks as an example: each peak and its corresponding de-duplication position is traversed; with each peak or de-duplication position as the center, it is determined whether the number of peaks and/or de-duplication positions within the preset range is smaller than the threshold. If so, the centered peak or de-duplication position is an "isolated point" and is deleted. The same operation is performed for troughs and their de-duplication positions. The rectangular frame in fig. 11 is the preset range described in this embodiment, and fig. 12 shows the optimized result, where upward triangles represent peaks and downward triangles represent troughs.
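A simplified sketch of this optimization is given below. It merges and filters on the x-coordinate only, assuming vertical sampling lines; the parameter values (merge distance 5, band width 20, threshold 5) follow the embodiment, and the function name and the assumption of a non-empty input are illustrative.

```python
import numpy as np

def optimise_positions(points, merge_dist=5, band=20, min_count=5):
    """points: non-empty (N, 2) array of (x, y) peak or trough coordinates."""
    points = points[np.argsort(points[:, 0])]
    merged, group = [], [points[0]]
    for p in points[1:]:
        if abs(p[0] - group[-1][0]) <= merge_dist:   # adjacent/coincident
            group.append(p)
        else:
            merged.append(np.mean(group, axis=0))    # de-duplication position
            group = [p]
    merged.append(np.mean(group, axis=0))
    merged = np.array(merged)
    # delete "isolated" points: fewer than min_count points within the band
    keep = np.array([np.sum(np.abs(merged[:, 0] - x) <= band) >= min_count
                     for x in merged[:, 0]])
    return merged[keep]
```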
For the neural network model trained in step S500, the BP neural network model is adopted in this embodiment, which has the characteristics of strong nonlinearity and good robustness, and the error feedback of the BP neural network can more accurately fit the mapping relationship. The structure of the BP neural network is generally divided into three layers: an input layer, a hidden layer, and an output layer. External signals are input by the input layer, and each input unit transmits the input signals to each unit of the hidden layer. The hidden layer is used as a processing unit in the neural network structure, and the number of layers is different according to different network requirements. Since the input data is 2-dimensional data, the number of nodes of the input layer is 2, and the output value is 2-dimensional data, the number of nodes of the output layer is 2. The number of nodes in the hidden layer can be selected according to formula (2), where the number of nodes is 3.
Equation (2) is as follows:
m = √(n + l) + α    (2)

wherein m is the number of hidden-layer nodes, n is the number of input-layer nodes, l is the number of output-layer nodes, and α is a constant between 1 and 10. The structure of the established neural network (detection model) is shown in fig. 13.
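The patent's own BP implementation is not shown; as an illustration of the 2-3-2 structure that formula (2) yields here, the following sketch uses scikit-learn's MLPClassifier as a stand-in, which is an implementation choice of this illustration and not the patent's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n_in, n_out, alpha = 2, 2, 1
m = int(np.sqrt(n_in + n_out)) + alpha   # formula (2): m = 2 + 1 = 3 hidden nodes

# Stand-in for the BP network: a 2-3-2 multilayer perceptron.
model = MLPClassifier(hidden_layer_sizes=(m,), max_iter=2000)
# model.fit(X_train, y_train)  # X_train: (num_samples, 2) width/diffvalue
#                              # y_train: class labels, e.g. 1 = Mura, 0 = no Mura
```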
Fig. 14 is the training-performance diagram of the BP network. It can be seen from fig. 14 that after 12 training iterations the mean square error of the BP network is 0.000030035; the recognition accuracy is high and meets actual engineering requirements.
The test samples are input into the detection model to obtain their detection results, which are shown in table 1.
Table 1: mura detection rate
The defects are graded into levels 1-5 according to the width of the detected Mura: a width below 40 pixels belongs to level-1 Mura, each further 20 pixels of width raises the level by one, and a width above 100 pixels belongs to level-5 Mura. The test results of the test samples are shown in fig. 15. Among the 36 test samples, the first 24 are samples with Mura and the last 12 are samples without Mura. The detection results show that all 24 defective samples are detected, while 3 of the 12 defect-free pictures are falsely detected as defective; pictures 31 and 32 are judged to contain suspected Mura regions. This shows that the detection model has high accuracy and high detection speed, achieving the goal of automatic, high-accuracy detection.
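A sketch of the grading rule just described; the level boundaries follow the text, and the function name is illustrative.

```python
def mura_level(width_px):
    """Grade a defect by Mura width: below 40 px is level 1, each further
    20 px of width adds one level, and 100 px or more is level 5."""
    if width_px < 40:
        return 1
    if width_px >= 100:
        return 5
    return 2 + int((width_px - 40) // 20)   # 40-59 -> 2, 60-79 -> 3, 80-99 -> 4
```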
As shown in fig. 16, the detection results obtained using the gray curves alone are compared with those obtained by combining the gray curves with the BP neural network. It can be seen from fig. 16 that, whether compared in terms of correct detection rate or of average detection time, the technique of the present application, which determines the target features from the gray curves corresponding to the pixel points covered by the sampling lines and performs Mura detection in combination with the BP neural network, outperforms Mura detection based on the gray curves alone. With this technical solution the correct detection rate is improved by 2.8 percentage points, and the average detection time is shortened from 6 seconds to 2 seconds, one third of the original. Moreover, the detection model requires no parameter tuning during detection.
The present specification also provides an apparatus for detecting Mura of a display screen, comprising: at least one processor; and a memory storing program instructions that, when executed by the at least one processor, cause the apparatus to perform any of the methods described above.
The present specification also provides a computer-readable storage medium storing a program for detecting Mura of a display screen, which when executed by a processor, performs any of the methods described above.
Although the present invention has been described with reference to specific preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of protection of one or more embodiments of the present specification shall be subject to the scope of protection of the claims.