CN111598869A - Method, equipment and storage medium for detecting Mura of display screen - Google Patents

Method, equipment and storage medium for detecting Mura of display screen

Info

Publication number
CN111598869A
CN111598869A
Authority
CN
China
Prior art keywords
gray
image
processed
pixel point
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010409618.9A
Other languages
Chinese (zh)
Other versions
CN111598869B (en)
Inventor
曾庆化
杜亚玲
李一能
姜涌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gaoshi Technology Suzhou Co ltd
Original Assignee
Huizhou Govion Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Govion Technology Co ltd filed Critical Huizhou Govion Technology Co ltd
Publication of CN111598869A publication Critical patent/CN111598869A/en
Application granted granted Critical
Publication of CN111598869B publication Critical patent/CN111598869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30121CRT, LCD or plasma display

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the disclosure relates to a method for detecting Mura of a display screen, which comprises the following steps: collecting an image of the display screen as an image to be processed; sampling the image to be processed through sampling lines; acquiring a gray curve of the pixel points covered by each sampling line; extracting target features from the image to be processed according to the gray curves and taking them as training samples; training a neural network model with the labelled training samples to obtain a detection model; and detecting an image to be detected containing a display screen through the detection model, so as to determine whether the image contains Mura. The method enables Mura in a display screen to be detected more reliably.

Description

Method, equipment and storage medium for detecting Mura of display screen
Technical Field
The embodiment of the specification relates to the field of image processing, in particular to a method, equipment and a storage medium for detecting Mura of a display screen.
Background
With the development of science and technology and growing everyday demand, display terminals for presenting information have become widespread. A display terminal shows content according to the user's needs; since it presents that content through its display screen, the importance of the display screen to the terminal is self-evident. Display terminals come in many forms, such as televisions, smartphones, automobile instruments and head-mounted VR devices.
The quality of a display screen is affected by many factors, one of the most common of which is called "Mura", a term transliterated from Japanese meaning "unevenness", indicating an uneven, inconsistent or blemished display. Mura on a display screen, also referred to as brightness non-uniformity, is a display defect that manifests as uneven brightness or color in part of the screen; it detracts from the viewing experience and may hinder the performance or functionality of the display.
Because the contrast between the area where Mura is located and the surrounding background is low, its edges are blurred and its shapes vary, Mura is not easy to find. Since prior-art methods do not detect Mura in a display screen reliably, a better detection method is needed.
Disclosure of Invention
In order to solve the problems in the prior art, a main object of the embodiments of the present specification is to provide a method, an apparatus and a storage medium for detecting Mura of a display screen, so as to solve the technical problem that existing techniques cannot detect Mura reliably.
The technical scheme of one or more embodiments of the specification is realized in the following manner:
a method of detecting Mura of a display screen, comprising: taking the collected image of the display screen as an image to be processed; sampling the image to be processed through a sampling line; acquiring a gray curve of a pixel point covered by each sampling line; extracting target features in the image to be processed according to the gray curve and taking the target features as training samples; training a neural network model by using the training sample with the label to obtain a detection model; and detecting the image to be detected containing the display screen through the detection model so as to detect whether the image to be detected contains Mura.
Preferably, wherein extracting the target feature in the image to be processed according to the gray curve comprises: traversing each pixel point covered by the sampling line; acquiring the gray value of each pixel point in the gray curve; determining the positions of wave crests and/or wave troughs in the gray curve according to the gray value of each pixel point; and extracting the target characteristics according to the positions of the wave crests and/or the wave troughs.
Preferably, determining the positions of peaks and/or troughs in the gray curve according to the gray value of each pixel point includes: setting the position of a pixel point as (x, y); when the position of the pixel point meets a first condition, taking the position as the position of a peak in the gray curve, the first condition being represented by a first equation set whose expression is:

I(x, y-1) < I(x, y), I(x, y+1) < I(x, y)
I(x, y-w/2) < ... < I(x, y-1) < I(x, y)
I(x, y) > I(x, y+1) > ... > I(x, y+w/2)

and when the position of the pixel point meets a second condition, taking the position as the position of a trough in the gray curve, the second condition being represented by a second equation set obtained by reversing every inequality of the first:

I(x, y-1) > I(x, y), I(x, y+1) > I(x, y)
I(x, y-w/2) > ... > I(x, y-1) > I(x, y)
I(x, y) < I(x, y+1) < ... < I(x, y+w/2)

where w is a preset width over which the gray curve keeps monotonicity on both sides of the peak or trough, and I(x, y) represents the gray value of the pixel point (x, y) in the gray curve.
Preferably, extracting the target feature according to the positions of the peaks and/or troughs comprises: obtaining the pole of the peak according to the position of the peak; determining the first position and the second position where the absolute value of the gray change rate on the two sides of the pole of the peak is maximum, together with the first gray change rate corresponding to the first position and the second gray change rate corresponding to the second position; and extracting the target feature according to the first and second gray change rates; and/or obtaining the pole of the trough according to the position of the trough; determining the third position and the fourth position where the absolute value of the gray change rate on the two sides of the pole of the trough is maximum, together with the third gray change rate corresponding to the third position and the fourth gray change rate corresponding to the fourth position; and extracting the target feature according to the third and fourth gray change rates.
Preferably, extracting the target feature according to the first and second gray change rates comprises: taking the position with a gray change rate of one Nth of the first gray change rate as a first target position, taking the position with a gray change rate of one Nth of the second gray change rate as a second target position, N being a preset numerical value; taking the sum of the absolute value of the difference between the first target position and the abscissa of the pole and the absolute value of the difference between the second target position and the abscissa of the pole as a first feature; taking the sum of the absolute value of the difference between the gray values of the pole and the first target position and the absolute value of the difference between the gray values of the pole and the second target position as a second feature; and taking the first feature and the second feature as the target feature.
Preferably, extracting the target feature according to the third and fourth gray change rates comprises: taking the position with a gray change rate of one Nth of the third gray change rate as a third target position, taking the position with a gray change rate of one Nth of the fourth gray change rate as a fourth target position, N being a preset numerical value; taking the sum of the absolute value of the difference between the third target position and the abscissa of the pole and the absolute value of the difference between the fourth target position and the abscissa of the pole as a first feature; taking the sum of the absolute value of the difference between the gray values of the pole and the third target position and the absolute value of the difference between the gray values of the pole and the fourth target position as a second feature; and taking the first feature and the second feature as the target feature.
Preferably, after the first feature and the second feature are taken as the target features, the method further includes: presetting a first label of the target feature, wherein the first label represents that the image to be processed comprises a Mura area; presetting a second label of the target feature, wherein the second label represents that the Mura area is not included in the image to be processed.
Preferably, after determining the positions of the peaks and/or troughs in the gray curve according to the gray value of each pixel point, optimizing the positions of the peaks and/or troughs includes: determining peaks or troughs that are adjacent or coincident within a first pixel preset value; taking the mean of the coordinates corresponding to the positions of such adjacent or coincident peaks or troughs as their deduplication position; and traversing the peaks or troughs and the deduplication positions, deleting any peak, trough and/or deduplication position whose count within a preset range is smaller than a threshold, wherein the preset range is a range centred on the peak, trough or deduplication position, whose width is a second pixel preset value and whose length is the length of the image to be processed in the direction perpendicular to the bisected sampling lines.
Preferably, sampling the image to be processed through sampling lines includes: determining the interval of the equally divided sampling lines according to the length of the image to be processed in the direction perpendicular to the sampling lines and the number of sampling lines; and dividing the image to be processed equally by the sampling lines.
Preferably, the interval of the equally divided sampling lines is determined according to the length of the image to be processed in the direction perpendicular to the sampling lines and the number of sampling lines, with the expression:
d=row/(n+1)
where d is the interval of the sampling lines, row is the length of the image to be processed in the direction perpendicular to the sampling lines, and n is the number of sampling lines.
Preferably, before sampling the image to be processed by the sampling line, the method further includes preprocessing the image to be processed, including: carrying out mean value filtering processing on the image to be processed; and performing down-sampling on the image after mean filtering.
An apparatus for detecting Mura of a display screen, comprising: at least one processor; a memory storing program instructions that, when executed by the at least one processor, cause the apparatus to perform any of the methods described above.
A computer-readable storage medium storing a program for detecting Mura of a display screen, which when executed by a processor, performs any of the methods described above.
Compared with the prior art, the embodiment of the application adopts at least one technical scheme which can at least achieve the following beneficial effects:
according to the technical scheme, the collected image of the display screen is used as the image to be processed, the image to be processed is sampled by using the sampling line, and then the gray curve of the pixel point covered by each sampling line is obtained. And extracting target features in the image to be processed according to the gray curve, taking the extracted target features as training samples, and training a neural network model by using the training samples with labels to obtain a detection model. And detecting the image to be detected containing the display screen through the detection model to obtain a detection result of whether the image to be detected contains Mura. According to the technical scheme, the target characteristics in the processed image are obtained by utilizing the information in the gray curve, and the method for automatically detecting the Mura is obtained by combining the neural network. The method effectively realizes the detection of Mura in the display screen.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. In the drawings, several embodiments of the disclosure are illustrated by way of example and not by way of limitation, and like or corresponding reference numerals indicate like or corresponding parts and in which:
fig. 1a and fig. 1b are schematic diagrams of collected screen sampling images provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for detecting Mura of a display screen according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a gray scale curve of a pixel point covered by a sampling line before an image to be processed is preprocessed according to an embodiment of the present disclosure;
fig. 4 is a gray scale curve of a pixel point covered by a sampling line in an image to be processed after the image to be processed is processed in the first preprocessing step according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a process of extracting a target feature in an image to be processed according to a gray scale curve according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a process of extracting a target feature according to positions of peaks and/or valleys according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart illustrating another process of extracting a target feature according to the positions of peaks and/or valleys according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a gray scale curve of a black-white luminance variation according to an embodiment of the present application;
FIG. 9a is a schematic diagram of a region with white-black-white luminance variation according to an embodiment of the present application;
FIG. 9b is a schematic diagram of a gray scale curve corresponding to the region of the white-black-white luminance variation in FIG. 9a according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a target feature in a gray scale curve according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a wave crest and/or a wave trough before being optimized according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an optimized peak and/or valley provided by an embodiment of the present application;
FIG. 13 is a diagram illustrating results of a detection model according to an embodiment of the present disclosure;
FIG. 14 is a schematic diagram of training performance of a detection model according to an embodiment of the present disclosure;
FIG. 15 shows the test results of a test sample according to an embodiment of the present application;
fig. 16 is a comparison graph of detection results provided in the embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that the terms "first," "second," "third," and "fourth," etc. in the claims, description, and drawings of the present disclosure are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
As shown in fig. 1a and 1b, which are collected images of a display screen (taking a mobile phone screen as an example), the horizontal rectangular frame in fig. 1a and the vertical rectangular frame in fig. 1b mark regions whose brightness is not uniform with the rest of the screen; these regions are Mura. Because the position of Mura is not fixed, Mura differs little from the surrounding background and its contrast with the background is low, it is not easy to find. Mura detection therefore needs to be performed on the display screen to judge whether Mura exists, so as to ensure the quality of the screen. Taking line Mura as an example, line Mura exists in the rectangular boxes in figs. 1a and 1b (the figures are only illustrative, since Mura is inherently inconspicuous).
In the prior art there are various methods for detecting Mura. For example, conventional methods based on edge detection and threshold segmentation have difficulty detecting Mura stably, because the contrast between Mura and the surrounding background is low and there is no obvious edge; and given the wide variety of Mura, a detection algorithm with universality is also difficult to achieve.
Other existing methods include the following. One method divides the screen area into a number of non-overlapping pixel blocks, adaptively enhances the image and locates the Mura region according to the gray distribution characteristics of each block. Another applies fuzzy pattern recognition, simulating human thinking and logic so that the detection system recognizes and classifies Mura defects the way a person would. Another improves on the original Spatial Standard Observer (SSO) method, using a variogram to determine the image texture period and then performing frequency-domain filtering according to that period to detect Mura defects. Yet another suppresses image noise with real-valued Gabor wavelet filtering, addresses uneven brightness through homomorphic transformation, and refines the located Mura position with a Chan-Vese active contour model. All of these are prior-art ways of locating or determining Mura, but none of them detects Mura well.
The technical scheme of the application provides a novel method for detecting Mura in a display screen, and the Mura can be better detected.
As shown in fig. 2, a schematic flow chart of a method for detecting Mura of a display screen according to the present disclosure is provided. The method mainly comprises the following steps:
and step S100, taking the acquired image of the display screen as an image to be processed. Because the display screen is detected, the image of the display screen needs to be collected before the technical scheme is implemented. The specific acquisition method is not limited herein, and may be to acquire an image of a display screen displaying a corresponding picture by a camera in an environment with a certain light brightness according to a requirement, or to acquire an image of a display screen by other image acquisition devices having an image acquisition function when the display screen displays a certain picture. And then taking the acquired image of the display screen as an image to be processed.
And step S200, sampling the image to be processed through the sampling line. And processing the image to be processed after obtaining the image to be processed, including sampling the image to be processed through a sampling line. The image to be processed includes a large number of pixels in the horizontal direction or the vertical direction, because Mura does not include only one or two pixels in the horizontal direction or the vertical direction, but has a certain range, for example, the line Mura has a certain length and width. Therefore, in order not to process each pixel point in the image to be processed in the horizontal direction or the vertical direction, the image to be processed may be sampled through a sampling line, so as to perform Mura detection on the image to be processed through the sampling line, and the sampling line may be in the horizontal direction or the vertical direction. The number of sampling lines can be set according to actual requirements, and is not limited herein. Certainly, the number of sampling lines can also be increased according to actual requirements, and if the number of sampling lines is consistent with the number of pixel points of the image to be processed in the horizontal direction or the vertical direction, and the length of the sampling lines is equal to the length of one side parallel to the image to be processed, the sampling lines can cover each pixel point of the image to be processed.
Step 300, obtaining a gray curve of the pixel point covered by each sampling line. After the image to be processed is sampled by the sampling lines, the sampling lines correspond to corresponding pixel points in the image to be processed, which are referred to as pixel points covered by the sampling lines, and the pixel points covered by each sampling line have corresponding gray values (the specific determination process of the gray values is not the key point of the scheme, and is not described in detail here). And acquiring a gray value corresponding to the pixel point covered by each sampling line, and forming a corresponding curve by the gray value corresponding to the pixel point covered by each sampling line to obtain the gray curve of the pixel point covered by each sampling line. The gray curve of the pixel point covered by each sampling line represents the gray change condition of the pixel point covered by the sampling line, and Mura can be further detected according to the change condition.
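For illustration, a minimal sketch of step S300 in Python, assuming the image is held as a single-channel NumPy array and the vertical sampling lines are given by their column indices (the function and parameter names here are illustrative, not from the patent):

```python
import numpy as np

def gray_curves(image: np.ndarray, columns: list[int]) -> list[np.ndarray]:
    # A vertical sampling line at x = col covers the pixels image[:, col];
    # their gray values, taken in order, form that line's gray curve.
    return [image[:, col].astype(np.float64) for col in columns]
```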
And step S400, extracting target characteristics in the image to be processed according to the gray curve and taking the target characteristics as training samples. After a gray curve of a pixel point covered by each sampling line is obtained, target features in the image to be processed are extracted according to the gray curve and serve as training samples of a training neural network, and the target features are used for further detecting whether Mura is included in the image to be processed.
And S500, training a neural network model by using the training samples with labels to obtain a detection model. After the training samples are obtained, they are used to train the neural network. The training samples carry labels, which may be of two preset types: one type indicating that the image to be detected includes Mura, the other indicating that it does not. Of course, the labels may also consist only of labels indicating that Mura is included. Training the neural network with the labelled samples yields the trained network, which is referred to as the detection model.
Step S600, after the detection model is obtained, the image to be detected including the display screen can be detected through the detection model so as to detect whether the image to be detected includes Mura.
According to the technical scheme, a new method is provided for detecting whether the image of the display screen includes Mura. The image to be processed is sampled with sampling lines, and the gray curve of the pixel points covered by each sampling line is obtained. Target features are extracted from the image according to the gray curves and used as training samples, and a neural network model is trained with the labelled training samples to obtain a detection model. The image to be detected containing the display screen is then detected by the detection model, yielding the result of whether it contains Mura, so Mura in the display screen can be detected better and automatically. The method is aimed primarily at line Mura, though it may also apply to other types of Mura.
The method in the embodiment realizes automatic detection of Mura in the display screen, and as an optimization or supplement to the method, the technical scheme of the application also provides another embodiment.
Before step S200, i.e. before sampling the image to be processed through the sampling lines, the image to be processed is also preprocessed. The preprocessing comprises:
Preprocessing step one: filtering the image to be processed.
Preprocessing step two: down-sampling the filtered image.
Preprocessing step one is performed as follows:
Fig. 3 is a schematic diagram of the gray curves of the pixel points covered by the sampling lines before the image to be processed is preprocessed. To illustrate the noise present in an unpreprocessed image, this embodiment uses an image to which sampling lines have already been added; the image is not actually sampled here, as sampling is performed in step S200. During acquisition of the display screen image, the one-sided light source illuminating the whole screen is uneven in intensity, natural light in the darkroom is refracted and diffused, and the camera lens introduces interference factors such as error and vibration, so noise is inevitably introduced into the acquired image, causing the gray curves of the pixel points covered by the sampling lines to oscillate violently. To reduce the burrs caused by noise, the method filters the image to be processed, achieving noise reduction and image smoothing and improving the effectiveness and reliability of subsequent processing. This embodiment adopts mean filtering, though other filtering methods achieving the same effect may be used. The specific mean-filtering procedure is not the focus of this embodiment; for example, for an image of 2600 x 4800 pixels, a filtering kernel of size 127 x 127 may be used. After the image is mean-filtered, the situation of fig. 3 no longer arises in step S300 and no longer impairs its operation.
As shown in fig. 4, compared with the gray curves before preprocessing step one, the gray curves of the pixel points covered by the sampling lines after preprocessing step one contain far less noise and far fewer burrs and no longer oscillate violently, which facilitates the subsequent processing of the image. It should be noted that figs. 3 and 4 take the gray curves corresponding to 12 sampling lines as an example; the sampling lines run horizontally or vertically, the horizontal axis represents the position of the pixel points covered by the sampling lines (coordinates in the image to be processed, scaled down by a fixed ratio for convenience of display), and the vertical axis represents the gray value of the corresponding pixel point. For example, if the sampling lines sample the image vertically, each curve in figs. 3 and 4 plots the gray values of the pixels covered by one line in the vertical direction; for horizontal sampling lines, each curve likewise corresponds to one line in the horizontal direction.
After the image to be processed is filtered, the method also comprises a second preprocessing step: the filtered image is down-sampled. The size of the image to be processed is large, a large amount of time is needed during processing, and actual operation is not facilitated, so that the size of the image to be processed needs to be reduced. Steps S100-S600 are then performed.
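The two preprocessing steps can be sketched as follows, assuming OpenCV is used; the 127 x 127 kernel follows the example above, while the down-sampling factor of 0.1 (which would reduce a 2600-pixel side to 260 pixels, consistent with the sampling example below) is an assumption:

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, ksize: int = 127, scale: float = 0.1) -> np.ndarray:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    # Preprocessing step one: mean filtering suppresses noise and burrs.
    smoothed = cv2.blur(gray, (ksize, ksize))
    # Preprocessing step two: down-sampling shrinks the image to cut processing time.
    return cv2.resize(smoothed, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
```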
This specification also provides another embodiment. This embodiment is further limited to step S200, and steps S100 and S300 to S600 may be unchanged or may be combined with other embodiments. Step S200, sampling the image to be processed through a sampling line, comprising the following steps:
determining the interval of the equally divided sampling lines according to the length of the image to be processed in the direction perpendicular to the sampling lines and the number of sampling lines, with the expression:
d=row/(n+1) (1)
where d is the interval of the sampling lines, row is the length of the image to be processed in the direction perpendicular to the sampling lines, and n is the number of sampling lines.
The image to be processed is then divided equally by the sampling lines.
For example, with vertical sampling lines, the length of the image to be processed in the direction perpendicular to the sampling lines is its horizontal length, i.e. row is the horizontal length of the image. Assuming row is 260 pixels and n is 12, formula (1) gives d = 20 pixels, so the interval between adjacent sampling lines is 20 pixels.
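A short sketch of the sampling-line placement implied by formula (1) (a hypothetical helper, shown for vertical sampling lines):

```python
def sampling_columns(row: int, n: int) -> list[int]:
    d = row // (n + 1)                      # formula (1): d = row/(n+1)
    return [d * (i + 1) for i in range(n)]  # adjacent lines sit d pixels apart

# row = 260 pixels, n = 12 lines gives d = 20: columns at x = 20, 40, ..., 240.
```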
As shown in fig. 5, this specification also provides another embodiment. In this embodiment, step S400, extracting a target feature in an image to be processed according to a gray curve, includes:
step S401, traversing each pixel point covered by the sampling line. After the to-be-processed image is sampled by the sampling lines, pixel points of all the sampling lines are traversed, namely pixel points covered by all the sampling lines are traversed.
Step S402, obtaining the gray value of each pixel point in the gray curve. The gray value of each pixel point corresponds to the gray value of the corresponding pixel point in the gray curve corresponding to the pixel point covered by each sampling line, and the gray value of the corresponding pixel point is obtained from the corresponding gray curve so as to extract the target characteristic according to the gray value of the pixel point covered by the sampling line.
Step S403, determining the positions of peaks and/or troughs in the gray curve according to the gray value of each pixel point. Each pixel point has a corresponding position in the image to be processed, expressed as coordinates. Assume the sampling lines are vertical and establish a coordinate system on the image to be processed, with its horizontal direction as the x-axis and its vertical direction as the y-axis. Taking one pixel point as an example, the method comprises the following steps:
setting the position of the pixel point as (x, y), when the position of the pixel point meets a first condition, taking the position of the pixel point as the position of a peak in a gray curve, wherein the first condition is represented by a first equation set, and the expression of the first equation set is as follows:
Figure BDA0002492708180000121
the gray scale curves representing two sides of the peak keep a monotonicity preset width, and I (x, y) represents the gray scale value of the pixel point (x, y) in the gray scale curve, and is set to 20 pixel points in this embodiment. In this embodiment, when determining the peak, the first equation group is selected as the determination condition because the sampling line is verticalAnd the coordinates in the horizontal direction of the pixel points covered by the sampling line are the same, namely the coordinates in the x axis are the same, and the coordinates in the vertical direction are changed, namely the coordinates in the y axis are changed. When the gray value of the pixel point (x, y) -1 adjacent to the pixel point (x, y) in the y-axis negative direction in the gray curve is smaller than that of the pixel point (x, y) in the gray curve, and the gray value of the pixel point (x, y +1) adjacent to the pixel point (x, y) in the y-axis positive direction in the gray curve is larger than that of the pixel point (x, y) in the gray curve, the gray value of the pixel point (x, y) in the gray curve is the largest among the three points. Because the wave crest is relative to the wave crest in a certain interval range, the gray value curves of the three points cannot indicate that the pixel point (x, y) is the wave crest, and then the points are classified
Figure BDA0002492708180000122
And
Figure BDA0002492708180000123
the corresponding gray value in the gray curve is simultaneously used as the determination condition of the peak, and the interval in the y axis is the interval
Figure BDA0002492708180000124
On the y-axis, pixel points
Figure BDA0002492708180000125
And (x, y) the gray values corresponding to the gray curve are sequentially increased (monotone interval), and the pixel points (x, y),
Figure BDA0002492708180000126
And
Figure BDA0002492708180000127
the corresponding gray values in the gray curve are sequentially reduced (monotonous interval), which shows that the change of the pixel point (x, y) on the y axis is
Figure BDA0002492708180000128
The gray value in the gray curve in the interval is the maximum, and the gray curves corresponding to the pixel points in a certain interval at both sides of the pixel point (x, y) are allIs monotonic, so pixel point (x, y) is taken as the peak in the gray curve. Of course, the variation interval of the y-axis can be adjusted and changed as required.
When the position of the pixel point meets a second condition, the position is taken as the position of a trough in the gray curve. The second condition is represented by a second equation set, whose expression reverses every inequality of the first:

I(x, y-1) > I(x, y), I(x, y+1) > I(x, y)
I(x, y-w/2) > ... > I(x, y-1) > I(x, y)
I(x, y) < I(x, y+1) < ... < I(x, y+w/2)

where w is the preset width over which the gray curve keeps monotonicity on both sides of the trough (again 20 pixels in this embodiment), and I(x, y) represents the gray value of the pixel point (x, y) in the gray curve. The trough is determined in the same way as the peak under the first condition, except that the signs of the corresponding inequalities are reversed.
Of course, the sampling lines may also run horizontally, sampling the image to be processed in the horizontal direction. Peaks and troughs in the gray curve may exist singly or simultaneously, and the target features are extracted according to the actual situation.
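A minimal sketch of the peak/trough determination of step S403, assuming the gray curve of one sampling line is a 1-D NumPy array and using the preset width w = 20 from this embodiment:

```python
import numpy as np

def find_extrema(curve: np.ndarray, w: int = 20):
    # A peak at index y must sit between a monotonic rise over [y - w/2, y]
    # and a monotonic fall over [y, y + w/2]; a trough reverses every inequality.
    half = w // 2
    peaks, troughs = [], []
    for y in range(half, len(curve) - half):
        left = np.diff(curve[y - half : y + 1])   # steps from I(y - w/2) up to I(y)
        right = np.diff(curve[y : y + half + 1])  # steps from I(y) on to I(y + w/2)
        if np.all(left > 0) and np.all(right < 0):
            peaks.append(y)
        elif np.all(left < 0) and np.all(right > 0):
            troughs.append(y)
    return peaks, troughs
```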
And S404, extracting target characteristics according to the positions of the wave crests and/or the wave troughs. Since the peak and the trough in the gray curve may exist singly or simultaneously, the target feature needs to be extracted according to the actual situation. Referring to fig. 6 and 7, the step includes:
step S4041, obtaining the pole (x) of the peak according to the position of the peak0,y0). The specific process of obtaining the corresponding pole according to the peak and/or the trough is not the focus of the present application, and is not described in detail herein.
Step S4042, after determining the peak in the gray curve, determining the first position and the second position where the absolute value of the gray change rate at the two sides of the pole of the peak is maximum, and changing the gray at one side of the pole of the peakThe position where the absolute value of the rate is the largest is taken as the first position, and the position where the absolute value of the gray change rate on the other side of the pole of the peak is the largest is taken as the second position. A first rate of change of the gray scale corresponding to the first location and a second rate of change of the gray scale corresponding to the second location are then determined. The specific process of determining the first and second positions is not central to this step and will not be explained in detail. The first position is noted as (x) in this step0+a,y0) And the second position is noted as (x)0+b,y0) The first gray scale change rate is recorded as
Figure BDA0002492708180000143
The second rate of change of the gray scale is recorded as
Figure BDA0002492708180000144
Step S4043, determining and extracting the target feature according to the first and second gray change rates, including:
Taking the position where the gray change rate is one Nth of the first gray change rate as the first target position, and the position where the gray change rate is one Nth of the second gray change rate as the second target position, N being a preset numerical value; in this step N is 3. The rate that is one Nth of the first gray change rate is written I'(x0+c, y0) = I'(x0+a, y0)/N, and the rate that is one Nth of the second gray change rate is written I'(x0+d, y0) = I'(x0+b, y0)/N; the first target position is denoted (x0+c, y0) and the second target position (x0+d, y0).
The sum of the absolute value of the difference between the first target position and the abscissa of the pole and the absolute value of the difference between the second target position and the abscissa of the pole is taken as the first feature, expressed as width = |c| + |d|. The sum of the absolute value of the difference between the gray values of the pole and the first target position and the absolute value of the difference between the gray values of the pole and the second target position is taken as the second feature, expressed as difvalue = |I(x0, y0) - I(x0+c, y0)| + |I(x0, y0) - I(x0+d, y0)|.
The first feature and the second feature are taken as the target features.
Step S4041 to step S4043 are performed when the peak exists in the gray scale curve, and the pole of the peak is determined according to the peak, so as to extract the target feature. In the following steps S4044 to S4046, when there is a trough in the gray scale curve, the pole of the trough is determined according to the trough, and the target feature is extracted, and steps S4041 to S4043 may be performed first, or steps S4044 to S4046 may be performed first.
And/or
Step S4044, obtaining the pole (x1, y1) of the trough according to the position of the trough.
Step S4045, determining the third position and the fourth position where the absolute value of the gray change rate on the two sides of the pole of the trough is maximum: the position with the maximum absolute gray change rate on one side of the pole is taken as the third position, and the position with the maximum absolute gray change rate on the other side is taken as the fourth position. A third gray change rate corresponding to the third position and a fourth gray change rate corresponding to the fourth position are then determined. The specific process of determining the third and fourth positions is not the focus of this step and is not explained in detail. In this step, the third position is noted as (x1+a, y1) and the fourth position as (x1+b, y1); the third gray change rate is written I'(x1+a, y1) and the fourth gray change rate I'(x1+b, y1).
Step S4046, determining and extracting the target feature according to the third and fourth gray change rates, including:
Taking the position where the gray change rate is one Nth of the third gray change rate as the third target position, and the position where the gray change rate is one Nth of the fourth gray change rate as the fourth target position, N being a preset numerical value; in this step N is 3. The rate that is one Nth of the third gray change rate is written I'(x1+c, y1) = I'(x1+a, y1)/N, and the rate that is one Nth of the fourth gray change rate is written I'(x1+d, y1) = I'(x1+b, y1)/N; the third target position is denoted (x1+c, y1) and the fourth target position (x1+d, y1).
The sum of the absolute value of the difference between the third target position and the abscissa of the pole and the absolute value of the difference between the fourth target position and the abscissa of the pole is taken as the first feature, expressed as width = |c| + |d|. The sum of the absolute value of the difference between the gray values of the pole and the third target position and the absolute value of the difference between the gray values of the pole and the fourth target position is taken as the second feature, expressed as difvalue = |I(x1, y1) - I(x1+c, y1)| + |I(x1, y1) - I(x1+d, y1)|.
The first feature and the second feature are taken as the target features. Through step S404, the target features are extracted according to the positions of the peaks and/or troughs, preparing for training the neural network model and further detecting Mura.
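The feature computation of steps S4042-S4046 can be sketched as below; approximating the gray change rate by the discrete gradient and searching for the one-Nth positions by nearest match are assumptions of this sketch, not prescriptions of the patent:

```python
import numpy as np

def extract_features(curve: np.ndarray, pole: int, n_div: int = 3):
    rate = np.gradient(curve.astype(np.float64))  # gray change rate along the curve
    left, right = rate[:pole], rate[pole + 1:]
    if left.size == 0 or right.size == 0:
        return None
    a = int(np.argmax(np.abs(left)))              # position of max |rate| on one side
    b = pole + 1 + int(np.argmax(np.abs(right)))  # position of max |rate| on the other
    # Target positions: where |rate| comes closest to one Nth of the extreme rates.
    c = int(np.argmin(np.abs(np.abs(left) - np.abs(rate[a]) / n_div)))
    d = pole + 1 + int(np.argmin(np.abs(np.abs(right) - np.abs(rate[b]) / n_div)))
    width = abs(c - pole) + abs(d - pole)                                 # first feature
    difvalue = abs(curve[pole] - curve[c]) + abs(curve[pole] - curve[d])  # second feature
    return width, difvalue
```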
It should be noted (refer to figs. 8, 9a, 9b and 10) that the gray change rates corresponding to the first, second, third and fourth target positions are chosen as follows:
In a gray-scale image, the brightest region corresponds to the peak of the gray curve, and the boundary of the region corresponds to where black and white meet; there the gray change is largest, i.e. the absolute value of the slope (gray change rate) of the gray curve is maximal. However, the gray curves of some images exhibit the situation shown in fig. 8, or one similar to it. When the gray curve is as in fig. 8, the region sampled by the corresponding sampling line does not show the common black-white-black luminance change but only a black-white change. A monotonic segment therefore lies to the left of the peak point in fig. 8, and the absolute value of its slope (gray change rate) is maximal there; this clearly does not match Mura. Therefore, in this embodiment the value of N may be set to 3, and the positions where the gray change rate equals I'(x0+a, y0)/3 or I'(x0+b, y0)/3 (and likewise for the trough) are taken as the target positions. Of course, the value of N may also be 2 or another value.
The positions with the maximum absolute gray change rate on the two sides of the pole of a peak or trough are (x0+a, y0) and (x0+b, y0), or (x1+a, y1) and (x1+b, y1), so |a| + |b| could be taken as the first feature. However, in view of the case shown in fig. 8, a position with the maximum absolute gray change rate may lie at the edge of the gray curve. Therefore width = |c| + |d| is taken as the first feature instead.
Referring to figs. 9a and 9b, the rectangular frame in fig. 9a marks a region of white-black-white luminance variation, and the trough inside the rectangular frame in fig. 9b shows the gray-value variation of that white-black-white portion. Because the gray curves of the pixel points covered by the different sampling lines would overlap heavily if drawn in uniform coordinates, making analysis difficult, the curves in fig. 9b are displayed with an offset; the gray values shown are not true values and are for indication only.
Referring to fig. 10, the first feature and the second feature corresponding to the pole of a trough in the gray curve are shown. The double arrow A marks the first feature determined from the pole of the trough, and the single arrow B marks the second feature.
After step S400, that is, after the first feature and the second feature are taken as target features, the method further includes:
and presetting a first label of the target feature, wherein the first label indicates that the Mura area is included in the image to be processed and can be [ 10 ] for example. And presetting a second label of the target feature, wherein the second label indicates that the Mura region is not included in the image to be processed and can be [ 01 ] for example. And training a neural network model through the target characteristics with the labels to obtain a detection model, and further detecting the Mura of the image to be detected.
Steps S100, S200, S300, S500, and S600 may remain unchanged in this embodiment, or may be combined with other embodiments.
In another embodiment, the embodiment may be combined with step S403, and in step S403, after determining the positions of the peaks and/or the troughs in the gray scale curve according to the gray scale value of each pixel point, the method further includes a step of optimizing the positions of the peaks and/or the troughs. Referring to fig. 11 and 12, the step includes:
and determining adjacent or coincident wave crests or wave troughs in the preset value of the first pixel point. When adjacent or coincident peaks exist within a range, those peaks within the range are determined for further manipulation. The same operation is performed for the valleys. In this embodiment, the preset value of the first pixel point is set to 5.
And after the adjacent or coincident wave crests or wave troughs in the preset value of the first pixel point are determined, taking the average value of the corresponding coordinates of the positions of the adjacent or coincident wave crests or wave troughs in the preset value of the first pixel point as the deduplication position of the adjacent or coincident wave crests or wave troughs in the preset value of the first pixel point. That is to say, the average value of the coordinates corresponding to the positions of the adjacent or coincident peaks within the first pixel preset value is used as the deduplication position of the adjacent or coincident peaks within the first pixel preset value. The same holds true for the valleys. For example, if the first pixel point has a preset value of 5 pixel points, and a certain peak is taken as a center, and another peak exists within a range of 5 pixel points away from the certain peak, the peak and the corresponding other peak are deduplicated (refer to a circle portion in fig. 11). Since the peaks have positions and are represented in an unknown form, the sum of the coordinates of the peaks is averaged, the averaged value is used as the coordinate of the deduplication position, the x-axis coordinates of the peaks are summed and then averaged to be the x-axis coordinate of the deduplication position, and the y-axis coordinates of the peaks are summed and then averaged to be the y-axis coordinate of the deduplication position. The de-duplication positions are then taken as the positions after processing these peaks.
When the wave crest does not have an adjacent or coincident wave crest in the first pixel point preset value or the wave trough does not have an adjacent or coincident wave trough in the first pixel point preset value, the wave crest or the wave trough is directly processed in the following mode.
Traversing the peaks and the deduplication positions corresponding to the peaks, and/or traversing the troughs and the deduplication positions corresponding to the troughs, and deleting the peaks, the troughs and/or the deduplication positions whose number is smaller than the threshold value within the preset range (refer to the block in fig. 11). The preset range is a range which takes a wave crest, a wave trough or a de-weight position as a center, the width is a second pixel point preset value, and the length is the length of the image to be processed in the direction vertical to the equal division sampling line. The preset value of the second pixel point is the same as that in step S403, the threshold is set to 5, and the bisected sampling line is a sampling line in the vertical direction.
Taking a peak as an example, traversing each peak and the deduplication position corresponding to the peak, taking each peak or the deduplication position corresponding to the peak as a center, determining whether the number of the peaks and/or the deduplication positions corresponding to the peaks in a preset range is smaller than a threshold, and if so, indicating that the peak or the deduplication position corresponding to the peak as the center is a 'solitary point', deleting the peak or the deduplication position corresponding to the peak as the center. Similarly, the same operation is performed for the trough or the corresponding deduplication position of the trough. The rectangular box in fig. 11 is the preset range indicated in this embodiment, and fig. 12 is the optimized result, and the positive triangle represents the peak, and the negative triangle represents the trough.
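A sketch of this optimization under the thresholds of this embodiment (5-pixel merge distance, count threshold 5); the simple one-pass grouping is an assumption of the sketch:

```python
import numpy as np

def optimize_extrema(points, merge_dist=5, band_width=5, min_count=5):
    # points: (x, y) coordinates of peaks (or troughs) from all sampling lines.
    if not points:
        return []
    pts = sorted(points)
    merged, group = [], [pts[0]]
    for p in pts[1:]:
        # Step 1: merge runs of adjacent or coincident points into the mean of
        # their coordinates (the deduplication position).
        if abs(p[0] - group[-1][0]) <= merge_dist and abs(p[1] - group[-1][1]) <= merge_dist:
            group.append(p)
        else:
            merged.append(tuple(np.mean(group, axis=0)))
            group = [p]
    merged.append(tuple(np.mean(group, axis=0)))
    # Step 2: delete "solitary points", i.e. points with fewer than min_count
    # neighbours inside a band of width band_width spanning the full image length.
    return [p for p in merged
            if sum(abs(q[0] - p[0]) <= band_width for q in merged) >= min_count]
```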
For the neural network model trained in step S500, the BP neural network model is adopted in this embodiment, which has the characteristics of strong nonlinearity and good robustness, and the error feedback of the BP neural network can more accurately fit the mapping relationship. The structure of the BP neural network is generally divided into three layers: an input layer, a hidden layer, and an output layer. External signals are input by the input layer, and each input unit transmits the input signals to each unit of the hidden layer. The hidden layer is used as a processing unit in the neural network structure, and the number of layers is different according to different network requirements. Since the input data is 2-dimensional data, the number of nodes of the input layer is 2, and the output value is 2-dimensional data, the number of nodes of the output layer is 2. The number of nodes in the hidden layer can be selected according to formula (2), where the number of nodes is 3.
Formula (2) is as follows:

m = sqrt(n + l) + α (2)

where m is the number of hidden-layer nodes, n is the number of input-layer nodes, l is the number of output-layer nodes, and α is a constant between 1 and 10. With n = 2 and l = 2, taking α = 1 gives m = 3. The structure of the established neural network (detection model) is shown in fig. 13.
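A sketch of the resulting 2-3-2 network; scikit-learn and the logistic activation are assumptions of this sketch, as the patent names no framework:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n_in, n_out, alpha = 2, 2, 1
m = int(np.sqrt(n_in + n_out) + alpha)  # formula (2): m = 3 hidden nodes

model = MLPClassifier(hidden_layer_sizes=(m,), activation="logistic",
                      solver="lbfgs", max_iter=1000)
# model.fit(features, labels.argmax(axis=1))  # train on the labelled samples
```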
Fig. 14 is a training performance diagram of the BP network, and it can be seen from fig. 14 that the mean square error of the BP network is 0.000030035 after 12 times of training, the recognition accuracy is high, and the actual engineering requirements are met.
Inputting the test sample into the detection model to obtain the detection result of the test sample, wherein the detection result of the detection model is shown in table 1.
Table 1: mura detection rate
The defects are graded from level 1 to level 5 according to the measured Mura width: a width below 40 pixels belongs to level 1, each subsequent level spans a further 20 pixels, and a width above 100 pixels belongs to level 5. The test results of the test samples are shown in fig. 15. Among the 36 test samples, the first 24 contain Mura and the last 12 do not. All 24 defective samples are detected, while 3 of the 12 defect-free images are falsely detected; of these, images 31 and 32 are judged to contain suspected Mura areas. Overall, the detection model achieves high accuracy and high detection speed, realizing automatic, high-accuracy inspection.
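The width-based grading just described reduces to a small lookup; the sketch below assumes the level boundaries are inclusive at 40 pixels and at each 20-pixel step, which the text leaves open:

```python
def mura_level(width_px: int) -> int:
    """Grade a Mura defect from 1 to 5 by its width in pixels."""
    if width_px < 40:
        return 1                         # below 40 px: level 1
    if width_px >= 100:
        return 5                         # 100 px and above: level 5
    return 2 + (width_px - 40) // 20     # levels 2-4 in 20 px steps

assert [mura_level(w) for w in (30, 45, 65, 85, 120)] == [1, 2, 3, 4, 5]
```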
Fig. 16 compares the results of detecting Mura in the image to be processed using the gray curve alone with those of detecting Mura using the gray curve combined with the BP neural network. As fig. 16 shows, in terms of both correct detection rate and average detection time, determining the target features from the gray curves of the pixel points covered by the sampling lines and then performing Mura detection with the BP neural network outperforms Mura detection based on the gray curve alone. With the present technical scheme, the correct detection rate is improved by 2.8 percentage points, and the average detection time is shortened from 6 seconds to 2 seconds, a reduction of two-thirds. Moreover, the detection model requires no parameter adjustment during detection.
The present specification also provides an apparatus for detecting Mura of a display screen, comprising: at least one processor; and a memory storing program instructions that, when executed by the at least one processor, cause the apparatus to perform any of the methods described above.
The present specification also provides a computer-readable storage medium storing a program for detecting Mura of a display screen which, when executed by a processor, performs any of the methods described above.
Although the present invention has been described with reference to specific preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of protection of one or more embodiments of the present specification shall be subject to the scope of protection of the claims.

Claims (13)

1. A method of detecting Mura of a display screen, comprising:
taking the collected image of the display screen as an image to be processed;
sampling the image to be processed through a sampling line;
acquiring a gray curve of a pixel point covered by each sampling line;
extracting target features in the image to be processed according to the gray curve and taking the target features as training samples;
training a neural network model by using the training sample with the label to obtain a detection model;
and detecting the image to be detected containing the display screen through the detection model so as to detect whether the image to be detected contains Mura.
2. The method of claim 1, wherein extracting target features in the image to be processed according to the gray curve comprises:
traversing each pixel point covered by the sampling line;
acquiring the gray value of each pixel point in the gray curve;
determining the positions of the peaks and/or the troughs in the gray curve according to the gray value of each pixel point;
and extracting the target features according to the positions of the peaks and/or the troughs.
3. The method of claim 2, wherein determining the positions of the peaks and/or the troughs in the gray curve according to the gray value of each pixel point comprises:
setting the position of a pixel point as (x, y), and when the position of the pixel point meets a first condition, taking the position of the pixel point as the position of a peak in the gray curve, wherein the first condition is represented by a first equation group, and the expression of the first equation group is as follows:
I(x-i, y) < I(x-i+1, y) and I(x+i-1, y) > I(x+i, y), for i = 1, 2, ..., k
when the position of the pixel point meets a second condition, the position of the pixel point is taken as the position of a trough in the gray curve, the second condition is expressed by a second equation set, and the expression of the second equation set is as follows:
I(x-i, y) > I(x-i+1, y) and I(x+i-1, y) < I(x+i, y), for i = 1, 2, ..., k
wherein the gray curve remains monotonic over a preset width of k pixel points on each side of the peak or the trough, and I(x, y) represents the gray value of the pixel point (x, y) in the gray curve.
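For illustration, a minimal Python sketch of the monotonicity test above; the preset width k and the handling of the curve boundary are assumptions, and reversing both comparisons gives the corresponding trough test:

```python
import numpy as np

def is_peak(curve: np.ndarray, x: int, k: int = 3) -> bool:
    """First equation group: the gray curve rises monotonically for k
    samples before x and falls monotonically for k samples after x."""
    if x - k < 0 or x + k >= len(curve):
        return False                 # too close to the boundary to test
    rises = all(curve[x - i - 1] < curve[x - i] for i in range(k))
    falls = all(curve[x + i] > curve[x + i + 1] for i in range(k))
    return rises and falls
```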
4. The method of claim 2, wherein extracting the target feature according to the positions of the peaks and/or the troughs comprises:
obtaining the pole of the peak according to the position of the peak;
determining a first position and a second position at which the absolute value of the gray change rate is maximal on the two sides of the pole of the peak, and a first gray change rate corresponding to the first position and a second gray change rate corresponding to the second position;
extracting the target feature according to the first gray change rate and the second gray change rate; and/or
obtaining the pole of the trough according to the position of the trough;
determining a third position and a fourth position at which the absolute value of the gray change rate is maximal on the two sides of the pole of the trough, and a third gray change rate corresponding to the third position and a fourth gray change rate corresponding to the fourth position;
and extracting the target feature according to the third gray change rate and the fourth gray change rate.
5. The method of claim 4, wherein extracting the target feature according to the first gray change rate and the second gray change rate comprises:
taking the position where the gray change rate is 1/N of the first gray change rate as a first target position, and the position where the gray change rate is 1/N of the second gray change rate as a second target position, N being a preset value;
taking the sum of the absolute value of the difference between the abscissas of the first target position and the pole and the absolute value of the difference between the abscissas of the second target position and the pole as a first feature; taking the sum of the absolute value of the difference between the gray values of the pole and the first target position and the absolute value of the difference between the gray values of the pole and the second target position as a second feature;
and taking the first feature and the second feature as the target feature.
6. The method of claim 4, wherein extracting the target feature according to the third gray change rate and the fourth gray change rate comprises:
taking the position where the gray change rate is 1/N of the third gray change rate as a third target position, and the position where the gray change rate is 1/N of the fourth gray change rate as a fourth target position, N being a preset value;
taking the sum of the absolute value of the difference between the abscissas of the third target position and the pole and the absolute value of the difference between the abscissas of the fourth target position and the pole as a first feature; taking the sum of the absolute value of the difference between the gray values of the pole and the third target position and the absolute value of the difference between the gray values of the pole and the fourth target position as a second feature;
and taking the first feature and the second feature as the target feature.
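Claims 5 and 6 compute the same two features around a peak or trough extremum; the following sketch approximates the gray change rate with a finite difference and assumes N = 2 and an interior pole index, so it is illustrative rather than normative:

```python
import numpy as np

def extremum_features(curve: np.ndarray, pole: int, N: int = 2):
    """Return the first feature (width term) and second feature (gray-level
    term) of claims 5/6 for the extremum at index `pole`."""
    g = np.gradient(curve.astype(float))
    # positions of maximal absolute gray change rate on each side of the pole
    p1 = int(np.argmax(np.abs(g[:pole])))
    p2 = pole + 1 + int(np.argmax(np.abs(g[pole + 1:])))
    r1, r2 = g[p1], g[p2]            # first and second gray change rates
    # target positions: where the rate is closest to 1/N of r1 and r2
    t1 = int(np.argmin(np.abs(g[:pole] - r1 / N)))
    t2 = pole + 1 + int(np.argmin(np.abs(g[pole + 1:] - r2 / N)))
    first_feature = abs(t1 - pole) + abs(t2 - pole)
    second_feature = abs(curve[pole] - curve[t1]) + abs(curve[pole] - curve[t2])
    return first_feature, second_feature
```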
7. The method of claim 5 or 6, further comprising, after taking the first feature and the second feature as the target feature:
presetting a first label of the target feature, the first label indicating that the image to be processed includes a Mura area;
and presetting a second label of the target feature, the second label indicating that the image to be processed does not include a Mura area.
8. The method of claim 2, further comprising, after determining the positions of the peaks and/or the troughs in the gray curve according to the gray value of each pixel point, optimizing the positions of the peaks and/or the troughs, comprising:
determining peaks or troughs that are adjacent or coincident within a first pixel point preset value;
taking the average of the coordinates of the positions of the peaks or troughs that are adjacent or coincident within the first pixel point preset value as the deduplication position of those peaks or troughs;
and traversing the peaks or the troughs and the deduplication positions, and deleting any peak, trough, and/or deduplication position whose count within a preset range is smaller than a threshold, wherein the preset range is centered on a peak, trough, or deduplication position, has a width equal to a second pixel point preset value, and has a length equal to the length of the image to be processed in the direction perpendicular to the equally dividing sampling lines.
9. The method of claim 1, wherein sampling the image to be processed by a sampling line comprises:
determining the interval of the equally dividing sampling lines according to the length of the image to be processed in the direction perpendicular to the equally dividing sampling lines and the number of the equally dividing sampling lines;
and equally dividing the image to be processed by the equally dividing sampling lines.
10. The method according to claim 9, wherein the interval of the equally dividing sampling lines is determined according to the length of the image to be processed in the direction perpendicular to the equally dividing sampling lines and the number of the equally dividing sampling lines by the following expression:
d=row/(n+1)
wherein d is the interval of the equally dividing sampling lines, row is the length of the image to be processed in the direction perpendicular to the equally dividing sampling lines, and n is the number of the equally dividing sampling lines.
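A one-line worked example of this spacing rule, with arbitrary values assumed for row and n:

```python
def sampling_interval(row: int, n: int) -> float:
    """d = row / (n + 1): spacing of n equally dividing sampling lines."""
    return row / (n + 1)

# an image 1200 pixels long perpendicular to the lines, with 5 sampling lines
print(sampling_interval(1200, 5))  # 200.0
```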
11. The method of claim 1, further comprising, before sampling the image to be processed by the sampling line, pre-processing the image to be processed, the pre-processing comprising:
carrying out mean value filtering processing on the image to be processed;
and performing down-sampling on the image after mean filtering.
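A minimal pre-processing sketch for claim 11 using OpenCV; the 5x5 kernel and the factor-2 pyramid downsampling are assumed parameters, not values fixed by the claim:

```python
import cv2

def preprocess(path: str):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # the image to be processed
    img = cv2.blur(img, (5, 5))                   # mean filtering
    img = cv2.pyrDown(img)                        # downsampling by a factor of 2
    return img
```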
12. An apparatus for detecting Mura of a display screen, comprising:
at least one processor;
a memory storing program instructions that, when executed by the at least one processor, cause the apparatus to perform the method of any of claims 1-11.
13. A computer-readable storage medium storing a program for detecting Mura of a display screen, which when executed by a processor performs the method of any one of claims 1-11.
CN202010409618.9A 2020-04-03 2020-05-14 Method, equipment and storage medium for detecting Mura of display screen Active CN111598869B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020102579669 2020-04-03
CN202010257966 2020-04-03

Publications (2)

Publication Number Publication Date
CN111598869A true CN111598869A (en) 2020-08-28
CN111598869B CN111598869B (en) 2021-08-20

Family

ID=72185623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010409618.9A Active CN111598869B (en) 2020-04-03 2020-05-14 Method, equipment and storage medium for detecting Mura of display screen

Country Status (1)

Country Link
CN (1) CN111598869B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252056A (en) * 2014-09-18 2014-12-31 京东方科技集团股份有限公司 Detection method and device of substrate
CN105529002A (en) * 2014-09-30 2016-04-27 青岛海信信芯科技有限公司 Method and device for determining luminance compensation coefficients
CN108171707A (en) * 2018-01-23 2018-06-15 武汉精测电子集团股份有限公司 A kind of Mura defects level evaluation method and device based on deep learning
CN108596226A (en) * 2018-04-12 2018-09-28 武汉精测电子集团股份有限公司 A kind of defects of display panel training method and system based on deep learning
US20190318469A1 (en) * 2018-04-17 2019-10-17 Coherent AI LLC Defect detection using coherent light illumination and artificial neural network analysis of speckle patterns
CN108844966A (en) * 2018-07-09 2018-11-20 广东速美达自动化股份有限公司 A kind of screen detection method and detection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yu Zhang et al.: "A fuzzy neural network approach for quantitative evaluation of mura in TFT-LCD", 2005 International Conference on Neural Networks and Brain *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508922A (en) * 2020-12-14 2021-03-16 深圳精智达技术股份有限公司 Mura detection method, device, terminal equipment and storage medium
CN114088349A (en) * 2021-09-29 2022-02-25 歌尔光学科技有限公司 Method, device and system for testing color-combination prism
CN114088349B (en) * 2021-09-29 2024-10-11 歌尔光学科技有限公司 Method, device and system for testing color combining prism
CN114627049A (en) * 2022-01-28 2022-06-14 天津市久跃科技有限公司 Injection molding product surface defect detection method
CN116912204A (en) * 2023-07-13 2023-10-20 上海频准激光科技有限公司 Treatment method for fusion splicing of optical fibers
CN116912204B (en) * 2023-07-13 2024-01-26 上海频准激光科技有限公司 Treatment method for fusion splicing of optical fibers
CN117314826A (en) * 2023-08-28 2023-12-29 广州千筱母婴用品有限公司 Performance detection method of display screen

Also Published As

Publication number Publication date
CN111598869B (en) 2021-08-20

Similar Documents

Publication Publication Date Title
CN111598869B (en) Method, equipment and storage medium for detecting Mura of display screen
CN114972329B (en) Image enhancement method and system of surface defect detector based on image processing
CN111553929B (en) Mobile phone screen defect segmentation method, device and equipment based on converged network
CN113450307B (en) Product edge defect detection method
WO2021143343A1 (en) Method and device for testing product quality
WO2021139258A1 (en) Image recognition based cell recognition and counting method and apparatus, and computer device
CN109977191B (en) Problem map detection method, device, electronic equipment and medium
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN111325717B (en) Mobile phone defect position identification method and equipment
WO2017120796A1 (en) Pavement distress detection method and apparatus, and electronic device
EP3770853B1 (en) Image processing method, computer program, and recording medium
CN112819748B (en) Training method and device for strip steel surface defect recognition model
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN114757913A (en) Display screen defect detection method
CN114298985B (en) Defect detection method, device, equipment and storage medium
CN117078651A (en) Defect detection method, device, equipment and storage medium
CN114841992A (en) Defect detection method based on cyclic generation countermeasure network and structural similarity
CN112686896B (en) Glass defect detection method based on frequency domain and space combination of segmentation network
Salih et al. Adaptive local exposure based region determination for non-uniform illumination and low contrast images
CN110738625B (en) Image resampling method, device, terminal and computer readable storage medium
CN109978859B (en) Image display adaptation quality evaluation method based on visible distortion pooling
CN116129320A (en) Target detection method, system and equipment based on video SAR
CN115471494A (en) Wo citrus quality inspection method, device, equipment and storage medium based on image processing
CN114486916A (en) Mobile phone glass cover plate defect detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215011 rooms 101, 102, 901 and 902, floor 1, building 11, 198 Jialingjiang Road, high tech Zone, Suzhou, Jiangsu Province

Applicant after: Gaoshi Technology (Suzhou) Co.,Ltd.

Address before: 516000 West Side of the 4th Floor of CD Building, No. 2 South Road, Huatai Road, Huiao Avenue, Huizhou City, Guangdong Province

Applicant before: HUIZHOU GOVION TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200828

Assignee: Suzhou Gaoshi Semiconductor Technology Co.,Ltd.

Assignor: Gaoshi Technology (Suzhou) Co.,Ltd.

Contract record no.: X2021990000430

Denomination of invention: A method, device and storage medium for detecting mura of display screen

License type: Common License

Record date: 20210722

GR01 Patent grant
CP03 Change of name, title or address

Address after: 215129 Rooms 101, 102, 901, 902, Floor 9, Building 11, No. 198, Jialing River Road, High tech Zone, Suzhou City, Jiangsu Province

Patentee after: Gaoshi Technology (Suzhou) Co.,Ltd.

Address before: 215011 rooms 101, 102, 901 and 902, floor 1, building 11, 198 Jialingjiang Road, high tech Zone, Suzhou, Jiangsu Province

Patentee before: Gaoshi Technology (Suzhou) Co.,Ltd.