CN114998189A - Color display point defect detection method - Google Patents


Info

Publication number
CN114998189A
CN114998189A
Authority
CN
China
Prior art keywords
fusion
scale
image
feature
representing
Prior art date
Legal status
Granted
Application number
CN202210393952.9A
Other languages
Chinese (zh)
Other versions
CN114998189B (en)
Inventor
陈怀新
解文强
王治玺
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210393952.9A priority Critical patent/CN114998189B/en
Publication of CN114998189A publication Critical patent/CN114998189A/en
Application granted granted Critical
Publication of CN114998189B publication Critical patent/CN114998189B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30121CRT, LCD or plasma display
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a color display point defect detection method comprising the following steps: decompose the acquired display screen image into its three RGB channels, and apply a visual-perception transform correction with constraint coefficients to each channel-decomposed image; filter each color channel's decomposition adjustment map by scale and direction to obtain directional filtering feature maps; fuse the feature maps according to a direction fusion criterion and a minimum-value fusion criterion, then fuse across scales according to a maximum-value criterion to obtain a feature fusion map for each RGB channel; compute the mean and standard deviation of each channel's feature fusion map, select the largest mean and standard deviation as the threshold for binarizing the three fusion maps, merge the binary maps with an image OR operation, and thereby detect the color point defects of the display screen. The method effectively highlights color display point defects and achieves display-screen color point defect detection that accords with the human eye's perception of color and has high accuracy.

Description

Color display point defect detection method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a color display point defect detection method.
Background
Machine vision systems, by virtue of their stability and reliability, are widely used in production and everyday life. Using a machine vision system to inspect display-screen quality brings convenience to industrial production, improves production efficiency, and saves cost.
Before a display screen leaves the factory, it must be powered on to display specific test pictures, after which its quality is judged by the subjective impression of human inspectors. A color display point defect is a serious defect found during this power-on inspection. It usually appears in a solid-color picture as red, green, or blue dots; the brightness and contrast of the dots vary, and their number can be large.
Over the past decades, many defect detection methods for TFT-LCD display screens have been proposed, including reconstructing the background by discrete cosine transform or by polynomial fitting and then detecting defects in the difference image.
These methods mainly detect luminance information without using color information, which can yield low point-defect detection accuracy because different colors are represented inconsistently in luminance.
These methods are also designed for all defect types and cannot detect point defects alone, so noise in the display screen is detected rather than eliminated.
Finally, these methods cannot overcome structural problems in the acquired image: when the displayed picture is not a complete rectangle, the edge region can severely interfere with defect detection, so point defects go undetected.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a method for detecting defects of color display points.
The specific technical scheme of the invention is as follows: a color display point defect detection method comprises the following steps:
step S1, visual perception transformation correction of a single color channel image,
decomposing the acquired display screen image into its three RGB (red, green, blue) channels, and performing visual perception transform correction on each R, G, B channel-decomposed image using constraint coefficients to obtain the decomposition adjustment map of each R, G, B color channel;
s2, decomposing the scale and direction filtering of the adjustment graph,
for the decomposition adjustment diagram of each color channel, a four-scale mean value filtering method is used to obtain four-scale filtering diagrams; filtering 8 directions under different scales by adopting a sliding window mode to obtain a directional filtering characteristic diagram;
s3, fusing the filtering characteristics of the scale and the direction,
for the directional filtering feature map, obtaining a four-directional composite feature map according to a directional fusion criterion, and obtaining a directional feature fusion map according to a minimum fusion criterion; sequentially calculating to obtain four scales of directional feature fusion graphs, and fusing the four scales of directional feature fusion graphs according to a scale maximum value fusion criterion to obtain a scale direction filtering feature fusion graph of a single color channel; respectively calculating the characteristic fusion graphs of the R, G, B channels in sequence;
s4, self-adaptive threshold segmentation of the feature fusion map,
respectively calculating the mean and standard deviation of the R, G, B channel feature fusion maps, selecting the maximum mean and standard deviation as the threshold, performing binarization segmentation on the R, G, B channel feature fusion maps, merging the channels' binary maps with an image OR operation, and detecting the color point defects of the display screen.
The invention has the beneficial effects that: by combining human visual characteristics, multi-scale analysis, and information fusion theory, the method can effectively use color information to detect color point defects and obtain detection results that conform to human visual characteristics. By exploiting the center-surround property of human vision, the method detects only point defects, effectively overcoming background-noise interference and the influence of image structure, so that color point defect detection conforms to the human eye's perception of color and the accuracy of display-screen color point defect detection is improved.
Drawings
FIG. 1 is a flow chart of a color display point defect detection method according to the present invention.
Fig. 2 is a diagram of an original R, G, B three-channel image according to an embodiment of the present invention.
FIG. 3 shows the R, G, B channel decomposition adjustment maps according to an embodiment of the present invention.
Fig. 4 is a diagram showing a sample directional filter sliding window according to an embodiment of the present invention.
Fig. 5 is a sample presentation diagram of R, G, B three-channel fusion feature map feature extraction results according to an embodiment of the present invention.
FIG. 6 is a sample partitioning result chart according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be further described with reference to the accompanying drawings.
The flow of the color display point defect detection method of the present invention is shown in fig. 1, and specifically comprises the following steps:
s1, visual perception transformation correction of a single color channel image,
decompose the acquired display screen image into its three RGB channels, and perform visual perception transform correction with constraint coefficients on the channel-decomposed images R(x, y), G(x, y), and B(x, y) to obtain the decomposition adjustment maps of the R, G, B color channels.
The specific calculation formula is as follows:
R′_gray(x, y) = w_1 × R(x, y) + w_2 × 0 + w_3 × 0
G′_gray(x, y) = w_1 × 0 + w_2 × G(x, y) + w_3 × 0
B′_gray(x, y) = w_1 × 0 + w_2 × 0 + w_3 × B(x, y)
where R′_gray(x, y), G′_gray(x, y), and B′_gray(x, y) respectively represent the gray-scale transformed images of the R, G, B channels, i.e., the R, G, B color channel decomposition adjustment maps; w_1 represents the constraint coefficient of the human eye's visual perception of red, w_2 that of green, and w_3 that of blue. These constraint coefficients govern the color-to-gray conversion and can be adjusted according to the actual situation. The results are shown in FIGS. 2 and 3.
S2, decomposing the scale and direction filtering of the adjustment graph,
obtaining a filter graph of four scales by using a four-scale mean value filtering method for the decomposition adjustment graph of each color channel; and (3) filtering 8 directions under different scales by adopting a sliding window mode to obtain a directional filtering characteristic diagram. The method specifically comprises the following steps:
s21, filtering the scale mean value template,
Perform mean filtering on the decomposition adjustment map of each color channel obtained in step S1 with templates of four different scales; the filter template sizes are 3 × 3, 5 × 5, 7 × 7, and 9 × 9, giving filtered images at the four corresponding scales.
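The four-scale mean filtering of step S21 can be sketched as follows; `uniform_filter` stands in for the mean-filter template, and the scale set (3, 5, 7, 9) comes from the text. The function name is illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

SCALES = (3, 5, 7, 9)  # mean-filter template sizes from step S21

def multiscale_mean_filter(channel: np.ndarray) -> dict:
    """Return the four mean-filtered versions of one channel's
    decomposition adjustment map, keyed by template size."""
    return {n: uniform_filter(channel.astype(float), size=n, mode="nearest")
            for n in SCALES}

chan = np.ones((16, 16))
filtered = multiscale_mean_filter(chan)  # a constant image stays constant
```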
S22, defining a sliding window,
As shown in FIG. 4, the filtered image is represented by direction using a sliding window. The window is divided into two parts: a central target region T and a neighborhood background region B_i, where the neighborhood background is divided into eight parts; the central region T represents the target's central region, and each background B_i represents one neighborhood portion around the center T.
The central region T and each single background region B_i have the same size, which is related to the single-scale size; the calculation formula is as follows:
Size(T) = n × n, (n = 3, 5, 7, 9)
where Size(T) denotes the size of the central region T, and n is the single-scale size.
S23, calculating the direction filtering,
Calculate the difference between the central region T and each background region B_i by the mean-difference method, i.e.,
D_i(x, y) = m_T − m_{B_i}, i = 1, 2, …, 8
where m_T represents the pixel mean of the target region and m_{B_i} represents the pixel mean of background region B_i.
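A sketch of the center-surround difference of step S23, under stated assumptions: `direction_differences` and the direction ordering are illustrative, block means are taken over n × n tiles shifted by n pixels in each of the 8 directions, and borders wrap via `np.roll` for simplicity:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# The 8 neighbourhood directions (row, col offsets of each background block).
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def direction_differences(img: np.ndarray, n: int) -> np.ndarray:
    """D_i = m_T - m_{B_i}: mean of the central n x n block minus the mean of
    the background block shifted by n pixels in direction i, for every pixel."""
    m = uniform_filter(img.astype(float), size=n, mode="nearest")  # block means
    return np.stack([m - np.roll(m, (-dy * n, -dx * n), axis=(0, 1))
                     for dy, dx in DIRS])

img = np.zeros((32, 32))
img[16, 16] = 9.0               # a single bright point defect
d = direction_differences(img, 3)  # positive in all 8 directions at the defect
```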
S3, fusing the filtering characteristics of the scale and the direction,
for the directional filtering feature map, obtaining a four-directional composite feature map according to a directional fusion criterion, and obtaining a directional feature fusion map according to a minimum fusion criterion; sequentially calculating to obtain four scales of directional feature fusion graphs, and fusing the four scales of directional feature fusion graphs according to a scale maximum value fusion criterion to obtain a scale direction filtering feature fusion graph of a single color channel; and respectively calculating the characteristic fusion maps of the R, G, B channels in sequence. The method specifically comprises the following steps:
S31, obtaining the four-direction composite feature map according to the direction fusion criterion,
the calculation formula of the direction feature fusion criterion is as follows:
d_j(x_ic, y_ic) = min(D_j(x_ic, y_ic), D_{j+4}(x_ic, y_ic)), j = 1, 2, 3, 4
where D_i represents the gray-level difference between the central region T and the surrounding neighborhood B_i in direction i; the feature maps of the 8 directions are thereby fused into a four-direction composite feature map.
S32, calculating to obtain a direction feature fusion graph of four scales,
Fuse the four-direction composite feature maps by taking the minimum value at corresponding points to obtain the direction feature map fusion at a single scale; the fusion criterion is as follows:
S(x_ic, y_ic) = min_{j = 1, …, 4} d_j(x_ic, y_ic)
where (x_ic, y_ic) represents the coordinates of the center of the target region, and S(x_ic, y_ic) represents the single-scale direction fusion feature map.
S33, fusing the multi-scale characteristic graphs,
Respectively calculate the direction feature fusion maps of the four scales, then perform scale fusion according to the scale maximum fusion criterion of the following formula to realize scale feature map fusion for a single channel, and further calculate the feature fusion maps of the three R, G, B channels.
The fusion rule is as follows:
C(x_ic, y_ic) = max_{i = 1, …, 4} S_i(x_ic, y_ic)
where C(x_ic, y_ic) represents the multi-scale feature map fusion map, and S_i represents the direction fusion feature map at scale i.
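Since the exact fusion formulas appear only as images in the source, the sketch below follows the reconstruction above: opposite directions paired by a minimum, the four composites fused by a minimum, and the four scales fused by a maximum. Function and variable names are illustrative:

```python
import numpy as np

def fuse_scale_direction(diffs_per_scale: list) -> np.ndarray:
    """Sketch of step S3. diffs_per_scale holds one (8, H, W) array of
    direction differences per scale; returns the (H, W) fusion map C."""
    fused_scales = []
    for d in diffs_per_scale:
        composite = np.minimum(d[:4], d[4:])  # 8 directions -> 4 composites
        fused_scales.append(composite.min(axis=0))  # direction fusion map S
    return np.max(fused_scales, axis=0)  # scale maximum fusion -> C

# Four scales of constant difference maps; the largest scale wins the max.
out = fuse_scale_direction([np.full((8, 2, 2), float(k)) for k in (1, 2, 3, 4)])
```

Note that taking the minimum of paired directions and then the minimum of the four composites equals the minimum over all 8 directions, so the exact pairing does not change the result.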
S4, self-adaptive threshold segmentation of the feature fusion map,
Respectively calculate the mean and standard deviation of the R, G, B channel feature fusion maps, select the maximum mean and standard deviation as the threshold, perform binarization segmentation on the R, G, B channel feature fusion maps, merge the channels' binary maps with an image OR operation, and detect the color point defects of the display screen. The method specifically comprises the following steps:
s41, extracting the multi-scale feature fusion graph features of the three channels,
Extract the mean μ and standard deviation δ of the three color channels' multi-scale feature fusion maps C(x_ic, y_ic) from step S3, calculated as follows:
μ = (1 / (M × N)) Σ_{x=1}^{M} Σ_{y=1}^{N} C(x, y)
δ = sqrt( (1 / (M × N)) Σ_{x=1}^{M} Σ_{y=1}^{N} (C(x, y) − μ)² )
where M represents the length of the multi-scale feature fusion map and N represents its width; the result is shown in fig. 5.
S42, fusing the maximum value characteristics,
Because the maximum feature value generally reflects the target's characteristics best, while the background gray level and background uniformity affect the fusion result, the image features are fused with a maximum-minimum method.
The fusion rule is as follows:
μ_max = max(μ_i)
δ_max = max(δ_i)
where μ_i represents the mean of the fusion feature map calculated from the R, G, and B channels, δ_i represents the standard deviation of the fusion feature map calculated from the R, G, and B channels, μ_max represents the maximum mean obtained by fusion, and δ_max represents the maximum standard deviation obtained by fusion.
S43, threshold segmentation and image OR operation,
The maximum of the means and standard deviations calculated from the three channels is retained as the segmentation standard; in this embodiment, segmentation is implemented using the following formula:
th = μ_max + K × δ_max
where th denotes the segmentation threshold and K denotes the standard deviation parameter.
The local contrast maps of the three channels are segmented using the threshold th, and the binary maps of the three channels are combined by an image OR operation to obtain the final defect binary map and thus the color point defect result map; the result is shown in fig. 6, where FIG. 6c shows the reference (ground-truth) detection result.
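Step S4 can be sketched as follows; the value of the standard deviation parameter K is an assumption (the patent does not fix it), and the function name is illustrative:

```python
import numpy as np

def detect_point_defects(fused_maps: list, K: float = 3.0) -> np.ndarray:
    """Sketch of step S4: take the largest per-channel mean and standard
    deviation, threshold each channel's fusion map at th = mu_max + K*delta_max,
    and OR the three binary maps into the final defect map. K is assumed."""
    mu_max = max(c.mean() for c in fused_maps)
    delta_max = max(c.std() for c in fused_maps)
    th = mu_max + K * delta_max
    binary = [c > th for c in fused_maps]
    return binary[0] | binary[1] | binary[2]

# Three flat channels with one strong defect pixel in the R channel.
maps = [np.zeros((8, 8)) for _ in range(3)]
maps[0][4, 4] = 100.0
result = detect_point_defects(maps)  # only the defect pixel survives
```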
The method combines human visual characteristics, multi-scale analysis, and information fusion theory; for color point defect detection it effectively highlights color display point defects, conforms to the human eye's perception of color, and improves the accuracy of color point defect detection on display screens.

Claims (5)

1. A color display point defect detection method comprises the following steps:
s1, visual perception transformation correction of a single color channel image,
decomposing the acquired display screen image into its three RGB (red, green, blue) channels, and performing visual perception transform correction on each R, G, B channel-decomposed image using constraint coefficients to obtain the decomposition adjustment map of each R, G, B color channel;
s2, decomposing the scale and direction filtering of the adjustment graph,
for the decomposition adjustment diagram of each color channel, a four-scale mean value filtering method is used to obtain four-scale filtering diagrams; filtering 8 directions under different scales by adopting a sliding window mode to obtain a directional filtering characteristic diagram;
s3, fusing the filtering characteristics of the scale and the direction,
for the directional filtering feature map, obtaining a four-directional composite feature map according to a directional fusion criterion, and obtaining a directional feature fusion map according to a minimum fusion criterion; sequentially calculating to obtain four scales of direction feature fusion graphs, and fusing the four scales of direction feature fusion graphs according to a scale maximum value fusion criterion to obtain a scale direction filtering feature fusion graph of a single color channel; respectively calculating the characteristic fusion graphs of the R, G, B channels in sequence;
s4, self-adaptive threshold segmentation of the feature fusion map,
respectively calculating the mean and standard deviation of the R, G, B channel feature fusion maps, selecting the maximum mean and standard deviation as the threshold, performing binarization segmentation on the R, G, B channel feature fusion maps, merging the channels' binary maps with an image OR operation, and detecting the color point defects of the display screen.
2. The method as claimed in claim 1, wherein the visual perception transform correction of step S1 is specifically calculated as follows:
R′_gray(x, y) = w_1 × R(x, y) + w_2 × 0 + w_3 × 0
G′_gray(x, y) = w_1 × 0 + w_2 × G(x, y) + w_3 × 0
B′_gray(x, y) = w_1 × 0 + w_2 × 0 + w_3 × B(x, y)
wherein R(x, y), G(x, y), and B(x, y) are the R, G, B channel decomposition images; R′_gray(x, y), G′_gray(x, y), and B′_gray(x, y) respectively represent the gray-scale transformed images of the R, G, B channels; w_1 represents the constraint coefficient of the human eye's visual perception of red, w_2 represents the constraint coefficient of the human eye's visual perception of green, and w_3 represents the constraint coefficient of the human eye's visual perception of blue.
3. The method as claimed in claim 2, wherein the step S2 comprises the following sub-steps:
s21, filtering the scale mean value template,
performing mean filtering on the decomposition adjustment map of each color channel obtained in step S1 with templates of different scales, the filter template sizes being 3 × 3, 5 × 5, 7 × 7, and 9 × 9, so that filtered images are obtained at the four corresponding scales;
s22, defining a sliding window,
representing the filtered image by direction using a sliding window divided into two parts, a central target region T and a neighborhood background region B_i, wherein the neighborhood background is divided into eight parts, the central region T represents the target's central region, and each background B_i represents one neighborhood portion around the center T;
the central region T and each single background region B_i having the same size, related to the single-scale size, with the calculation formula as follows:
Size(T) = n × n, (n = 3, 5, 7, 9)
wherein Size(T) represents the size of the central region T, and n is the single-scale size;
s23, calculating the direction filtering,
calculating the difference between the central region T and each background region B_i by the mean-difference method, i.e.,
D_i(x, y) = m_T − m_{B_i}, i = 1, 2, …, 8
wherein m_T represents the pixel mean of the target region and m_{B_i} represents the pixel mean of background region B_i.
4. The method as claimed in claim 3, wherein the step S3 comprises the following steps:
s31, obtaining a four-direction composite characteristic diagram according to a direction fusion criterion,
the calculation formula of the direction feature fusion criterion is as follows:
d_j(x_ic, y_ic) = min(D_j(x_ic, y_ic), D_{j+4}(x_ic, y_ic)), j = 1, 2, 3, 4
wherein D_i represents the gray-level difference between the central region T and the surrounding neighborhood B_i in direction i, whereby the feature maps of the 8 directions are fused into a four-direction composite feature map;
s32, calculating to obtain a direction feature fusion graph of four scales,
fusing the four-direction composite feature maps by the minimum value at corresponding points to obtain the direction feature map fusion at a single scale, the fusion criterion being:
S(x_ic, y_ic) = min_{j = 1, …, 4} d_j(x_ic, y_ic)
wherein (x_ic, y_ic) represents the coordinates of the center of the target region, and S(x_ic, y_ic) represents the single-scale direction fusion feature map;
s33, fusing the multi-scale characteristic graphs,
respectively calculating the direction feature fusion maps of the four scales, then performing scale fusion according to the scale maximum fusion criterion to realize scale feature map fusion for a single channel, and further calculating the feature fusion maps of the three R, G, B channels,
the fusion rule being as follows:
C(x_ic, y_ic) = max_{i = 1, …, 4} S_i(x_ic, y_ic)
wherein C(x_ic, y_ic) represents the multi-scale feature map fusion map, and S_i represents the direction fusion feature map at scale i.
5. The method as claimed in claim 4, wherein the step S4 comprises the following steps:
s41, extracting the multi-scale feature fusion graph features of the three channels,
extracting the mean μ and standard deviation δ of the three color channels' multi-scale feature fusion maps C(x_ic, y_ic) of step S3, calculated as follows:
μ = (1 / (M × N)) Σ_{x=1}^{M} Σ_{y=1}^{N} C(x, y)
δ = sqrt( (1 / (M × N)) Σ_{x=1}^{M} Σ_{y=1}^{N} (C(x, y) − μ)² )
wherein M represents the length of the multi-scale feature fusion map, and N represents the width of the multi-scale feature fusion map;
s42, fusing the maximum value characteristics,
the maximum and minimum value method is adopted to fuse the characteristics of the image,
the fusion rule is as follows:
μ_max = max(μ_i)
δ_max = max(δ_i)
wherein μ_i represents the mean of the fusion feature map calculated from the R, G, and B channels, δ_i represents the standard deviation of the fusion feature map calculated from the R, G, and B channels, μ_max represents the maximum mean obtained by fusion, and δ_max represents the maximum standard deviation obtained by fusion;
s43, threshold segmentation and image OR operation,
retaining the maximum of the means and standard deviations calculated from the three channels as the segmentation standard, segmentation being realized with the following formula:
th = μ_max + K × δ_max
wherein th represents the segmentation threshold and K represents the standard deviation parameter,
and segmenting the local contrast maps of the three channels with the threshold th, and performing an image OR operation on the three channels' binary maps to obtain the final defect binary map and thus the color point defect result map.
CN202210393952.9A 2022-04-15 2022-04-15 Color display point defect detection method Active CN114998189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210393952.9A CN114998189B (en) 2022-04-15 2022-04-15 Color display point defect detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210393952.9A CN114998189B (en) 2022-04-15 2022-04-15 Color display point defect detection method

Publications (2)

Publication Number Publication Date
CN114998189A true CN114998189A (en) 2022-09-02
CN114998189B CN114998189B (en) 2024-04-16

Family

ID=83023794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210393952.9A Active CN114998189B (en) 2022-04-15 2022-04-15 Color display point defect detection method

Country Status (1)

Country Link
CN (1) CN114998189B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721082A (en) * 2023-06-13 2023-09-08 电子科技大学 Display screen color Mura defect detection method based on channel separation and filtering

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007315967A (en) * 2006-05-26 2007-12-06 Sharp Corp Defect detection apparatus, defect detection method, defect detection program, and computer-readable recording medium storing the program
CN103369347A (en) * 2012-03-05 2013-10-23 苹果公司 Camera blemish defects detection
CN108765402A (en) * 2018-05-30 2018-11-06 武汉理工大学 Non-woven fabrics defects detection and sorting technique
CN111563889A (en) * 2020-05-06 2020-08-21 深圳市斑马视觉科技有限公司 Liquid crystal screen Mura defect detection method based on computer vision
WO2020211522A1 (en) * 2019-04-15 2020-10-22 京东方科技集团股份有限公司 Method and device for detecting salient area of image
CN112014413A (en) * 2020-08-04 2020-12-01 贵州乐道科技有限公司 Mobile phone glass cover plate window area defect detection method based on machine vision
FR3108727A1 (en) * 2020-03-28 2021-10-01 Safran Method and system for non-destructive testing of an aeronautical part by radiography

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHIXI WANG et al.: "Color Point Defect Detection Method Based on Color Salient Features", 25 August 2022 (2022-08-25) *
ZHANG Tengda; LU Rongsheng; ZHANG Shuzhen: "TFT-LCD flat panel surface defect detection based on two-dimensional DFT", Opto-Electronic Engineering, no. 03, 15 March 2016 (2016-03-15) *
XU Ke; AI Yonghao; ZHOU Peng; YANG Chaolin: "Surface defect recognition of continuous casting slabs based on Contourlet transform", Journal of University of Science and Technology Beijing, no. 09, 29 September 2013 (2013-09-29) *


Also Published As

Publication number Publication date
CN114998189B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN110246108B (en) Image processing method, device and computer readable storage medium
CN107767354B (en) Image defogging algorithm based on dark channel prior
JP3862140B2 (en) Method and apparatus for segmenting a pixelated image, recording medium, program, and image capture device
CN103400150B (en) A kind of method and device that road edge identification is carried out based on mobile platform
US10148895B2 (en) Generating a combined infrared/visible light image having an enhanced transition between different types of image information
EP2339533B1 (en) Saliency based video contrast enhancement method
CN102883175B (en) Methods for extracting depth map, judging video scene change and optimizing edge of depth map
CN109544464A (en) A kind of fire video image analysis method based on contours extract
CN105069801A (en) Method for preprocessing video image based on image quality diagnosis
CN102420985B (en) Multi-view video object extraction method
CN110298812B (en) Image fusion processing method and device
US9898953B2 (en) Offset method and equipment of RGBW panel subpixel
CN108389215B (en) Edge detection method and device, computer storage medium and terminal
CN107358585A (en) Misty Image Enhancement Method based on fractional order differential and dark primary priori
CN103942756B (en) A kind of method of depth map post processing and filtering
CN112862832B (en) Dirt detection method based on concentric circle segmentation positioning
CN118195902B (en) Super-resolution image processing method and processing system based on interpolation algorithm
CN103247049A (en) SMT (Surface Mounting Technology) welding spot image segmentation method
CN114998189B (en) Color display point defect detection method
CN113298763B (en) Image quality evaluation method based on significance window strategy
CN114519694A (en) Seven-segment digital tube liquid crystal display screen identification method and system based on deep learning
CN117495719A (en) Defogging method based on atmospheric light curtain and fog concentration distribution estimation
JP3636936B2 (en) Grayscale image binarization method and recording medium recording grayscale image binarization program
CN115187551A (en) Display screen weak cross defect detection method for frequency domain filtering of significant color channel
CN115131327B (en) Color line defect detection method for display screen with fused color features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant