CN117274263B - Display scar defect detection method - Google Patents

Display scar defect detection method Download PDF

Info

Publication number
CN117274263B
CN117274263B CN202311559341.8A CN202311559341A CN117274263B CN 117274263 B CN117274263 B CN 117274263B CN 202311559341 A CN202311559341 A CN 202311559341A CN 117274263 B CN117274263 B CN 117274263B
Authority
CN
China
Prior art keywords
channel
layer
defect
value
output end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311559341.8A
Other languages
Chinese (zh)
Other versions
CN117274263A (en
Inventor
史正府
凡文兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luzhou Tongyuan Electronic Technology Co ltd
Original Assignee
Luzhou Tongyuan Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luzhou Tongyuan Electronic Technology Co ltd filed Critical Luzhou Tongyuan Electronic Technology Co ltd
Priority to CN202311559341.8A priority Critical patent/CN117274263B/en
Publication of CN117274263A publication Critical patent/CN117274263A/en
Application granted granted Critical
Publication of CN117274263B publication Critical patent/CN117274263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30121CRT, LCD or plasma display

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting scar defects of a display, which belongs to the technical field of image processing. According to the invention, the R channel value, the G channel value and the B channel value are respectively processed, and the scar defect degree is analyzed through the characteristic condition of each channel value, so that the detection precision of the scar defect of the display is improved.

Description

Display scar defect detection method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for detecting scar defects of a display.
Background
When the display leaves the factory, the appearance of the display needs to be checked, so that the normal display content on the display is ensured. The existing defect detection method is to perform gray level processing on an appearance image of a display to obtain a gray level image, and to process the gray level image by using a neural network to identify defects on the gray level image. However, in the gray scale processing process, the R channel value, the G channel value and the B channel value are omitted, the gray scale value is adopted for characterization, and the color characteristic is omitted, but the distribution condition of the R channel value, the G channel value and the B channel value of the appearance image of the display respectively determines the defect degree of the appearance of the display.
Disclosure of Invention
The invention aims to provide a method for detecting scar defects of a display, which solves the problem of low detection precision of the existing defect detection method.
The embodiment of the invention is realized by the following technical scheme: a method for detecting scar defects of a display, comprising the steps of:
s1, acquiring a display surface image;
s2, extracting an R channel value, a G channel value and a B channel value of the display surface image to obtain an R channel image, a G channel image and a B channel image;
s3, respectively inputting the R channel image, the G channel image and the B channel image into a defect detection neural network to obtain an R channel defect value, a G channel defect value and a B channel defect value;
s4, calculating the scar defect degree of the display according to the R channel defect value, the G channel defect value and the B channel defect value.
Further, the defect detection neural network in S3 includes: the device comprises a shallow layer feature extraction unit, a deep layer feature extraction unit and a defect value estimation unit;
the input end of the shallow feature extraction unit is used as the input end of the defect detection neural network, and the output end of the shallow feature extraction unit is connected with the input end of the deep feature extraction unit; the input end of the defect value estimation unit is connected with the output end of the deep feature extraction unit, and the output end of the defect value estimation unit is used as the output end of the defect detection neural network.
The beneficial effects of the above further scheme are: the shallow layer feature extraction unit is adopted to extract shallow layer features, the deep layer feature extraction unit is adopted to process the shallow layer features to obtain deep layer features, and the defect value estimation unit is used for obtaining channel defect values according to the deep layer features.
Further, the shallow feature extraction unit includes: a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, and a first Concat layer;
the input end of the first convolution layer is used as the input end of the shallow layer feature extraction unit, and the output end of the first convolution layer is respectively connected with the input end of the second convolution layer, the input end of the third convolution layer and the input end of the fourth convolution layer; the input end of the first Concat layer is respectively connected with the output end of the second convolution layer, the output end of the third convolution layer and the output end of the fourth convolution layer, and the output end of the first Concat layer is used as the output end of the shallow layer feature extraction unit.
Further, the convolution kernel size of the first convolution layer is 1*1, the convolution kernel size of the second convolution layer is 3*3, the convolution kernel size of the third convolution layer is 5*5, and the convolution kernel size of the fourth convolution layer is 7*7.
The beneficial effects of the above further scheme are: in the invention, after the characteristic is extracted from the R channel image, the G channel image or the B channel image by adopting the first convolution layer, the characteristic is further extracted by adopting three convolution layers with different convolution kernels, so that the characteristic quantity is enriched.
Further, the deep feature extraction unit includes: a maximum pooling layer, an average pooling layer and a second Concat layer;
the input end of the maximum pooling layer is connected with the input end of the average pooling layer and is used as the input end of the deep feature extraction unit; the input end of the second Concat layer is respectively connected with the output end of the maximum pooling layer and the output end of the average pooling layer, and the output end of the second Concat layer is used as the output end of the deep feature extraction unit.
The beneficial effects of the above further scheme are: the deep feature extraction unit adopts the maximum pooling layer to extract the obvious features, adopts the average pooling layer to extract the global features, and fully extracts the data features while reducing the data quantity.
Further, the defect value estimating unit includes: a feature enhancement layer, a fifth convolution layer and a channel defect value estimation layer;
the input end of the characteristic enhancement layer is used as the input end of the defect value estimation unit, and the output end of the characteristic enhancement layer is connected with the input end of the fifth convolution layer; the input end of the channel defect value estimation layer is connected with the output end of the fifth convolution layer, and the output end of the channel defect value estimation layer is used as the output end of the defect value estimation unit.
The beneficial effects of the above further scheme are: the characteristic enhancement layer is used for enhancing deep characteristics, so that the characteristics of an R channel image, a G channel image or a B channel image are highlighted, the distinction degree is increased, and defect value estimation is better carried out.
Further, the expression of the feature enhancement layer is:
wherein,the +.>Personal characteristic value->Input for feature enhancement layer +.>The value of the characteristic is a value of,for enhancing the coefficient->The number of feature values entered for the feature enhancement layer.
The beneficial effects of the above further scheme are: according to the invention, the characteristic value input by the characteristic enhancement layer is normalized, the data distribution condition is reflected, and then the data distribution condition is enhanced by the enhancement coefficient, so that the channel image characteristic is highlighted.
Further, the calculation formula of the enhancement coefficient is:
wherein,for enhancing the coefficient->As an arctangent function, +.>Is the +.sup.th on R-channel image, G-channel image or B-channel image>Individual channel values->For the number of channel values, in processing the R channel image, the +.>For the R channel image +.>R channel values, & lt/L & gt, when processing G channel images>For G channel image +.>G channel values, & gt, when processing B channel images>For the B channel picture +.>And B is the channel value, and I is the absolute value.
The beneficial effects of the above further scheme are: the enhancement coefficient is derived from the non-uniformity condition of the channel value distribution, and the non-uniformity condition can clearly represent the smoothness degree of the channel image, so that the characteristics of the channel image can be amplified through the enhancement coefficient, and the defect detection precision is improved.
Further, the channel defect value estimation layer is used for carrying out segmentation processing on all the characteristic values output by the fifth convolution layer, constructing each segment of characteristic value into a characteristic sequence, extracting a maximum characteristic value and a characteristic mean value from each characteristic sequence, and calculating a channel defect value according to the maximum characteristic value and the characteristic mean value;
the formula for calculating the channel defect value is as follows:
wherein,for channel defect value, ++>Is->Maximum eigenvalue in the individual eigenvalues, < >>Is->Feature mean in the individual feature sequences, +.>Is->Personal specialWeight of maximum eigenvalue in the sign sequence, +.>Is->Weights of feature means in the individual feature sequences, +.>To activate the function +.>For the number of feature sequences, +.>Is the channel characteristic quantity without scar defect.
The beneficial effects of the above further scheme are: in the invention, all the characteristic values output by the fifth convolution layer are subjected to sectional processing so as to perform sectional extraction of the characteristics, ensure the richness of the characteristics, further extract the rich characteristics, respectively assign different weights to the maximum characteristic value and the characteristic mean value when calculating the channel defect value, further ensure the accuracy of defect detection, and then calculate the channel characteristic quantity without scar defectsThe gap of (2) characterizes the defect condition.
Further, the formula for calculating the scar defect degree of the display in the step S4 is as follows:
wherein,for the display scar defect level, +.>For the R channel defect value, +.>For G channel defect value, +.>Is a B-channel defect value.
The technical scheme of the embodiment of the invention has at least the following advantages and beneficial effects: the invention extracts the R channel value, the G channel value and the B channel value of the display surface image respectively, so as to split the display surface image into the R channel image, the G channel image and the B channel image, input the R channel image, the G channel image and the B channel image into a defect detection neural network respectively to obtain the defect value corresponding to each channel, and comprehensively evaluate the scar defect degree of the display according to the defect values of the three channels. According to the invention, the R channel value, the G channel value and the B channel value are respectively processed, and the scar defect degree is analyzed through the characteristic condition of each channel value, so that the detection precision of the scar defect of the display is improved.
Drawings
FIG. 1 is a flow chart of a method for detecting scar defects in a display;
FIG. 2 is a schematic diagram of a shallow feature extraction unit;
FIG. 3 is a schematic diagram of a deep feature extraction unit;
fig. 4 is a schematic diagram of a defect value estimating unit.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
As shown in fig. 1, a method for detecting scar defect of a display includes the following steps:
s1, acquiring a display surface image;
s2, extracting an R channel value, a G channel value and a B channel value of the display surface image to obtain an R channel image, a G channel image and a B channel image;
s3, respectively inputting the R channel image, the G channel image and the B channel image into a defect detection neural network to obtain an R channel defect value, a G channel defect value and a B channel defect value;
the defect detection neural network in S3 includes: the device comprises a shallow layer feature extraction unit, a deep layer feature extraction unit and a defect value estimation unit;
the input end of the shallow feature extraction unit is used as the input end of the defect detection neural network, and the output end of the shallow feature extraction unit is connected with the input end of the deep feature extraction unit; the input end of the defect value estimation unit is connected with the output end of the deep feature extraction unit, and the output end of the defect value estimation unit is used as the output end of the defect detection neural network.
The shallow layer feature extraction unit is adopted to extract shallow layer features, the deep layer feature extraction unit is adopted to process the shallow layer features to obtain deep layer features, and the defect value estimation unit is used for obtaining channel defect values according to the deep layer features.
As shown in fig. 2, the shallow feature extraction unit includes: a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, and a first Concat layer;
the input end of the first convolution layer is used as the input end of the shallow layer feature extraction unit, and the output end of the first convolution layer is respectively connected with the input end of the second convolution layer, the input end of the third convolution layer and the input end of the fourth convolution layer; the input end of the first Concat layer is respectively connected with the output end of the second convolution layer, the output end of the third convolution layer and the output end of the fourth convolution layer, and the output end of the first Concat layer is used as the output end of the shallow layer feature extraction unit.
The convolution kernel size of the first convolution layer is 1*1, the convolution kernel size of the second convolution layer is 3*3, the convolution kernel size of the third convolution layer is 5*5, and the convolution kernel size of the fourth convolution layer is 7*7.
In the invention, after the characteristic is extracted from the R channel image, the G channel image or the B channel image by adopting the first convolution layer, the characteristic is further extracted by adopting three convolution layers with different convolution kernels, so that the characteristic quantity is enriched.
In the invention, the R channel image is an image with both the G channel and the B channel being 0, or is a set of R channel values, the G channel image is an image with both the R channel and the B channel being 0, or is a set of G channel values, and the B channel image is an image with both the R channel and the G channel being 0, or is a set of B channel values.
As shown in fig. 3, the deep feature extraction unit includes: a maximum pooling layer, an average pooling layer and a second Concat layer;
the input end of the maximum pooling layer is connected with the input end of the average pooling layer and is used as the input end of the deep feature extraction unit; the input end of the second Concat layer is respectively connected with the output end of the maximum pooling layer and the output end of the average pooling layer, and the output end of the second Concat layer is used as the output end of the deep feature extraction unit.
The deep feature extraction unit adopts the maximum pooling layer to extract the obvious features, adopts the average pooling layer to extract the global features, and fully extracts the data features while reducing the data quantity.
As shown in fig. 4, the defect value estimating unit includes: a feature enhancement layer, a fifth convolution layer and a channel defect value estimation layer;
the input end of the characteristic enhancement layer is used as the input end of the defect value estimation unit, and the output end of the characteristic enhancement layer is connected with the input end of the fifth convolution layer; the input end of the channel defect value estimation layer is connected with the output end of the fifth convolution layer, and the output end of the channel defect value estimation layer is used as the output end of the defect value estimation unit.
The characteristic enhancement layer is used for enhancing deep characteristics, so that the characteristics of an R channel image, a G channel image or a B channel image are highlighted, the distinction degree is increased, and defect value estimation is better carried out.
The expression of the characteristic enhancement layer is as follows:
wherein,the +.>Personal characteristic value->Input for feature enhancement layer +.>The value of the characteristic is a value of,for enhancing the coefficient->The number of feature values entered for the feature enhancement layer.
According to the invention, the characteristic value input by the characteristic enhancement layer is normalized, the data distribution condition is reflected, and then the data distribution condition is enhanced by the enhancement coefficient, so that the channel image characteristic is highlighted.
The calculation formula of the enhancement coefficient is as follows:
wherein,for enhancing the coefficient->As an arctangent function, +.>Is the +.sup.th on R-channel image, G-channel image or B-channel image>Individual channel values->Number of channel valuesQuantity, in processing R channel image, +.>For the R channel image +.>R channel values, & lt/L & gt, when processing G channel images>For G channel image +.>G channel values, & gt, when processing B channel images>For the B channel picture +.>And B is the channel value, and I is the absolute value.
The enhancement coefficient is derived from the non-uniformity condition of the channel value distribution, and the non-uniformity condition can clearly represent the smoothness degree of the channel image, so that the characteristics of the channel image can be amplified through the enhancement coefficient, and the defect detection precision is improved.
The channel defect value estimation layer is used for carrying out segmentation processing on all the characteristic values output by the fifth convolution layer, constructing each segment of characteristic value into a characteristic sequence, extracting a maximum characteristic value and a characteristic mean value from each characteristic sequence, and calculating a channel defect value according to the maximum characteristic value and the characteristic mean value;
the formula for calculating the channel defect value is as follows:
wherein,for channel defect value, ++>Is->Maximum eigenvalue in the individual eigenvalues, < >>Is->Feature mean in the individual feature sequences, +.>Is->Weight of maximum eigenvalue in each eigenvalue,/->Is->Weights of feature means in the individual feature sequences, +.>To activate the function +.>For the number of feature sequences, +.>Is the channel characteristic quantity without scar defect +.>Is the channel characteristic quantity to be processed.
Channel characteristics without scar defect in the inventionWith to be treatedChannel characteristic quantity->The acquisition process is the same, except that the channel feature without scar defect is +.>From the display surface image without scar defects, whereas the channel feature quantity to be processed is +.>Derived from the display surface image to be detected.
In the invention, all the characteristic values output by the fifth convolution layer are subjected to sectional processing so as to perform sectional extraction of the characteristics, ensure the richness of the characteristics, further extract the rich characteristics, respectively assign different weights to the maximum characteristic value and the characteristic mean value when calculating the channel defect value, further ensure the accuracy of defect detection, and then calculate the channel characteristic quantity without scar defectsThe gap of (2) characterizes the defect condition.
S4, calculating the scar defect degree of the display according to the R channel defect value, the G channel defect value and the B channel defect value.
The formula for calculating the scar defect degree of the display in the step S4 is as follows:
wherein,for the display scar defect level, +.>For the R channel defect value, +.>For G channel defect value, +.>Is a B-channel defect value.
The invention extracts the R channel value, the G channel value and the B channel value of the display surface image respectively, so as to split the display surface image into the R channel image, the G channel image and the B channel image, input the R channel image, the G channel image and the B channel image into a defect detection neural network respectively to obtain the defect value corresponding to each channel, and comprehensively evaluate the scar defect degree of the display according to the defect values of the three channels. According to the invention, the R channel value, the G channel value and the B channel value are respectively processed, and the scar defect degree is analyzed through the characteristic condition of each channel value, so that the detection precision of the scar defect of the display is improved.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A method for detecting scar defects of a display, comprising the steps of:
s1, acquiring a display surface image;
s2, extracting an R channel value, a G channel value and a B channel value of the display surface image to obtain an R channel image, a G channel image and a B channel image;
s3, respectively inputting the R channel image, the G channel image and the B channel image into a defect detection neural network to obtain an R channel defect value, a G channel defect value and a B channel defect value;
s4, calculating the scar defect degree of the display according to the R channel defect value, the G channel defect value and the B channel defect value; the defect detection neural network in S3 includes: the device comprises a shallow layer feature extraction unit, a deep layer feature extraction unit and a defect value estimation unit;
the input end of the shallow feature extraction unit is used as the input end of the defect detection neural network, and the output end of the shallow feature extraction unit is connected with the input end of the deep feature extraction unit; the input end of the defect value estimation unit is connected with the output end of the deep feature extraction unit, and the output end of the defect value estimation unit is used as the output end of the defect detection neural network; the shallow feature extraction unit includes: a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, and a first Concat layer;
the input end of the first convolution layer is used as the input end of the shallow layer feature extraction unit, and the output end of the first convolution layer is respectively connected with the input end of the second convolution layer, the input end of the third convolution layer and the input end of the fourth convolution layer; the input end of the first Concat layer is respectively connected with the output end of the second convolution layer, the output end of the third convolution layer and the output end of the fourth convolution layer, and the output end of the first Concat layer is used as the output end of the shallow layer feature extraction unit;
the defect value estimation unit includes: a feature enhancement layer, a fifth convolution layer and a channel defect value estimation layer;
the input end of the characteristic enhancement layer is used as the input end of the defect value estimation unit, and the output end of the characteristic enhancement layer is connected with the input end of the fifth convolution layer; the input end of the channel defect value estimation layer is connected with the output end of the fifth convolution layer, and the output end of the channel defect value estimation layer is used as the output end of the defect value estimation unit;
the channel defect value estimation layer is used for carrying out segmentation processing on all the characteristic values output by the fifth convolution layer, constructing each segment of characteristic value into a characteristic sequence, extracting a maximum characteristic value and a characteristic mean value from each characteristic sequence, and calculating a channel defect value according to the maximum characteristic value and the characteristic mean value;
the formula for calculating the channel defect value is as follows:
wherein,for channel defect value, ++>Is->Maximum eigenvalue in the individual eigenvalues, < >>Is->Feature mean in the individual feature sequences, +.>Is->Weight of maximum eigenvalue in each eigenvalue,/->Is->Weights of feature means in the individual feature sequences, +.>To activate the function +.>For the number of feature sequences, +.>Is the channel characteristic quantity without scar defect;
the formula for calculating the scar defect degree of the display in the step S4 is as follows:
wherein,for the display scar defect level, +.>For the R channel defect value, +.>For G channel defect value, +.>Is a B-channel defect value.
2. The method of claim 1, wherein the first convolution layer has a convolution kernel size of 1*1, the second convolution layer has a convolution kernel size of 3*3, the third convolution layer has a convolution kernel size of 5*5, and the fourth convolution layer has a convolution kernel size of 7*7.
3. The display scar defect detection method of claim 1, wherein the deep feature extraction unit includes: a maximum pooling layer, an average pooling layer and a second Concat layer;
the input end of the maximum pooling layer is connected with the input end of the average pooling layer and is used as the input end of the deep feature extraction unit; the input end of the second Concat layer is respectively connected with the output end of the maximum pooling layer and the output end of the average pooling layer, and the output end of the second Concat layer is used as the output end of the deep feature extraction unit.
4. The method for detecting scar defect of a display of claim 1, wherein the expression of the feature enhancement layer is:
wherein,the +.>Personal characteristic value->Input for feature enhancement layer +.>Personal characteristic value->For enhancing the coefficient->The number of feature values entered for the feature enhancement layer.
5. The method for detecting scar defect in a display according to claim 4, wherein the calculation formula of the enhancement coefficient is:
wherein,for enhancing the coefficient->As an arctangent function, +.>Is the +.sup.th on R-channel image, G-channel image or B-channel image>Individual channel values->For the number of channel values, in processing the R channel image, the +.>For the R channel image +.>R channel values, & lt/L & gt, when processing G channel images>For G channel image +.>G channel values, & gt, when processing B channel images>For the B channel picture +.>And B is the channel value, and I is the absolute value.
CN202311559341.8A 2023-11-22 2023-11-22 Display scar defect detection method Active CN117274263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311559341.8A CN117274263B (en) 2023-11-22 2023-11-22 Display scar defect detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311559341.8A CN117274263B (en) 2023-11-22 2023-11-22 Display scar defect detection method

Publications (2)

Publication Number Publication Date
CN117274263A CN117274263A (en) 2023-12-22
CN117274263B true CN117274263B (en) 2024-01-26

Family

ID=89218153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311559341.8A Active CN117274263B (en) 2023-11-22 2023-11-22 Display scar defect detection method

Country Status (1)

Country Link
CN (1) CN117274263B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593651B (en) * 2024-01-18 2024-04-05 四川交通职业技术学院 Tunnel crack segmentation recognition method

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629370A (en) * 2018-04-28 2018-10-09 广东工业大学 A kind of classification and identification algorithm and device based on depth confidence network
CN109376792A (en) * 2018-11-07 2019-02-22 河北工业大学 Photovoltaic cell open defect classification method based on multichannel residual error neural network
CN109447977A (en) * 2018-11-02 2019-03-08 河北工业大学 A kind of defects of vision detection method based on multispectral depth convolutional neural networks
CN110146509A (en) * 2019-05-07 2019-08-20 无锡先导智能装备股份有限公司 Battery detection method and battery detection equipment
CN112070751A (en) * 2020-09-10 2020-12-11 深兰人工智能芯片研究院(江苏)有限公司 Wood floor defect detection method and device
CN112085735A (en) * 2020-09-28 2020-12-15 西安交通大学 Aluminum image defect detection method based on self-adaptive anchor frame
CN112767339A (en) * 2021-01-13 2021-05-07 哈尔滨工业大学 Surface defect detection method based on visual attention model
CN112791989A (en) * 2021-03-29 2021-05-14 常州三点零智能制造有限公司 Automatic license plate detection method and device
CN113066079A (en) * 2021-04-19 2021-07-02 北京滴普科技有限公司 Method, system and storage medium for automatically detecting wood defects
CN113744252A (en) * 2021-09-07 2021-12-03 全芯智造技术有限公司 Method, apparatus, storage medium and program product for marking and detecting defects
CN113807318A (en) * 2021-10-11 2021-12-17 南京信息工程大学 Action identification method based on double-current convolutional neural network and bidirectional GRU
CN114663346A (en) * 2022-01-30 2022-06-24 河北工业大学 Strip steel surface defect detection method based on improved YOLOv5 network
CN114820521A (en) * 2022-04-27 2022-07-29 电子科技大学 Defect detection method for complex picture of display screen
CN114820582A (en) * 2022-05-27 2022-07-29 北京工业大学 Mobile phone surface defect accurate classification method based on mixed attention deformation convolution neural network
CN116797580A (en) * 2023-06-28 2023-09-22 广西大学 Train body welding quality detection method based on improved YOLOX

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230125477A1 (en) * 2021-10-26 2023-04-27 Nvidia Corporation Defect detection using one or more neural networks

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629370A (en) * 2018-04-28 2018-10-09 广东工业大学 A kind of classification and identification algorithm and device based on depth confidence network
CN109447977A (en) * 2018-11-02 2019-03-08 河北工业大学 A kind of defects of vision detection method based on multispectral depth convolutional neural networks
CN109376792A (en) * 2018-11-07 2019-02-22 河北工业大学 Photovoltaic cell open defect classification method based on multichannel residual error neural network
CN110146509A (en) * 2019-05-07 2019-08-20 无锡先导智能装备股份有限公司 Battery detection method and battery detection equipment
CN112070751A (en) * 2020-09-10 2020-12-11 深兰人工智能芯片研究院(江苏)有限公司 Wood floor defect detection method and device
CN112085735A (en) * 2020-09-28 2020-12-15 西安交通大学 Aluminum image defect detection method based on self-adaptive anchor frame
CN112767339A (en) * 2021-01-13 2021-05-07 哈尔滨工业大学 Surface defect detection method based on visual attention model
CN112791989A (en) * 2021-03-29 2021-05-14 常州三点零智能制造有限公司 Automatic license plate detection method and device
CN113066079A (en) * 2021-04-19 2021-07-02 北京滴普科技有限公司 Method, system and storage medium for automatically detecting wood defects
CN113744252A (en) * 2021-09-07 2021-12-03 全芯智造技术有限公司 Method, apparatus, storage medium and program product for marking and detecting defects
CN113807318A (en) * 2021-10-11 2021-12-17 南京信息工程大学 Action identification method based on double-current convolutional neural network and bidirectional GRU
CN114663346A (en) * 2022-01-30 2022-06-24 河北工业大学 Strip steel surface defect detection method based on improved YOLOv5 network
CN114820521A (en) * 2022-04-27 2022-07-29 电子科技大学 Defect detection method for complex picture of display screen
CN114820582A (en) * 2022-05-27 2022-07-29 北京工业大学 Mobile phone surface defect accurate classification method based on mixed attention deformation convolution neural network
CN116797580A (en) * 2023-06-28 2023-09-22 广西大学 Train body welding quality detection method based on improved YOLOX

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Display Line Defect Detection Method Based on Color Feature Fusion;Wenqiang Xie等;《machines》;第10卷;1-12 *
基于多尺度特征的太阳能硅片和电池片缺陷检测方法研究;刘佳丽;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》(第03期);C042-3319 *
基于改进YOLOv5s的大尺寸导光板缺陷检测;刘霞等;《计算机系统应用》;第32卷(第2期);339-346 *
基于深度卷积神经网络的手机屏幕缺陷检测;宋威;《中国优秀硕士学位论文全文数据库 信息科技辑》(第01期);I136-1017 *
基于相机RGB通道图像融合的手机壳表面缺陷检测研究;雷娇;《中国优秀硕士学位论文全文数据库 信息科技辑》(第01期);I136-746 *
定向识别航拍绝缘子及其缺陷检测方法研究;赵博等;《电子测量与仪器学报》;1-15 *

Also Published As

Publication number Publication date
CN117274263A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN117274263B (en) Display scar defect detection method
CN106529559A (en) Pointer-type circular multi-dashboard real-time reading identification method
CN108711158B (en) Pointer instrument image identification method based on contour fitting and radial segmentation
CN105574533B (en) A kind of image characteristic extracting method and device
CN110097537B (en) Meat quality quantitative analysis and evaluation method based on three-dimensional texture features
CN116758077B (en) Online detection method and system for surface flatness of surfboard
CN111598869B (en) Method, equipment and storage medium for detecting Mura of display screen
CN108648184A (en) A kind of detection method of remote sensing images high-altitude cirrus
CN110766657B (en) Laser interference image quality evaluation method
CN110660048B (en) Leather surface defect detection method based on shape characteristics
CN115294109A (en) Real wood board production defect identification system based on artificial intelligence, and electronic equipment
CN117475154A (en) Instrument image recognition method and system based on optimized Canny edge detection
CN117115161B (en) Plastic defect inspection method
CN107369163B (en) Rapid SAR image target detection method based on optimal entropy dual-threshold segmentation
CN116503426A (en) Ultrasonic image segmentation method based on image processing
CN108985307B (en) Water body extraction method and system based on remote sensing image
CN110930393A (en) Chip material pipe counting method, device and system based on machine vision
CN109859145A (en) It is a kind of that texture method is gone with respect to the image of total variance based on multistage weight
CN110866911B (en) Dial defect detection method and device, image processing equipment and storage medium
CN109241147B (en) Method for evaluating variability of statistical value
CN111968136A (en) Coal rock microscopic image analysis method and analysis system
CN110599456A (en) Method for extracting specific region of medical image
CN115795370B (en) Electronic digital information evidence obtaining method and system based on resampling trace
CN116245881B (en) Renal interstitial fibrosis assessment method and system based on full-field recognition
Zhang et al. Large crowd count based on improved SURF algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant