CN113971649A - Generation method and detection method of panel defect detection model and terminal equipment

Generation method and detection method of panel defect detection model and terminal equipment

Info

Publication number
CN113971649A
Authority
CN
China
Prior art keywords
panel, defect, detection, area, panel image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010706579.9A
Other languages
Chinese (zh)
Inventor
王俊卜
王树朋
魏涛
刘阳兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd filed Critical Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202010706579.9A
Publication of CN113971649A
Legal status: Pending


Classifications

    • G06T 7/0004 Industrial image inspection
    • G01N 21/8851 Scan or image signal processing specially adapted for detecting different kinds of defects
    • G01N 21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G06F 18/23213 Non-hierarchical clustering with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification based on the proximity to a decision surface, e.g. support vector machines
    • G01N 2021/8854 Grading and classifying of flaws
    • G01N 2021/8861 Determining coordinates of flaws
    • G01N 2021/8874 Taking dimensions of defect into account
    • G01N 2021/8883 Signal processing involving the calculation of gauges, generating models
    • G01N 2021/8887 Signal processing based on image processing techniques
    • G01N 2021/9513 Liquid crystal panels
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30121 CRT, LCD or plasma display

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a generation method and a detection method of a panel defect detection model, and a terminal device. The generation method comprises: acquiring a panel image set and determining a plurality of target detection areas corresponding to the panel image set; configuring the plurality of target detection areas in a preset network model; and training the configured preset network model based on the panel image set to obtain the panel defect detection model. In the present application, the plurality of target detection areas are determined from the panel image set itself, and the preset network model configured with these target detection areas is trained with the panel image set to obtain the defect detection model. Defect areas in a panel image to be detected can therefore be determined by the defect detection model, which improves the efficiency of panel defect detection.

Description

Generation method and detection method of panel defect detection model and terminal equipment
Technical Field
The application relates to the technical field of panel processing, in particular to a generation method and a detection method of a panel defect detection model and a terminal device.
Background
Product quality is one of the most important production indicators in the manufacturing industry. To guarantee product quality, defect inspection has become an indispensable step in the production process. Taking display panels (for example, Thin Film Transistor Liquid Crystal Displays, TFT-LCDs) as an example, every production line needs to inspect the display panels for defects. At present, however, display panel defects are generally inspected manually, and the large amount of repetitive work easily causes visual fatigue and misjudgment by the inspectors, which in turn affects the quality of the display panels.
Disclosure of Invention
The technical problem to be solved by the present application is to provide a generation method, a detection method and a terminal device for a panel defect detection model, aiming at the defects in the prior art.
In order to solve the above technical problem, a first aspect of the embodiments of the present application provides a method for generating a panel defect detection model, where the method includes:
acquiring a panel image set, and determining a plurality of target detection areas corresponding to the panel image set;
configuring a plurality of target detection areas in a preset network model;
and training the configured preset network model based on the panel image set to obtain a panel defect detection model.
The generation method of the panel defect detection model, wherein the acquiring a panel image set and determining a plurality of target detection areas corresponding to the panel image set specifically include:
acquiring a panel image set, wherein the panel image set comprises a plurality of training panel images;
for each training panel image in the plurality of training panel images, acquiring a real detection area included in the training panel image;
and performing cluster analysis on all the obtained real detection areas to obtain a plurality of target detection areas corresponding to the panel image set.
The generation method of the panel defect detection model, wherein the performing cluster analysis on all the obtained real detection regions to obtain a plurality of target detection regions corresponding to the panel image set specifically includes:
determining the size of a detection area corresponding to each obtained real detection area;
and performing cluster analysis on each detection area based on the acquired size of each detection area to obtain a plurality of target detection areas corresponding to the panel image set.
The generation method of the panel defect detection model, wherein the performing cluster analysis on each detection area based on the obtained size of each detection area to obtain a plurality of target detection areas corresponding to the panel image set specifically includes:
performing cluster analysis on each detection area based on the acquired size of each detection area, so as to divide all the detection areas into a plurality of detection area sets, wherein the area difference value of any two detection areas in each detection area set of the plurality of detection area sets meets a preset condition;
and selecting one detection area from each detection area set as the target detection area corresponding to that detection area set, so as to obtain a plurality of target detection areas corresponding to the panel image set.
The generation method of the panel defect detection model, wherein the preset network model comprises a detection module and a classification module, and the training of the configured preset network model based on the panel image set to obtain the panel defect detection model specifically comprises the following steps:
inputting training panel images in the panel image set into the detection module, and outputting a plurality of feature maps through the detection module, wherein at least two feature maps with different image sizes exist in the plurality of feature maps;
inputting a plurality of feature maps into the classification module, and outputting a predicted defect region corresponding to the training panel image through the classification module;
and training the model parameters of the preset network model based on the predicted defect area and the defect area label corresponding to the training panel image, so as to obtain a panel defect detection model.
The generation method of the panel defect detection model, wherein the image sizes of the feature maps in the plurality of feature maps are different from each other.
The generation method of the panel defect detection model, wherein the detection module comprises a plurality of cascaded feature extraction units; at least one first feature extraction unit exists among the plurality of feature extraction units, the number of feature extraction layers included in the first feature extraction unit is greater than a preset number, the number of feature extraction layers included in each second feature extraction unit other than the first feature extraction unit is less than or equal to the preset number, and the first feature extraction unit precedes each second feature extraction unit in the cascade order.
The generation method of the panel defect detection model, wherein each feature map in the plurality of feature maps corresponds to a plurality of candidate detection areas, the candidate detection areas are contained in the plurality of target detection areas, and the candidate detection areas corresponding to the respective feature maps are different from one another.
The generation method of the panel defect detection model, wherein, for any two feature maps in the plurality of feature maps, the area size of any candidate detection area in the first of the two feature maps is larger than the area size of any candidate detection area in the second of the two feature maps, and the image size of the first feature map is larger than that of the second feature map.
The method for generating the panel defect detection model, wherein the acquiring the panel image set specifically includes:
acquiring an initial panel image set;
determining a defect type corresponding to each initial panel image in an initial panel image set, and dividing the initial panel image set into a plurality of sub-panel image sets according to the defect type;
and selecting a preset number of initial panel images from each sub-panel image set according to a preset weight, and determining the panel image set based on the selected initial panel images.
A second aspect of the embodiments of the present application provides a panel defect detection method, which uses the panel defect detection model generated by any of the methods described above, and the method comprises the following steps:
inputting a panel image to be detected into the panel defect detection model;
and outputting the defect area corresponding to the panel image to be detected through the panel defect detection model.
The panel defect detection method, wherein, when there are a plurality of defect areas, after the defect area corresponding to the panel image to be detected is output through the panel defect detection model, the method further includes:
obtaining confidence degrees corresponding to the defect regions, and screening the defect regions based on the confidence degrees to obtain screened defect regions;
and taking the screened defect area as a defect area corresponding to the panel image to be detected.
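For illustration, the confidence-based screening described above can be sketched as follows in Python; the region format (x, y, width, height, confidence) and the threshold value are assumptions made for this example and are not prescribed by the present application.

```python
# Minimal sketch (not the patented implementation): screening detected defect
# regions by their confidence degrees. The region tuple layout and the
# threshold of 0.5 are illustrative assumptions.
from typing import List, Tuple

Region = Tuple[float, float, float, float, float]  # x, y, width, height, confidence

def screen_defect_regions(regions: List[Region], conf_threshold: float = 0.5) -> List[Region]:
    """Keep only the defect regions whose confidence meets the threshold."""
    return [r for r in regions if r[4] >= conf_threshold]

# Example: three candidate defect regions, two survive the screening.
detections = [(12, 30, 40, 8, 0.92), (100, 55, 6, 6, 0.31), (200, 10, 15, 60, 0.77)]
kept = screen_defect_regions(detections)
```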
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the method for generating a panel defect detection model as described above, and/or to implement the steps in the panel defect detection method as described above.
A fourth aspect of the present embodiment provides a terminal device, which includes: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the method for generating a panel defect detection model as described above, and/or implements the steps in the panel defect detection method as described above.
Advantageous effects: compared with the prior art, the present application provides a generation method and a detection method of a panel defect detection model, and a terminal device. The generation method comprises: acquiring a panel image set and determining a plurality of target detection areas corresponding to the panel image set; configuring the plurality of target detection areas in a preset network model; and training the configured preset network model based on the panel image set to obtain the panel defect detection model. In the present application, the plurality of target detection areas are determined from the panel image set, and the preset network model configured with these target detection areas is trained with the panel image set to obtain the defect detection model. Defect areas in a panel image to be detected can therefore be determined by the defect detection model, which improves the efficiency of panel defect detection. At the same time, because the target detection areas are determined from the panel image set, the target detection areas configured in the network model can be matched to the labeled defect areas carried by the training panel images in the panel image set, which improves the accuracy of defect area detection.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without any inventive work.
Fig. 1 is a flowchart of a method for generating a panel defect detection model according to the present application.
Fig. 2 is a schematic diagram of a model structure of a preset network model in the method for generating a panel defect detection model provided by the present application.
Fig. 3 is a flowchart of a panel defect detecting method provided in the present application.
Fig. 4 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
In order to make the purpose, technical scheme and effect of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In particular implementations, the terminal devices described in embodiments of the present application include, but are not limited to, other portable devices such as mobile phones, laptops, or tablet computers with touch sensitive surfaces (e.g., touch displays and/or touch pads). It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer having a touch-sensitive surface (e.g., a touch-sensitive display screen and/or touchpad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may also include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a video conferencing application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video playing application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a respective application. In this way, a common physical framework (e.g., the touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in this embodiment do not imply an execution order; the execution order of each process is determined by its function and internal logic, and should not constitute any limitation on the implementation process of this embodiment.
The inventors have found that product quality is one of the most important production indicators in the manufacturing industry, and that to guarantee product quality, defect inspection has become an indispensable step in the production process. Taking display panels (for example, Thin Film Transistor-Liquid Crystal Displays, TFT-LCDs) as an example, every production line needs to inspect the display panels for defects. The defect detection process commonly adopted for display panels at present is generally as follows: the display panels are first inspected by automatic optical inspection equipment during panel production, the display panels with defects are photographed, and the photographed images of the panel defect areas are then sent to technicians with professional knowledge for inspection. This method, which relies on manual inspection, faces problems such as high labor cost and low efficiency, and the large amount of repetitive work easily causes visual fatigue of the personnel, leading to misjudgment.
In order to improve the efficiency and quality of panel production lines, panel defect detection methods based on deep learning have been receiving attention from a large number of researchers and are also favored by manufacturers. However, the defect distribution on an industrial panel is not uniform, and the shapes and sizes of the defects in panel defect images also differ greatly (for example, a defect whose length and width are less than two percent of those of the photo, a defect that occupies almost the whole photo, a defect with an aspect ratio of more than twenty to one, and the like), which makes deep-learning-based panel defect detection methods perform poorly.
In order to solve the above problems, embodiments of the present application provide a generation method of a panel defect detection model, a panel defect detection method and a terminal device. The generation method of the panel defect detection model comprises: acquiring a panel image set, and determining a plurality of target detection regions corresponding to the panel image set; configuring the plurality of target detection regions in a preset network model; and training the configured preset network model based on the panel image set to obtain the panel defect detection model. In the present application, the plurality of target detection areas are determined from the panel image set, and the preset network model configured with these target detection areas is trained with the panel image set to obtain the defect detection model. In this way, because the target detection areas are determined from the panel image set and used as the default detection areas of the defect detection model, the target detection areas configured in the network model can be matched to the labeled defect areas carried by the training panel images in the panel image set, which improves the accuracy of defect area detection.
The following further describes the content of the application by describing the embodiments with reference to the attached drawings.
The present embodiment provides a method for generating a panel defect detection model, as shown in fig. 1, the method includes:
s10, acquiring a panel image set, and determining a plurality of target detection areas corresponding to the panel image set.
Specifically, the panel image set includes a plurality of training panel images, each of which includes a panel defect region; within each training panel image, the defect categories corresponding to its panel defect regions are the same, and the number of training panel images corresponding to each defect category is the same. The defect category indicates the cause of the panel defect (i.e., why the defect was produced); defect categories may include manufacturing process errors, repair errors, and the like. For example, if the panel corresponding to training panel image A has a dot defect, and the cause of the dot defect is a manufacturing process error, then the defect cause corresponding to training panel image A is a manufacturing process error, and the defect category corresponding to training panel image A is manufacturing process error.
Further, in one implementation of this embodiment, the acquiring the panel image set specifically includes:
acquiring an initial panel image set;
dividing the initial panel image set into a plurality of sub-panel image sets according to defect categories;
and selecting a preset number of initial panel images from each sub-panel image set according to a preset weight, and determining the panel image set based on the selected initial panel images.
Specifically, each initial panel image in the initial panel image set carries a labeled defect area and the defect category corresponding to that labeled defect area, and the defect categories of the panel defect areas carried by any single initial panel image are the same. For example, the initial panel image set includes an initial panel image A that carries a panel defect area a and a panel defect area b, and the defect category corresponding to panel defect area a is the same as the defect category corresponding to panel defect area b. In a specific implementation of this embodiment, the initial panel images may be panel images acquired, in real time or at preset intervals during the panel production process, by an image acquisition device (e.g., a camera or video camera) arranged on the production line; or panel images obtained from the local storage space of the electronic device running the generation method of the panel defect detection model; or panel images returned by an image storage server after an image acquisition request is sent to that server. Of course, the initial panel images may be acquired in other ways, and the specific acquisition method is not limited here. The panel to be detected may include a TFT-LCD panel, an integrated circuit panel or a chip panel, and the panel to be detected may include a circuit region, a non-circuit region, and the like.
Further, after the initial panel image set is obtained, the initial panel images in the initial panel image set may be divided into a plurality of sub-panel image sets according to the defect categories corresponding to the defect areas carried by the initial panel images; each of the sub-panel image sets corresponds to one defect category, and the defect categories corresponding to the sub-panel image sets differ from one another. After the sub-panel image sets are obtained, a preset number of initial panel images are selected from each sub-panel image set according to a preset weight, and the set formed by all the selected initial panel images is used as the panel image set. The preset number is smaller than the number of initial panel images contained in the target sub-panel image set, where the target sub-panel image set is the sub-panel image set that contains the fewest initial panel images.
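As an illustrative sketch only (the data structures and function names below are assumptions, not part of this application), the category-wise division and weighted sampling described above could look like this in Python:

```python
# Minimal sketch: split the initial panel images by defect category, then
# sample a preset number from each sub-panel image set so that every category
# is represented in the panel image set. Field names are assumptions.
import random
from collections import defaultdict

def build_panel_image_set(initial_images, weights, preset_number, seed=0):
    """initial_images: list of dicts like {"path": ..., "defect_category": ...}.
    weights: dict mapping defect_category -> sampling weight in (0, 1]."""
    random.seed(seed)
    sub_sets = defaultdict(list)
    for img in initial_images:                       # divide into sub-panel image sets
        sub_sets[img["defect_category"]].append(img)

    panel_image_set = []
    for category, images in sub_sets.items():
        k = min(len(images), round(preset_number * weights.get(category, 1.0)))
        panel_image_set.extend(random.sample(images, k))   # select per category
    return panel_image_set
```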
Further, in an implementation of this embodiment, the panel image set includes the real detection areas corresponding to the training panel images, and a real detection area is used to reflect the position information of the image area corresponding to a panel defect in a training panel image; each training panel image may correspond to one or more real detection areas. For example, when a training panel image includes one defect region, the training panel image includes one real detection area; when a training panel image includes a plurality of defect regions, the training panel image includes a plurality of real detection areas.
The real detection area may be marked in the training panel image, the real detection area may adopt a rectangular area, a square area, or the like, and the respective shapes of the real detection areas corresponding to the training images may be different, for example, a part of the real detection areas in the plurality of training panel images is the rectangular area, and a part of the real detection areas is the square area, or the like. In addition, in practical applications, the real detection area may be a detection frame, and the detection frame may be marked in a training panel image, and an image area in the training panel image within the detection frame is a detection area (i.e., a panel defect area) corresponding to the training panel image.
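Purely as an illustration of one possible annotation layout (the field names are assumptions and not the format used by this application), a training panel image and its real detection areas might be recorded as:

```python
# Illustrative annotation layout only: one training panel image with two real
# detection areas, each stored as a detection frame, plus the image-level
# defect category. All names and values here are assumptions.
training_sample = {
    "image_path": "panel_0001.png",
    "defect_category": "manufacturing_process_error",
    "real_detection_areas": [
        {"x": 120, "y": 45, "width": 50, "height": 30},   # rectangular detection frame
        {"x": 310, "y": 200, "width": 16, "height": 16},  # square detection frame
    ],
}
```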
Further, in an implementation manner of this embodiment, the acquiring a panel image set and determining a plurality of target detection regions corresponding to the panel image set specifically include:
s11, acquiring a panel image set, wherein the panel image set comprises a plurality of training panel images;
S12, for each training panel image in the plurality of training panel images, acquiring the real detection area included in the training panel image;
and S13, performing cluster analysis on all the obtained real detection areas to obtain a plurality of target detection areas corresponding to the panel image set.
Specifically, the target detection areas are used as default detection areas of a preset network model, so that the preset network model labels a training panel image input into the preset network model based on the default detection areas, wherein the target detection areas are partial real detection areas of all acquired real detection areas, and the number of the target detection areas is less than the number of all acquired real detection areas. It can be understood that, for a training panel image input to a preset network model, a predicted detection area corresponding to the training panel image output by the preset network model corresponds to one of a plurality of target detection areas, wherein the area size corresponding to the predicted detection area is equal to the area size of the target detection area corresponding to the predicted detection area.
Further, the target detection area is determined by performing cluster analysis on all the obtained real detection areas, wherein each of the obtained real detection areas includes area position information and area size information; the area size information is used for reflecting the area size of the detection area corresponding to the real detection area, and the position information is used for reflecting the position, in the training panel image, of the detection area corresponding to the real detection area. For example, the size information is the length and width of the detection region corresponding to the real detection region, and the position information is the distance between the center of the detection region corresponding to the real detection region and the image center of the training panel image, and the like. Of course, in practical applications, the position information may also be determined in other manners, for example, the distance from the central point of the detection region corresponding to the real detection region to the upper left corner of the training panel image, or the distance from the upper left corner of the detection region corresponding to the real detection region to the upper left corner of the training panel image. The size information may also be determined in other manners, for example, the perimeter of the detection region corresponding to the real detection region, or the area of the detection region corresponding to the real detection region.
Further, since the target detection areas are used to configure the preset network model, and the preset network model needs to detect the position of a defect area in the training panel image, it can be understood that the preset network model detects the defect position in the training panel image and represents the detected defect area with a target detection area placed at the detected position, so that the default detection area carries the detected position information. Therefore, the target detection areas configured in the preset network model do not carry position information; accordingly, after the real detection areas are obtained, the position information in the real detection areas can be removed and only the size information in the real detection areas retained.
Based on this, in an implementation manner of this embodiment, the performing cluster analysis on all the obtained real detection regions to obtain a plurality of target detection regions corresponding to the panel image set specifically includes:
determining the size of a detection area corresponding to each obtained real detection area;
and performing cluster analysis on each detection area based on the acquired size of each detection area to obtain a plurality of target detection areas corresponding to the panel image set.
Specifically, the detection area size is used to reflect the size of the detection area corresponding to the real detection area. The detection area size comprises the length of the detection area and the width of the detection area, where the detection area is a rectangular and/or square area. It can be understood that, among the obtained real detection areas, some may be rectangular detection frames and some square detection frames, or all may be rectangular detection frames, or all may be square detection frames. For example, the obtained real detection areas include a real detection area A, a real detection area B and a real detection area C: the real detection area A is a rectangular detection frame with a length of 5 and a width of 3, the real detection area B is a rectangular detection frame with a length of 5 and a width of 4, and the real detection area C is a square detection frame with a length of 4 and a width of 4. Then the detection area size corresponding to real detection area A is (5,3), the detection area size corresponding to real detection area B is (5,4), and the detection area size corresponding to real detection area C is (4,4), where a in (a, b) represents the length of the detection area and b represents the width of the detection area.
Further, in an implementation manner of this embodiment, the performing cluster analysis on each detection region based on the obtained size of each detection region to obtain a plurality of target detection regions corresponding to the panel image set specifically includes:
performing cluster analysis on each detection area based on the acquired size of each detection area, so as to divide all the detection areas into a plurality of detection area sets, wherein the area difference value of any two detection areas in each detection area set of the plurality of detection area sets meets a preset condition;
and selecting one detection area from each detection area set as the target detection area corresponding to that detection area set, so as to obtain a plurality of target detection areas corresponding to the panel image set.
Specifically, after the sizes of the detection regions are obtained, cluster analysis may be performed on the obtained sizes of the detection regions to obtain a plurality of data classes, and each data class is used as a detection region set to obtain a plurality of detection region sets, where each detection region set in the plurality of detection region sets includes a plurality of detection regions, and any two detection regions in the plurality of detection regions satisfy a preset condition, where the preset condition may be preset, for example, the preset condition is that the region difference is smaller than or equal to a preset threshold value. In addition, when a plurality of detection area sets are obtained, one detection area can be randomly selected from each detection area set as a target detection area, or a cluster center of each detection area set is used as a target detection area.
Further, in a specific implementation manner of this embodiment, after the area sizes of the detection areas are obtained, each detection area size may be represented as a two-dimensional data group, and all the detection area sizes may form a two-dimensional data group set. When performing cluster analysis on each detection region based on the obtained size of each detection region, cluster analysis may be performed on the two-dimensional data set to obtain a plurality of data classes. Then, taking the clustering center in each data class as the area size of a target detection area, and then obtaining the area sizes of a plurality of target detection areas; and finally, determining the target detection area according to the acquired area size of each target detection area. The two-dimensional data groups of the clustering centers of the data classes are different, and correspondingly, the area sizes of the target detection areas are different from each other. The clustering analysis can adopt a K-means clustering algorithm, and the two-dimensional data set is subjected to clustering analysis through the K-means clustering algorithm to obtain a plurality of data classes. In one possible implementation manner of this embodiment, the number of the data classes is 12, that is, the number of the target detection areas is 12.
By way of example, assume that 12 cluster centres are used for the cluster analysis of the two-dimensional data set. Each cluster centre corresponds to one two-dimensional data group, and the two-dimensional data groups corresponding to the cluster centres are different from each other; for example, the 12 cluster centres include the two-dimensional data group (5,3) and the two-dimensional data group (20,10). Based on the 12 two-dimensional data groups, 12 different target detection areas can then be obtained, where the target detection area corresponding to the two-dimensional data group (5,3) is a rectangular detection area with a length of 5 and a width of 3, and the target detection area corresponding to the two-dimensional data group (20,10) is a rectangular detection area with a length of 20 and a width of 10.
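A minimal sketch of this clustering step is given below; it uses scikit-learn's KMeans as one possible implementation (the application does not prescribe a library), and the helper name and toy sizes are assumptions for illustration only:

```python
# Each real detection area is reduced to a (length, width) pair; the cluster
# centres are then used as the sizes of the target detection areas.
import numpy as np
from sklearn.cluster import KMeans

def cluster_target_detection_areas(detection_area_sizes, num_areas=12):
    """detection_area_sizes: iterable of (length, width) pairs from all real detection areas."""
    data = np.asarray(detection_area_sizes, dtype=float)   # the two-dimensional data group set
    kmeans = KMeans(n_clusters=num_areas, n_init=10, random_state=0).fit(data)
    return kmeans.cluster_centers_                          # one (length, width) per target detection area

# Example: sizes collected from the training panel images (values illustrative).
sizes = [(5, 3), (5, 4), (4, 4), (20, 10), (22, 9), (6, 3), (19, 11), (5, 2),
         (21, 10), (4, 3), (18, 12), (23, 8), (6, 4), (20, 9), (5, 3), (22, 11)]
target_sizes = cluster_target_detection_areas(sizes, num_areas=4)  # 4 clusters only to fit the toy data
```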
S20, configuring a plurality of target detection areas in a preset network model.
Specifically, the preset network model is preset and is used for generating the panel defect detection model based on the panel image set. It can be understood that, after the preset network model is trained based on the panel image set, the panel defect detection model is obtained, where the model structure of the panel defect detection model is the same as that of the preset network model; the difference between the two is that the model parameters configured in the panel defect detection model are the parameters learned through training (back-propagation), while the model parameters configured in the preset network model are the initial model parameters.
Further, the preset network model is preset with a plurality of configuration parameters, wherein the configuration parameters may include a learning rate parameter, a reverse learning measurement parameter, and a default detection area parameter. After the preset network model is established, a plurality of corresponding configuration parameters need to be configured for the preset network model, and the preset network model configured with the configuration parameters is used as the preset network model. The default detection area parameter is used to configure a default detection area corresponding to the preset network model, so that the preset network model can label the training panel image with the default detection area, for example, when the default detection area is a rectangle, the default detection area parameter may include a length value and a width value corresponding to the default detection area, and for example, when the default detection area is a circle, the default detection area parameter may include a radius value corresponding to the default detection area, and the like. Therefore, the specific step of configuring the target detection areas in the preset network model is as follows: and taking the target detection areas as default detection area parameters, configuring the default detection area parameters in the preset network model, and enabling the default detection areas configured by the preset network model to be the target detection areas, so that when the preset network model predicts the prediction detection areas corresponding to the training panel images, one or more of the target detection areas are adopted to mark the predicted detection areas. It can be understood that, for a training panel image input to a preset network model, a predicted detection area corresponding to the training panel image output by the preset network model corresponds to one of a plurality of target detection areas, where the correspondence refers to that the area size of the predicted detection area is equal to the area size of the target detection area corresponding to the predicted detection area.
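As a hypothetical illustration of such configuration parameters (the parameter names and values below are assumptions, not the actual configuration interface of this application), the cluster centres obtained above could be written into the preset network model's default detection area parameter like this:

```python
# Hypothetical configuration sketch: parameter names and values are assumptions.
preset_network_config = {
    "learning_rate": 1e-3,                   # learning rate parameter
    "max_training_steps": 5000,              # upper bound on training iterations
    # default detection area parameter: one (length, width) per target detection area
    "default_detection_areas": [(5, 3), (20, 10), (6, 4), (21, 9)],  # illustrative values
}
```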
And S30, training the configured preset network model based on the panel image set to obtain a panel defect detection model.
Specifically, the input of the preset network model is a training panel image in the panel image set, and the output of the preset network model is the predicted detection area corresponding to that training panel image. It can be understood that training the configured preset network model based on the panel image set may be done by using the panel image set as the training sample set of the preset network model and training the preset network model with each training panel image in the panel image set, so as to obtain the trained panel defect detection model. Training the preset network model with each training panel image in the panel image set may specifically be: inputting a training panel image in the panel image set into the preset network model, outputting the predicted detection area corresponding to the training panel image through the preset network model, determining a loss function based on the predicted detection area and the real detection area corresponding to the training panel image, and finally training the model parameters of the preset network model based on the loss function until the model parameters of the preset network model meet a preset condition, so as to obtain the panel defect detection model.
Further, the preset condition may include that the loss function value meets a preset requirement or that the number of training iterations reaches a preset number. The preset requirement may be determined according to the accuracy required of the panel defect detection model, which is not described in detail here, and the preset number may be the maximum number of training iterations of the preset network model, for example 5000. Therefore, after the preset network model outputs the predicted detection area corresponding to a training panel image and the loss function is determined based on the predicted detection area and the real detection area corresponding to the training panel image, whether the loss function value meets the preset requirement is judged; if the loss function value meets the preset requirement, the training ends. If the loss function value does not meet the preset requirement, whether the number of training iterations of the preset network model has reached the preset number is judged; if not, the network parameters of the preset network model are corrected according to the loss function value, and if the preset number has been reached, the training ends. In this way, whether the training of the preset network model is finished is judged through both the loss function value and the number of training iterations, which avoids the training entering an endless loop because the loss function value cannot meet the preset requirement. In this embodiment, when the preset network model is trained based on the loss function, an improved batch gradient descent method (such as Adam or SGD) may be adopted for the back-propagation learning.
Further, since the network parameters of the preset network model are corrected when the training condition does not satisfy the preset condition (that is, the loss function value does not meet the preset requirement and the number of training iterations has not reached the preset number), the model needs to continue to be trained after its network parameters are corrected according to the loss function value; that is, the step of inputting a training panel image in the panel image set into the preset network model continues to be performed. The training panel image that continues to be input is a training panel image that has not yet been input into the preset network model. For example, the training panel images in the panel image set have unique image identifiers (e.g., image numbers), and the training panel image input in the second round of training has a different image identifier from the one input in the first round; for example, the image number of the training panel image input in the first round is 1, that of the second round is 2, and that of the Nth round is N. Certainly, in practical applications, because the number of training panel images in the training set is limited, in order to improve the training effect the training panel images in the panel image set may be sequentially input into the preset network model; after all the training panel images in the panel image set have been input once, the operation of sequentially inputting them may be performed again, so that the training panel images in the training set are input into the preset network model in cycles. Of course, in practical applications, when the preset network model is trained with the panel image set again, the order in which the training panel images are input may be adjusted.
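A minimal PyTorch-style sketch of this training procedure is shown below; the application does not prescribe a framework, and the model, loss function and data loader are placeholders assumed for illustration:

```python
# Sketch only: train until the loss meets the preset requirement or the preset
# number of iterations is reached, cycling through the panel image set.
import torch

def train(preset_network_model, data_loader, detection_loss,
          loss_requirement=0.05, max_steps=5000, lr=1e-3):
    optimizer = torch.optim.Adam(preset_network_model.parameters(), lr=lr)
    step, done = 0, False
    while not done:
        for images, real_detection_areas in data_loader:    # cycle through the panel image set
            predicted_areas = preset_network_model(images)   # predicted detection areas
            loss = detection_loss(predicted_areas, real_detection_areas)
            step += 1
            if loss.item() <= loss_requirement or step >= max_steps:
                done = True                                   # preset condition met: stop training
                break
            optimizer.zero_grad()
            loss.backward()                                   # correct the network parameters
            optimizer.step()
    return preset_network_model                               # the trained panel defect detection model
```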
Further, in an implementation manner of this embodiment, the preset network model includes a detection module and a classification module, and the training of the configured preset network model based on the panel image set to obtain the panel defect detection model specifically includes:
S31, inputting the training panel images in the panel image set into the detection module, and outputting a plurality of feature maps through the detection module, wherein at least two feature maps with different image sizes exist in the plurality of feature maps;
S32, inputting the plurality of feature maps into the classification module, and outputting the predicted defect area corresponding to the training panel image through the classification module;
and S33, training the model parameters of the preset network model based on the predicted defect area and the defect area label corresponding to the training panel image, so as to obtain the panel defect detection model.
Specifically, each of the feature maps includes feature information of the defect regions in the training panel image, and each feature map is used to determine the predicted defect region corresponding to the training panel image; at least two feature maps with different image sizes exist among the plurality of feature maps. In one implementation, only some of the image sizes differ: for example, the plurality of feature maps include a feature map A, a feature map B and a feature map C, where the image size of feature map A is different from the image size of feature map C. In addition, in practical applications, the image sizes of the feature maps may all be different from one another: for example, the plurality of feature maps include a feature map A, a feature map B and a feature map C, where the image size of feature map A is different from the image sizes of feature map B and feature map C, and the image size of feature map B is different from the image size of feature map C.
In an implementation manner of this embodiment, the ratio of width to height of the image size is the same for all of the feature maps, and in the feature map sequence formed by arranging the feature maps in order of image size from small to large, the image size ratio of any two adjacent feature maps is the same, where the image size ratio refers to the ratio of their widths (or, equivalently, of their heights). For example, the plurality of feature maps include a feature map A with an image size of 19 × 19, a feature map B with an image size of 38 × 38, and a feature map C with an image size of 76 × 76. The feature map sequence formed by arranging them in order of image size from small to large is feature map A, feature map B, feature map C; the ratio of the image size of feature map B to that of feature map A is 2, and the ratio of the image size of feature map C to that of feature map B is 2.
Further, each of the feature maps corresponds to a plurality of candidate detection regions, wherein the candidate detection regions are included in the target detection regions, and the candidate detection regions corresponding to the respective feature maps are different from each other. In addition, for any two feature maps among the plurality of feature maps, the region size of any candidate detection region in the first feature map of the two feature maps is larger than the region size of any candidate detection region in the second feature map of the two feature maps, where the image size of the first feature map is larger than the image size of the second feature map. For example, the plurality of feature maps include a feature map A and a feature map B, the image size of the feature map A is 76 × 76, and the image size of the feature map B is 38 × 38; then the candidate detection regions in the feature map A are larger than those in the feature map B, for example, a candidate detection region in the feature map A has a size of 20 × 10 and a candidate detection region in the feature map B has a size of 5 × 3.
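A minimal sketch of the idea that each feature map carries its own set of candidate detection regions, with the larger feature map receiving the larger candidate regions; only the 20 × 10 and 5 × 3 sizes come from the example above, all remaining values are assumptions:

```python
# Illustrative only: candidate detection regions (width, height) configured per
# feature map size; following the rule above, the larger the feature map,
# the larger the candidate detection regions it carries.
CANDIDATE_REGIONS = {
    76: [(20, 10), (30, 25), (45, 40)],   # largest map -> largest regions
    38: [(5, 3), (8, 6), (12, 10)],
    19: [(2, 2), (3, 2), (4, 3)],
}

def candidate_regions_for(feature_map_size):
    """Return the candidate detection regions configured for a feature map size."""
    return CANDIDATE_REGIONS[feature_map_size]
```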
Further, in an implementation manner of this embodiment, the detection module includes a plurality of cascaded feature extraction units, and according to the connection order of the plurality of feature extraction units, the output item of the previous feature extraction unit in any two adjacent feature extraction units is the input item of the next feature extraction unit, and the input item of the front-most feature extraction unit is the training panel image. In addition, the number of the feature extraction units is the same as the number of the feature maps, and each feature map is determined by a corresponding feature extraction unit.
Further, at least one first feature extraction unit exists among the plurality of feature extraction units, the number of feature extraction layers included in the first feature extraction unit is greater than a preset number, the number of feature extraction layers included in each second feature extraction unit other than the first feature extraction unit is less than or equal to the preset number, and the first feature extraction unit is located before each second feature extraction unit in the cascade order. The reason is that, according to the cascade order of the plurality of feature extraction units, the first feature map output by the front-most feature extraction unit serves as the basis on which the subsequent feature extraction units compute their feature maps, so the first feature map output by the front-most feature extraction unit should include rich image detail features of the panel defect region; accordingly, the front-most feature extraction unit includes more feature extraction layers than any feature extraction unit located behind it. For example, the plurality of feature extraction units include a feature extraction unit 1, a feature extraction unit 2 and a feature extraction unit 3; the feature extraction unit 1 is connected with the feature extraction unit 2, and the feature extraction unit 2 is connected with the feature extraction unit 3, so the feature extraction unit 1 includes more feature extraction layers than the feature extraction unit 2 and than the feature extraction unit 3; for example, the feature extraction unit 1 includes 5 feature extraction layers, the feature extraction unit 2 includes 3 feature extraction layers, and the feature extraction unit 3 includes 3 feature extraction layers. Further, for each feature extraction unit, the image sizes of the output items of the feature extraction layers included in that unit are the same; for example, if a feature extraction unit A includes a feature extraction layer 1, a feature extraction layer 2 and a feature extraction layer 3, then the image sizes of the output items of the feature extraction layer 1, the feature extraction layer 2 and the feature extraction layer 3 are the same, for example 128 × 128.
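A possible sketch of the cascaded feature extraction units, assuming convolutional feature extraction layers in PyTorch; every channel count, kernel size and the stride-2 downsampling between units are assumptions, while the 5/3/3 layer split follows the example above:

```python
import torch
import torch.nn as nn

def extraction_unit(in_ch, out_ch, num_layers):
    """One feature extraction unit: num_layers conv layers whose outputs
    all share the same spatial size (stride 1, padding 1)."""
    layers, ch = [], in_ch
    for _ in range(num_layers):
        layers += [nn.Conv2d(ch, out_ch, 3, stride=1, padding=1),
                   nn.ReLU(inplace=True)]
        ch = out_ch
    return nn.Sequential(*layers)

class DetectionBackbone(nn.Module):
    """Three cascaded units: the front-most has 5 feature extraction layers,
    the later ones 3 each; a stride-2 conv between units downsamples."""
    def __init__(self):
        super().__init__()
        self.unit1 = extraction_unit(3, 32, num_layers=5)
        self.down1 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.unit2 = extraction_unit(64, 64, num_layers=3)
        self.down2 = nn.Conv2d(64, 128, 3, stride=2, padding=1)
        self.unit3 = extraction_unit(128, 128, num_layers=3)

    def forward(self, x):
        f1 = self.unit1(x)              # largest feature map
        f2 = self.unit2(self.down1(f1))
        f3 = self.unit3(self.down2(f2)) # smallest feature map
        return f1, f2, f3
```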
Further, the detection module includes a plurality of fusion units, and the number of the fusion units is one less than the number of the feature extraction units; for example, when the number of the feature extraction units is 3, the number of the fusion units is 2. The input items of the front-most fusion unit are the output items of the last two feature extraction units; the input items of the fusion unit at the second position are the output item of the feature extraction unit third from the last and the output item of the front-most fusion unit, and so on; the input items of the last fusion unit are the output item of the fusion unit immediately before it and the output item of the first feature extraction unit. Correspondingly, the plurality of feature maps include the output items of the fusion units and the output item of the last feature extraction unit. In this way, after the panel image is downsampled by the feature extraction units, semantic information reflecting the panel defects in the panel image is obtained; the fusion units then upsample the collected reference feature maps, so that the image size of the feature map can be enlarged while the semantic information reflecting the panel defects in each training panel image is retained. The feature maps with enlarged image sizes can be used to detect small-size panel defect regions, so the accuracy of detecting small-size panel defect regions is improved on the basis that the defect detection model can already detect large-size panel defect regions, and the accuracy of the defect detection model is improved overall.
By way of example, as shown in fig. 2, the plurality of feature extraction units include a first feature extraction unit, a second feature extraction unit and a third feature extraction unit; the plurality of fusion units include a first fusion unit and a second fusion unit, where the output item of the third feature extraction unit and the output item of the second feature extraction unit are the input items of the first fusion unit, the output item of the first feature extraction unit and the output item of the first fusion unit are the input items of the second fusion unit, and the output item of the third feature extraction unit, the output item of the first fusion unit and the output item of the second fusion unit are the three feature maps corresponding to the training panel image.
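The structure of fig. 2 could be sketched as follows; upsampling by nearest-neighbour interpolation and fusion by channel concatenation are assumptions (the patent only requires that the fusion units enlarge the image size while keeping the defect semantics), and DetectionBackbone refers to the hypothetical sketch above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionUnit(nn.Module):
    """Upsample the deeper feature map and fuse it with the shallower one."""
    def __init__(self, deep_ch, shallow_ch, out_ch):
        super().__init__()
        self.merge = nn.Conv2d(deep_ch + shallow_ch, out_ch, 3, padding=1)

    def forward(self, deep, shallow):
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        return self.merge(torch.cat([deep_up, shallow], dim=1))

class DetectionModule(nn.Module):
    """Backbone plus two fusion units, yielding three feature maps of
    different image sizes for the classification module."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone                 # e.g. DetectionBackbone()
        self.fusion1 = FusionUnit(128, 64, 64)   # third + second unit outputs
        self.fusion2 = FusionUnit(64, 32, 32)    # fusion1 + first unit outputs

    def forward(self, x):
        f1, f2, f3 = self.backbone(x)
        p2 = self.fusion1(f3, f2)
        p1 = self.fusion2(p2, f1)
        return f3, p2, p1
```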
In summary, this embodiment provides a method for generating a panel defect detection model, including: acquiring a panel image set and determining a plurality of target detection areas corresponding to the panel image set; configuring the plurality of target detection areas in a preset network model; and training the configured preset network model based on the panel image set to obtain the panel defect detection model. In this application, the plurality of target detection areas are determined according to the panel image set, and the preset network model provided with the target detection areas is trained with the panel image set to obtain the defect detection model. Therefore, the defect area in a detected panel image can be determined based on the defect detection model, which improves the efficiency of panel defect detection; meanwhile, since the target detection areas are determined based on the panel image set, the target detection areas configured in the defect detection model can match the standard areas of the defect areas carried by the training panel images in the panel image set, which improves the accuracy of defect area detection.
Based on the generation method of the panel defect detection model, this embodiment provides a panel defect detection method, which is applied to the panel defect detection model in the above embodiment, as shown in fig. 3, and the method includes:
N10, inputting the panel image to be detected into the panel defect detection model;
and N20, outputting a defect area corresponding to the panel image to be detected through the panel defect detection model, wherein the defect area is contained in a plurality of target detection areas.
Specifically, the panel image to be detected may be a panel image acquired in real time or at preset time intervals during the panel production process by an image acquisition device (such as a camera) preset on the production line for producing the panel; or a panel image acquired from a local storage space of the electronic device running the panel defect detection method; or a panel image returned by an image storage server based on an image acquisition request sent to that server; of course, the panel image to be detected may also be acquired in other manners, and the specific acquisition manner is not limited herein. The panel to be detected may include a TFT-LCD panel, an integrated circuit panel or a chip panel, and the panel to be detected may include a circuit area, a non-circuit area, and the like.
Further, the default detection areas configured by the panel defect detection model are a plurality of target detection areas, and therefore, when the defect area corresponding to the panel image to be detected is determined based on the panel defect detection model, the defect area output by the panel defect detection model is marked by using one of the plurality of target detection areas.
Further, since many different defects may exist on the display panel and the positions of the defects in the display panel may be very close to each other, the defect frames detected during defect detection of the display panel may contain intersecting or nested defect regions. Based on this, in an implementation manner of this embodiment, after the defect area corresponding to the panel image to be detected is output by the panel defect detection model, the method further includes:
obtaining confidence degrees corresponding to the defect regions, and screening the defect regions based on the confidence degrees to obtain screened defect regions;
and taking the screened defect area as a defect area corresponding to the panel image to be detected.
Specifically, a defect area is used to reflect the position, in the panel image, of the image area corresponding to a panel defect in the panel to be detected, and some of the defect areas may intersect or be nested in one another. Each defect area may be a rectangular frame, a square frame, or the like, and the shapes of the defect areas may differ from one another; for example, some of the defect areas may be rectangular candidate areas and others square candidate areas. In addition, each defect area carries a corresponding confidence, and the confidence corresponding to each defect area is used to reflect the credibility that a defect exists in the image area corresponding to that defect area. In an implementation manner of this embodiment, the confidence value ranges from 0 to 1; the greater the confidence value, the higher the credibility of the defect area, and conversely, the smaller the confidence value, the lower the credibility of the defect area. For example, a defect area with a confidence of 1 has higher credibility than a defect area with a confidence of 0.1.
Further, based on the confidence corresponding to each defect region, all the defect regions may be filtered by a non-maximum suppression algorithm to determine the defect regions corresponding to the panel image. For example, all the defect regions are regarded as a defect region set, and all defect regions whose confidence is smaller than or equal to a first threshold (e.g., 0.1) are deleted from the defect region set to update the defect region set; all defect regions in the updated defect region set are then arranged from high confidence to low confidence, the defect region with the highest confidence is selected, the intersection-over-union (which can be understood as the ratio of the intersection to the union of two detection frames) between each other defect region in the set and the defect region with the highest confidence is determined, the defect regions whose intersection-over-union is larger than a second threshold (e.g., 0.3) are filtered out of the updated defect region set, and the filtered set is taken as the updated defect region set; the operations of arranging the defect regions from high confidence to low confidence and filtering are repeated until the filtered defect region set contains no defect regions, and all the selected defect regions are taken as the panel defect regions corresponding to the panel.
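The screening just described is essentially standard non-maximum suppression; a self-contained sketch, reusing the illustrative thresholds 0.1 and 0.3 from the example, is:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(defect_regions, confidences, conf_thresh=0.1, iou_thresh=0.3):
    """Non-maximum suppression over the detected defect regions."""
    # drop regions whose confidence is below the first threshold
    candidates = [(r, c) for r, c in zip(defect_regions, confidences) if c > conf_thresh]
    # arrange the remaining regions from high confidence to low confidence
    candidates.sort(key=lambda rc: rc[1], reverse=True)
    kept = []
    while candidates:
        best, _ = candidates.pop(0)     # highest-confidence defect region
        kept.append(best)
        # filter out regions whose overlap with the selected one is too large
        candidates = [(r, c) for r, c in candidates if iou(best, r) <= iou_thresh]
    return kept
```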
In an implementation manner of this embodiment, the filtering, by the defect detection model, all the defect regions based on the confidence degrees corresponding to the defect regions to obtain the filtered defect regions specifically includes:
acquiring position parameters corresponding to each defect area;
and filtering each defect region based on the confidence coefficient and the position parameter corresponding to each defect region to obtain the filtered defect region.
Specifically, the position parameter is used to reflect the region occupied by the defect region in the panel image; it can be understood that the position parameter is the region information of the image region corresponding to the defect region in the panel image. The position parameter includes position information and size information, where the size information is used to reflect the size of the defect region and the position information is used to reflect the position of the defect region in the panel image. For example, the size information may be the area of the defect region, and the position information may be the distance between the center of the defect region and the image center of the panel image. Of course, in practical applications, the position information may also be determined in other manners, for example, the distance from the center point of the defect region to the upper left corner of the panel image, or the distance from the upper left corner of the defect region to the upper left corner of the panel image; the size information may also be determined in other ways, such as the perimeter of the defect region.
Further, in an implementation manner of this embodiment, the determination process of the position parameter is described by taking as an example a rectangular defect region whose size information is its area and whose position information is the distance between its center and the image center of the panel image. The position parameter of the defect region may be determined as follows: firstly, the vertex coordinates of the four vertices of the defect region are acquired; secondly, the four side lengths and the center point coordinates of the defect region are determined according to the four vertex coordinates; finally, the size information of the defect region is determined according to the four side lengths, and the position information of the defect region is determined according to the center point coordinates. For example, if the image center coordinates of the panel image are (0, 0) and the four vertex coordinates of a defect region A are (0, 2), (2, 2), (2, 0) and (0, 0), then the center of the defect region A is (1, 1), so the position information of the defect region A is √2 and the size information of the defect region A is 4.
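The example can be reproduced in a few lines; the helper below is a sketch that assumes an axis-aligned rectangular defect region given by its four vertices:

```python
import math

def position_and_size(vertices, image_center=(0.0, 0.0)):
    """Position parameter of an axis-aligned rectangular defect region:
    size = area of the region, position = distance from its center
    to the image center."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    center = ((max(xs) + min(xs)) / 2.0, (max(ys) + min(ys)) / 2.0)
    size = width * height
    position = math.dist(center, image_center)
    return position, size

# Defect region A from the example: position = sqrt(2), size = 4
print(position_and_size([(0, 0), (0, 2), (2, 2), (2, 0)]))
```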
Filtering the defect regions refers to screening each of the defect regions so as to remove the defect regions that do not satisfy a filtering condition, where the filtering condition is determined based on the confidence and the position parameter corresponding to each defect region. It can be understood that, by using the two factors of confidence and position parameter as the filtering factors of a defect region, on the one hand whether a panel defect exists in the defect region is considered, and on the other hand the region information of the region occupied by the defect region in the panel image is considered. When intersecting or nested defect regions are filtered, the defect regions with high confidence are retained, and at the same time, through the constraint of the position parameter, the candidate regions whose size and position satisfy the conditions are retained, so the accuracy of the retained defect regions is improved, and the accuracy of panel defect detection is further improved.
The reason why the two factors of confidence and position parameter are adopted as the filtering factors of the defect regions in this embodiment is as follows: the defects of a display panel are generally discrete small defect particles, so the detected defect regions are the regions corresponding to the individual small defect particles, and the region corresponding to each small defect particle usually has a high confidence (the confidence value ranges from 0 to 1, and a high confidence can be understood as a confidence greater than a preset confidence, for example greater than 0.7). Consequently, if the defect regions were screened based on the confidence alone, only the regions corresponding to the individual small defect particles would be retained and no large detection frame containing the small defect particles would be obtained; the screened defect regions would then be unable to determine the defect type corresponding to the display panel, resulting in low defect detection accuracy.
In an implementation manner of this embodiment, the filtering each defect region based on the confidence and the position parameter corresponding to each defect region to obtain a filtered defect region specifically includes:
determining a confidence score corresponding to each defect area according to the confidence coefficient and the position parameter corresponding to each defect area;
and filtering each defect area according to the corresponding confidence score of each defect area to obtain the filtered defect area.
Specifically, the confidence score is used to reflect the possibility that the defect region is selected; the larger the value of the confidence score, the higher the credibility of the defect region, and conversely, the smaller the value of the confidence score, the lower the credibility of the defect region. For example, a defect region with a confidence score of 5 is more likely to be selected than a defect region with a confidence score of 1. The target defect region set is the candidate region set formed by the defect regions obtained after filtering based on the confidence scores, and the target defect region set is a subset of the defect region set: when some defect regions are filtered out based on the confidence scores, the target defect region set is a proper subset of the defect region set; when no defect region is filtered out based on the confidence scores, the target defect region set equals the defect region set. For example, if the defect region set includes a defect region A, a defect region B, a defect region C and a defect region D, and the defect region A and the defect region D are filtered out based on the confidence scores corresponding to the defect regions, then the target defect region set includes the defect region B and the defect region C.
Further, in an implementation manner of this embodiment, the location parameter includes location information and size information; and determining the confidence score corresponding to each defect region according to the confidence coefficient and the position parameter corresponding to each defect region as follows:
and weighting the confidence coefficient, the position information and the size information corresponding to the defect region to obtain a confidence score corresponding to the defect region.
Specifically, the size information is used to reflect the size of the defect region; for example, the size information may be the area of the defect region or a side length of the candidate frame of the defect region. The position information is used to reflect the position of the defect region in the panel image; for example, when the size information is the area of the defect region, the position information may be the distance between the center of the defect region and the image center of the panel image, the distance from the center point of the defect region to the upper left corner of the panel image, or the distance from the upper left corner of the defect region to the upper left corner of the panel image. The confidence is the confidence carried by the defect region.
Further, before the confidence, the position information and the size information are weighted, the weight coefficients corresponding to the confidence, the position information and the size information need to be determined. The weight coefficient corresponding to the confidence and the weight coefficient corresponding to the size information are both preset; for example, the weight coefficient of the confidence is 1 and the weight coefficient of the size information is 0.5. In practical application, the weight coefficient corresponding to the size information may be set according to the actual detection situation, that is, according to the importance of size in the detection task: when the importance of size is high, the weight coefficient corresponding to the size information is large; conversely, when the importance of size is low, the weight coefficient corresponding to the size information is small. In this way, even when a defect region with a larger size has a lower confidence, its confidence score can still be increased, so that the probability of the larger defect region being selected is increased. For example, in a detection task focusing on the size of the defect region, the weight coefficient corresponding to the size information is larger than its value in a detection task focusing on the position information of the defect region; for instance, in the former task it may take the value 0.8, whereas in the latter task it may take the value 0.4.
Further, in an implementation manner of this embodiment, the process of obtaining the weight parameter corresponding to the location information specifically includes:
for each edge in the defect area, determining a distance between the edge and a target edge, wherein the target edge is an edge of the panel image corresponding to the edge;
and determining a second weight coefficient corresponding to the position information according to all the determined distances.
Specifically, the distance refers to the distance between a region edge and its target edge, where the region edge is parallel to the target edge. The distance may therefore be determined as follows: a starting point is selected on the region edge, a perpendicular is drawn from the starting point to the target edge, and the distance between the starting point and the foot of the perpendicular is taken as the distance between the region edge and the target edge. For a given edge a of the defect region, the panel image has two border edges parallel to edge a; if the distance between border edge A and edge a is a, the distance between border edge B and edge a is b, and a > b, then the border edge B is the target edge corresponding to edge a. Of course, it should be noted that the defect region may have the same shape as the edge frame of the panel image, for example, both may be rectangular frames.
Further, the distance refers to the distance between two parallel edges, one of which is the initial edge and the other the target edge; when the distance between the initial edge and the target edge is determined, a point may be selected on the initial edge, a perpendicular drawn from that point to the target edge, and the distance between the point and the foot of the perpendicular taken as the distance between the initial edge and the target edge. For example, suppose the defect region is a rectangular frame abcd and the edge frame of the panel image is a rectangle ABCD; the target edge corresponding to the edge ab of the rectangular frame abcd is AB, and the distance between the edge ab and the edge AB is d_h1; the target edge corresponding to the edge bc is BC, and the distance between them is d_w1; the target edge corresponding to the edge cd is CD, and the distance between them is d_h2; the target edge corresponding to the edge da is DA, and the distance between them is d_w2.
Further, in one implementation of this embodiment, the defect region is a rectangular frame; after the distance corresponding to each edge of the defect region is obtained, the four edges of the defect region are divided into a first edge group and a second edge group, where the first edge group includes a first wide edge and a first high edge that intersect each other, and the second edge group includes a second wide edge and a second high edge that intersect each other. After the first edge group and the second edge group are obtained, the ratio of the sum of the distances corresponding to the first wide edge and the first high edge to the sum of the lengths of their target edges is calculated, and the ratio of the sum of the distances corresponding to the second wide edge and the second high edge to the sum of the lengths of their target edges is calculated; finally, the weight coefficient corresponding to the position information is determined from the two calculated ratios, for example, the smaller of the two ratios is taken as the weight coefficient corresponding to the position information, or the larger of the two ratios is taken as the weight coefficient corresponding to the position information.
In a specific implementation manner, the smaller of the two ratios is taken as the weight coefficient corresponding to the position information, so that the influence of the position information on the confidence score is retained while the position information is prevented from occupying too high a proportion of the confidence score, which would distort the score and lead to wrong selection of defect regions. Accordingly, the weight coefficient corresponding to the position information can be calculated as follows:
W_1 = min( (d_h1 + d_w1) / (W + H), (d_h2 + d_w2) / (W + H) )
where W_1 is the weight coefficient corresponding to the position information; d_h1 is the distance from the first wide edge to its corresponding target edge; d_w1 is the distance from the first high edge to its corresponding target edge; d_h2 is the distance from the second wide edge to its corresponding target edge; d_w2 is the distance from the second high edge to its corresponding target edge; W is the width of the edge frame of the panel image, and H is the height of the edge frame of the panel image.
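Under this reconstruction of the formula (the original formula image is not reproduced in this text), the weight coefficient can be computed as in the following sketch:

```python
def position_weight(d_h1, d_w1, d_h2, d_w2, W, H):
    """Weight coefficient W_1 for the position information: the smaller of the
    two edge-group distance ratios, as reconstructed above."""
    ratio1 = (d_h1 + d_w1) / (W + H)   # first edge group (first wide + first high edge)
    ratio2 = (d_h2 + d_w2) / (W + H)   # second edge group
    return min(ratio1, ratio2)
```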
Further, after determining the confidence level, the size information, the position information, the weight coefficient corresponding to the confidence level, the weight coefficient corresponding to the size information, and the weight coefficient corresponding to the position information, the calculation formula of the confidence score may be:
ComBox = a·z + W_0·s + W_1·p
where ComBox is the confidence score, z is the confidence, a is the weight coefficient corresponding to the confidence, W_0 is the weight coefficient corresponding to the size information, s is the size information, W_1 is the weight coefficient corresponding to the position information, and p is the position information.
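Putting the pieces together, the confidence score is a weighted sum; the coefficient values a = 1 and W_0 = 0.5 reuse the illustrative numbers given earlier, and the default W_1 below is only a placeholder for the value produced by the position-weight sketch above:

```python
def confidence_score(z, s, p, a=1.0, w0=0.5, w1=0.4):
    """ComBox = a*z + W_0*s + W_1*p: weighted sum of the confidence z,
    the size information s and the position information p."""
    return a * z + w0 * s + w1 * p

# e.g. confidence 0.9, size information 4, position information sqrt(2)
score = confidence_score(z=0.9, s=4.0, p=2 ** 0.5)
```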
Further, in an implementation manner of this embodiment, the filtering the defect regions according to the confidence scores corresponding to the defect regions to obtain filtered defect regions specifically includes:
determining a target candidate region and a reference candidate region set corresponding to the defect candidate region set according to the confidence score corresponding to each defect candidate region in the defect candidate region set, wherein the reference candidate region set comprises the remaining defect candidate regions in the defect candidate region set other than the target candidate region, and the target candidate region is the defect region with the highest confidence score among the defect regions;
for each reference candidate region in a set of reference candidate regions, determining a first area parameter and a second area parameter between the target candidate region and the reference candidate region;
filtering the reference candidate region set according to the first area parameter and the second area parameter corresponding to each reference candidate region to obtain a filtered reference candidate region;
taking the filtered reference candidate regions as the defect regions, and continuing to perform the step of determining the target candidate region and the reference candidate region set corresponding to the defect regions according to the confidence scores corresponding to the defect regions, until the filtered reference candidate region set contains no reference candidate regions;
and determining a target defect area set corresponding to the panel image according to all the determined target candidate areas.
Specifically, the target candidate region is the defect region with the largest confidence score in the defect region set, and the reference candidate region set includes the remaining defect regions in the defect region set other than the target candidate region. It can be understood that the target candidate region together with all defect regions in the reference candidate region set constitutes all defect regions in the defect region set. Therefore, the process of determining the target candidate region and the reference candidate region set according to the confidence score corresponding to each defect region may be: selecting one defect region from the defect region set as the target candidate region according to the confidence scores of the defect regions, and removing the target candidate region from the defect region set to obtain the reference candidate region set, where the target candidate region may be the defect region with the largest confidence score in the defect region set. For example, the defect regions include a defect region A, a defect region B, a defect region C, a defect region D and a defect region E, where the confidence score of the defect region A is 5, that of the defect region B is 4.8, that of the defect region C is 4.9, that of the defect region D is 3.8, and that of the defect region E is 5.8; then the defect region E is the target candidate region, and the defect region A, the defect region B, the defect region C and the defect region D constitute the reference candidate region set.
Further, the first area parameter is used to reflect the ratio between the intersection region and the union region of the target candidate region and the reference candidate region, and the second area parameter is used to reflect the ratio between the intersection region of the target candidate region and the reference candidate region and the target candidate region itself. Therefore, for each reference candidate region in the reference candidate region set, determining the first area parameter and the second area parameter between the target candidate region and the reference candidate region specifically includes:
for each reference candidate region in the reference candidate region set, determining the intersection region and the union region between the target candidate region and the reference candidate region;
determining the first area parameter corresponding to the reference candidate region according to the intersection region and the union region;
and determining a second area parameter corresponding to the reference candidate area according to the intersection area and the image area corresponding to the target candidate area.
Specifically, the intersection region between the target candidate region and the reference candidate region refers to the intersection of the image region enclosed by the edge frame of the target candidate region and the image region enclosed by the edge frame of the reference candidate region, and the union region refers to the union of these two image regions. The first area parameter is the ratio of the area of the intersection region to the area of the union region. In addition, the image region corresponding to the target candidate region is the image region whose edge frame is the target candidate region, and the second area parameter is the ratio of the area of the intersection region to the area of the image region corresponding to the target candidate region. Of course, it should be noted that the intersection region, the union region and the image region corresponding to the target candidate region may all be partial image regions of the panel image.
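A minimal sketch of the two area parameters, where R1 is the usual intersection-over-union and R2 is the intersection divided by the area of the target candidate region:

```python
def area_parameters(target, reference):
    """R1 = intersection / union, R2 = intersection / area(target);
    boxes are given as (x1, y1, x2, y2)."""
    x1, y1 = max(target[0], reference[0]), max(target[1], reference[1])
    x2, y2 = min(target[2], reference[2]), min(target[3], reference[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    area_r = (reference[2] - reference[0]) * (reference[3] - reference[1])
    union = area_t + area_r - inter
    r1 = inter / union if union > 0 else 0.0
    r2 = inter / area_t if area_t > 0 else 0.0
    return r1, r2
```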
Further, in an implementation manner of this embodiment, the filtering the reference candidate region set according to the first area parameter and the second area parameter corresponding to each reference candidate region to obtain the filtered reference candidate region specifically includes:
for each reference candidate region, if the first area parameter or the second area parameter corresponding to the reference candidate region is greater than a preset threshold, filtering the reference candidate region from the reference candidate region set to obtain a filtered reference candidate region.
Specifically, the preset threshold is a preset standard for measuring whether each reference candidate region needs to be filtered out; it can be understood that the preset threshold is the filtering basis for filtering the reference candidate region set. Therefore, when the first area parameter R1 and the second area parameter R2 are obtained, they may be compared with the preset threshold to determine whether the reference candidate region needs to be filtered out.
In a specific implementation manner of this embodiment, the condition that the reference candidate region needs to be filtered out may be that the first area parameter R1 or the second area parameter R2 is greater than a preset threshold. Thus, after the first area parameter R1 and the second area parameter R2 are obtained, the first area parameter R1 and the second area parameter R2 may be compared with preset thresholds, respectively; if the first area parameter R1 is greater than a preset threshold or the second area parameter R2 is greater than a preset threshold, taking the reference candidate region as a reference candidate region to be filtered, and filtering the reference candidate region from the reference candidate region set; in this way, the step of comparing the first area parameter R1 and the second area parameter R2 with the preset threshold is performed for each reference candidate region, so that all reference candidate regions in the reference candidate region set that need to be filtered out can be filtered out to obtain filtered reference candidate regions.
Further, after the filtered reference candidate set is obtained, whether the filtered reference candidate set still contains defect regions can be judged; if it does, the filtered reference candidate set is taken as the defect region set, and the step of determining the target candidate region and the reference candidate region set according to the confidence score corresponding to each defect region is continued; if it does not, all the determined target candidate regions are acquired and taken as the filtered defect regions, so as to obtain the defect regions corresponding to the panel image.
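Combining the confidence scores with the two area parameters, the whole filtering loop of this section can be sketched as follows; it reuses the hypothetical area_parameters helper above, and the preset threshold value 0.3 is illustrative:

```python
def filter_defect_regions(regions, scores, thresh=0.3):
    """Greedy filtering driven by confidence scores: repeatedly keep the
    highest-scoring region and drop the reference regions whose first or
    second area parameter exceeds the preset threshold."""
    remaining = sorted(zip(regions, scores), key=lambda rs: rs[1], reverse=True)
    selected = []
    while remaining:
        target, _ = remaining.pop(0)           # highest confidence score
        selected.append(target)
        kept = []
        for reference, s in remaining:
            r1, r2 = area_parameters(target, reference)
            if r1 <= thresh and r2 <= thresh:  # keep only weakly overlapping regions
                kept.append((reference, s))
        remaining = kept                       # repeat until the set is empty
    return selected
```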
Based on the above generation method of the panel defect detection model, the present embodiment provides a computer-readable storage medium, which stores one or more programs that can be executed by one or more processors to implement the steps in the generation method of the panel defect detection model according to the above embodiment.
Based on the above generation method of the panel defect detection model, the present application further provides a terminal device, as shown in fig. 4, including at least one processor (processor) 20; a display screen 21; and a memory (memory)22, and may further include a communication Interface (Communications Interface)23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. Further, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory. For example, the memory may be any of a variety of media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and may also be a transient storage medium.
In addition, the specific processes loaded and executed by the storage medium and the instruction processors in the mobile terminal are described in detail in the above method and are not repeated herein.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (14)

1. A method for generating a panel defect detection model, the method comprising:
acquiring a panel image set, and determining a plurality of target detection areas corresponding to the panel image set;
configuring a plurality of target detection areas in a preset network model;
and training the preset network model obtained by configuration based on the panel image set to obtain a panel defect detection model.
2. The method for generating a panel defect inspection model according to claim 1, wherein the acquiring a panel image set and determining a plurality of target inspection regions corresponding to the panel image set specifically includes:
acquiring a panel image set, wherein the panel image set comprises a plurality of training panel images;
for each training panel image among the plurality of training panel images, acquiring a real detection area included in the training panel image;
and performing cluster analysis on all the obtained real detection areas to obtain a plurality of target detection areas corresponding to the panel image set.
3. The method for generating the panel defect detection model according to claim 2, wherein the performing cluster analysis on all the obtained real detection regions to obtain a plurality of target detection regions corresponding to the panel image set specifically comprises:
determining the size of a detection area corresponding to each obtained real detection area;
and performing cluster analysis on each detection area based on the acquired size of each detection area to obtain a plurality of target detection areas corresponding to the panel image set.
4. The method for generating the panel defect detection model according to claim 3, wherein the performing cluster analysis on each detection region based on the obtained size of each detection region to obtain a plurality of target detection regions corresponding to the panel image set specifically comprises:
performing cluster analysis on each detection area based on the acquired size of each detection area to divide the detection areas into a plurality of detection area sets, wherein the area difference between any two detection areas in each of the plurality of detection area sets satisfies a preset condition;
and selecting a detection area from each detection area set as the target detection area corresponding to that detection area set, to obtain a plurality of target detection areas corresponding to the panel image set.
5. The method for generating a panel defect detection model according to claim 1, wherein the preset network model includes a detection module and a classification module, and the training of the preset network model obtained by configuration based on the panel image set to obtain the panel defect detection model specifically includes:
inputting a training panel image in the panel image set into the detection module, and outputting a plurality of feature maps through the detection module, wherein at least two feature maps with different image sizes exist in the plurality of feature maps;
inputting a plurality of feature maps into the classification module, and outputting a predicted defect region corresponding to the training panel image through the classification module;
and training the model parameters of the preset network model based on the predicted defect area and the defect area label corresponding to the training panel image to obtain a panel defect detection model.
6. The method of generating a panel defect inspection model according to claim 5, wherein the image sizes of the feature maps are different from each other.
7. The method for generating the panel defect detection model according to claim 5, wherein the detection module includes a plurality of cascaded feature extraction units, at least one first feature extraction unit exists in the plurality of feature extraction units, the number of feature extraction layers included in the first feature extraction unit is greater than a preset number, the number of feature extraction layers included in each second feature extraction unit except the first feature extraction unit in the plurality of feature extraction units is less than or equal to the preset number, and the first feature extraction unit is located before each second feature extraction unit according to a cascaded sequence.
8. The method as claimed in claim 1, wherein each of the feature maps corresponds to a plurality of candidate inspection regions, wherein the candidate inspection regions are included in the target inspection regions, and the candidate inspection regions corresponding to each feature map are different from each other.
9. The method of claim 8, wherein for any two of the plurality of feature maps, a size of a region of any candidate detection region in a first feature map of the two feature maps is larger than a size of a region of any candidate detection region in a second feature map of the two feature maps, wherein a size of an image of the first feature map is larger than a size of an image of the second feature map.
10. The method for generating a panel defect inspection model according to claim 1, wherein the acquiring a panel image set specifically comprises:
acquiring an initial panel image set;
determining a defect type corresponding to each initial panel image in an initial panel image set, and dividing the initial panel image set into a plurality of sub-panel image sets according to the defect type;
and selecting a preset number of initial panel images from each sub-panel image set according to a preset weight, and determining the panel image set based on the selected initial panel images.
11. A panel defect detecting method applied to the panel defect detecting model according to any one of claims 1 to 10, the method comprising:
inputting a panel image to be detected into the panel defect detection model;
and outputting the defect area corresponding to the panel image to be detected through the panel defect detection model.
12. The panel defect detecting method according to claim 11, wherein, when there are a plurality of defect areas, after the defect area corresponding to the panel image to be detected is output by the panel defect detection model, the method further comprises:
obtaining confidence degrees corresponding to the defect regions, and screening the defect regions based on the confidence degrees to obtain screened defect regions;
and taking the screened defect area as a defect area corresponding to the panel image to be detected.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs which are executable by one or more processors to implement the steps in the method for generating a panel defect detection model according to any one of claims 1 to 10 and/or to implement the steps in the panel defect detection method according to any one of claims 11 to 12.
14. A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the method for generating a panel defect detection model according to any one of claims 1 to 10, and/or implements the steps in the panel defect detection method according to any one of claims 11 to 12.
CN202010706579.9A 2020-07-21 2020-07-21 Generation method and detection method of panel defect detection model and terminal equipment Pending CN113971649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010706579.9A CN113971649A (en) 2020-07-21 2020-07-21 Generation method and detection method of panel defect detection model and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010706579.9A CN113971649A (en) 2020-07-21 2020-07-21 Generation method and detection method of panel defect detection model and terminal equipment

Publications (1)

Publication Number Publication Date
CN113971649A true CN113971649A (en) 2022-01-25

Family

ID=79584596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010706579.9A Pending CN113971649A (en) 2020-07-21 2020-07-21 Generation method and detection method of panel defect detection model and terminal equipment

Country Status (1)

Country Link
CN (1) CN113971649A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082722A (en) * 2022-08-22 2022-09-20 四川金信石信息技术有限公司 Equipment defect detection method, system, terminal and medium based on forward sample
CN115082722B (en) * 2022-08-22 2022-11-01 四川金信石信息技术有限公司 Equipment defect detection method, system, terminal and medium based on forward sample

Similar Documents

Publication Publication Date Title
US10565560B2 (en) Alternative people charting for organizational charts
CN111340054A (en) Data labeling method and device and data processing equipment
CN109345553A (en) A kind of palm and its critical point detection method, apparatus and terminal device
US11341319B2 (en) Visual data mapping
CN116206012A (en) Element layout method and related equipment
WO2023029491A1 (en) Panel array short circuit detection method and apparatus, electronic device, and storage medium
CN111753954A (en) Hyper-parameter optimization method of sparse loss function
CN113971649A (en) Generation method and detection method of panel defect detection model and terminal equipment
CN109582269B (en) Physical splicing screen display method and device and terminal equipment
CN113971650A (en) Product flaw detection method, computer device and storage medium
CN113971648A (en) Panel defect detection method, storage medium and terminal equipment
CN114723649A (en) Panel defect detection method, storage medium and terminal equipment
CN112784818B (en) Identification method based on grouping type active learning on optical remote sensing image
CN112712119B (en) Method and device for determining detection accuracy of target detection model
JP2005301789A (en) Cluster analysis device, cluster analysis method and cluster analysis program
CN114660065A (en) Panel defect detection method, storage medium and terminal equipment
CN112084364A (en) Object analysis method, local image search method, device, and storage medium
CN110781973B (en) Article identification model training method, article identification device and electronic equipment
JP2020013378A (en) Image classification method and image classification device
CN113139956B (en) Generation method and identification method of section identification model based on language knowledge guidance
CN111612023A (en) Classification model construction method and device
CN114757871A (en) Defect classification model generation method, defect classification model classification method and terminal equipment
CN114782710B (en) Image feature extraction method and device and display panel restoration method
CN113112497B (en) Industrial appearance defect detection method based on zero sample learning, electronic equipment and storage medium
CN115294078B (en) Glass asymmetric chamfer identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination