CN117058150B - Method and device for detecting defects of lamp beads - Google Patents

Method and device for detecting defects of lamp beads

Info

Publication number
CN117058150B
CN117058150B (application CN202311320730.5A)
Authority
CN
China
Prior art keywords
defect
picture
area
data
pseudo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311320730.5A
Other languages
Chinese (zh)
Other versions
CN117058150A (en)
Inventor
熊海飞
杨光
于洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Original Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinrun Fulian Digital Technology Co Ltd filed Critical Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority to CN202311320730.5A priority Critical patent/CN117058150B/en
Publication of CN117058150A publication Critical patent/CN117058150A/en
Application granted granted Critical
Publication of CN117058150B publication Critical patent/CN117058150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0004 Industrial image inspection
    • G01M 11/0257 Testing optical properties by analyzing the image formed by the object to be tested
    • G01M 11/0278 Detecting defects of the object to be tested, e.g. scratches or dust
    • G01R 31/44 Testing lamps
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/08 Learning methods
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/187 Segmentation involving region growing, region merging or connected component labelling
    • G06V 10/764 Recognition using classification, e.g. of video objects
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space
    • G06V 10/806 Fusion of extracted features
    • G06V 10/82 Recognition using neural networks
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y02B 20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention provides a method and a device for detecting defects of lamp beads. The method comprises the following steps: after a lamp panel is electrified, collecting a first picture and the electrified current of the lamp panel, wherein a plurality of LED lamp beads are arranged on the lamp panel; inputting the first picture and the electrified current into a defect detection network to obtain a defect identification result; detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result; after the lamp panel is powered off, collecting a second picture of the lamp panel; and detecting a second pseudo defect in the defect area according to the first picture and the second picture. Embodiments of the invention solve the technical problem of inaccurate lamp bead defect detection in the related art, reduce pseudo defects in lamp bead identification, and improve the detection precision of the lamp beads.

Description

Method and device for detecting defects of lamp beads
Technical Field
The invention relates to the technical fields of machine vision and industrial robots, in particular to a method and a device for detecting defects of lamp beads.
Background
In industrial production in the related art, when a chip mounter attaches LED lamp beads to PCB bonding pads, product defects arise from factors such as the temperature and humidity of the production environment, the process flow, raw materials, equipment or personnel, so that some lamp beads do not light.
In the related art, unlit LED lamp beads are detected either with the SSD object detection method or with threshold settings on, for example, brightness or edge gradient. The SSD object detection method often identifies beads with dirty surfaces as unlit beads, and also identifies textures at non-bead positions as unlit beads. The threshold-setting method is strongly affected by illumination, the environment and complex backgrounds at photographing time, and the thresholds must be modified frequently according to the environment in use; the method is therefore severely limited, the generalization ability of the algorithm is poor, and many pseudo-defect beads are produced. The pseudo-defect problem has two aspects: first, the defect detection network outputs beads that are lit as unlit; second, the defect detection network outputs other, non-bead structures as unlit beads. The final result is inaccurate bead detection, which affects product yield.
In view of the above problems in the related art, an efficient and accurate solution has not been found.
Disclosure of Invention
The invention provides a method and a device for detecting defects of lamp beads, which are used for solving the technical problems in the related art.
According to an embodiment of the present invention, there is provided a method for detecting a defect of a lamp bead, including: after the lamp panel is electrified, collecting a first picture and electrified current of the lamp panel, wherein a plurality of LED lamp beads are arranged on the lamp panel; inputting the first picture and the electrified current into a defect detection network to obtain a defect identification result; detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result; after the lamp panel is powered off, collecting a second picture of the lamp panel; and detecting a second pseudo defect in the defect area according to the first picture and the second picture.
Optionally, detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result, includes: determining the defect position and defect probability of each identification object in the defect identification result; judging, for each identification object, whether the defect probability is smaller than a preset value, and judging whether the aspect ratio of the identification object is within a preset range according to the minimum outer bounding box of the defect position; if the defect probability is smaller than the preset value, or the aspect ratio of the identification object is within the preset range, determining the identification result of the identification object as a first pseudo defect, and determining the corresponding first identification object as a non-defect object; if the defect probability is greater than or equal to the preset value, and the aspect ratio of the identification object is not within the preset range, determining the corresponding second identification object as a defect object; and determining the areas occupied by all the non-defect objects and all the defect objects in the first picture as the non-defect area and the defect area respectively.
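The partition step above can be sketched as follows (an illustrative Python version; the preset probability value and aspect-ratio range are hypothetical, since the patent leaves both open):

```python
def split_defect_regions(detections, prob_min=0.5, ar_range=(0.8, 1.25)):
    """Partition a defect identification result into defect and non-defect
    objects: a detection whose defect probability is below prob_min, OR
    whose bounding-box aspect ratio falls inside ar_range, is treated as a
    first pseudo defect and hence a non-defect object. prob_min and
    ar_range are illustrative stand-ins for the patent's preset values.
    """
    defect_area, non_defect_area = [], []
    for (x, y, w, h, prob) in detections:
        aspect = w / h  # aspect ratio from the minimum outer bounding box
        if prob < prob_min or ar_range[0] <= aspect <= ar_range[1]:
            non_defect_area.append((x, y, w, h))
        else:
            defect_area.append((x, y, w, h))
    return defect_area, non_defect_area
```

The union of the boxes in each list then gives the defect and non-defect areas of the first picture.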
Optionally, detecting a second pseudo defect in the defect area from the first picture and the second picture includes: performing differential operation on the first picture and the second picture to obtain a differential graph; performing AND operation on the defect area and the differential graph to obtain a third picture; and detecting a second pseudo defect in the defect area by adopting the third picture.
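The difference-and-mask step above can be sketched with a minimal NumPy version (8-bit grayscale arrays are assumed; the patent does not fix the image representation):

```python
import numpy as np

def third_picture(first, second, defect_mask):
    """Difference map between the powered-on (first) and powered-off
    (second) pictures, masked to the defect area, as in the method:
    first a differential operation, then an AND with the defect area.

    first, second: uint8 grayscale images of the lamp panel, same shape.
    defect_mask: boolean mask marking the defect area of the first picture.
    """
    # Widen to int16 so the subtraction cannot wrap around uint8
    diff = np.abs(first.astype(np.int16) - second.astype(np.int16)).astype(np.uint8)
    # AND operation: keep the difference only inside the defect area
    return np.where(defect_mask, diff, 0)
```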
Optionally, detecting the second pseudo defect in the defect area with the third picture includes: detecting a defect connected region in the third picture, wherein the defect connected region comprises at least two adjacent defect objects; calculating the maximum gray value of the defect connected region, and calculating the brightness gradient of non-defect objects around the defect connected region; and detecting whether a second pseudo defect exists in the defect connected region according to the maximum gray value and the brightness gradient.
Optionally, detecting whether the second pseudo defect exists in the defect connected region according to the maximum gray value and the brightness gradient includes: judging whether the maximum gray value is larger than a first threshold value; if the maximum gray value is larger than the first threshold value, judging, for each defect object in the defect connected region, whether the brightness gradient of the non-defect area around the defect object is larger than a second threshold value; and if the brightness gradient of the non-defect objects around the defect object is larger than the second threshold value, determining that the identification result of the defect object is a second pseudo defect.
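A minimal sketch of this two-threshold test (the concrete threshold values are illustrative assumptions; the patent leaves the first and second thresholds open):

```python
import numpy as np

def detect_second_pseudo(region_gray, object_gradients, t1=30.0, t2=15.0):
    """Two-threshold test for a defect connected region.

    region_gray: gray values of the connected region in the third picture.
    object_gradients: brightness gradient of the non-defect surroundings
        of each defect object in the region.
    Returns one boolean per defect object, True where the object is
    judged a second pseudo defect. t1 and t2 are hypothetical thresholds.
    """
    # First gate: the region's maximum gray value must exceed t1
    if float(np.max(region_gray)) <= t1:
        return [False] * len(object_gradients)
    # Second gate, per defect object: surrounding gradient must exceed t2
    return [g > t2 for g in object_gradients]
```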
Optionally, after detecting a second pseudo defect in the defect area according to the first picture and the second picture, the method further comprises: filtering the first and second pseudo defects on the defect recognition result; judging whether a true defect exists in the filtered defect identification result; if the filtered defect identification result has true defects, determining that the lamp panel is defective; and if the filtered defect identification result does not have a true defect, determining that the lamp panel is good.
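The filtering-and-verdict step above can be sketched as follows (identifiers are illustrative; the identification result is modeled as a simple collection of detections):

```python
def panel_verdict(identification_result, first_pseudo, second_pseudo):
    """Filter the first and second pseudo defects out of the defect
    identification result; the lamp panel is defective if any true
    defect remains, and a good product otherwise.
    """
    pseudo = set(first_pseudo) | set(second_pseudo)
    true_defects = [r for r in identification_result if r not in pseudo]
    return ("defective", true_defects) if true_defects else ("good", [])
```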
Optionally, inputting the first picture and the energizing current into a defect detection network to obtain a defect identification result includes: dividing the first picture into a plurality of image blocks by adopting a feature extraction network, and digitizing the energizing current to obtain an embedded vector, wherein the defect detection network comprises the feature extraction network and a Transformer model; extracting a block feature map of each image block in the plurality of image blocks to obtain a feature map of the first picture; fusing the feature map and the embedded vector to obtain fusion features of different modal data; and inputting the fusion features into the Transformer model, and outputting the defect identification result.
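A toy sketch of the fusion step described above (a random projection stands in for the learned feature extraction network; the patch size, embedding width and current-embedding scheme are all illustrative assumptions, not the patent's actual architecture):

```python
import numpy as np

def fuse_features(picture, current, patch=16, d=64, seed=0):
    """Split the picture into image blocks, project each block to a d-dim
    block feature, embed the digitized current as one extra d-dim token,
    and concatenate everything as input tokens for a Transformer model.
    """
    rng = np.random.default_rng(seed)
    h, w = picture.shape
    blocks = (picture.reshape(h // patch, patch, w // patch, patch)
                     .transpose(0, 2, 1, 3)
                     .reshape(-1, patch * patch))    # image blocks as rows
    proj = rng.standard_normal((patch * patch, d))   # stand-in for the CNN
    tokens = blocks @ proj                           # block feature map
    current_token = np.full((1, d), float(current))  # digitized current vector
    return np.concatenate([tokens, current_token])   # fused multimodal features
```

For a 32x32 picture with 16x16 patches this yields 4 image tokens plus 1 current token.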
Optionally, before inputting the fusion feature into the Transformer model, the method further comprises: configuring training data; and configuring a first data amount D of the training data and a second data amount N of the model parameters of an initial Transformer model using the following expressions:

Performance = f(D, N);

f(D, N) = 1 − …

wherein f is a function of the training data and the model parameters; when D and N satisfy this relation, the value of f is largest and the model Performance is best; N = Key × Layer is the parameter count of the Transformer model, where Key and Layer respectively represent the number of keys and the number of layers of the Transformer model;
training the initial Transformer model by optimizing a minimized loss function to obtain the Transformer model.
Optionally, training the initial Transformer model by optimizing a minimized loss function to obtain the Transformer model includes: training the initial Transformer model by minimizing the following loss function:

L = Σ_i p_i · d_i(ŷ, y), where p_i = exp((log π_i + g_i)/τ) / Σ_j exp((log π_j + g_j)/τ)

wherein L is the loss function; p_i represents the probability of choosing the i-th loss function; n is the number of loss functions; π_i is the class probability learned by the neural network; g_i is an independent, identically distributed random variable drawn from the standard Gumbel distribution; τ is a hyper-parameter controlling the smoothness of the Softmax; ŷ is the predicted value; y is the true value; and d_i(ŷ, y) represents the distance between the predicted value and the true value under the i-th loss function.
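Reading the description as a Gumbel-Softmax weighting over n candidate loss functions (an interpretation assumed from the named symbols — learned class probabilities, standard Gumbel samples, and a Softmax temperature — not stated verbatim in this copy), the forward computation can be sketched as:

```python
import numpy as np

def gumbel_softmax_loss(losses, class_probs, tau=1.0, seed=0):
    """Weight n candidate loss values by Gumbel-Softmax probabilities
    derived from learned class probabilities pi_i, then sum.
    Forward-pass sketch only; no gradient machinery.
    """
    rng = np.random.default_rng(seed)
    # g_i ~ Gumbel(0, 1) via the inverse-CDF trick
    g = -np.log(-np.log(rng.uniform(size=len(losses))))
    z = (np.log(class_probs) + g) / tau
    p = np.exp(z - z.max())
    p /= p.sum()                      # softmax -> mixture weights p_i
    return float(np.dot(p, losses))   # L = sum_i p_i * L_i
```

Smaller τ pushes the weights toward a hard choice of a single loss; larger τ mixes them more evenly.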
Optionally, configuring the training data includes: obtaining defect pictures and non-defect pictures according to a preset proportion; marking the defect pictures to obtain defect picture data, and marking the non-defect pictures to obtain non-defect picture data; identifying a background area and a defect area in the defect picture data, and classifying the defect picture data into defect position data and defect background data, wherein the background area is the position area of the defect picture that does not comprise a defect object; configuring a first loss weight for the defect position data, and configuring a second loss weight for the defect background data and the non-defect picture data, wherein the first loss weight is twice the second loss weight, and the first loss weight and the second loss weight are used for calculating the class cross-entropy loss in training; and configuring the defect position data, the defect background data and the non-defect picture data as the training data.
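The doubled loss weight for the defect-position class can be sketched as a weighted class cross-entropy (class names and the per-sample probability representation are illustrative):

```python
import numpy as np

def weighted_class_ce(pred_probs, classes):
    """Class cross-entropy with the weighting from the training-data
    configuration: the defect-position class carries twice the loss
    weight of the defect-background and non-defect classes.

    pred_probs: predicted probability of the true class, per sample.
    classes: the true class name, per sample.
    """
    weights = {"defect_position": 2.0,
               "defect_background": 1.0,
               "non_defect": 1.0}
    w = np.array([weights[c] for c in classes])
    return float(np.mean(-w * np.log(np.asarray(pred_probs))))
```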
According to another embodiment of the present invention, there is provided a device for detecting defects of lamp beads, including: the first acquisition module is used for acquiring a first picture and an energizing current of the lamp panel after the lamp panel is energized, wherein a plurality of LED lamp beads are arranged on the lamp panel; the first detection module is used for inputting the first picture and the energizing current into a defect detection network to obtain a defect identification result; the processing module is used for detecting a first pseudo defect in the defect identification result based on the aspect ratio and dividing the first picture into a defect area and a non-defect area according to the defect identification result; the second acquisition module is used for acquiring a second picture of the lamp panel after the lamp panel is powered off; and the second detection module is used for detecting a second pseudo defect in the defect area according to the first picture and the second picture.
Optionally, the processing module includes: a determining unit, configured to determine the defect position and defect probability of each identification object in the defect identification result; a judging unit, configured to judge, for each identification object, whether the defect probability is smaller than a preset value, and to judge whether the aspect ratio of the identification object is within a preset range according to the minimum outer bounding box of the defect position; the detection unit is used for determining the identification result of the identification object as a first pseudo defect and determining the corresponding first identification object as a non-defect object if the defect probability is smaller than the preset value or the aspect ratio of the identification object is within the preset range, and for determining the corresponding second identification object as a defect object if the defect probability is greater than or equal to the preset value and the aspect ratio is not within the preset range; and the determining unit is further used for determining the areas occupied by all the non-defect objects and all the defect objects in the first picture as the non-defect area and the defect area respectively.
Optionally, the second detection module includes: the difference unit is used for carrying out difference operation on the first picture and the second picture to obtain a difference picture; the operation unit is used for performing AND operation on the defect area and the differential graph to obtain a third picture; and a detection unit, configured to detect a second pseudo defect in the defect area by using the third picture.
Optionally, the detection unit includes: a detection subunit, configured to detect a defect connected region in the third picture, where the defect connected region includes at least two adjacent defect objects; a calculating subunit, configured to calculate the maximum gray value of the defect connected region and to calculate the brightness gradient of non-defect objects around the defect connected region; and the detection subunit is further configured to detect whether a second pseudo defect exists in the defect connected region according to the maximum gray value and the brightness gradient.
Optionally, the detection subunit is further configured to: judge whether the maximum gray value is larger than a first threshold value; if the maximum gray value is larger than the first threshold value, judge, for each defect object in the defect connected region, whether the brightness gradient of the non-defect area around the defect object is larger than a second threshold value; and if the brightness gradient of the non-defect objects around the defect object is larger than the second threshold value, determine that the identification result of the defect object is a second pseudo defect.
Optionally, the method further comprises: a filtering module, configured to filter the first and second pseudo defects on the defect recognition result after the second detection module detects the second pseudo defect in the defect area according to the first and second pictures; the judging module is used for judging whether the filtered defect identification result has a true defect or not; the determining module is used for determining that the lamp panel is a defective product if the filtered defect identification result has a true defect; and if the filtered defect identification result does not have a true defect, determining that the lamp panel is good.
Optionally, the first detection module includes: the processing unit is used for dividing the first picture into a plurality of image blocks by adopting a feature extraction network and digitizing the energizing current to obtain an embedded vector, wherein the defect detection network comprises the feature extraction network and a Transformer model; an extracting unit, configured to extract a block feature map of each of the plurality of image blocks to obtain a feature map of the first picture; the fusion unit is used for fusing the feature map and the embedded vector to obtain fusion features of different modal data; and the identification unit is used for inputting the fusion features into the Transformer model and outputting the defect identification result.
Optionally, the first detection module further includes: a first configuration unit configured to configure training data; and a second configuration unit configured, before the identification unit inputs the fusion feature into the Transformer model, to configure a first data amount D of the training data and a second data amount N of the model parameters of the initial Transformer model using the following expressions:

Performance = f(D, N);

f(D, N) = 1 − …

wherein f is a function of the training data and the model parameters; when D and N satisfy this relation, the value of f is largest and the model Performance is best; N = Key × Layer is the parameter count of the Transformer model, where Key and Layer respectively represent the number of keys and the number of layers of the Transformer model;
and the training unit is used for training the initial Transformer model by optimizing the minimized loss function to obtain the Transformer model.
Optionally, the training unit includes: a training subunit, configured to train the initial Transformer model by minimizing the following loss function to obtain the Transformer model:

L = Σ_i p_i · d_i(ŷ, y), where p_i = exp((log π_i + g_i)/τ) / Σ_j exp((log π_j + g_j)/τ)

wherein L is the loss function; p_i represents the probability of choosing the i-th loss function; n is the number of loss functions; π_i is the class probability learned by the neural network; g_i is an independent, identically distributed random variable drawn from the standard Gumbel distribution; τ is a hyper-parameter controlling the smoothness of the Softmax; ŷ is the predicted value; y is the true value; and d_i(ŷ, y) represents the distance between the predicted value and the true value under the i-th loss function.
Optionally, the first configuration unit is further configured to: obtain defect pictures and non-defect pictures according to a preset proportion; mark the defect pictures to obtain defect picture data, and mark the non-defect pictures to obtain non-defect picture data; identify a background area and a defect area in the defect picture data, and classify the defect picture data into defect position data and defect background data, wherein the background area is the position area of the defect picture that does not comprise a defect object; configure a first loss weight for the defect position data, and configure a second loss weight for the defect background data and the non-defect picture data, wherein the first loss weight is twice the second loss weight, and the first loss weight and the second loss weight are used for calculating the class cross-entropy loss in training; and configure the defect position data, the defect background data and the non-defect picture data as the training data.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the embodiments of the invention, after the lamp panel is electrified, a first picture and the electrified current of the lamp panel are collected, wherein a plurality of LED lamp beads are arranged on the lamp panel; the first picture and the electrified current are input into a defect detection network to obtain a defect identification result; a first pseudo defect in the defect identification result is detected based on an aspect ratio, and the first picture is divided into a defect area and a non-defect area according to the defect identification result; after the lamp panel is powered off, a second picture of the lamp panel is collected, and a second pseudo defect in the defect area is detected according to the first picture and the second picture. By detecting the first pseudo defect based on the aspect ratio and detecting the second pseudo defect within the divided defect area, the technical problem of inaccurate lamp bead defect detection in the related art is solved, pseudo defects in lamp bead identification are reduced, and the detection precision of the lamp beads is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a block diagram of the hardware architecture of a computer according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for detecting defects of a lamp bead according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a feature extraction network in an embodiment of the invention;
FIG. 4 is an overall flow chart of an embodiment of the present invention;
fig. 5 is a block diagram of a device for detecting defects of a lamp bead according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method embodiment provided in the first embodiment of the present application may be executed in a controller, a computer, an industrial robot, or a similar computing device. Taking a computer as an example, fig. 1 is a block diagram of a hardware structure of a computer according to an embodiment of the present invention. As shown in fig. 1, the computer may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 1 is merely illustrative and is not intended to limit the configuration of the computer described above. For example, the computer may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for detecting a bead defect in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory located remotely from processor 102, which may be connected to the computer via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of a computer. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for detecting a bead defect is provided, and fig. 2 is a flowchart of a method for detecting a bead defect according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
step S202, after a lamp panel is electrified, collecting a first picture and electrified current of the lamp panel, wherein a plurality of LED lamp beads are arranged on the lamp panel;
The lamp panel of this embodiment may be a PCB pad on which a plurality of LED lamp beads are mounted; it needs to be inspected before factory packaging.
Step S204, inputting the first picture and the electrified current into a defect detection network to obtain a defect identification result;
Optionally, the defect detection network of the present embodiment includes a feature extraction network and a Transformer model; the defect detection network is the LEDNet network, a Transformer-based multi-modal large model.
After the PCB pads bearing the LED lamp beads are powered on, all normal beads emit light and a camera is triggered to shoot, obtaining a first picture. The first picture is fed into the LEDNet defective-bead detection network, which outputs the positions of the unlit defective beads in the first picture, their pixel width and height (e.g., the length and width of the target frame of each identified object), their category, and their confidence (defect probability), yielding the defect identification result.
Step S206, detecting a first pseudo defect in the defect recognition result based on the aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect recognition result;
The defect recognition result of this embodiment is a preliminary result containing a plurality of defect objects, possibly including pseudo defects that need to be checked again. The pseudo defects of this embodiment are of two types: a first pseudo defect, in which a non-bead structure of similar appearance is mistakenly output as an unlit bead, and a second pseudo defect, in which a bead that actually emits light is mistakenly output as unlit.
Step S208, after the lamp panel is powered off, a second picture of the lamp panel is collected;
After de-energizing, the second picture is acquired; relative to the first picture, it lacks the brightness features of lit beads.
Step S210, detecting a second pseudo defect in the defect area according to the first picture and the second picture.
Through the above steps, after the lamp panel is energized, a first picture and the energizing current of the lamp panel are collected, the lamp panel carrying a plurality of LED lamp beads. The first picture and the energizing current are input into a defect detection network to obtain a defect identification result; a first pseudo defect in the defect identification result is detected based on aspect ratio, and the first picture is divided into a defect area and a non-defect area according to the defect identification result. After the lamp panel is powered off, a second picture of the lamp panel is collected, and a second pseudo defect in the defect area is detected according to the first and second pictures. In this way the first pseudo defect is detected by aspect ratio, and the second pseudo defect is detected within the divided defect area.
In this embodiment, detecting a first pseudo defect in the defect recognition result based on aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect recognition result, includes: determining the defect position and defect probability of each recognition object in the defect recognition result; judging, for each recognition object, whether the defect probability is smaller than a preset value, and judging whether the aspect ratio of the recognition object is within a preset range according to the minimum bounding box of the defect position; if the defect probability is smaller than the preset value, or the aspect ratio of the recognition object is within the preset range, determining the identification result of the recognition object to be a first pseudo defect and the corresponding first recognition object to be a non-defect object; if the defect probability is greater than or equal to the preset value and the aspect ratio is not within the preset range, determining the corresponding second recognition object to be a defect object; and determining the areas occupied in the first picture by all non-defect objects and all defect objects as the non-defect area and the defect area, respectively.
Optionally, the preset range consists of aspect ratios greater than 1.2 or less than 0.8.
In this embodiment, a camera with a resolution of 3000×5000 may be used to photograph the PCB pad (lamp panel); each LED lamp bead occupies about 20×20 pixels. Lamp beads are squares or regular rectangles, so their aspect ratio is regular and close to 1 (e.g., 0.8–1.2), whereas other non-bead structures output by the defect detection network (e.g., pads, wire heads, structural gaps) are irregular, with irregular aspect ratios, and are treated as pseudo defects. For example, a target whose defect probability is less than or equal to 0.2, or whose aspect ratio is greater than 1.2 or less than 0.8, is classified as a pseudo defect and filtered out. Meanwhile, the first picture is divided into a plurality of defect areas and non-defect areas according to all the calculated defect positions.
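As an illustrative sketch (not the patented implementation), the first-pseudo-defect filter described above can be written as follows; the detection dictionary keys (`width`, `height`, `confidence`) are assumed names for the network's outputs:

```python
def filter_first_pseudo_defects(detections, prob_min=0.2, ar_low=0.8, ar_high=1.2):
    """Split raw network outputs into candidate defects and first-type pseudo
    defects using the embodiment's thresholds: confidence <= 0.2, or an
    aspect ratio outside [0.8, 1.2], marks a detection as a pseudo defect."""
    defects, pseudo = [], []
    for det in detections:
        aspect = det["width"] / det["height"]
        if det["confidence"] <= prob_min or aspect > ar_high or aspect < ar_low:
            pseudo.append(det)   # irregular shape or low confidence: not a bead
        else:
            defects.append(det)  # keep as a candidate defective bead
    return defects, pseudo
```

A 30×10-pixel box (aspect ratio 3.0) or a 0.1-confidence box would be filtered, while a 20×20 box at 0.9 confidence would be kept.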
In one implementation of the present embodiment, detecting a second pseudo defect in the defective area from the first picture and the second picture includes:
s11, performing differential operation on the first picture and the second picture to obtain a differential graph;
s12, performing AND operation on the defect area and the differential graph to obtain a third picture;
The lamp panel is de-energized, and after all beads are extinguished a photograph is taken to obtain picture 2 (the second picture). Picture 2 is subtracted pixel-by-pixel in gray value from picture 1 (the first picture) to obtain a difference map, and an AND operation between the obtained defect areas and the difference map yields picture 3 (the third picture); several defect areas may exist in picture 3.
S13, detecting a second pseudo defect in the defect area by using the third picture.
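A minimal NumPy sketch of steps S11 and S12, assuming 8-bit grayscale frames and a boolean defect-area mask (both argument names are illustrative):

```python
import numpy as np

def masked_difference_map(pic1, pic2, defect_mask):
    """Pixel-wise gray-value subtraction of the powered-off frame (pic2)
    from the powered-on frame (pic1), clipped to [0, 255], then ANDed with
    the defect-area mask so only detected regions survive (picture 3)."""
    diff = np.clip(pic1.astype(np.int16) - pic2.astype(np.int16), 0, 255)
    return np.where(defect_mask, diff, 0).astype(np.uint8)
```

A lit bead leaves a large residual in the difference map, while a pixel outside the defect mask is zeroed regardless of its difference.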
In one example, detecting the second pseudo defect in the defect area with the third picture includes: detecting a defect connected region in the third picture, wherein the defect connected region comprises at least two adjacent defect objects; calculating the maximum gray value of the defect connected region and the brightness gradients of the non-defect objects around it; and detecting whether a second pseudo defect exists in the defect connected region according to the maximum gray value and the brightness gradients.
Optionally, detecting whether the second pseudo defect exists in the defect connected region according to the maximum gray value and the brightness gradient includes: judging whether the maximum gray value is greater than a first threshold; if so, judging, for each defect object in the defect connected region, whether the brightness gradient of the non-defect area around the defect object is greater than a second threshold; and if the brightness gradient of the non-defect objects around a defect object is greater than the second threshold, determining that the identification result of that defect object is a second pseudo defect.
In this embodiment, taking the first threshold as 20 as an example: for each connected defect region in picture 3, the maximum gray value is calculated. If the maximum of a certain connected defect region is less than or equal to 20 (the first threshold), the defect in that region is a true defect, i.e., a damaged, unlit bead. If the maximum is greater than 20, the previously calculated brightness gradients of the surrounding non-defect areas are traversed: if the brightness gradient of the non-defect area around a defect area is less than or equal to 4 (the second threshold), the defect in that area is judged a true defect; otherwise it is judged a pseudo defect and is filtered out.
A method for removing pseudo defects is custom-designed according to the characteristics of the image when the LED does and does not emit light. When an LED emits light, its brightness value is very high; because of light scattering caused by impurities in the air, the light intensity attenuates gradually, so the brightness decreases outward with the bead at the center, forming a positive bright-to-dark gradient. Conversely, if an LED does not emit light: (a) if a lit bead exists beside it, an obvious negative gradient is formed toward that bright bead; (b) if no lit bead is beside it, the brightness gradient is 0, or the negative gradient is not obvious.
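The two-threshold decision for one connected region can be sketched as below. Whether a single surrounding gradient or all of them must pass the test is an assumption, since the text is ambiguous on that point; the sketch requires all gradients to be small:

```python
def classify_connected_region(max_gray, neighbour_gradients,
                              gray_thresh=20, grad_thresh=4):
    """Two-stage test on one connected defect region: max gray <= 20 means
    no residual light, hence a true defect; otherwise the region is a true
    defect only if every surrounding non-defect area has a brightness
    gradient <= 4 (no lit bead nearby to explain the residual glow)."""
    if max_gray <= gray_thresh:
        return "true_defect"
    if all(g <= grad_thresh for g in neighbour_gradients):
        return "true_defect"
    return "pseudo_defect"
```

A dark region (max gray 15) is a true defect outright; a bright region next to a steep gradient of 8 is rejected as a pseudo defect.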
In one implementation scenario of the present embodiment, after detecting the second pseudo defect in the defect area according to the first picture and the second picture, further includes: filtering the first and second pseudo defects on the defect recognition result; judging whether a true defect exists in the filtered defect identification result; if the filtered defect identification result has true defects, determining that the lamp panel is defective; and if the filtered defect identification result does not have a true defect, determining that the lamp panel is good.
This processing is repeated for all defect areas in picture 3 until all defects have been judged; if any true defect remains at the end, the lamp panel is judged defective.
The invention collects the picture and the current after the lamp beads are energized and inputs them into the LEDNet large model (the defect detection network). Based on the Transformer design paradigm, the large model transfers the Transformer to the field of computer vision so that it can process images. The large model has a large number of parameters and a complex structure for data representation and feature extraction; it dynamically fuses the picture and current information through a cross-modal attention mechanism and outputs the type, position, and size of defects through a detection head.
In one implementation manner of this embodiment, inputting the first picture and the energizing current into the defect detection network to obtain a defect identification result includes:
S21, segmenting the first picture into a plurality of image blocks using a feature extraction network, and digitizing the energizing current to obtain an embedded vector, wherein the defect detection network comprises the feature extraction network and a Transformer model;
s22, extracting a block feature map of each image block in the plurality of image blocks to obtain a feature map of a first picture;
s23, fusing the feature map and the embedded vector to obtain fusion features of different mode data;
In one embodiment, the acquired first picture is segmented into 64 patches (image blocks), which are converted by linear mapping into inputs acceptable to the Transformer, bridging the barrier between CV and NLP and solving the problem that the Transformer could not previously be applied to images because of the size of the input data. For each patch, information such as edges, textures, sizes, background, corner points, brightness, and contrast is extracted to form the Patch Feature Map. The acquired current value is tokenized into an Embedding Vector, and the data of the different modalities are fused in the middle of the model so that the model can fully exploit information from each modality, improving performance and generalization. Patch Embedding extracts image features efficiently: the patch-based visual features used in this embodiment amount to little more than one linear embedding, reducing computational complexity. Fig. 3 is a schematic diagram of the feature extraction network in an embodiment of the present invention; the input data comprise the image and the current data, and finally the Feature Map and the Embedding Vector are extracted.
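A toy sketch of the patch-plus-current embedding described above, using random matrices in place of the learned linear mappings (an assumption; in LEDNet these projections are trained):

```python
import numpy as np

def patch_embed_with_current(image, current, grid=8, embed_dim=16, seed=0):
    """Cut a square gray image into grid*grid patches (64 as in the text),
    linearly project each flattened patch into an embed_dim token, and
    append the digitised current value as one extra embedding vector."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ph, pw = h // grid, w // grid
    # (grid, ph, grid, pw) -> (grid, grid, ph, pw) -> (grid*grid, ph*pw)
    patches = (image.reshape(grid, ph, grid, pw)
                    .transpose(0, 2, 1, 3)
                    .reshape(grid * grid, ph * pw))
    proj_img = rng.standard_normal((ph * pw, embed_dim))  # stand-in weights
    proj_cur = rng.standard_normal((1, embed_dim))
    tokens = patches.astype(np.float64) @ proj_img        # 64 patch tokens
    cur_token = np.array([[float(current)]]) @ proj_cur   # current embedding
    return np.vstack([tokens, cur_token])                 # (65, embed_dim)
```

The fused sequence of 64 patch tokens plus one current token is then what a cross-modal attention layer would consume.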
In this embodiment, the defect detection network uses the deformable attention mechanism of the Transformer, which has a wider receptive field and fewer parameters than ordinary attention. A multi-scale feature extractor is constructed, and features at 1/64, 1/32, 1/16, and 1/8 scales from the pixel decoder are used in turn as the K and V inputs of the corresponding Transformer decoder blocks. Global context is modeled with Mask2Former: masked attention is added, restricting cross-attention to the foreground region attended by each query, which suits the downstream semantic-segmentation prediction task. In selecting and processing sample data, the following adaptations are made: 1. Data augmentation with an amplitude of 512–1024 pixels, matching the size of the original LED images; images are flipped horizontally and vertically, and the maximum cropping proportion is set to 100% to expand the samples. 2. For negative-sample images, no translation, scaling, or other brightness transformations are performed, because the morphology, size, and location of pseudo defects are substantially uniform, especially on PCB boards. 3. The background participates in the class cross-entropy loss calculation, and the weight of the background loss is increased to reduce overkill (false rejects).
In this embodiment, when training the defect detection network, its sub-networks (such as the self-attention module, the feed-forward network, and the input-sequence handling) are trained with shared weight parameters: when one sub-network is trained, certain modules inside the model share weights. Weight sharing is possible mainly because the self-attention module and the feed-forward network are independent of the input sequence length. The model pre-trains the visual network and the self-attention module on large-scale defect data of certain categories, then freezes the trained network and the self-attention module and trains another network on a large amount of data from other defect categories; finally, the whole model is pre-trained with data of all categories. This weight-sharing idea is equally suitable for fine-tuning multi-modal models: for example, in multi-modal fine-tuning with images and currents, the weight parameters obtained from image training may be used to train on currents, and these frozen weights remain valid even without fine-tuning again.
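The freeze-then-train idea can be illustrated with a minimal parameter-update helper; the parameter-group names below are hypothetical:

```python
def update_params(params, grads, lr=0.1, frozen=()):
    """One gradient step in which frozen (pretrained, shared) parameter
    groups are skipped, mimicking 'freeze the trained network and the
    self-attention module, then train another network'."""
    return {name: (w if name in frozen else w - lr * grads[name])
            for name, w in params.items()}
```

With `frozen=("self_attention",)`, the pretrained attention weights stay fixed while the other branch continues to learn.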
S24, inputting the fusion features into the Transformer model, and outputting the defect identification result.
In this embodiment, before the fusion features are input into the Transformer model, the initial Transformer model may also be trained with samples to obtain the final Transformer model.
This embodiment trains the initial Transformer model in the following manner:
s31, configuring training data;
In one example, configuring the training data includes: obtaining defect pictures and non-defect pictures in a preset proportion; labeling the defect pictures to obtain defect picture data, and labeling the non-defect pictures to obtain non-defect picture data; identifying the background area and the defect area in the defect picture data and dividing the defect picture data into defect position data and defect background data, the background area being the region of a defect picture that contains no defect object; configuring a first loss weight for the defect position data and a second loss weight for the defect background data and the non-defect picture data, the first loss weight being twice the second loss weight, both weights being used to calculate the class cross-entropy loss during training; and configuring the defect position data, the defect background data, and the non-defect picture data as the training data.
The preset ratio of defect pictures to non-defect pictures is 5:1. The Transformer model trained in this embodiment is a supervised-learning model; therefore, the non-defect positions in the defect-labeled image data (i.e., the background class) and non-defect-labeled image data amounting to 1/5 of the defect-label data volume are added into the class cross-entropy loss calculation, the loss weight of the background class is increased, and overkill is reduced.
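A hedged sketch of class-weighted cross-entropy consistent with the weighting rule above; the exact weight mapping used by the embodiment is not fully specified, so the `class_weights` argument is an assumption:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Class cross-entropy in which each sample's loss is scaled by the
    weight of its true class, so one class (e.g. defect positions) can
    carry twice the weight of the others, as described in the text."""
    probs = np.asarray(probs, dtype=np.float64)
    labels = np.asarray(labels)
    picked = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    w = np.asarray([class_weights[int(c)] for c in labels], dtype=np.float64)
    return float(np.mean(-w * np.log(picked)))
```

Doubling one class's weight strictly increases the loss contributed by its misclassified samples, steering training toward that class.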
S32, configuring the first data volume D of the training data and the second data volume N of the model parameters in the initial Transformer model with the following expressions:

Performance = f(D, N);

D : N = 5 : 8;

where f is a function of the training data and the model parameters; when D : N = 5 : 8, the value of f is largest and the model Performance is best; N = Key × Layer, where Key and Layer respectively denote the number of keys and the number of layers of the Transformer model;
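Under the assumed reading that the preferred proportion is D : N = 5 : 8 (so an 8-fold parameter increase calls for a 5-fold data increase), the scaling rule can be checked with a one-line helper:

```python
def required_data_volume(param_volume, ratio=(5, 8)):
    """Keep the data volume D and parameter volume N at the preferred
    D:N = 5:8 proportion from the embodiment (interpretation assumed)."""
    d, n = ratio
    return param_volume * d / n
```

So 8 units of parameters pair with 5 units of data, and 16 with 10.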
S33, training the initial Transformer model by optimizing and minimizing a loss function to obtain the Transformer model.
The defect detection network of this embodiment has larger capacity and better universality, precision, and efficiency, integrating multiple technologies from core fields of artificial-intelligence research. Through sparse activation of pre-trained parts, it can be trained on a data set; a task layer is added on the pre-trained encoder, and with fine-tuning it can efficiently handle complex computer-vision tasks, realizing the emergent "1+1>2" capability of integration.
The AI large model provided by the invention has many parameters and a complex structure, requiring large-scale training and optimization. The training process needs a large amount of training data and fine-tuning data; the proportion of training data to fine-tuning data is kept at 3:1, and the parameters of the model are adapted using appropriate optimization algorithms and techniques. The data volume, the parameter volume, and the number of keys of the Transformer in large-model training are closely related to the generalization ability of the model; the model's performance is strongly correlated with its parameter scale and only weakly correlated with its shape.
Performance = f(D, N);

where f is a function of the training data D and the model parameters N, whose proportion is preferably kept at D : N = 5 : 8; that is, if the model parameters are increased 8-fold, the data must also be increased 5-fold to exploit the full potential of the parameters. N = Key × Layer (the vector dimensions corresponding to the image features and the current features, respectively), where Key and Layer respectively denote the number of keys and the number of layers in the Transformer.
The key of the training and processing tasks of the multi-mode large model is to design a proper model structure, a loss function and an optimization method so that the model can effectively utilize multi-mode data to learn and infer. The model uses an adaptive learning rate optimization algorithm (Adaptive Learning Rate Optimization) such as Adagrad, and the algorithm can adaptively adjust the learning rate according to the historical gradient information of the parameters, so that the model is better suitable for updating requirements of different parameters.
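A standard Adagrad step (textbook form, not code from the patent) illustrating the adaptive per-parameter learning rate:

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.1, eps=1e-8):
    """One Adagrad update: each parameter's effective learning rate shrinks
    with its accumulated squared-gradient history, adapting the step size
    to how strongly that parameter has been updated so far."""
    cache = cache + grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

A parameter that keeps receiving large gradients accumulates a large cache and thus takes progressively smaller steps, which is the adaptive behavior the paragraph describes.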
Optionally, training the initial Transformer model by optimizing and minimizing the loss function to obtain the Transformer model includes training the initial Transformer model by optimizing and minimizing the following loss function:

L = Σᵢ₌₁ⁿ pᵢ · |ŷ − y|, with pᵢ = exp((log πᵢ + gᵢ)/τ) / Σⱼ₌₁ⁿ exp((log πⱼ + gⱼ)/τ);

where L is the loss function, pᵢ represents the probability of choosing the i-th loss function, n is the number of loss functions, πᵢ is the class probability learned by the neural network, gᵢ are independent, identically distributed random variables from the standard Gumbel distribution, τ is a hyper-parameter controlling the smoothness of the Softmax, ŷ is the predicted value, y is the true value, and |ŷ − y| represents the distance between the predicted and true values.
Since the Softmax function has side effects on the internal computations of the Transformer, the loss function of this embodiment avoids the plain Softmax function. In it, pᵢ represents the probability of choosing the i-th loss function, πᵢ is the class probability learned by the neural network, and gᵢ are independent, identically distributed random variables from the standard Gumbel distribution, whose CDF is F(x) = exp(−exp(−x)). The difference from Softmax is the hyper-parameter τ, which controls the smoothness of the Softmax: as 1/τ tends to infinity (i.e., τ approaches 0), pᵢ approaches the argmax operation.
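A sketch of the Gumbel-Softmax selection probabilities pᵢ in the standard formulation (the patent's exact formula is rendered as an image, so this form is an assumption):

```python
import numpy as np

def gumbel_softmax_probs(class_logits, tau=1.0, seed=0):
    """Gumbel-Softmax selection probabilities: add i.i.d. standard Gumbel
    noise g_i = -log(-log(U_i)) to the log class probabilities and apply a
    temperature-tau softmax; small tau pushes the result toward argmax."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-12, 1.0, size=len(class_logits))
    g = -np.log(-np.log(u))                       # standard Gumbel samples
    y = (np.asarray(class_logits, dtype=np.float64) + g) / tau
    e = np.exp(y - y.max())                       # numerically stable softmax
    return e / e.sum()
```

The output is a proper probability vector over the n loss functions; lowering `tau` sharpens it toward a one-hot choice.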
In addition, the defect detection network of the embodiment also uses technologies such as distributed training and model compression to accelerate the training process, reduce the storage space of the model and improve the reasoning efficiency.
FIG. 4 is an overall flow chart of an embodiment of the invention, in which the lamp panel is a PCB pad and the defect detection network is LEDNet; the flow comprises:
after the PCB pads to which the LED lamp beads have been attached by the chip mounter are energized, all normal beads emit light; a camera is triggered to shoot, obtaining picture 1, which is fed into the LEDNet defective-bead detection network, outputting the positions, pixel widths and heights, categories, and confidences of the unlit defective beads;
dividing targets with defect confidence less than or equal to 0.2, or with aspect ratio greater than 1.2 or less than 0.8, into non-defects, and filtering them out;
dividing the picture 1 into a plurality of defect areas and non-defect areas according to all the calculated defect positions, and calculating the brightness gradient of the non-defect areas;
de-energizing the PCB pad, and after all beads are extinguished taking a photograph to obtain picture 2; subtracting picture 2 pixel-by-pixel in gray value from picture 1 to obtain a difference map, and ANDing the obtained defect areas with the difference map to obtain picture 3, in which several defect areas may exist;
calculating the maximum gray value for each connected defect area in picture 3; if the maximum of a certain connected defect area is less than or equal to 20, the defect in that area is a true defect, i.e., a damaged, unlit bead; if the maximum is greater than 20, proceeding to the next step;
Traversing the calculated brightness gradients of the non-defect areas; if the brightness gradient of the non-defect area around a certain defect area is less than or equal to 4, judging the defect in that area a true defect; otherwise it is a pseudo defect and is removed;
and repeating the above steps for all defect areas in picture 3 until all defects have been judged; if true defects exist at the end, the PCB pad is judged a defective product.
By adopting the scheme of this embodiment, damaged LED lamp beads are detected using AI large-model technology, and a pseudo-defect removal method is added on top of the LEDNet defect detection network, so that a PCB after surface-mounting of the LED beads is divided into defect areas and non-defect areas, and all beads are classified as defective, pseudo-defective, or non-defective. Verification shows that with this scheme, the efficiency of detecting damaged LED beads is improved by 21%, pseudo-defect removal adds only 0.03 ms to one full detection cycle of the whole system, and the accuracy of defective-bead detection increases by 19%.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a controller, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Example 2
In this embodiment, a device for detecting lamp bead defects is also provided. The device is used to implement the foregoing embodiments and preferred embodiments; what has already been described is not repeated. The term "module" as used below may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also contemplated.
Fig. 5 is a block diagram of a device for detecting defects of a lamp bead according to an embodiment of the present invention, as shown in fig. 5, the device includes:
the first collecting module 50 is configured to collect a first picture and an energizing current of the lamp panel after the lamp panel is energized, where a plurality of LED lamp beads are installed on the lamp panel;
the first detection module 52 is configured to input the first picture and the current to a defect detection network, so as to obtain a defect identification result;
a processing module 54, configured to detect a first pseudo defect in the defect identification result based on an aspect ratio, and divide the first picture into a defect area and a non-defect area according to the defect identification result;
a second acquisition module 56, configured to acquire a second picture of the lamp panel after the lamp panel is powered off;
A second detection module 58 is configured to detect a second pseudo defect in the defect area according to the first picture and the second picture.
Optionally, the processing module includes: a determining unit, configured to determine a defect position and a defect probability of each recognition object in the defect recognition result; a judging unit, configured to judge, for each recognition object, whether the defect probability is smaller than a preset value, and judge whether an aspect ratio of the recognition object is within a preset range according to a minimum outer bounding box of the defect position; the detection unit is used for determining the identification result of the identification object as a first pseudo defect and determining the corresponding first identification object as a non-defect object if the defect probability is smaller than a preset value or the length-width ratio of the identification object is in a preset range; if the defect probability is greater than or equal to a preset value, and the aspect ratio of the defect object is not in a preset range, determining that the corresponding second identification object is the defect object; and the determining unit is used for determining the areas occupied by all the non-defective objects and all the defective objects in the first picture as a non-defective area and a defective area.
Optionally, the second detection module includes: the difference unit is used for carrying out difference operation on the first picture and the second picture to obtain a difference picture; the operation unit is used for performing AND operation on the defect area and the differential graph to obtain a third picture; and a detection unit, configured to detect a second pseudo defect in the defect area by using the third picture.
Optionally, the detection unit includes: a detection subunit, configured to detect a defect connected region in the third picture, where the defect connected region includes at least two adjacent defect objects; a calculating subunit, configured to calculate the maximum gray value of the defect connected region and the brightness gradient of the non-defect objects around the defect connected region; and the detection subunit is further configured to detect, according to the maximum gray value and the brightness gradient, whether a second pseudo defect exists in the defect connected region.
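One way to obtain the defect connected regions is a plain 4-connected BFS labeling, sketched below (illustrative only; production code would more likely use a library routine such as cv2.connectedComponents):

```python
from collections import deque
import numpy as np

def connected_regions(mask):
    """4-connected component labeling of a boolean defect mask via BFS.

    Returns a list of regions, each a list of (row, col) pixel coordinates.
    """
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                queue, region = deque([(r, c)]), []
                seen[r, c] = True
                while queue:
                    i, j = queue.popleft()
                    region.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            queue.append((ni, nj))
                regions.append(region)
    return regions

# Two separate blobs: a 2-pixel run and an isolated corner pixel.
regs = connected_regions(np.array([[1, 1, 0], [0, 0, 0], [0, 0, 1]], dtype=bool))
```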
Optionally, the detection subunit is further configured to: judge whether the maximum gray value is greater than a first threshold; if the maximum gray value is greater than the first threshold, judge, for each defect object in the defect connected region, whether the brightness gradient of the non-defect area around that defect object is greater than a second threshold; and if the brightness gradient of the non-defect objects around the defect object is greater than the second threshold, determine that the identification result of the defect object is a second pseudo defect.
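The threshold logic above can be sketched as follows (a simplified illustration: the connected region and its surrounding ring are given as pixel lists, the brightness gradient is approximated as the mean absolute gray difference along the ring, and both threshold values are placeholders, not the patent's settings):

```python
import numpy as np

GRAY_THRESHOLD = 128       # placeholder first threshold
GRADIENT_THRESHOLD = 20.0  # placeholder second threshold

def is_second_pseudo_defect(image, region_pixels, surround_pixels):
    """Decide whether a connected defect region is a second pseudo defect.

    image:           2-D uint8 array (the third picture).
    region_pixels:   list of (row, col) inside the defect connected region.
    surround_pixels: list of (row, col) of the non-defect ring around it.
    """
    max_gray = max(int(image[r, c]) for r, c in region_pixels)
    if max_gray <= GRAY_THRESHOLD:
        return False  # not bright enough: keep as a real defect candidate
    surround = np.array([int(image[r, c]) for r, c in surround_pixels], dtype=float)
    gradient = float(np.abs(np.diff(surround)).mean()) if len(surround) > 1 else 0.0
    # A steep gray jump around a bright region suggests a reflection halo.
    return gradient > GRADIENT_THRESHOLD

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 200                 # bright spot inside the defect region
img[1, 1], img[1, 2] = 0, 255   # steep gray jump in the surrounding ring
halo = is_second_pseudo_defect(img, [(2, 2)], [(1, 1), (1, 2)])
```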
Optionally, the device further includes: a filtering module, configured to filter the first pseudo defect and the second pseudo defect out of the defect identification result after the second detection module detects the second pseudo defect in the defect area according to the first picture and the second picture; a judging module, configured to judge whether a true defect exists in the filtered defect identification result; and a determining module, configured to determine that the lamp panel is a defective product if a true defect exists in the filtered defect identification result, and that the lamp panel is a good product if no true defect exists in the filtered defect identification result.
Optionally, the first detection module includes: a processing unit, configured to divide the first picture into a plurality of image blocks by using a feature extraction network, and to digitize the energizing current to obtain an embedded vector, where the defect detection network includes the feature extraction network and a Transformer model; an extracting unit, configured to extract a block feature map of each of the plurality of image blocks to obtain a feature map of the first picture; a fusion unit, configured to fuse the feature map and the embedded vector to obtain fusion features of the different modal data; and an identification unit, configured to input the fusion features into the Transformer model and output the defect identification result.
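A minimal sketch of the image/current fusion (purely illustrative: the patch embedding here is a random linear projection, the current is digitized as a single scalar token, and all dimensions are arbitrary, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(image, patch=8):
    """Split a (H, W) image into non-overlapping flattened patch vectors."""
    h, w = image.shape
    blocks = [image[r:r + patch, c:c + patch].ravel()
              for r in range(0, h - patch + 1, patch)
              for c in range(0, w - patch + 1, patch)]
    return np.stack(blocks).astype(float)

def fuse(image, current, dim=16, patch=8):
    """Project image patches to dim-sized features and append a current token."""
    patches = patchify(image, patch)                # (num_patches, patch*patch)
    w_img = rng.standard_normal((patches.shape[1], dim))
    feat = patches @ w_img                          # per-patch block feature map
    w_cur = rng.standard_normal(dim)
    cur_embed = current * w_cur                     # digitized current -> vector
    return np.vstack([feat, cur_embed[None, :]])    # fused multimodal tokens

# A 16x16 image yields 4 patches of 8x8, plus one current token.
tokens = fuse(np.ones((16, 16)), current=0.35)
```

The fused token matrix is what would be fed to the Transformer in the arrangement the text describes.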
Optionally, the first detection module further includes: a first configuration unit, configured to configure training data; a second configuration unit, configured to configure, before the identification unit inputs the fusion features into the Transformer model, a first data amount D of the training data and a second data amount N of the model parameters in the initial Transformer model using the following expressions:

Performance = f(D, N);
f(D, N) = 1 − …

wherein f(D, N) is a function of the training data and the model parameters; when f(D, N) takes its maximum value, the model Performance is best; D and N are parameters of the Transformer model, and Key and Layer respectively represent the number of keys and the number of layers of the Transformer model;
and a training unit, configured to train the initial Transformer model by minimizing a loss function to obtain the Transformer model.
Optionally, the training unit includes a training subunit, configured to train the initial Transformer model by minimizing the following loss function to obtain the Transformer model:

p_i = exp((log π_i + g_i) / τ) / Σ_{j=1}^{n} exp((log π_j + g_j) / τ);
L = Σ_{i=1}^{n} p_i · d(ŷ, y)

wherein L is the loss function, p_i represents the probability of choosing the i-th loss function, n is the number of loss functions, π_i is the class probability learned by the neural network, g_i is a random variable of the independently and identically distributed standard Gumbel distribution, τ is a hyperparameter controlling the smoothness of the Softmax, ŷ is the predicted value, y is the true value, and d(ŷ, y) represents the distance between the predicted value and the true value.
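Assuming the symbols above describe the standard Gumbel-Softmax relaxation (an inference from the listed quantities, not something the source states explicitly), the weighting can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(42)

def gumbel_softmax_loss(log_pi, distances, tau=1.0):
    """Weight per-loss distances by a Gumbel-Softmax over learned probabilities.

    log_pi:    log of the selection probabilities learned by the network.
    distances: d(y_hat, y) for each candidate loss term.
    tau:       temperature controlling the smoothness of the Softmax.
    """
    # Standard Gumbel samples: g = -log(-log(U)), U ~ Uniform(0, 1).
    g = -np.log(-np.log(rng.uniform(size=len(log_pi))))
    z = (log_pi + g) / tau
    weights = np.exp(z - z.max())
    weights /= weights.sum()          # soft, differentiable selection; sums to 1
    return float((weights * distances).sum())

loss = gumbel_softmax_loss(np.log([0.2, 0.5, 0.3]), np.array([1.0, 2.0, 0.5]))
```

Because the weights form a convex combination, the loss always lies between the smallest and largest distance.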
Optionally, the first configuration unit is further configured to: obtain defect pictures and non-defect pictures according to a preset proportion; label the defect pictures to obtain defect picture data, and label the non-defect pictures to obtain non-defect picture data; identify the background area and the defect area in the defect picture data, and divide the defect picture data into defect position data and defect background data, where the background area is the area of the defect picture that does not include a defect object; configure a first loss weight for the defect position data, and a second loss weight for the defect background data and the non-defect picture data, where the first loss weight is twice the second loss weight, and the first and second loss weights are used for calculating the class cross-entropy loss during training; and configure the defect position data, the defect background data, and the non-defect picture data as the training data.
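The 2:1 loss weighting can be illustrated with a weighted class cross-entropy (a sketch; only the doubled weight on defect-position samples comes from the text above, everything else is illustrative):

```python
import numpy as np

SECOND_WEIGHT = 1.0                  # background / non-defect samples
FIRST_WEIGHT = 2.0 * SECOND_WEIGHT   # defect-position samples count double

def weighted_cross_entropy(probs, labels, is_defect_position):
    """Class cross-entropy where defect-position samples get the first weight.

    probs:              (n, num_classes) predicted class probabilities.
    labels:             (n,) integer ground-truth classes.
    is_defect_position: (n,) boolean mask for defect-position samples.
    """
    probs = np.clip(probs, 1e-9, 1.0)  # guard against log(0)
    weights = np.where(is_defect_position, FIRST_WEIGHT, SECOND_WEIGHT)
    picked = probs[np.arange(len(labels)), labels]
    return float(-(weights * np.log(picked)).mean())

probs = np.array([[0.9, 0.1], [0.2, 0.8]])
labels = np.array([0, 1])
loss = weighted_cross_entropy(probs, labels, np.array([True, False]))
```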
It should be noted that each of the above modules may be implemented by software or hardware; in the latter case, the modules may, for example, all be located in the same processor, or be distributed across different processors in any combination.
Example 3
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for executing the following steps:
S1, after a lamp panel is energized, collecting a first picture and the energizing current of the lamp panel, wherein a plurality of LED lamp beads are arranged on the lamp panel;
S2, inputting the first picture and the energizing current into a defect detection network to obtain a defect identification result;
S3, detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result;
S4, after the lamp panel is powered off, collecting a second picture of the lamp panel;
S5, detecting a second pseudo defect in the defect area according to the first picture and the second picture.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein both the transmission device and the input/output device are connected to the processor.
Optionally, in this embodiment, the processor may be configured to perform, by means of a computer program, the following steps:
S1, after a lamp panel is energized, collecting a first picture and the energizing current of the lamp panel, wherein a plurality of LED lamp beads are arranged on the lamp panel;
S2, inputting the first picture and the energizing current into a defect detection network to obtain a defect identification result;
S3, detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result;
S4, after the lamp panel is powered off, collecting a second picture of the lamp panel;
S5, detecting a second pseudo defect in the defect area according to the first picture and the second picture.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, which are not repeated here.
The foregoing embodiment numbers of the present application are merely for description and do not represent the merits of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for any part not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The device embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a controller, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (8)

1. A method for detecting lamp bead defects, characterized by comprising the following steps:
after a lamp panel is energized, collecting a first picture and the energizing current of the lamp panel, wherein a plurality of LED lamp beads are arranged on the lamp panel;
inputting the first picture and the energizing current into a defect detection network to obtain a defect identification result;
detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result;
after the lamp panel is powered off, collecting a second picture of the lamp panel;
detecting a second pseudo defect in the defect area according to the first picture and the second picture;
wherein detecting a second pseudo defect in the defect area according to the first picture and the second picture comprises: performing a difference operation on the first picture and the second picture to obtain a difference image; performing an AND operation on the defect area and the difference image to obtain a third picture; and detecting a second pseudo defect in the defect area by using the third picture; wherein detecting a second pseudo defect in the defect area by using the third picture comprises: detecting a defect connected region in the third picture, wherein the defect connected region comprises at least two adjacent defect objects; calculating the maximum gray value of the defect connected region, and calculating the brightness gradient of the non-defect objects around the defect connected region; judging whether the maximum gray value is greater than a first threshold; if the maximum gray value is greater than the first threshold, judging, for each defect object in the defect connected region, whether the brightness gradient of the non-defect area around the defect object is greater than a second threshold; and if the brightness gradient of the non-defect objects around the defect object is greater than the second threshold, determining that the identification result of the defect object is a second pseudo defect.
2. The method according to claim 1, wherein detecting a first pseudo defect in the defect identification result based on an aspect ratio, and dividing the first picture into a defect area and a non-defect area according to the defect identification result, comprises:
determining the defect position and defect probability of each recognition object in the defect identification result;
judging, for each recognition object, whether its defect probability is smaller than a preset value, and whether the aspect ratio of the minimum bounding box of its defect position is within a preset range;
if the defect probability is smaller than the preset value, or the aspect ratio of the recognition object is within the preset range, determining the identification result of the recognition object as a first pseudo defect and the corresponding first recognition object as a non-defect object; if the defect probability is greater than or equal to the preset value and the aspect ratio is not within the preset range, determining the corresponding second recognition object as a defect object; and
determining the areas occupied in the first picture by all non-defect objects and all defect objects as the non-defect area and the defect area, respectively.
3. The method according to claim 1, wherein, after detecting a second pseudo defect in the defect area according to the first picture and the second picture, the method further comprises:
filtering the first pseudo defect and the second pseudo defect out of the defect identification result;
judging whether a true defect exists in the filtered defect identification result; and
if a true defect exists in the filtered defect identification result, determining that the lamp panel is a defective product; if no true defect exists in the filtered defect identification result, determining that the lamp panel is a good product.
4. The method according to claim 1, wherein inputting the first picture and the energizing current into a defect detection network to obtain a defect identification result comprises:
dividing the first picture into a plurality of image blocks by using a feature extraction network, and digitizing the energizing current to obtain an embedded vector, wherein the defect detection network comprises the feature extraction network and a Transformer model;
extracting a block feature map of each of the plurality of image blocks to obtain a feature map of the first picture;
fusing the feature map and the embedded vector to obtain fusion features of the different modal data; and
inputting the fusion features into the Transformer model and outputting the defect identification result.
5. The method according to claim 4, wherein, before inputting the fusion features into the Transformer model, the method further comprises:
configuring training data;
configuring a first data amount D of the training data and a second data amount N of the model parameters in an initial Transformer model using the following expressions:

Performance = f(D, N);
f(D, N) = 1 − …

wherein f(D, N) is a function of the training data and the model parameters; when f(D, N) takes its maximum value, the model Performance is best; D and N are parameters of the Transformer model; Key and Layer respectively represent the number of keys and the number of layers of the Transformer model; n is the number of channels of the feed-forward network (FFN) of the Transformer model; and v is the number of layers of the fully connected (FC) network of the Transformer model; and
training the initial Transformer model by minimizing a loss function to obtain the Transformer model.
6. The method according to claim 5, wherein training the initial Transformer model by minimizing a loss function to obtain the Transformer model comprises:
training the initial Transformer model by minimizing the following loss function to obtain the Transformer model:
p_i = exp((log π_i + g_i) / τ) / Σ_{j=1}^{k} exp((log π_j + g_j) / τ);
L = Σ_{i=1}^{k} p_i · d(ŷ_i, y)

wherein L is the loss function, p_i represents the probability of choosing the i-th class loss, k is the number of defect classes, π_i and π_j are the probabilities of the i-th and j-th categories learned by the neural network, g_i and g_j are random variables of the independently and identically distributed standard Gumbel distribution for the i-th and j-th categories, τ is a hyperparameter controlling the smoothness of the Softmax, ŷ_i is the predicted value, y is the true value, and d(ŷ_i, y) represents the distance between the i-th class predicted value and the true value.
7. The method according to claim 5, wherein configuring training data comprises:
obtaining defect pictures and non-defect pictures according to a preset proportion;
labeling the defect pictures to obtain defect picture data, and labeling the non-defect pictures to obtain non-defect picture data;
identifying the background area and the defect area in the defect picture data, and dividing the defect picture data into defect position data and defect background data, wherein the background area is the area of the defect picture that does not include a defect object;
configuring a first loss weight for the defect position data, and a second loss weight for the defect background data and the non-defect picture data, wherein the first loss weight is twice the second loss weight, and the first loss weight and the second loss weight are used for calculating the class cross-entropy loss during training; and
configuring the defect position data, the defect background data, and the non-defect picture data as the training data.
8. A device for detecting lamp bead defects, characterized by comprising:
a first acquisition module, configured to collect a first picture and the energizing current of the lamp panel after the lamp panel is energized, wherein a plurality of LED lamp beads are arranged on the lamp panel;
a first detection module, configured to input the first picture and the energizing current into a defect detection network to obtain a defect identification result;
a processing module, configured to detect a first pseudo defect in the defect identification result based on an aspect ratio, and to divide the first picture into a defect area and a non-defect area according to the defect identification result;
the second acquisition module is used for acquiring a second picture of the lamp panel after the lamp panel is powered off;
a second detection module, configured to obtain a defect connected region based on the first picture and the second picture, and to detect a second pseudo defect in the defect area according to the defect connected region, wherein the defect connected region comprises at least two adjacent defect objects;
wherein the second detection module is further configured to: perform a difference operation on the first picture and the second picture to obtain a difference image; perform an AND operation on the defect area and the difference image to obtain a third picture; and detect a second pseudo defect in the defect area by using the third picture; wherein detecting a second pseudo defect in the defect area by using the third picture comprises: detecting a defect connected region in the third picture, wherein the defect connected region comprises at least two adjacent defect objects; calculating the maximum gray value of the defect connected region, and calculating the brightness gradient of the non-defect objects around the defect connected region; judging whether the maximum gray value is greater than a first threshold; if the maximum gray value is greater than the first threshold, judging, for each defect object in the defect connected region, whether the brightness gradient of the non-defect area around the defect object is greater than a second threshold; and if the brightness gradient of the non-defect objects around the defect object is greater than the second threshold, determining that the identification result of the defect object is a second pseudo defect.
CN202311320730.5A 2023-10-12 2023-10-12 Method and device for detecting defects of lamp beads Active CN117058150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311320730.5A CN117058150B (en) 2023-10-12 2023-10-12 Method and device for detecting defects of lamp beads

Publications (2)

Publication Number Publication Date
CN117058150A CN117058150A (en) 2023-11-14
CN117058150B true CN117058150B (en) 2024-01-12

Family

ID=88661279


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011085821A (en) * 2009-10-16 2011-04-28 Sony Corp Device and method for defect correction
CN110033724A (en) * 2019-04-19 2019-07-19 陈波 A kind of advertisement liquid crystal display defect automatic checkout system
CN112485935A (en) * 2020-12-09 2021-03-12 岳阳市海圣祥科技有限责任公司 Automatic defect detection and adjustment system for liquid crystal display screen
CN115222653A (en) * 2021-12-17 2022-10-21 荣耀终端有限公司 Test method and device
CN115255731A (en) * 2022-07-26 2022-11-01 江苏徐工工程机械研究院有限公司 Welding quality on-line detection welding seam marking device and method
WO2023108545A1 (en) * 2021-12-16 2023-06-22 Jade Bird Display (Shanghai) Method for constructing defect detection model of micro LED array panel, apparatuses for detecting pixel defect and devices
CN116503340A (en) * 2023-04-13 2023-07-28 厦门特仪科技有限公司 Micro oled panel defect detection method, device and equipment


Non-Patent Citations (2)

Title
Automated defect inspection of LED chip using deep convolutional neural network; Lin Hui et al.; Journal of Intelligent Manufacturing, Vol. 30, No. 6, pp. 2525-2534 *
Grade recognition of abnormal working conditions in fused magnesia furnace smelting based on information fusion; Li Hongru et al.; Journal of Northeastern University (Natural Science), Vol. 41, No. 2, pp. 153-157 *


Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN109978807B (en) Shadow removing method based on generating type countermeasure network
CN111667455B (en) AI detection method for brushing multiple defects
WO2017176304A1 (en) Automatic assessment of damage and repair costs in vehicles
CN109613002A (en) A kind of glass defect detection method, apparatus and storage medium
CN115294117B (en) Defect detection method and related device for LED lamp beads
CN110533950A (en) Detection method, device, electronic equipment and the storage medium of parking stall behaviour in service
JP5103665B2 (en) Object tracking device and object tracking method
CN112707058B (en) Detection method, system, device and medium for standard actions of kitchen waste
CN112149476A (en) Target detection method, device, equipment and storage medium
Hsia et al. An Intelligent IoT-based Vision System for Nighttime Vehicle Detection and Energy Saving.
CN115424217A (en) AI vision-based intelligent vehicle identification method and device and electronic equipment
CN111768404A (en) Mask appearance defect detection system, method and device and storage medium
CN114612418A (en) Method, device and system for detecting surface defects of mouse shell and electronic equipment
CN117058150B (en) Method and device for detecting defects of lamp beads
CN110942444A (en) Object detection method and device
Chu et al. Deep learning method to detect the road cracks and potholes for smart cities
CN113947613A (en) Target area detection method, device, equipment and storage medium
CN116229336B (en) Video moving target identification method, system, storage medium and computer
CN117351472A (en) Tobacco leaf information detection method and device and electronic equipment
CN110210401B (en) Intelligent target detection method under weak light
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
CN114627435B (en) Intelligent light adjusting method, device, equipment and medium based on image recognition
CN112750113B (en) Glass bottle defect detection method and device based on deep learning and linear detection
CN115147450A (en) Moving target detection method and detection device based on motion frame difference image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant