CN114445330A - Method and system for detecting appearance defects of components - Google Patents


Info

Publication number
CN114445330A
Authority
CN
China
Prior art keywords
image
component
image data
defect
detection
Prior art date
Legal status
Pending
Application number
CN202111558065.4A
Other languages
Chinese (zh)
Inventor
刘净月
王坦
赵慧婷
徐伟
马骁
孙铮
庞明奇
罗晶
乔秀铭
罗俊杰
贺洋
岳冰
韩树强
Current Assignee
CASIC Defense Technology Research and Test Center
Original Assignee
CASIC Defense Technology Research and Test Center
Priority date
Filing date
Publication date
Application filed by CASIC Defense Technology Research and Test Center filed Critical CASIC Defense Technology Research and Test Center
Priority to CN202111558065.4A priority Critical patent/CN114445330A/en
Publication of CN114445330A publication Critical patent/CN114445330A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a system for detecting appearance defects of components. The method includes: acquiring surface image data of a component to be detected; preprocessing the surface image data to obtain target image data; inputting the target image data into an image detection model, locating the component to be detected, and outputting a component foreground image; and inputting the component foreground image into a pre-trained image detection model, which outputs result image data carrying a defect detection result. The method and system detect appearance defects of components through the image detection model and obtain result image data with a defect detection result. They can judge the defect type and give the position of each defect, with high detection precision, short detection time, and a high degree of automation, and they avoid the missed or erroneous judgments caused by human experience and subjective factors.

Description

Method and system for detecting appearance defects of components
Technical Field
The application relates to the technical field of component appearance inspection equipment, and in particular to a method and a system for detecting appearance defects of components.
Background
As the inspection workload for components increases year by year, the appearance quality of inspected devices varies widely: devices may show polishing traces, refurbishment traces, broken leads, bent or deformed pins, and damaged package shells, while batch-level inconsistencies include differences in marking content, marking method, labeling information, and surface color. Some of these defects arise from lax control during the packaging process, some indicate counterfeit devices, and some result from damage during packing and transport. At present, appearance defect identification of components still relies on manual work, so the accuracy of the detection result depends on human experience and subjective factors, which can lead to missed judgments; as a result, detection accuracy is poor and detection takes a long time.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method and a system for detecting appearance defects of a component.
Based on the above purpose, the present application provides a method for detecting appearance defects of a component, including:
acquiring surface image data of a component to be detected;
preprocessing the surface image data to obtain target image data;
inputting the target image data into an image detection model, positioning the component to be detected, and outputting a foreground image of the component;
inputting the foreground image of the component into a pre-trained image detection model, and outputting result image data with defect detection results; and the defect detection result comprises defect type information, defect position information and/or defect quantity information of the component to be detected.
Further, the inputting the foreground image of the component into a pre-trained image detection model and outputting the result image data with the defect detection result includes:
carrying out grid division on the foreground image of the component, and acquiring a prediction frame of each grid to obtain predicted image data with the prediction frame;
and inputting the predicted image data with the prediction frame into the image detection model, and outputting the result image data.
Further, the image detection model is constructed based on a Yolov4 network.
Further, the acquiring surface image data of the component to be detected includes:
acquiring an original surface image of the component to be detected;
converting the image signal of the original surface image into a data signal;
storing the data signal as the surface image data.
Further, the preprocessing the surface image data to obtain target image data includes: sequentially performing denoising, graying, and top-hat transformation on the surface image data to obtain the target image data and background image data.
Further, the defect type information includes: scratch defects, hole defects, breakage defects, smudge defects, pin defects, mark-positioning defects, and/or no defect.
Based on the same inventive concept, the application also provides a system for detecting the appearance defects of the components, which comprises:
an image acquisition unit configured to acquire surface image data of a component to be detected;
an image processing unit configured to preprocess the surface image data to obtain target image data;
an image positioning unit configured to input the target image data into an image detection model, locate the component to be detected, and output a component foreground image;
an image detection unit configured to input the component foreground image into a pre-trained image detection model and output result image data with a defect detection result, the defect detection result comprising defect number information, defect type information, and defect position information of the component to be detected.
Further, the image acquisition unit comprises a camera and an image acquisition card: the camera is configured to acquire an original surface image of the component to be detected, and the image acquisition card is configured to convert the image signal of the original surface image into a data signal and transmit it to the image processing unit.
Further, the system comprises a sensing unit configured to sense whether a component to be detected is present in the image acquisition area and to judge whether the center of the surface of the component to be detected is aligned with the camera lens.
Further, the system comprises a feeding unit configured to hold a plurality of components to be detected and feed them one by one into the image acquisition area for image acquisition.
From the above, the method and system for detecting appearance defects of components provided by the application detect appearance defects through the image detection model and obtain result image data with a defect detection result. They can judge the defect type and give the position of each defect, with high detection precision, short detection time, and a high degree of automation, and they avoid the missed or erroneous judgments caused by human experience and subjective factors.
Drawings
To describe the technical solutions of the present application or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Evidently, the drawings described below show only embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for detecting an appearance defect of a component according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a system for detecting an appearance defect of a component according to an embodiment of the present application;
fig. 3 is a schematic view of a working principle of a system for detecting an appearance defect of a component according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings in combination with specific embodiments.
It should be noted that technical terms or scientific terms used in the embodiments of the present application should have a general meaning as understood by those having ordinary skill in the art to which the present application belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the present application do not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
Referring to fig. 1, the present application provides a method for detecting an appearance defect of a component, including:
and S101, acquiring surface image data of the component to be detected.
Specifically, the acquiring of the surface image data of the component to be detected includes the following steps:
step one, acquiring an original surface image of the component to be detected.
Specifically, in actual detection, a component to be detected is first placed in an image acquisition area; a photoelectric sensor senses whether a component is present in the area, and the system judges whether the center of the component's surface is aligned with the camera lens. When a component is confirmed to be in the image acquisition area and the center of its surface is aligned with the image acquisition device, the device acquires an original surface image of the component. In this embodiment, the image acquisition device comprises a camera, a matching lens, and an image acquisition card.
The camera acquires the original surface image of the component to be detected: an optical sensor inside the camera converts the optical image of the component into an electrical signal, i.e., the image signal.
When capturing the original surface image, the choice of camera matters greatly. The following aspects need to be considered when selecting an industrial camera:
resolution ratio: the higher the resolution is, the more the number of the collected pixel points is, and the higher the imaging quality is. The determination of the resolution of the camera needs to be comprehensively measured and calculated according to the width of the component to be detected, the minimum detection precision, the equipment running speed and the like, the detection requirement cannot be met due to too low resolution, and the speed of the detection algorithm is influenced due to too high resolution. Therefore, according to the actual detection requirement, in the embodiment, the resolution of the camera is 2048 × 2 pixels.
Chip type: industrial cameras are classified into two types, i.e., CCD cameras and CMOS cameras, depending on the type of chip. The CCD camera is superior to the CMOS camera in aspects of sensitivity, resolution, noise control and the like, and the CMOS camera has the characteristics of low cost, low power consumption and high integration degree. The speed and the precision requirement during detection are comprehensively considered, and compared with the CCD camera, the CCD camera has higher imaging quality and is more sensitive. In this embodiment, a CCD camera is selected.
Output color: industrial cameras include both monochrome (black and white) cameras and color cameras. In this embodiment, the color of the component needs to be considered when detecting the appearance defect of the component, and therefore, a color camera is adopted.
In summary, according to the detection requirements, a DALSA Spyder Color series color line-scan CCD camera, model SC-34-02K80, is selected in this embodiment.
A matching optical lens is also indispensable for image acquisition: the lens projects the imaging target onto the photosensitive surface of the image sensor and determines the sharpness of the image. Lens quality directly affects imaging quality and even the accuracy of the detection result. Lens selection is therefore equally important, and the following aspects need to be considered:
the focal length: the distance from the center of the lens to the focal point of the light collection is one of the important parameters of the lens, which determines the ratio between the image and the actual object. Generally, the larger the focal length of the lens, the larger the resulting image. The determination of the focal length of the lens needs to be comprehensively calculated according to various data such as detection precision, the pixel size of the camera, the object distance between the camera and a target and the like.
View angle: the field angle of the lens determines the size of the region to be photographed, and is related to the focal length of the optical lens.
③ the aperture: the aperture is the clear aperture on the lens, and the light entering amount of the lens is controlled by adjusting the diameter of the aperture to determine the brightness of the lens. Besides the direct relation to the imaging brightness of the lens, the aperture has close relation to the image contrast, the resolution and the depth of field. The influence of the aperture on the whole image needs to be comprehensively considered when the aperture of the lens is adjusted.
Interface: the interface mode of connecting the lens with the camera is indicated. At the time of selection, it is noted that the lens interface type must be compatible with the camera interface type.
In this embodiment, a Schneider F-mount lens, model XENON-EMERALD 2.8/28-S, is selected.
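The focal-length consideration above can be sketched with the common machine-vision rule of thumb f ≈ sensor width × working distance / field-of-view width. This is a generic estimate, not a formula given in the patent, and all numbers below are illustrative assumptions rather than the embodiment's actual parameters:

```python
# Rough focal-length estimate for lens selection (a common machine-vision
# rule of thumb; the sensor width, working distance, and field of view
# below are hypothetical, not the embodiment's real values).

def estimate_focal_length_mm(sensor_width_mm, working_distance_mm, fov_width_mm):
    """f ~= sensor width * working distance / field-of-view width."""
    return sensor_width_mm * working_distance_mm / fov_width_mm

# Example: a hypothetical 20.5 mm line sensor at 300 mm working distance,
# imaging a 200 mm wide field of view.
f = estimate_focal_length_mm(20.5, 300.0, 200.0)
print(round(f, 1))  # ~30.8 mm; a stock ~28 mm lens would be a close choice
```

In practice the estimate is then rounded to the nearest available stock focal length, which is consistent with the 28 mm lens chosen in this embodiment.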
Further, acquiring the original surface image of the component to be detected requires a proper light source and proper lighting conditions.
In particular, good light sources and illumination play a very important role in image acquisition; lighting is not simply a matter of illuminating the object. The light source and lighting scheme should highlight the features of the component to be detected as much as possible and maximize the contrast between the parts that need inspection and those that do not, while ensuring sufficient overall brightness; a change in the component's position should not affect imaging quality. The most important element of the lighting system is the choice of light source.
LED light sources outperform other light-source types overall in luminous efficacy, brightness, and service life; they emit little heat and remain stable over long working periods. White LED light is also purer in color, with no yellow or blue cast.
A well-chosen lighting mode highlights the features of the component to be detected and reduces the complexity of subsequent image processing. By the directionality of the incident light, the relative position of the light source and the camera, and the relative position of the light source and the component under test, lighting modes divide into diffuse versus direct illumination, front versus back illumination, and bright-field versus dark-field illumination. This method uses front lighting so that the characters on the component surface contrast more strongly with the package material, making the component's outline easier to distinguish in the acquired original surface image. Accordingly, in this embodiment, a low-angle bar-shaped white LED light source illuminates the front surface of the component under test, providing suitable conditions for subsequent image acquisition.
Step two, converting the image signal of the original surface image into a data signal.
Specifically, after the camera acquires an original surface image of the component to be detected, an image signal of the original surface image is converted into a data signal. In this embodiment, an image acquisition card is used to convert the image signal of the original surface image into a data signal.
The image acquisition card, also called a frame grabber, is the interface between the image acquisition unit and the image processing unit and the bridge between the camera and the processor. Because a general-purpose transmission interface cannot transmit the image signal at high speed, an image acquisition card is needed: it converts the image signal output by the camera into a digital signal and transmits it to the processor.
Two aspects must be considered when selecting an image acquisition card: it must provide a high-speed transmission interface to the computer, and it must match the camera's type and functions so that the two work in coordination. In this embodiment, an Xtium CL MX4 acquisition card matched to the DALSA line-scan CCD camera is selected. It uses a Camera Link interface with bandwidth exceeding 1.7 GB/s, supports a PCIe Gen 1.0 slot, and reaches a transfer rate of 850 MB/s.
Step three, storing the data signal as the surface image data.
Specifically, after the image acquisition card converts the image signal output by the camera into a digital signal, it transmits the signal to the processor, which stores it as the surface image data; storing the data in digital form facilitates subsequent detection and analysis. In this embodiment, the processor is a computer.
And S102, preprocessing the surface image data to obtain target image data.
Specifically, the preprocessing the surface image data to obtain target image data includes: and sequentially carrying out denoising, graying and top hat transformation on the surface image data to obtain the target image data and the background image data.
During acquisition of the component's surface image data, factors such as the equipment itself, the surrounding environment, and the shooting illuminance introduce noise into both the background and target regions of the image. Denoising the surface image data therefore reduces this noise and restores the image's true information. In this embodiment, median filtering is used for denoising: it removes noise without destroying image detail or blurring the image, and it preserves the edges of the surface image data while filtering.
After denoising, the surface image data must be converted to grayscale. Graying converts a color image into a black-and-white one by computer: the luminances of the three primary color components are weighted and summed, and the resulting sum represents the brightness, i.e., the gray value, of the surface image data.
The top-hat transformation addresses the loss of pin detail caused by uneven illumination of the component to be detected. The top-hat of an image is the difference between the original image and its morphological opening; it segments out regions that do not match the structuring element and is particularly effective at enhancing detail in shadows. In this embodiment, the top-hat operation corrects the detail loss caused by uneven illumination and effectively extracts the brighter parts from the dark background of the surface image data.
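The three preprocessing steps above (median denoising, graying, top-hat) can be sketched in one dimension. This is a pure-Python illustration of the math only; a real system would apply 2-D operations (e.g. via an image library), and all sample values are made up:

```python
# 1-D sketch of the preprocessing chain: median filter, graying, top-hat.
# Illustrative only -- production code would use 2-D image operations.

def median3(signal):
    """3-tap median filter: removes impulse noise without blurring edges."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]
    return out

def to_gray(r, g, b):
    """Weighted sum of the three primaries (standard BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def erode3(signal):
    return [min(signal[max(i - 1, 0):i + 2]) for i in range(len(signal))]

def dilate3(signal):
    return [max(signal[max(i - 1, 0):i + 2]) for i in range(len(signal))]

def top_hat(signal):
    """Top-hat = original minus its morphological opening; it extracts
    bright details narrower than the structuring element from an uneven
    background."""
    opened = dilate3(erode3(signal))
    return [s - o for s, o in zip(signal, opened)]

print(median3([10, 10, 90, 10, 10]))        # impulse spike 90 is suppressed
uneven = [10, 12, 14, 80, 16, 18, 20, 85, 22, 24]  # rising background + 2 bright marks
print(top_hat(uneven))  # -> [0, 0, 0, 64, 0, 0, 0, 63, 0, 2]
```

Note how the top-hat output keeps only the narrow bright features (the 80 and 85 samples) while the slowly varying background is removed, which is exactly the uneven-illumination correction described above.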
And S103, inputting the target image data into an image detection model, positioning the to-be-detected component, and outputting a foreground image of the component.
S104, inputting the foreground image of the component into a pre-trained image detection model, and outputting result image data with a defect detection result; and the defect detection result comprises defect type information, defect number information and/or defect position information of the component to be detected.
Specifically, the image detection model is an image detection model constructed based on a YOLOv4 network, and is a deep learning network model for defect detection.
The image detection model must be pre-trained before actual detection. Target image data of defective components and of defect-free components are input into the image detection model as a sample set, and the model is pre-trained on it. Once pre-training is finished, the image detection model can be used to detect surface defects of components under test.
The YOLOv4 network is composed of three main parts: a backbone network, a neck network, and a head network. The backbone is a convolutional neural network that aggregates and forms image features at different image granularities. The neck is a set of layers that mix and combine image features and pass them to the prediction layer. The head makes predictions on the image features, generating bounding boxes and predicting classes.
The backbone network mainly comprises CSPDarknet53, the Mish activation function, and the DropBlock convolution-regularization module. The YOLOv4 network uses CSPDarknet53 as its reference network, replaces the original ReLU activation function with Mish, and adds DropBlock to the module to further improve the model's generalization ability.
The neck network adopts an SPP module and an FPN + PAN structure. The neck sits between the reference network and the head network, and using it further improves the diversity and robustness of the features. YOLOv4 uses the SPP module to fuse feature maps of different scales, and improves the network's feature-extraction ability with a top-down FPN feature pyramid plus a bottom-up PAN feature pyramid. SPP stands for spatial pyramid pooling; it concatenates pooled features and can fix the size of the output feature vector, with max-pooling kernel sizes of {1 × 1, 5 × 5, 9 × 9, 13 × 13}. FPN is the top-down feature pyramid and PAN the bottom-up one; by adding PAN on top of YOLOv3, YOLOv4 improves the accuracy of small-target detection.
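The key property of the SPP module described above is that its pooling branches use stride 1 with "same" padding, so every branch preserves the spatial size and the branches can be concatenated along the channel dimension. A minimal 1-D pure-Python sketch of that property (real SPP operates on 2-D feature maps inside the network; this only illustrates the size bookkeeping):

```python
# 1-D illustration of SPP's size-preserving max pooling (stride 1,
# pad = k // 2), the reason branches with kernels 5, 9, 13 can be
# concatenated with the identity (1 x 1) branch.

def maxpool_same(signal, k):
    """Max pooling with stride 1 and 'same' padding: output length
    equals input length for any odd kernel size k."""
    pad = k // 2
    padded = [float("-inf")] * pad + list(signal) + [float("-inf")] * pad
    return [max(padded[i:i + k]) for i in range(len(signal))]

feat = [1, 4, 2, 8, 5, 7, 3]
# Identity branch (the 1 x 1 kernel) plus pools of size 5, 9, 13:
branches = [feat] + [maxpool_same(feat, k) for k in (5, 9, 13)]
print([len(b) for b in branches])  # -> [7, 7, 7, 7]: spatial size is fixed
```

Concatenating the four branches quadruples the channel count while leaving the spatial size untouched, which is what lets SPP fuse multi-scale context without resizing.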
The head network's anchor-box mechanism is the same as that of the YOLOv3 network; the main improvements are the CIOU_Loss loss function used during training and DIOU_NMS (NMS: non-maximum suppression) used to screen prediction boxes. The head network outputs the target detection result. The number of output branches differs across detection algorithms, but it generally includes a classification branch and a regression branch. YOLOv4 replaces the original loss function with CIOU_Loss and the traditional NMS operation with DIOU_NMS, further improving the model's detection precision.
The Loss function CIOU _ Loss considers the scale information of the overlapping area, the center point distance and the aspect ratio of the frame, so that the prediction frame can better conform to the real frame. Optimizing the initial target detection model by using a CIOU _ Loss function to obtain an optimal target detection model, wherein the formula of the CIoU Loss function is as follows:
L_CIoU = 1 - CIoU
where L_CIoU is the CIoU loss function and CIoU is the complete-IoU regression value.
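The three factors named above (overlap area, center distance, aspect ratio) appear as the three penalty terms of the standard CIoU definition, CIoU = IoU − ρ²/c² − αv, so L_CIoU = 1 − IoU + ρ²/c² + αv. The patent only states L_CIoU = 1 − CIoU; the expansion below is the standard published form of CIoU, sketched for axis-aligned (x1, y1, x2, y2) boxes:

```python
import math

def ciou_loss(box_p, box_g):
    """L_CIoU = 1 - IoU + rho^2/c^2 + alpha*v for boxes (x1, y1, x2, y2):
    overlap term + normalized center distance + aspect-ratio consistency."""
    # Intersection over union
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # Squared center distance rho^2 over squared enclosing-box diagonal c^2
    pcx, pcy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    gcx, gcy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2
    c2 = (max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])) ** 2 \
       + (max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])) ** 2
    # Aspect-ratio consistency v, weighted by alpha
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

# A perfectly matching prediction has zero loss:
print(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # -> 0.0
```

Unlike plain IoU loss, this stays informative even when the boxes do not overlap (the center-distance term still produces a gradient), which is why it pulls prediction boxes toward ground-truth boxes faster.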
Further, the inputting the foreground image of the component into a pre-trained image detection model and outputting the result image data with the defect detection result includes:
step one, carrying out grid division on the foreground image of the component, and acquiring a prediction frame of each grid to obtain predicted image data with the prediction frame;
and step two, inputting the predicted image data with the prediction frame into the image detection model, and outputting the result image data.
Specifically, grid division of the component foreground image divides the whole image into grids, each corresponding to a region of the original image; within each grid, the model then predicts whether a component to be detected is present and whether its surface shows any of the preset defect types.
In this embodiment, the target image data is first divided into S × S grids. Each grid is responsible for predicting targets whose centers fall into that grid, and the 3 prediction boxes corresponding to the grid are calculated, giving predicted image data with prediction boxes. Each prediction box corresponds to 5 + C values, where C is the number of categories in the preset data set (in this embodiment there are 7 preset defect categories: scratch defects, hole defects, defect defects, dirt defects, pin defects, mark-positioning defects, and/or no defect) and 5 denotes the five parameters of the prediction box: the center-point coordinates (abscissa and ordinate), the width and height dimensions, and the confidence.
Then, the predicted image data with the prediction boxes is input to the image detection model; since there are S × S grids, the final output layer of the image detection model contains S × S × 3 × (5 + C) values.
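The output-layer shape can be sketched as follows; S = 13, B = 3 boxes per cell, and C = 7 defect classes are illustrative values assumed for the example:

```python
import numpy as np

S, B, C = 13, 3, 7  # grid size, boxes per cell, defect classes (assumed values)

def decode_head(raw):
    """Reshape a flat YOLO head output into per-cell prediction boxes.

    Each box carries 5 + C values: center x, center y, width, height,
    objectness confidence, then C class scores.
    """
    assert raw.size == S * S * B * (5 + C)
    return raw.reshape(S, S, B, 5 + C)

raw = np.zeros(S * S * B * (5 + C), dtype=np.float32)
preds = decode_head(raw)
# preds[i, j, k] is the k-th prediction box of grid cell (i, j)
```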
The confidence of each prediction box is calculated according to the following formula:
confidence = P(object) × IOU, where P(object) indicates whether the box contains a component to be detected: if the prediction box contains the component to be detected, P(object) = 1; otherwise P(object) = 0. IOU (intersection over union) is the overlap ratio between the prediction box and the real region of the device to be detected (measured in pixels, with the pixel area of the real region normalized to the [0, 1] interval).
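The confidence formula can be sketched directly; the box layout (x1, y1, x2, y2) and function names are illustrative:

```python
def iou(a, b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) form."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def box_confidence(contains_object, pred_box, true_box):
    """confidence = P(object) * IOU; P(object) is 1 if the prediction box
    contains a component to be detected, else 0."""
    p_object = 1.0 if contains_object else 0.0
    return p_object * iou(pred_box, true_box)
```

A box with no component always scores 0, and a perfectly aligned box containing a component scores 1, matching the definition above.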
Finally, the resultant image data with the defect detection result is output.
Specifically, according to the confidence of each prediction box and a preset confidence threshold, prediction boxes whose confidence is below the threshold are filtered out, and only those whose confidence exceeds the threshold are retained. The 5 + C values corresponding to each retained prediction box are then analyzed and calculated to form the defect detection result, and result image data carrying the defect detection result is output.
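The threshold-filtering step can be sketched as follows; the (confidence, defect type, box) tuple layout is an assumption for illustration, not the patent's actual data format:

```python
def filter_predictions(predictions, conf_threshold=0.5):
    """Keep only prediction boxes whose confidence exceeds the preset threshold.

    `predictions` is a list of (confidence, defect_type, box) tuples;
    this structure is illustrative.
    """
    return [p for p in predictions if p[0] > conf_threshold]

# Illustrative predictions: (confidence, defect type, box corners)
preds = [
    (0.9, "hole defects", (1, 2, 3, 4)),
    (0.3, "scratch defects", (5, 6, 7, 8)),
    (0.7, "dirt defects", (7, 9, 8, 10)),
]
kept = filter_predictions(preds)  # the low-confidence 0.3 box is dropped
```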
The defect detection result comprises defect type information, defect number information and/or defect position information of the component to be detected. The defect type information includes: scratch defects, hole defects, defect defects, smudge defects, pin defects, mark location defects, and/or no defects. The defect number information includes: a corresponding number for each defect type. The defect location information includes: coordinates of the center point of the defect, path vector information of the defect, etc.
For example, when a certain component is inspected according to this method for detecting appearance defects, the finally output result image data may show the following defect detection result: hole defects (i.e., defect type information), 3 (i.e., defect number information), [(1, 2), (5, 6), (7, 9)] (i.e., defect position information). Such a result clearly presents the defect type, defect number, and other information of the component to be detected, avoiding missed or erroneous judgments caused by human experience and subjective factors.
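The example result above can be assembled from the retained prediction boxes; the (defect_type, center_point) pair layout is an assumption for illustration:

```python
from collections import Counter

def summarize_defects(kept_boxes):
    """Group retained prediction boxes into type / count / center-point info.

    `kept_boxes` is a list of (defect_type, (cx, cy)) pairs; the structure
    is illustrative, not the patent's exact output format.
    """
    counts = Counter(name for name, _ in kept_boxes)
    return {
        name: {"count": counts[name],
               "centers": [c for n, c in kept_boxes if n == name]}
        for name in counts
    }

# Reproduces the worked example: 3 hole defects at (1, 2), (5, 6), (7, 9)
result = summarize_defects([("hole defects", (1, 2)),
                            ("hole defects", (5, 6)),
                            ("hole defects", (7, 9))])
```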
According to this method for detecting the appearance defects of components, the defects are detected through the image detection model, yielding result image data with a defect detection result. The method can determine the defect type and give the position information of each defect, with high detection precision, short detection time, and a high degree of automation, avoiding missed or erroneous judgments caused by human experience and subjective factors. It can identify appearance defects of components of different package types, conveniently and quickly meets test requirements and objectives in engineering applications, and safeguards the accuracy of the defect identification process, the integrity of test samples, the clarity of captured pictures, and the like. Compared with defect judgment that relies on the naked eye and a magnifying glass, it is more efficient and more accurate.
It should be noted that the method of this embodiment may also be applied in a distributed scenario, and is completed by cooperation of multiple devices. In such a distributed scenario, one of the multiple devices may only perform one or more steps of the method of the embodiment, and the multiple devices interact with each other to complete the method.
It should be noted that the above describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, corresponding to any embodiment of the method, the application also provides a detection system for the appearance defects of the components.
Referring to fig. 2, the system for detecting appearance defects of components includes:
an image acquisition unit 201 configured to acquire surface image data of a component to be detected;
an image processing unit 202 configured to pre-process the surface image to obtain target image data;
the image positioning unit 203 is configured to input the target image data into an image detection model, position the component to be detected, and output a component foreground image;
an image detection unit 204 configured to input the component foreground image into a pre-trained image detection model, and output result image data with a defect detection result; the defect detection result comprises defect number information, defect type information and defect position information of the component to be detected.
Specifically, the image acquisition unit comprises a camera and an image acquisition card, wherein the camera is configured to acquire an original surface image of the component to be detected; the image acquisition card is configured to convert an image signal of the original surface image into a data signal and transmit the data signal to the image processing unit.
In some embodiments, the system for detecting the appearance defect of the component further includes a sensing unit, where the sensing unit is configured to sense whether the component to be detected exists in the image acquisition area, and determine whether a center position of the surface of the component to be detected corresponds to the position of the camera lens.
Specifically, the sensing unit includes a photoelectric sensor, which can sense whether a component to be tested is present in the image acquisition area and judge whether the center of the component surface is aligned with the camera lens. With the sensing unit, the system can monitor the image acquisition area in real time for components to be tested, improving detection efficiency and preventing the camera from doing useless work.
In some embodiments, the system for detecting appearance defects of components further includes a feeding unit, where the feeding unit is configured to place a plurality of components to be detected and place the components to be detected into the image acquisition area one by one for image acquisition.
Specifically, the feeding unit may be a rotary feeding tray on which a plurality of components to be tested are placed; as the tray rotates, the components are placed one by one into the image acquisition area for image acquisition. This rotary feeding mode improves the detection speed and detection efficiency of the system.
In some embodiments, referring to fig. 3, the present application provides a system for detecting appearance defects of a component, which performs the following processes:
the first step is as follows: starting a detection system, adjusting the angle and brightness of a light source, turning on a CCD camera, and making an image acquisition unit ready;
The second step: the photoelectric sensor in the sensing unit judges whether a component to be tested is present in the image acquisition area and whether the center of the component surface is aligned with the camera lens. If a product to be detected is present and its surface center is aligned with the camera lens, the camera starts photographing to acquire an original surface image of the component to be detected;
the third step: transmitting the original surface image to an image acquisition card, converting an image signal of the original surface image into a data signal by the image acquisition card, and transmitting the data signal to the image processing unit;
The fourth step: inputting the target image data into a pre-trained image detection model and outputting final image data with a defect detection result, wherein the defect detection result comprises the defect type, number, and position information of the component to be detected;
The fifth step: judging whether the product is qualified or not according to the defect detection result, and judging that the product is qualified if the defect detection result is defect-free; and if the defect detection result is other defect types except for no defect, judging that the product is unqualified, namely the product has an appearance defect, marking the product defect, identifying the defect type, and simultaneously sending an alarm to remind an operator to check.
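The pass/fail decision of the fifth step can be sketched as follows; names and return values are illustrative, and the alarm/marking actions are only indicated in a comment:

```python
def judge_product(detected_defect_types):
    """Pass/fail decision: a product is qualified only when the detection
    result is 'no defect'; any other defect type marks it unqualified."""
    defects = [d for d in detected_defect_types if d != "no defect"]
    if not defects:
        return "qualified", []
    # In the described system this branch would also mark the defect on the
    # product, label its type, and raise an alarm to remind the operator.
    return "unqualified", defects
```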
For convenience of description, the above system is described as being divided into various units by functions, and described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The device of the above embodiment is used for implementing the detection method for the appearance defects of the corresponding component in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the context of the present application, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present application as described above, which are not provided in detail for the sake of brevity.
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting appearance defects of components is characterized by comprising the following steps:
acquiring surface image data of a component to be detected;
preprocessing the surface image data to obtain target image data;
inputting the target image data into an image detection model, positioning the to-be-detected component, and outputting a foreground image of the component;
inputting the foreground image of the component into a pre-trained image detection model, and outputting result image data with a defect detection result; and the defect detection result comprises defect type information, defect number information and/or defect position information of the component to be detected.
2. The inspection method of claim 1, wherein inputting the foreground image of the component into a pre-trained image inspection model and outputting the resultant image data with the defect inspection result comprises:
carrying out grid division on the foreground image of the component, and acquiring a prediction frame of each grid to obtain predicted image data with the prediction frame;
and inputting the predicted image data with the prediction frame into the image detection model, and outputting the result image data.
3. The detection method according to claim 1 or 2, wherein the image detection model is an image detection model constructed based on a YOLOv4 network.
4. The inspection method according to claim 1, wherein the acquiring surface image data of the component to be inspected comprises:
acquiring an original surface image of the component to be detected;
converting the image signal of the original surface image into a data signal;
storing the data signal as the surface image data.
5. The inspection method of claim 1, wherein the pre-processing the surface image data to obtain target image data comprises: and sequentially carrying out denoising, graying and top hat transformation on the surface image data to obtain the target image data and the background image data.
6. The detection method according to claim 1, wherein the defect type information comprises: scratch defects, hole defects, defect defects, smudge defects, pin defects, mark location defects, and/or no defects.
7. A system for detecting appearance defects of components is characterized by comprising:
the device comprises an image acquisition unit, a detection unit and a control unit, wherein the image acquisition unit is configured to acquire surface image data of a component to be detected;
the image processing unit is configured to preprocess the surface image to obtain target image data;
the image positioning unit is configured to input the target image data into an image detection model, position the component to be detected and output a component foreground image;
the image detection unit is configured to input the foreground image of the component into a pre-trained image detection model and output result image data with a defect detection result; and the defect detection result comprises defect type information, defect number information and/or defect position information of the component to be detected.
8. The detection system according to claim 7, wherein the image acquisition unit comprises a camera and an image acquisition card, and the camera is configured to acquire an original surface image of the component to be detected; the image acquisition card is configured to convert an image signal of the original surface image into a data signal and transmit the data signal to the image processing unit.
9. The detection system according to claim 7, further comprising a sensing unit configured to sense whether a device to be tested is present in the image capture area and determine whether a center position of a surface of the device to be tested corresponds to a position of the camera lens.
10. The inspection system of claim 7, further comprising a feeding unit configured to place a plurality of the devices under test and place the devices under test one by one into the image capturing area for image capturing.
CN202111558065.4A 2021-12-17 2021-12-17 Method and system for detecting appearance defects of components Pending CN114445330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111558065.4A CN114445330A (en) 2021-12-17 2021-12-17 Method and system for detecting appearance defects of components

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111558065.4A CN114445330A (en) 2021-12-17 2021-12-17 Method and system for detecting appearance defects of components

Publications (1)

Publication Number Publication Date
CN114445330A true CN114445330A (en) 2022-05-06

Family

ID=81364845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111558065.4A Pending CN114445330A (en) 2021-12-17 2021-12-17 Method and system for detecting appearance defects of components

Country Status (1)

Country Link
CN (1) CN114445330A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439452A (en) * 2022-09-13 2022-12-06 杭州凯智莆电子有限公司 Capacitance product detection and evaluation system based on data analysis
CN115439452B (en) * 2022-09-13 2023-04-11 杭州凯智莆电子有限公司 Capacitance product detection and evaluation system based on data analysis
CN116597127A (en) * 2023-07-12 2023-08-15 山东辰欣佛都药业股份有限公司 Sterile eye cream packaging integrity image recognition system
CN116597127B (en) * 2023-07-12 2024-01-12 山东辰欣佛都药业股份有限公司 Sterile eye cream packaging integrity image recognition system
CN117457520A (en) * 2023-10-25 2024-01-26 武汉昕微电子科技有限公司 Defect detection method and system for semiconductor component
CN117457520B (en) * 2023-10-25 2024-05-31 武汉昕微电子科技有限公司 Defect detection method and system for semiconductor component

Similar Documents

Publication Publication Date Title
CN108445007B (en) Detection method and detection device based on image fusion
CN108683907A (en) Optics module picture element flaw detection method, device and equipment
CN114445330A (en) Method and system for detecting appearance defects of components
JP6553624B2 (en) Measurement equipment and system
CN107451969A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN112394064A (en) Point-line measuring method for screen defect detection
CN108804658A (en) Image processing method and device, storage medium, electronic equipment
CN112730251B (en) Device and method for detecting screen color defects
CN113255797B (en) Dangerous goods detection method and system based on deep learning model
CN109741307A (en) Veiling glare detection method, veiling glare detection device and the veiling glare detection system of camera module
CN111665199A (en) Wire and cable color detection and identification method based on machine vision
CN101937505B (en) Target detection method and equipment and used image acquisition device thereof
CN114993614A (en) AR head-mounted equipment testing equipment and testing method thereof
JP2022533848A (en) Systems and methods for determining if camera components are damaged
JP2019168388A (en) Image inspection method and image inspection device
CN101995325A (en) Appearance detection method and system of image sensor
CN107833223B (en) Fruit hyperspectral image segmentation method based on spectral information
CN116091506B (en) Machine vision defect quality inspection method based on YOLOV5
CN115131355B (en) Intelligent method for detecting waterproof cloth abnormity by using electronic equipment data
CN115620079A (en) Sample label obtaining method and lens failure detection model training method
KR101993654B1 (en) Inspecting apparatus mura of display panel and method thereof
WO2023034441A1 (en) Imaging test strips
CN114219758A (en) Defect detection method, system, electronic device and computer readable storage medium
CN108827594B (en) Analytical force detection method and detection system of structured light projector
KR102015620B1 (en) System and Method for detecting Metallic Particles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination