CN111598863A - Defect detection method, device, equipment and readable storage medium - Google Patents

Defect detection method, device, equipment and readable storage medium

Info

Publication number
CN111598863A
CN111598863A (application CN202010405374.7A)
Authority
CN
China
Prior art keywords
image
neural network
network model
defect
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010405374.7A
Other languages
Chinese (zh)
Other versions
CN111598863B (en)
Inventor
黄耀
陶斯琴
吴雨培
李其乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aqrose Robot Technology Co ltd
Original Assignee
Beijing Aqrose Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aqrose Robot Technology Co ltd filed Critical Beijing Aqrose Robot Technology Co ltd
Priority to CN202010405374.7A priority Critical patent/CN111598863B/en
Publication of CN111598863A publication Critical patent/CN111598863A/en
Application granted granted Critical
Publication of CN111598863B publication Critical patent/CN111598863B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30141Printed circuit board [PCB]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a defect detection method, device, equipment and readable storage medium. The method includes: stitching an original image and a template image along the image channel dimension to obtain a stitched multi-channel image; cropping the stitched multi-channel image according to a user-defined region-of-interest frame to obtain cropped images; performing data enhancement on the cropped images to obtain training sample data; training a neural network model with the training sample data and saving the trained model; and, for an image to be detected, outputting a detection result with the trained neural network model. Because the images are processed by channel stitching, the user-defined region-of-interest frame and data enhancement before being fed into the neural network model for training, the model can capture sufficient defect information, suitable supplementary background information and a suitable batch size when facing the defect detection problem, which improves its detection accuracy for defective products and reduces the probability that good products are detected as defective.

Description

Defect detection method, device, equipment and readable storage medium
Technical Field
The invention relates to the technical field of industrial manufacturing, in particular to a defect detection method, a defect detection device, defect detection equipment and a readable storage medium.
Background
With the development of intelligent technology across society, the automobile, mobile phone and high-end instrument manufacturing industries have a growing demand for integrated circuits and printed circuit boards (PCBs). Most processes in PCB production and manufacturing have already been automated, which reduces production cost and improves efficiency. In the quality inspection link, however, defect detection still relies heavily on manual work by quality inspectors, and existing defect detection technology cannot meet the index requirements of enterprise production. On the one hand, because PCB circuits are complex, the images captured with optical equipment are complex and variable, which challenges image processing; on the other hand, the form of a defect in an image is not fixed and its shape varies, so it is not easy to find shape-changing defects against a complicated background.
At present, PCB defect detection mainly relies on differencing the acquired PCB image against a template image and then judging whether defects exist with a traditional image processing algorithm. This approach uses the information of the template image to remove part of the irrelevant information in the image, and benefits from the high speed and easily interpreted results of traditional image processing, so it balances feasibility and performance. However, it performs poorly on stains with a small defect area, has limited ability to identify elongated or strip-shaped stains, and cannot adapt to small positional differences and RGB colour differences between different process parts, so a large number of good products are flagged as defective. Overall, detection accuracy is low and the over-detection count is excessive. This affects the quality control performance of PCB manufacturers, which have to add re-inspection staff to ensure that good products are not wasted, so the process suffers from both poor inspection quality and high total cost.
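For orientation, the prior-art route described above can be sketched roughly as follows. This is a minimal OpenCV illustration, not the method proposed by this application; the function name, threshold and minimum-area values are illustrative assumptions.

```python
import cv2

def naive_diff_detect(pcb_bgr, template_bgr, diff_thresh=40, min_area=30):
    """Classical difference-image check: subtract the template, threshold the
    residual and report connected regions large enough to look like defects."""
    diff = cv2.absdiff(pcb_bgr, template_bgr)                  # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```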
Disclosure of Invention
The present application mainly aims to provide a defect detection method, apparatus, device and readable storage medium, so as to solve the problem that existing defect detection methods have a poor detection effect.
In order to achieve the above object, the present application provides a defect detection method, which includes the following steps:
carrying out image channel splicing on the original image and the template image to obtain a spliced multi-channel image;
cutting the spliced multi-channel image according to the user-defined region-of-interest frame to obtain a cut image;
performing data enhancement processing on the cut image to obtain training sample data;
training a neural network model according to the training sample data and storing the trained neural network model;
and for the image to be detected, outputting a detection result by using the trained neural network model.
Optionally, the step of cropping the stitched multi-channel image according to the customized region of interest frame to obtain a cropped image includes:
determining the area size of the self-defined region-of-interest frame according to the defect area of the spliced multi-channel image;
selecting a preset number of pixel points from the spliced multi-channel image;
and cutting the spliced multi-channel image by taking the pixel points as area centers and using the user-defined region-of-interest frame of the area size to obtain a cut image.
Optionally, the step of training the neural network model according to the training sample data and storing the trained neural network model includes:
setting training parameters of a neural network model;
inputting the sample data into the neural network model, training the neural network model according to the training parameters, and acquiring network parameters of the neural network;
and setting and storing the trained neural network model according to the network parameters.
Optionally, for the image to be detected, the step of outputting the detection result by using the trained neural network model includes:
and carrying out image channel splicing on the image to be detected and the template image to obtain a multi-channel input image to be detected.
Optionally, the step of outputting the detection result by using the trained neural network model for the image to be detected includes:
segmenting the multi-channel input image to be detected according to a preset resolution;
inputting the segmented multichannel input image to be detected into the trained neural network model to obtain defect probability information;
and outputting a detection result according to the defect probability information.
Optionally, the step of segmenting the multi-channel input image to be detected according to a preset resolution includes:
if the multichannel input image to be detected has suspicious defect position information, segmenting the multichannel input image to be detected through a segmentation region frame;
and if the multichannel input image to be detected does not have suspicious defect position information, segmenting the multichannel input image to be detected by equally dividing or increasing segmentation lines.
Optionally, the step of outputting the detection result according to the defect probability information includes:
if the defect probability information of the segmented multi-channel input image to be detected has defect probability larger than a preset threshold value, outputting the image to be detected as a defective product;
and when the defect probability information of the segmented multi-channel input image to be detected is smaller than a preset threshold value, outputting the image to be detected as a good product.
The present application further includes a defect detection apparatus, the defect detection apparatus comprising:
the splicing module is used for splicing the original image and the template image through image channels to obtain a spliced multi-channel image;
the cutting module is used for cutting the spliced multi-channel image according to the user-defined region-of-interest frame to obtain a cut image;
the data enhancement module is used for carrying out data enhancement processing on the cut image to obtain training sample data;
the training module is used for training the neural network model according to the training sample data and storing the trained neural network model;
and the detection module is used for outputting a detection result for the image to be detected by utilizing the trained neural network model.
The present application further provides a defect detecting apparatus, the defect detecting apparatus including: a memory, a processor and a defect detection program stored on the memory and executable on the processor, the defect detection program when executed by the processor implementing the steps of the defect detection method as described above.
The present application also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the defect detection method as described above.
The method includes: stitching the original image and the template image along the image channel dimension to obtain a stitched multi-channel image; cropping the stitched multi-channel image according to the user-defined region-of-interest frame to obtain cropped images; performing data enhancement on the cropped images to obtain training sample data; training the neural network model with the training sample data and saving the trained model; and, for an image to be detected, outputting a detection result with the trained neural network model. Because the images are processed by channel stitching, the user-defined region-of-interest frame and data enhancement before being fed into the neural network model for training, the model can capture sufficient and varied defect information, suitable supplementary background information and a suitable batch size when facing the defect detection problem, which improves its detection accuracy for defective products and reduces the probability that good products are detected as defective.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; it is obvious that other drawings can be obtained from these drawings by those skilled in the art without inventive effort.
Fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a defect detection method according to a first embodiment of the present application;
FIG. 3 is a detailed flowchart of step S20 of FIG. 2 in a second embodiment of the defect detection method of the present application;
FIG. 4 is a detailed flowchart of step S40 of FIG. 2 in a third embodiment of the defect detection method of the present application;
FIG. 5 is a flowchart illustrating steps before step S50 and step S50 of FIG. 2 according to a fourth embodiment of the defect detection method of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present application.
The terminal in the embodiment of the application is a defect detection device.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. Such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the terminal device is moved to the ear. Of course, the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a defect detection program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the defect detection program stored in the memory 1005 and perform the following operations:
carrying out image channel splicing on the original image and the template image to obtain a spliced multi-channel image;
cutting the spliced multi-channel image according to the user-defined region-of-interest frame to obtain a cut image;
performing data enhancement processing on the cut image to obtain training sample data;
training a neural network model according to the training sample data and storing the trained neural network model;
and for the image to be detected, outputting a detection result by using the trained neural network model.
Based on the above terminal hardware structure, various embodiments of the present application are provided.
The application provides a defect detection method.
Referring to fig. 2, in a first embodiment of a defect detection method, the method includes:
step S10, carrying out image channel splicing on the original image and the template image to obtain a spliced multi-channel image;
The original image and the template image are stitched along the image channel dimension. The template image is a standard PCB image free of defects, while the original image is an actually acquired PCB image that may contain defect regions. In current PCB inspection schemes the template image has been shown, both in theory and in practice, to carry a rich amount of information, and introducing it lets the algorithm see more. Compared with using the product's original image alone, combining the original image with the template image yields a higher defect recognition accuracy. In the prior art the two images are often differenced and the resulting difference image is processed as a new image. This is simple and intuitive, and the difference information between the two images is enough to identify large, block-like defect types on a PCB. However, the difference image retains only the differences between the two images and discards everything that is identical. Without the assistance of the original-image information around a difference, it is very difficult to confirm whether that difference is a defect or just a normal product with a small disturbance; identifying complex defects requires the information that the two images share. Both the shared information and the difference information therefore help defect recognition. By stitching the original image and the template image along the colour channels, two conventional 3-channel images are combined into one 6-channel image that keeps the shared information and the difference information of both images, and also exactly preserves the spatial position of that information, i.e. the height and width coordinates of each pixel, so that the network can extract abstract features with spatial relationships as well as difference information between channels. Optionally, the original image and the template image could instead be differenced or stitched side by side in the width direction, but the resulting effect is discounted to some extent: if the technical route of differencing the two images is adopted, the shared information in the images is lost, the recognition of complex defects, or of disturbed good products that depend on the background around a pixel, becomes poor, and recognition performance drops; and if the two images are stitched along the width, the pixel pair at the same position in the original and template images becomes two pixels with different width coordinates, the exact position information is lost, and the network's ability to recognise defects is reduced.
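A minimal sketch of the channel stitching described above, assuming the template has already been registered (aligned) to the original image; the function and variable names are illustrative, not part of the original disclosure.

```python
import numpy as np

def stitch_channels(original_bgr: np.ndarray, template_bgr: np.ndarray) -> np.ndarray:
    """Concatenate the acquired PCB image and its template along the channel axis,
    turning two H x W x 3 arrays into one H x W x 6 array that keeps the shared
    information, the difference information and the exact pixel positions."""
    if original_bgr.shape != template_bgr.shape:
        raise ValueError("original and template images must have the same size")
    return np.concatenate([original_bgr, template_bgr], axis=2)
```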
Step S20, according to the user-defined region of interest frame, the spliced multi-channel image is cut to obtain a cut image;
Typically, the PCB image of a product is large. If the whole image is fed into the network for training, the oversized input exhausts video memory and forces the batch size to be too small. Because the present invention models the task as a classification task, and the performance of a neural network on a classification task is closely related to the batch size, a batch size that is too small noticeably degrades the performance of the trained network. The video memory footprint of the neural network is roughly linearly proportional to the batch size, and this chain of relations makes an oversized input picture harmful to network performance. Moreover, in PCB defect detection most regions of an image may be unrelated to defects; if all the image information in the original image is fed into the network, the ratio of defect information to non-defect information is very small, the defect information is very likely drowned in the non-defect information, and the network wrongly relies on non-defect information to make its classification. In this application, the stitched multi-channel image generated by channel-stitching the original image and the template image is cropped with a user-defined region-of-interest frame to obtain different cropped images. The size of the user-defined region-of-interest frame can be adjusted according to the size of the defect region in the original image. Preferably, when cropping the stitched multi-channel image with the user-defined region-of-interest frame, the number of cropped images containing a defect region and the number containing no defect region can be kept close to 1:1. Cropping the stitched multi-channel image with the user-defined region-of-interest frame thus ensures a better batch size, balances the proportion of defect information in the input, and supplies the neural network with richer features, improving its performance on the defect detection task. Alternatively, the image could simply be split into equal tiles, but then the defects occupy an extremely small proportion of all tiles, which hurts the network's ability to recognise them, and more defects are cut into two parts, which makes the defect feature patterns more complex, increases the difficulty of recognition and reduces network performance.
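One possible way to realise the region-of-interest cropping and the roughly 1:1 balance between defect and non-defect crops is sketched below; the helper names, the clipping behaviour and the random choice of background centres are assumptions for illustration, and in practice the background crops would be filtered against the defect annotations.

```python
import numpy as np

def crop_roi(stacked: np.ndarray, center_yx, roi_h: int, roi_w: int) -> np.ndarray:
    """Cut a roi_h x roi_w window out of the stitched 6-channel image, centred on a
    pixel and clipped so the window stays inside the image."""
    h, w = stacked.shape[:2]
    cy, cx = center_yx
    y0 = int(np.clip(cy - roi_h // 2, 0, h - roi_h))
    x0 = int(np.clip(cx - roi_w // 2, 0, w - roi_w))
    return stacked[y0:y0 + roi_h, x0:x0 + roi_w]

def sample_training_crops(stacked, defect_centers, roi_h, roi_w, rng=None):
    """Pair every defect-centred crop with one random background crop so the
    defect / non-defect ratio stays close to 1:1."""
    rng = rng or np.random.default_rng()
    h, w = stacked.shape[:2]
    crops, labels = [], []
    for cy, cx in defect_centers:
        crops.append(crop_roi(stacked, (cy, cx), roi_h, roi_w))
        labels.append(1)                                   # contains a defect region
        ry, rx = int(rng.integers(0, h)), int(rng.integers(0, w))
        crops.append(crop_roi(stacked, (ry, rx), roi_h, roi_w))
        labels.append(0)                                   # assumed defect-free background
    return crops, labels
```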
Step S30, performing data enhancement processing on the cut image to obtain training sample data;
In the field of deep learning, data enhancement plays a very important role: it greatly expands the data space of the training set and increases the diversity of the data, so that the neural network can extract better features and its overall performance improves. The data enhancement methods used in this application mainly cover five techniques: translation, rotation, flipping, illumination enhancement and blurring. Translation and rotation are very important for improving the performance of the neural network. The flipping operation enriches the data samples, illumination enhancement derives more samples with different illumination intensities and so makes the network somewhat robust to illumination changes, and blurring gives the network some adaptability to both sharply focused and blurred pictures. All five enhancement modes can be used simultaneously, or only one or several of them may be used. The cropped images are processed with data enhancement to form the training sample data finally used to train the neural network model. Alternatively, instead of on-the-fly data enhancement, the pictures could be expanded offline into a larger data set; although a training set consisting of the original data set enlarged by a certain multiple (the number of training iterations) is theoretically equivalent to the data enhancement technique, the disk space occupied by the data is also enlarged by that multiple, which is uneconomical in real situations and therefore not well suited to practical use.
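A minimal sketch of the five enhancement operations on one 6-channel crop; the firing probabilities and parameter ranges are illustrative assumptions, the translation uses a simple wrap-around stand-in, and the rotation step assumes square crops so the shape is preserved.

```python
import cv2
import numpy as np

def augment(crop: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly apply translation, rotation, flipping, illumination change and blur
    to one square 6-channel crop; each operation fires with probability 0.5."""
    out = crop.copy()
    if rng.random() < 0.5:                                    # horizontal flip
        out = out[:, ::-1].copy()
    if rng.random() < 0.5:                                    # rotation by a multiple of 90 degrees
        out = np.rot90(out, k=int(rng.integers(1, 4))).copy()
    if rng.random() < 0.5:                                    # small translation (wrap-around stand-in)
        ty, tx = rng.integers(-8, 9, size=2)
        out = np.roll(out, shift=(int(ty), int(tx)), axis=(0, 1))
    if rng.random() < 0.5:                                    # illumination enhancement / dimming
        out = np.clip(out.astype(np.float32) * rng.uniform(0.7, 1.3), 0, 255).astype(crop.dtype)
    if rng.random() < 0.5:                                    # Gaussian blur, applied channel-wise
        out = cv2.GaussianBlur(out, (5, 5), 0)
    return out
```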
Step S40, training the neural network model according to the training sample data and storing the trained neural network model;
Training parameters of the neural network model, such as the number of iterations, the learning rate and the batch size, are set; the neural network model is trained on the obtained training sample data; and the trained model is saved, i.e. the neural network model is configured with the network parameters obtained by training. The training process can be completed on back-end equipment such as a cloud server; after training, the corresponding neural network model is stored in the cloud and is fetched from the cloud when it needs to be used.
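A hedged PyTorch sketch of this training step; the optimiser, the learning-rate schedule, the hyper-parameter values and the checkpoint file name are assumptions for illustration, not values specified by the application.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(model: nn.Module, crops: torch.Tensor, labels: torch.Tensor,
                epochs: int = 50, batch_size: int = 32, base_lr: float = 1e-3) -> nn.Module:
    """Train the binary (good / defective) classifier on 6-channel crops and save it.
    crops: float tensor of shape N x 6 x H x W; labels: long tensor of shape N."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    loader = DataLoader(TensorDataset(crops, labels), batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimiser, step_size=20, gamma=0.1)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimiser.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimiser.step()
        scheduler.step()                                    # learning rate decays as iterations grow
    torch.save(model.state_dict(), "defect_model.pth")      # persisted for later inference
    return model
```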
Step S50, outputting a detection result for the image to be detected by using the trained neural network model;
After the neural network has been trained, the image to be detected is processed with the saved neural network model. The image to be detected also has to be channel-stitched with the template image to obtain a 6-channel input image; the input image is then segmented into a series of sub-images, which are fed into the saved neural network model. The model performs inference and outputs the corresponding defect probability information, and the detection result for the PCB corresponding to the image to be detected, defective or non-defective, is determined from that defect probability information. A product that actually uses the method of the present application may contain only the devices involved in the inference process of step S50, with the training of the neural network model completed in the cloud. That is, steps S10 to S40 are the training process of the neural network model in the present application, and step S50 performs inference with the trained model. In an actual product, steps S10 to S50 may all be carried out continuously on the same product, or steps S10 to S40 and step S50 may be carried out on two different products; for example, the training process is completed on a cloud server while the on-site detection equipment is used only for inference, i.e. it obtains the trained neural network model directly from the cloud server for detection.
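A rough sketch of the inference path in step S50, assuming equal tiling and a softmax output whose second column is the defect probability; the tile size and threshold are illustrative, and padding of undersized edge tiles and the aggregation over several images of one board are omitted.

```python
import torch

@torch.no_grad()
def detect_board(model, stacked6: torch.Tensor, tile: int = 512, threshold: float = 0.5) -> bool:
    """Tile the 6 x H x W stitched input, run each tile through the classifier and
    report the board as defective if any tile's defect probability exceeds the threshold."""
    model.eval()
    _, h, w = stacked6.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = stacked6[:, y:min(y + tile, h), x:min(x + tile, w)].unsqueeze(0)
            prob = torch.softmax(model(patch), dim=1)[0, 1].item()   # column 1 = defect class
            if prob > threshold:
                return True        # at least one tile exceeds the threshold -> defective
    return False                   # every tile passed -> good product
```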
In this embodiment, image channel stitching is performed on an original image and a template image to obtain a stitched multi-channel image; the stitched multi-channel image is cropped according to a user-defined region-of-interest frame to obtain cropped images; data enhancement is applied to the cropped images to obtain training sample data; a neural network model is trained with the training sample data and the trained model is saved; and, for an image to be detected, a detection result is output with the trained neural network model. Because the images are processed by channel stitching, the user-defined region-of-interest frame and data enhancement before being fed into the neural network model for training, the model can capture sufficient and varied defect information, suitable supplementary background information and a suitable batch size when facing the defect detection problem, which improves its detection accuracy for defective products and reduces the probability that good products are detected as defective.
Further, referring to fig. 2 and 3, on the basis of the above-mentioned embodiments of the defect detecting method of the present application, a second embodiment of the defect detecting method is provided, in which,
step S20 includes:
step S21, determining the area size of the custom region-of-interest frame according to the defect area of the spliced multi-channel image;
The size of the user-defined region-of-interest frame is determined according to the size of the defect region in the stitched multi-channel image; that is, when the defect region is larger, the user-defined region-of-interest frame is enlarged correspondingly so that it covers the defect region information.
Step S22, selecting pixel points with preset number from the spliced multi-channel image;
A preset number of pixel points are selected from the stitched multi-channel image. If annotation information for a defect region in the multi-channel image exists, the preset number of pixel points are selected according to that annotation; if no such annotation exists, the pixel points are selected manually. The selection of pixel points may be random or manually controlled.
Step S23, cutting the spliced multi-channel image by taking the pixel point as an area center and a user-defined region-of-interest frame of the area size to obtain a cut image;
Taking each selected pixel point as the region centre, the stitched multi-channel image is cropped with the user-defined region-of-interest frame of the determined size, yielding a plurality of different cropped images. A cropped image may contain a defect region or only non-defect regions; preferably, the number ratio of cropped images containing defect regions to those containing none is controlled to be 1:1.
In this embodiment, the size of the custom region of interest frame is determined according to the size of the defect region, and the multi-channel image is intercepted by using the custom region of interest frame, so that the proportion of the defect region in the training data of the neural network model is controlled, and the training efficiency and the training accuracy are improved.
Further, referring to fig. 2 and 4, on the basis of the above-mentioned embodiments of the defect detecting method of the present application, a third embodiment of the defect detecting method is provided, in which,
step S40 includes:
step S41, setting training parameters of the neural network model;
Training parameters of the neural network model, such as the number of iterations, the learning rate and the batch size, are set. The learning rate determines how fast the parameters move towards their optimal values: if it is too large, the optimum is likely to be overshot; if it is too small, optimisation is inefficient and the algorithm may fail to converge for a long time. In general the learning rate can be adjusted dynamically during training, i.e. it is larger at the start and gradually decreases as the number of iterations grows. Because the video memory footprint of the neural network is roughly linearly proportional to the batch size, a suitable batch size has to be chosen according to the hardware capability of each device, so that the hardware runs normally while as much image information as possible is processed.
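The dynamic learning rate mentioned above can be illustrated with a simple step decay; the base rate, decay factor and step length here are assumptions, not values given in the application.

```python
def learning_rate(epoch: int, base_lr: float = 1e-3, decay: float = 0.1, step: int = 20) -> float:
    """Step decay: keep the rate large at the start of training and shrink it
    by `decay` every `step` epochs as the iteration count grows."""
    return base_lr * (decay ** (epoch // step))
```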
Step S42, inputting the sample data into the neural network model, training the neural network model according to the training parameters, and obtaining network parameters of the neural network;
The acquired sample data are input into the neural network model, and the model is trained according to the set training parameters such as the number of iterations, the learning rate and the batch size. The training process of the neural network model ends when the specified number of iterations is reached; during the iterations the model learns at the set learning rate so as to approach an optimal solution, while multiple input images are processed together according to the batch size. The network parameters of the neural network model are obtained through this training.
Step S43, setting and storing the trained neural network model according to the network parameters;
After the training process finishes, the neural network model is configured with the network parameters obtained by training and is saved; the saved neural network model can later be used to detect images to be detected.
In this embodiment, the training parameters of the neural network model are set, and the neural network model is trained according to the training parameters, so that the accuracy of the training process is ensured, and the trained neural network model is stored for later use.
Further, referring to fig. 2 and 5, on the basis of the above-mentioned embodiments of the defect detecting method of the present application, a fourth embodiment of the defect detecting method is provided, in which,
step S50 is preceded by:
step S51, carrying out image channel splicing on the image to be detected and the template image to obtain a multi-channel input image to be detected;
Similarly to the training process of the neural network model, the image to be detected also has to be channel-stitched with the template image, turning the 3-channel image into a 6-channel image. In a conventional 3-channel image each pixel position (width and height) corresponds to 3 colour values; after image channel stitching, each pixel position in the 6-channel image corresponds to 6 channel values. Channel-stitching the image to be detected with the template image yields the multi-channel input image to be detected.
Step S50 includes:
step S52, segmenting the multi-channel input image to be detected according to a preset resolution;
The multi-channel input image to be detected obtained after channel stitching is large, so it has to be segmented; without segmentation the input to the neural network model may be too large, the hardware cannot support it and the model cannot be used normally to output the corresponding processing result. The multi-channel input image to be detected can be segmented with region-of-interest frames placed according to suspicious defect region information in the image; if no such suspicious defect region information exists, the multi-channel image to be detected can simply be divided into equal parts.
Step S53, inputting the segmented multichannel input image to be detected into the trained data network model to obtain defect probability information;
The segmented multi-channel images to be detected are input into the saved trained neural network. In this application the selected neural network model is a classification network, and the task is modelled as binary classification. The network is based on ResNet50, so for each input multi-channel image to be detected the trained neural network performs inference and outputs the corresponding defect probability information.
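A sketch of how a ResNet50-based binary classifier could be adapted to the 6-channel input, using torchvision; widening the first convolution in this way and training from scratch are assumed implementation details, not something the application specifies.

```python
import torch.nn as nn
from torchvision.models import resnet50

def build_defect_net(in_channels: int = 6, num_classes: int = 2) -> nn.Module:
    """ResNet50 backbone with the first convolution widened to accept the 6-channel
    stitched input and the final layer replaced by a 2-way good/defective head."""
    net = resnet50(weights=None)       # trained from scratch on the PCB crops
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net
```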
Step S54, outputting a detection result according to the defect probability information;
The defect probability of each image is compared with a preset threshold. When the defect probability is greater than the preset threshold, it is judged that a defect region exists in the currently input image to be detected, i.e. the PCB under test is a defective product; if the defect probability is less than or equal to the preset threshold, it is judged that no defect region exists and the detection result is a good product. For an object under test from which several images to be detected are fed into the neural network model, the object is finally judged to be good only when the defect probabilities of all its images are less than or equal to the preset threshold, i.e. all the images are good. The preset threshold should strike a good balance between the miss rate and the over-detection rate.
In this embodiment, the saved trained neural network model is used to run inference on the image to be detected and output the corresponding defect probability, the category of the object under test is judged from that probability, and using the trained neural network model for inference improves the accuracy of defect detection.
Further, on the basis of the above-mentioned embodiments of the defect detecting method of the present application, there is provided a fifth embodiment of the defect detecting method, in which,
step S52 includes:
step A1, if the multichannel input image to be detected has suspicious defect position information, segmenting the multichannel input image to be detected through segmenting a region frame;
step A2, if the multichannel input image to be detected does not have suspicious defect position information, segmenting the multichannel input image to be detected by equally dividing or increasing segmentation lines;
If the multi-channel input image to be detected contains suspicious defect position information, i.e. information about regions where defects may exist, the suspicious defect regions are cut out and segmented with region frames, ensuring that they are inspected; if no suspicious defect position information exists, the multi-channel image to be detected is segmented into equal parts or with additional, randomly placed segmentation lines, which preserves randomness in the detection process and also helps improve the detection accuracy of the neural network model.
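The two segmentation strategies can be combined in a small planning helper; the box format, tile size and function name are assumptions for illustration.

```python
def plan_tiles(height: int, width: int, tile: int = 512, suspect_boxes=None):
    """Return the windows to feed to the network: the suspicious-region boxes if an
    upstream check supplied any, otherwise a regular grid of equal tiles."""
    if suspect_boxes:                          # boxes given as (y0, x0, y1, x1)
        return list(suspect_boxes)
    tiles = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            tiles.append((y, x, min(y + tile, height), min(x + tile, width)))
    return tiles
```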
In this embodiment, when a multi-channel image to be detected is inspected, it is segmented with region frames if suspicious defect location information exists, and segmented randomly if no such information exists.
Further, on the basis of the above-mentioned embodiments of the defect detecting method of the present application, there is provided a sixth embodiment of the defect detecting method, in which,
step S54 includes:
step B1, if the defect probability information of the segmented image to be detected has defect probability larger than a preset threshold value, outputting the image to be detected as a defective product;
step B2, when the defect probability information of the segmented image to be detected is smaller than a preset threshold value, outputting the image to be detected as a good product;
the method comprises the steps of segmenting a multi-channel image to be detected, inputting the segmented multi-channel image into a neural network model to obtain a defect probability reasoning result, if an image with a probability larger than a preset threshold value exists in the defect probability of the segmented image, indicating that a defect area exists, correspondingly, judging that the image to be detected is a defective product, and when the defect probability corresponding to all the segmented images is smaller than or equal to the preset threshold value, namely, the segmented images are judged to be good products, judging that an object to be detected corresponding to the image to be detected is a good product.
In the embodiment, the detection result of the object to be detected is obtained according to the defect probability, so that the detection result can be more flexibly and accurately obtained.
Further, on the basis of the above-described embodiments of the defect detecting method of the present application, there is provided a seventh embodiment of the defect detecting method, in which,
the defect detection method is applied to the defect detection of the PCB. Namely, the neural network model is used for detecting the defects of the PCB. The method comprises the steps of obtaining an original image of a PCB, splicing the original image and a template image through image channels to obtain a spliced multi-channel image with 6 channels, then cutting the spliced multi-channel image according to a user-defined region-of-interest frame, meanwhile, carrying out operations such as translation, rotation, overturning, blurring and illumination enhancement on the cut image to obtain training sample data with various data, setting training parameters such as iteration parameters, learning rate and batch processing size for a neural network model, and training by using the training sample data to obtain the neural network model for PCB defect detection so as to finish a training stage for the neural network model. Acquiring an image to be detected of the PCB to be detected, splicing the image to be detected and the template image by image channels to obtain a multi-channel input image to be detected of 6 channels, a series of input image data of a multi-channel input image to be detected, which is segmented into preset resolution, is input into a trained neural network model, acquiring inference result of defect probability information, comparing the defect probability with a preset threshold value, if the defect probability is greater than the preset threshold value, judging the PCB to be detected as defective, if the defect probability is less than a preset threshold, judging the PCB to be detected as good, and meanwhile, for the PCB with a plurality of images to be detected input to the neural network model for reasoning, judging that the PCB to be detected is a good product only when the defect probability of all the images to be detected is smaller than a preset threshold value, and otherwise, judging that the PCB to be detected is a defective product.
In addition, an embodiment of the present application further provides a defect detecting apparatus, where the defect detecting apparatus includes:
the splicing module is used for splicing the original image and the template image through image channels to obtain a spliced multi-channel image;
the cutting module is used for cutting the spliced multi-channel image according to the user-defined region-of-interest frame to obtain a cut image;
the data enhancement module is used for carrying out data enhancement processing on the cut image to obtain training sample data;
the training module is used for training the neural network model according to the training sample data and storing the trained neural network model;
and the detection module is used for outputting a detection result for the image to be detected by utilizing the trained neural network model.
Optionally, the cutting module is further configured to:
determining the area size of the self-defined region-of-interest frame according to the defect area of the spliced multi-channel image;
selecting a preset number of pixel points from the spliced multi-channel image;
and cutting the spliced multi-channel image by taking the pixel points as area centers and using the user-defined region-of-interest frame of the area size to obtain a cut image.
Optionally, the training module is further configured to:
setting training parameters of a neural network model;
inputting the sample data into the neural network model, training the neural network model according to the training parameters, and acquiring network parameters of the neural network;
and setting and storing the trained neural network model according to the network parameters.
Optionally, the splicing module is further configured to:
and carrying out image channel splicing on the image to be detected and the template image to obtain a multi-channel input image to be detected.
Optionally, the detection module is further configured to:
segmenting the multi-channel input image to be detected according to a preset resolution;
inputting the segmented multichannel input image to be detected into the trained neural network model to obtain defect probability information;
and outputting a detection result according to the defect probability information.
Optionally, the cutting module is further configured to:
if the multichannel input image to be detected has suspicious defect position information, segmenting the multichannel input image to be detected through a segmentation region frame;
and if the multichannel input image to be detected does not have suspicious defect position information, segmenting the multichannel input image to be detected by equally dividing or increasing segmentation lines.
Optionally, the detection module is further configured to:
if the defect probability information of the segmented multi-channel input image to be detected has defect probability larger than a preset threshold value, outputting the image to be detected as a defective product;
and when the defect probability information of the segmented multi-channel input image to be detected is smaller than a preset threshold value, outputting the image to be detected as a good product.
It should be additionally noted that, for the defect detection apparatus in the present application, the on-site physical apparatus may include only the module related to inference (the detection module), i.e. it executes only the method steps related to the inference process, while the training of the neural network model is completed in the cloud; when the trained neural network model needs to be used, the corresponding information is simply acquired from the cloud.
The specific implementation of the apparatus and the readable storage medium (i.e., the computer readable storage medium) of the present application is basically the same as the embodiments of the defect detection method, and will not be described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A defect detection method, comprising:
carrying out image channel splicing on the original image and the template image to obtain a spliced multi-channel image;
cutting the spliced multi-channel image according to the user-defined region-of-interest frame to obtain a cut image;
performing data enhancement processing on the cut image to obtain training sample data;
training a neural network model according to the training sample data and storing the trained neural network model;
and for the image to be detected, outputting a detection result by using the trained neural network model.
2. The defect detection method of claim 1, wherein the step of cropping the stitched multi-channel image according to the custom region of interest frame to obtain a cropped image comprises:
determining the area size of the self-defined region-of-interest frame according to the defect area of the spliced multi-channel image;
selecting a preset number of pixel points from the spliced multi-channel image;
and cutting the spliced multi-channel image by taking the pixel points as area centers and using the user-defined region-of-interest frame of the area size to obtain a cut image.
3. The defect detection method of claim 1, wherein the step of training the neural network model according to the training sample data and saving the trained neural network model comprises:
setting training parameters of a neural network model;
inputting the sample data into the neural network model, training the neural network model according to the training parameters, and acquiring network parameters of the neural network;
and setting and storing the trained neural network model according to the network parameters.
4. The defect detection method of claim 1, wherein the step of outputting the detection result using the trained neural network model for the image to be detected comprises:
and carrying out image channel splicing on the image to be detected and the template image to obtain a multi-channel input image to be detected.
5. The defect detection method of claim 4, wherein the step of outputting the detection result using the trained neural network model for the image to be detected comprises:
segmenting the multi-channel input image to be detected according to a preset resolution;
inputting the segmented multichannel input image to be detected into the trained neural network model to obtain defect probability information;
and outputting a detection result according to the defect probability information.
6. The defect detection method of claim 5, wherein the step of segmenting the multi-channel input image to be detected according to a preset resolution comprises:
if the multichannel input image to be detected has suspicious defect position information, segmenting the multichannel input image to be detected through a segmentation region frame;
and if the multichannel input image to be detected does not have suspicious defect position information, segmenting the multichannel input image to be detected by equally dividing or increasing segmentation lines.
7. The defect detection method of claim 5, wherein the step of outputting the detection result based on the defect probability information comprises:
if the defect probability information of the segmented multi-channel input image to be detected has defect probability larger than a preset threshold value, outputting the image to be detected as a defective product;
and when the defect probability information of the segmented multi-channel input image to be detected is smaller than a preset threshold value, outputting the image to be detected as a good product.
8. A defect detection device, characterized in that the defect detection device comprises:
the stitching module is used for carrying out image channel stitching on the original image and the template image to obtain a stitched multi-channel image;
the cropping module is used for cropping the stitched multi-channel image according to the custom region-of-interest frame to obtain a cropped image;
the data enhancement module is used for performing data enhancement processing on the cropped image to obtain training sample data;
the training module is used for training the neural network model according to the training sample data and saving the trained neural network model;
and the detection module is used for outputting a detection result for the image to be detected by using the trained neural network model.
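For orientation only, the five modules of claim 8 might map onto a single class skeleton as below; the class name and method signatures are hypothetical.

class DefectDetector:
    # Hypothetical decomposition mirroring the device of claim 8.
    def stitch(self, original, template): ...   # stitching module
    def crop(self, stitched, roi_frame): ...    # cropping module
    def augment(self, patches): ...             # data enhancement module
    def train(self, samples): ...               # training module
    def detect(self, to_detect, template): ...  # detection module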
9. Defect detection equipment, characterized in that the defect detection equipment comprises: a memory, a processor, and a defect detection program stored on the memory and executable on the processor, the defect detection program, when executed by the processor, implementing the steps of the defect detection method according to any one of claims 1 to 7.
10. A readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the defect detection method according to any one of claims 1 to 7.
CN202010405374.7A 2020-05-13 2020-05-13 Defect detection method, device, equipment and readable storage medium Active CN111598863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010405374.7A CN111598863B (en) 2020-05-13 2020-05-13 Defect detection method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010405374.7A CN111598863B (en) 2020-05-13 2020-05-13 Defect detection method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111598863A true CN111598863A (en) 2020-08-28
CN111598863B CN111598863B (en) 2023-08-22

Family

ID=72182428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010405374.7A Active CN111598863B (en) 2020-05-13 2020-05-13 Defect detection method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111598863B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6317513B2 (en) * 1996-12-19 2001-11-13 Cognex Corporation Method and apparatus for inspecting solder paste using geometric constraints
AU2001213525B2 (en) * 2000-10-30 2008-04-10 Landmark Graphics Corporation System and method for analyzing and imaging three-dimensional volume data sets
WO2018000731A1 (en) * 2016-06-28 2018-01-04 华南理工大学 Method for automatically detecting curved surface defect and device thereof
CN206470220U (en) * 2016-12-24 2017-09-05 大连日佳电子有限公司 Circuit board detecting system
US20180211373A1 (en) * 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection
WO2019233166A1 (en) * 2018-06-04 2019-12-12 杭州海康威视数字技术股份有限公司 Surface defect detection method and apparatus, and electronic device
CN109389599A (en) * 2018-10-25 2019-02-26 北京阿丘机器人科技有限公司 A kind of defect inspection method and device based on deep learning
CN110136130A (en) * 2019-05-23 2019-08-16 北京阿丘机器人科技有限公司 A kind of method and device of testing product defect
CN110189336A (en) * 2019-05-30 2019-08-30 上海极链网络科技有限公司 Image generating method, system, server and storage medium
CN111091127A (en) * 2019-12-16 2020-05-01 腾讯科技(深圳)有限公司 Image detection method, network model training method and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李天宇: "基于机器视觉的PCB元器件在线检测" *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112135048B (en) * 2020-09-23 2022-02-15 创新奇智(西安)科技有限公司 Automatic focusing method and device for target object
CN112135048A (en) * 2020-09-23 2020-12-25 创新奇智(西安)科技有限公司 Automatic focusing method and device for target object
CN112461846B (en) * 2020-11-26 2024-02-23 常州微亿智造科技有限公司 Workpiece defect detection method and device
CN112461846A (en) * 2020-11-26 2021-03-09 常州微亿智造科技有限公司 Workpiece defect detection method and device
CN112967187A (en) * 2021-02-25 2021-06-15 深圳海翼智新科技有限公司 Method and apparatus for target detection
CN112967187B (en) * 2021-02-25 2024-05-31 深圳海翼智新科技有限公司 Method and apparatus for target detection
CN113256607A (en) * 2021-06-17 2021-08-13 常州微亿智造科技有限公司 Defect detection method and device
CN114596263A (en) * 2022-01-27 2022-06-07 阿丘机器人科技(苏州)有限公司 Deep learning mainboard appearance detection method, device, equipment and storage medium
CN114596263B (en) * 2022-01-27 2024-08-02 阿丘机器人科技(苏州)有限公司 Deep learning mainboard appearance detection method, device, equipment and storage medium
CN115031363A (en) * 2022-05-27 2022-09-09 约克广州空调冷冻设备有限公司 Method and device for predicting performance of air conditioner
CN115031363B (en) * 2022-05-27 2023-11-28 约克广州空调冷冻设备有限公司 Method and device for predicting air conditioner performance
CN115661155A (en) * 2022-12-28 2023-01-31 北京阿丘机器人科技有限公司 Defect detection model construction method, device, equipment and storage medium
CN117367023A (en) * 2023-10-25 2024-01-09 广东鑫焱智能设备科技有限公司 Energy consumption control method, system, equipment and storage medium for refrigerated cabinet
CN117367023B (en) * 2023-10-25 2024-06-04 广东鑫焱智能设备科技有限公司 Energy consumption control method, system, equipment and storage medium for refrigerated cabinet

Also Published As

Publication number Publication date
CN111598863B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN111598863A (en) Defect detection method, device, equipment and readable storage medium
CN111179253B (en) Product defect detection method, device and system
CN108009543B (en) License plate recognition method and device
US7409081B2 (en) Apparatus and computer-readable medium for assisting image classification
CN114266773B (en) Display panel defect positioning method, device, equipment and storage medium
CN111930622B (en) Interface control testing method and system based on deep learning
CN113538392B (en) Wafer detection method, wafer detection equipment and storage medium
CN112767366A (en) Image recognition method, device and equipment based on deep learning and storage medium
CN111079638A (en) Target detection model training method, device and medium based on convolutional neural network
CN114549390A (en) Circuit board detection method, electronic device and storage medium
CN112767354A (en) Defect detection method, device and equipment based on image segmentation and storage medium
CN113628179B (en) PCB surface defect real-time detection method, device and readable medium
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN111598084B (en) Defect segmentation network training method, device, equipment and readable storage medium
CN115661160B (en) Panel defect detection method, system, device and medium
CN113222913A (en) Circuit board defect detection positioning method and device and storage medium
CN113793323A (en) Component detection method, system, equipment and medium
CN112070762A (en) Mura defect detection method and device for liquid crystal panel, storage medium and terminal
CN117871545A (en) Method and device for detecting defects of circuit board components, terminal and storage medium
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN114677567A (en) Model training method and device, storage medium and electronic equipment
CN113284113B (en) Glue overflow flaw detection method, device, computer equipment and readable storage medium
CN113127349B (en) Software testing method and system
CN110310341B (en) Method, device, equipment and storage medium for generating default parameters in color algorithm
CN115631197B (en) Image processing method, device, medium, equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant