CN110598761A - Dispensing detection method and device and computer readable storage medium - Google Patents

Dispensing detection method and device and computer readable storage medium

Info

Publication number
CN110598761A
Authority
CN
China
Prior art keywords
dispensing
training
dispensing detection
image
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910789705.9A
Other languages
Chinese (zh)
Inventor
汤其剑
海涵
王启垒
彭翔
廖美华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201910789705.9A
Publication of CN110598761A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

According to the dispensing detection method, the dispensing detection device and the computer readable storage medium disclosed by the embodiment of the invention, firstly, a preset training set image is obtained, and an original label file obtained by marking a dispensing area on the training set image is obtained; then, forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model; and finally, inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected. Through the implementation of the invention, the power supply module is subjected to dispensing detection through the neural network model trained by the deep learning algorithm, so that the working efficiency of dispensing detection and the accuracy of the detection result are effectively improved, and the quality of the power supply module delivered from a factory is better ensured.

Description

Dispensing detection method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of quality inspection, in particular to a dispensing detection method and device and a computer readable storage medium.
Background
During the production and processing of a power module, the power module needs to undergo a glue dispensing operation; for example, the magnetic column on the power module is dispensed with glue, and the dispensed magnetic column is then packaged. The quality of the dispensing operation has a great influence on the product quality of the power module.
In order to ensure the product quality of the produced power module, in practical application, quality detection needs to be performed on the dispensing area of the power module so as to prevent defective products from leaving the factory. At present, dispensing detection of power modules is usually performed manually: dedicated quality inspection personnel check whether the dispensing quality of each power module is qualified. However, the manual approach is generally inefficient, prone to omission and misjudgment, and cannot fully guarantee the accuracy of dispensing detection.
Disclosure of Invention
The embodiments of the present invention mainly aim to provide a method and an apparatus for dispensing detection, and a computer-readable storage medium, which can at least solve the problems of low operation efficiency and insufficient accuracy of detection results caused by manual dispensing detection of a power module in the related art.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides a dispensing detection method, including:
acquiring a preset training set image, and marking a dispensing area on the training set image to obtain an original label file;
adopting the training set images and the corresponding original label files to form training samples, and inputting the training samples into a neural network for training to obtain a dispensing detection model;
and inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected.
In order to achieve the above object, a second aspect of the embodiments of the present invention provides a dispensing detection apparatus, including:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a preset training set image and an original label file obtained by marking a dispensing area on the training set image;
the training module is used for forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model;
and the detection module is used for inputting the image of the power module to be detected to the dispensing detection model and outputting a dispensing detection result corresponding to the power module to be detected.
To achieve the above object, a third aspect of embodiments of the present invention provides an electronic apparatus, including: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement any of the steps of the dispensing detection method.
In order to achieve the above object, a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing one or more programs, where the one or more programs are executable by one or more processors to implement the steps of any one of the above dispensing detection methods.
According to the dispensing detection method, the dispensing detection device and the computer readable storage medium provided by the embodiment of the invention, firstly, a preset training set image is obtained, and an original label file obtained by marking a dispensing area on the training set image is obtained; then, forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model; and finally, inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected. Through the implementation of the invention, the power supply module is subjected to dispensing detection through the neural network model trained by the deep learning algorithm, so that the working efficiency of dispensing detection and the accuracy of the detection result are effectively improved, and the quality of the power supply module delivered from a factory is better ensured.
Other features and corresponding effects of the present invention are set forth in the following portions of the specification, and it should be understood that at least some of the effects are apparent from the description of the present invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic basic flow chart of a dispensing detection method according to a first embodiment of the present invention;
FIG. 2 is a diagram illustrating an input/output format of a neural network according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a basic flow chart of a neural network training method according to a first embodiment of the present invention;
fig. 4 is a schematic basic flow chart of a method for testing a dispensing detection model according to a first embodiment of the present invention;
fig. 5 is a schematic structural view of a dispensing detection device according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment:
In order to solve the technical problems in the related art that manual dispensing detection of the power module leads to low operation efficiency and insufficiently guaranteed accuracy of the detection result, the present embodiment provides a dispensing detection method. Fig. 1 is a basic flow diagram of the dispensing detection method provided by the present embodiment, which includes the following steps:
step 101, obtaining a preset training set image, and marking a dispensing area on the training set image to obtain an original label file.
Specifically, the neural network is trained under a supervised learning framework, so training samples need to be obtained in this embodiment and the neural network is trained on different training samples. Each training sample comprises a training set image and an original label file corresponding to that image: the training set image is a power module image used for training the neural network, and the original label file represents the attributes of the dispensing area on the power module. The original label file can be obtained by manual labeling with open-source software such as labelImg, and is an xml file.
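As a point of reference, labelImg saves annotations in Pascal VOC-style XML. The sketch below shows one way such an original label file could be read; Python and the class names used here are illustrative assumptions rather than terms from this application.

```python
# Minimal sketch (assumption): read the dispensing-area annotations from one
# labelImg (Pascal VOC-style) XML label file. The class names "qualified" /
# "unqualified" are illustrative placeholders.
import xml.etree.ElementTree as ET

def read_label_file(xml_path):
    """Return a list of (class_name, xmin, ymin, xmax, ymax) tuples, one per
    annotated dispensing area in the XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text                 # e.g. "qualified" / "unqualified"
        bb = obj.find("bndbox")
        xmin = int(float(bb.find("xmin").text))
        ymin = int(float(bb.find("ymin").text))
        xmax = int(float(bb.find("xmax").text))
        ymax = int(float(bb.find("ymax").text))
        boxes.append((name, xmin, ymin, xmax, ymax))
    return boxes
```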
It should be noted that, in order to ensure the accuracy of the subsequently trained model, the acquired training set images may be scaled and denoised; and in order to prevent overfitting caused by too few samples, automatic expansion processing may be carried out so that additional training set images are generated from the existing ones, as sketched below.
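The scaling, denoising, and expansion operations are not specified further; the sketch below illustrates one common OpenCV-based realisation, and the concrete operations and parameter values are assumptions.

```python
# Hedged sketch of the preprocessing and automatic expansion mentioned above.
# The specific choices (Gaussian blur, horizontal flip, brightness shift, noise)
# are assumptions; the text only calls for scaling, denoising, and expansion.
import cv2
import numpy as np

def preprocess(image, size=(448, 448)):
    """Scale and denoise one training set image."""
    image = cv2.resize(image, size)
    return cv2.GaussianBlur(image, (3, 3), 0)        # simple denoising

def expand(image):
    """Generate extra training images from an existing one to limit overfitting."""
    flipped = cv2.flip(image, 1)                                # horizontal flip
    brighter = cv2.convertScaleAbs(image, alpha=1.0, beta=30)   # brightness shift
    noisy = np.clip(image.astype(np.int16)
                    + np.random.randint(-10, 10, image.shape), 0, 255).astype(np.uint8)
    return [flipped, brighter, noisy]
```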
In an optional implementation manner of this embodiment, the manner of acquiring the preset training set image includes, but is not limited to, the following two ways:
the method comprises the steps that firstly, a first number of images are continuously acquired under different image acquisition conditions for the same power supply module; acquiring a preset second number of images from the acquired images to be used as training set images; wherein the second number is less than or equal to the first number.
Specifically, in practical application, training set images can be acquired automatically, that is, a plurality of images of the same power module can be continuously captured by an image acquisition device. It should be understood that these images should be captured under different image acquisition conditions, which in this embodiment may relate to the acquisition angle/orientation of the power module, the defocus parameters of the image acquisition device, and the like. It should also be noted that this embodiment may use all of the acquired images as training set images, or may select only a part of them as training set images and use another part as test set images.
And in the second mode, a preset number of images are acquired from a preset image database to serve as training set images.
Specifically, in another implementation of this embodiment, power module images may instead be exported directly from an existing image database and used as training set images.
And 102, forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model.
Specifically, the dispensing detection in the power module image to be detected is realized based on a deep learning algorithm in the embodiment, wherein the adopted neural network may include any one of a Deep Neural Network (DNN), a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). In this embodiment, based on the constructed training samples, a certain optimization algorithm is adopted to perform neural network training in a specific training environment, wherein the learning rate and the training times during training can be determined according to actual requirements, and are not limited uniquely herein.
In an optional implementation manner of this embodiment, inputting the training samples into the neural network for training includes: inputting the training sample into a YOLO convolution neural network for training; the output of the YOLO convolutional neural network is S × S grids, and each grid predicts B bounding boxes that may include dispensing regions. Correspondingly, the tag file needs to include: the method comprises the steps of obtaining category information p of a dispensing area, confidence C of the category of the dispensing area and boundary frame parameters of a boundary frame where the dispensing area is located, wherein the boundary frame parameters are represented as (x, y, w, h), x and y are position offset of the center of the boundary frame relative to the upper left corner of a grid where the boundary frame is located, and w and h are width and height of the boundary frame.
Specifically, YOLO (You Only Look Once) is a relatively new target detection method developed on the basis of CNNs. When used for dispensing detection, it integrates dispensing area position prediction and dispensing quality category prediction into a single neural network model, so that dispensing detection and identification can be performed rapidly and with high accuracy. The YOLO method unifies the dispensing detection pipeline into a single neural network, which uses the whole image information to predict the bounding box of the dispensing area and at the same time identifies the dispensing quality category (whether the dispensing in the dispensing area is qualified or unqualified), thereby realizing an end-to-end real-time dispensing detection task.
In this embodiment, the YOLO convolutional neural network model is used to perform dispensing detection because YOLO prediction has a simple pipeline, high speed, and a high detection rate. YOLO-based detection is a one-stage detection method: the input training set image is divided into an S × S grid, and each grid cell is responsible for predicting the dispensing areas that fall into it, i.e. if the center coordinate of a dispensing area falls into a certain cell, that cell is responsible for predicting that dispensing area, and each cell predicts B bounding boxes. It should be understood that the bounding box parameters in this embodiment characterize the location of the dispensing area.
As shown in fig. 2, before the training set images and label files are input to the neural network in one-to-one correspondence, the training set image may be resized to 448 × 448 in this embodiment. The network then outputs a 7 × 7 grid (that is, S = 7) with 7 channels per cell, so each cell holds 7 elements: the first two elements are the category information YES or NO, i.e. whether the dispensing of a dispensing region is qualified; the next element is the confidence of the predicted bounding box (a value between 0 and 1); and the last four elements are the bounding box parameters (x, y, w, h).
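As a concrete illustration of this 7 × 7 × 7 layout, the sketch below encodes one labelled dispensing area into a training target with S = 7 and B = 1; the channel order follows the description above, while Python and the helper details are assumptions.

```python
# Hedged sketch: build the S x S x 7 training target described above from
# (class_name, xmin, ymin, xmax, ymax) annotations such as those returned by
# read_label_file(). Channel order: [class YES, class NO, confidence, x, y, w, h].
import numpy as np

S = 7  # grid size; with a 448 x 448 input each cell covers 64 x 64 pixels

def encode_target(boxes, img_w=448, img_h=448):
    target = np.zeros((S, S, 7), dtype=np.float32)
    for name, xmin, ymin, xmax, ymax in boxes:
        is_ok = (name == "qualified")                 # illustrative class name
        cx = (xmin + xmax) / 2.0 / img_w              # box centre normalised to [0, 1]
        cy = (ymin + ymax) / 2.0 / img_h
        w = (xmax - xmin) / float(img_w)
        h = (ymax - ymin) / float(img_h)
        col = min(int(cx * S), S - 1)                 # cell responsible for this box
        row = min(int(cy * S), S - 1)
        x_off = cx * S - col                          # centre offset inside the cell
        y_off = cy * S - row
        target[row, col, 0] = 1.0 if is_ok else 0.0   # dispensing qualified (YES)
        target[row, col, 1] = 0.0 if is_ok else 1.0   # dispensing unqualified (NO)
        target[row, col, 2] = 1.0                     # confidence: area present
        target[row, col, 3:7] = [x_off, y_off, w, h]
    return target
```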
In an optional implementation manner of this embodiment, the YOLO convolutional neural network includes 24 convolutional layers and 2 fully-connected layers, the activation function of the convolutional layers and the fully-connected layers is the Leaky ReLU function, the convolutional layers are used to extract image features of the dispensing regions in the training set images, and the fully-connected layers are used to predict the bounding box parameters of the dispensing regions.
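The exact 24-layer architecture is not reproduced here; the following is only a heavily simplified sketch (PyTorch is an assumption) of the overall shape implied by the description: a convolutional backbone with Leaky ReLU activations followed by two fully-connected layers producing the 7 × 7 × 7 output.

```python
# Heavily simplified sketch (assumption: PyTorch; the real network has 24
# convolutional layers and 2 fully-connected layers, not reproduced here).
import torch
import torch.nn as nn

class TinyDispensingYolo(nn.Module):
    def __init__(self, S=7, channels=7):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                                 nn.LeakyReLU(0.1))
        # 448 -> 224 -> 112 -> 56 -> 28 -> 14 -> 7 after six stride-2 convolutions
        self.backbone = nn.Sequential(block(3, 16), block(16, 32), block(32, 64),
                                      block(64, 128), block(128, 256), block(256, 512))
        # two fully-connected layers predicting the per-cell outputs
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(512 * S * S, 1024),
                                  nn.LeakyReLU(0.1),
                                  nn.Linear(1024, S * S * channels))
        self.S, self.channels = S, channels

    def forward(self, x):                             # x: (N, 3, 448, 448)
        out = self.head(self.backbone(x))
        return out.view(-1, self.S, self.S, self.channels)
```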
As shown in fig. 3, which is a schematic flow chart of the neural network training method provided in this embodiment, optionally, inputting the training sample to the neural network for training, and obtaining the dispensing detection model specifically includes the following steps:
step 301, inputting a training sample into a YOLO convolutional neural network for training to obtain a prediction label file actually output by the iterative training;
step 302, comparing the predicted tag file with the corresponding original tag file by using a preset loss function;
step 303, judging whether the comparison result meets a preset convergence condition; if yes, executing the step 304, otherwise, returning to the step 301, and continuing to carry out iterative training in a circulating manner;
and step 304, determining the network model obtained by the iterative training as the trained dispensing detection model.
Specifically, in this embodiment the training process is repeated many times for iterative optimization: a loss function is computed between the output predicted by the neural network in each training iteration and the corresponding original label file data, and parameters such as the network weights are then adjusted to reduce the loss function value of the next iteration. When the loss function value meets the preset standard, the model convergence condition is considered satisfied, i.e. the training of the whole deep neural network model is complete.
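As an illustration of this loop, the sketch below assumes a PyTorch setup with a data loader yielding (image, target) pairs encoded as above and a yolo_loss function implementing the loss described next; the optimizer, learning rate, and epoch count are placeholder assumptions.

```python
# Minimal training-loop sketch (assumptions: PyTorch, the TinyDispensingYolo
# model above, a DataLoader of (image, target) pairs, and a yolo_loss()
# implementing the loss function below).
import torch

def train(model, loader, yolo_loss, epochs=100, lr=1e-3, device="cpu"):
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            predictions = model(images)               # predicted label data
            loss = yolo_loss(predictions, targets)    # compare with original labels
            optimizer.zero_grad()
            loss.backward()                           # adjust weights to reduce the loss
            optimizer.step()
    return model
```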
In an alternative implementation of this embodiment, the loss function is expressed as follows:
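Presumably this is the standard YOLOv1 loss of Redmon et al. (cited among the non-patent references), reproduced here for reference; its two confidence sums together form the single confidence error term described next:

```latex
L = \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}
      \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]
  + \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}
      \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2
           + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right]
  + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left(C_i - \hat{C}_i\right)^2
  + \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}}
      \left(C_i - \hat{C}_i\right)^2
  + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}}
      \left(p_i(c) - \hat{p}_i(c)\right)^2
```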
in this embodiment, B may be 1, a first addend term on the right side of the equation is an error term at the center of the bounding box, a second addend term is an error term at the width of the bounding box, a third addend term is a confidence error term, and a fourth addend term is a category error term.
As shown in fig. 4, which is a schematic flow chart of the method for testing a dispensing detection model provided in this embodiment, optionally, after the dispensing detection model is obtained, the method further includes the following steps:
step 401, acquiring a preset test set image, and marking a dispensing area on the test set image to obtain an original label file;
step 402, inputting a test set image into a dispensing detection model to obtain a test output label file;
step 403, calculating the correlation between the test output label file and the corresponding original label file;
and step 404, when the calculated correlation degree is greater than a preset correlation degree threshold value, determining the dispensing detection model obtained through training as an effective dispensing detection model.
Specifically, a test set image and its correspondingly labeled original label file form a test sample. In this embodiment, after the dispensing detection model is trained, test samples are used to verify its validity: the test set image of a test sample is input to the trained dispensing detection model, and the correlation between the output label file and the original label file of the test sample is then compared to determine the validity of the model. When the correlation between the test output label and the original label is greater than the preset threshold, the trained dispensing detection model is determined to be an effective, correct model and can subsequently be used to perform dispensing detection on power module images to be detected; otherwise, the trained dispensing detection model is considered erroneous and needs to be retrained.
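The text does not define how the correlation degree between the test output label file and the original label file is computed. One plausible reading, sketched below under that assumption, scores each test sample by bounding-box IoU weighted by class agreement and averages the scores over the test set.

```python
# Hedged validation sketch: "correlation degree" is interpreted here as the mean
# of (class agreement x bounding-box IoU) over the test set, one box per image.
# This interpretation and the 0.5 threshold are assumptions.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def correlation(test_outputs, original_labels):
    """test_outputs / original_labels: lists of (class_name, box) per test image."""
    scores = []
    for (pred_cls, pred_box), (true_cls, true_box) in zip(test_outputs, original_labels):
        class_match = 1.0 if pred_cls == true_cls else 0.0
        scores.append(class_match * iou(pred_box, true_box))
    return float(np.mean(scores)) if scores else 0.0

def is_effective_model(test_outputs, original_labels, threshold=0.5):
    """Accept the trained model only when the correlation exceeds the threshold."""
    return correlation(test_outputs, original_labels) > threshold
```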
And 103, inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected.
Specifically, in this embodiment, the image of the power module to be detected is fed through the trained dispensing detection model, which determines both the position of the dispensing area in the image and its dispensing quality category (whether the dispensing in the dispensing area is qualified or unqualified).
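Mirroring the encoding sketch above, the following assumed decoding step turns the model's 7 × 7 × 7 output for one image into dispensing detection results (position plus qualified/unqualified category); the confidence threshold is a placeholder.

```python
# Hedged sketch: decode a 7 x 7 x 7 output array into a list of
# (qualified?, confidence, xmin, ymin, xmax, ymax) detection results.
def decode_output(output, conf_threshold=0.5, img_w=448, img_h=448, S=7):
    results = []
    for row in range(S):
        for col in range(S):
            yes, no, conf, x_off, y_off, w, h = output[row, col]
            if conf < conf_threshold:
                continue                              # no dispensing area in this cell
            cx = (col + x_off) / S * img_w            # undo the grid-relative encoding
            cy = (row + y_off) / S * img_h
            bw, bh = w * img_w, h * img_h
            results.append((yes >= no, float(conf),
                            cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2))
    return results
```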
According to the dispensing detection method provided by the embodiment of the invention, firstly, a preset training set image is obtained, and an original label file obtained by marking a dispensing area on the training set image is obtained; then, forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model; and finally, inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected. Through the implementation of the invention, the power supply module is subjected to dispensing detection through the neural network model trained by the deep learning algorithm, so that the working efficiency of dispensing detection and the accuracy of the detection result are effectively improved, and the quality of the power supply module delivered from a factory is better ensured.
Second embodiment:
In order to solve the technical problems in the related art that manual dispensing detection of the power module leads to low operation efficiency and insufficiently guaranteed accuracy of the detection result, this embodiment provides a dispensing detection apparatus. Referring to fig. 5, the dispensing detection apparatus of this embodiment includes:
an obtaining module 501, configured to obtain a preset training set image, and an original label file obtained by marking a dispensing area on the training set image;
the training module 502 is configured to adopt the training set images and the corresponding original label files to form training samples, and input the training samples to a neural network for training to obtain a dispensing detection model;
the detection module 503 is configured to input the image of the power module to be detected to the dispensing detection model, and output a dispensing detection result corresponding to the power module to be detected.
In some embodiments of this embodiment, the obtaining module 501, when obtaining a preset training set image, is specifically configured to continuously collect, for the same power module, a preset first number of images under different image collection conditions; acquiring a preset second number of images from the acquired images to be used as training set images; wherein the second number is less than or equal to the first number; or, acquiring a preset number of images from a preset image database as training set images.
In some implementations of this embodiment, the tag file includes: the method comprises the steps of obtaining category information p of a dispensing area, confidence C of the category of the dispensing area and boundary frame parameters of a boundary frame where the dispensing area is located, wherein the boundary frame parameters are represented as (x, y, w, h), x and y are position offset of the center of the boundary frame relative to the upper left corner of a grid where the boundary frame is located, and w and h are width and height of the boundary frame. Correspondingly, when the training sample is input to the neural network for training, the training module 502 is specifically configured to input the training sample to the YOLO convolutional neural network for training; the output of the YOLO convolutional neural network is a grid of size S × S, and each grid predicts B bounding boxes that may include dispensing regions.
Further, in some embodiments of this embodiment, the YOLO convolutional neural network includes 24 convolutional layers and 2 fully-connected layers, the activation function of the convolutional layers and the fully-connected layers is the Leaky ReLU function, the convolutional layers are used for extracting image features of the dispensing regions in the training set images, and the fully-connected layers are used for predicting the bounding box parameters of the dispensing regions.
In addition, in some embodiments of this embodiment, the training module 502 is specifically configured to input a training sample to the YOLO convolutional neural network for training, so as to obtain a predicted label file actually output by the nth iterative training; comparing the predicted tag file with the corresponding original tag file by using a preset loss function; when the comparison result meets a preset convergence condition, determining the network model obtained by the Nth iterative training as a trained dispensing detection model; and when the comparison result does not meet the preset convergence condition, continuing to perform the (N + 1) th iterative training until the convergence condition is met.
Further, in some implementations of the present embodiment, the loss function is expressed as follows:
the first addend term on the right side of the equation is an error term in the center of the bounding box, the second addend term is an error term in the width and the height of the bounding box, the third addend term is a confidence coefficient error term, and the fourth addend term is a category error term.
Further, in some embodiments of this embodiment, the dispensing detection device further includes: the test module is used for obtaining a preset test set image after the dispensing detection model is obtained, and marking a dispensing area on the test set image to obtain an original label file; inputting the test set image to a glue dispensing detection model to obtain a test output label file; calculating the correlation degree of the test output label file and the corresponding original label file; and when the calculated correlation degree is greater than a preset correlation degree threshold value, determining the dispensing detection model obtained by training as an effective dispensing detection model. Correspondingly, the detection module 503 is configured to input the image of the power module to be detected to the determined effective dispensing detection model.
It should be noted that the dispensing detection method in the foregoing embodiments can be implemented based on the dispensing detection device provided in this embodiment, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the dispensing detection device described in this embodiment may refer to the corresponding process in the foregoing method embodiments, and is not described herein again.
By adopting the dispensing detection device provided by the embodiment, a preset training set image is obtained, and an original label file obtained by marking a dispensing area on the training set image is obtained; then, forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model; and finally, inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected. Through the implementation of the invention, the power supply module is subjected to dispensing detection through the neural network model trained by the deep learning algorithm, so that the working efficiency of dispensing detection and the accuracy of the detection result are effectively improved, and the quality of the power supply module delivered from a factory is better ensured.
The third embodiment:
the present embodiment provides an electronic device, as shown in fig. 6, which includes a processor 601, a memory 602, and a communication bus 603, wherein: the communication bus 603 is used for realizing connection communication between the processor 601 and the memory 602; the processor 601 is configured to execute one or more computer programs stored in the memory 602 to implement at least one step of the dispensing detection method in the first embodiment.
The present embodiments also provide a computer-readable storage medium including volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, computer program modules or other data. Computer-readable storage media include, but are not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other Memory technology, CD-ROM (Compact disk Read-Only Memory), Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
The computer-readable storage medium in this embodiment may be used for storing one or more computer programs, and the stored one or more computer programs may be executed by a processor to implement at least one step of the method in the first embodiment.
The present embodiment also provides a computer program, which can be distributed on a computer readable medium and executed by a computing device to implement at least one step of the method in the first embodiment; and in some cases at least one of the steps shown or described may be performed in an order different than that described in the embodiments above.
The present embodiments also provide a computer program product comprising a computer readable means on which a computer program as shown above is stored. The computer readable means in this embodiment may include a computer readable storage medium as shown above.
It will be apparent to those skilled in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software (which may be implemented in computer program code executable by a computing device), firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
In addition, communication media typically embodies computer readable instructions, data structures, computer program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to one of ordinary skill in the art. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing is a more detailed description of embodiments of the present invention, and the present invention is not to be considered limited to such descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. A dispensing detection method is characterized by comprising the following steps:
acquiring a preset training set image, and marking a dispensing area on the training set image to obtain an original label file;
adopting the training set images and the corresponding original label files to form training samples, and inputting the training samples into a neural network for training to obtain a dispensing detection model;
and inputting the image of the power module to be detected to the dispensing detection model, and outputting a dispensing detection result corresponding to the power module to be detected.
2. The dispensing detection method of claim 1, wherein the obtaining of the predetermined training set image comprises:
continuously acquiring a preset first number of images under different image acquisition conditions for the same power supply module;
acquiring a preset second number of images from the acquired images to be used as training set images; wherein the second number is less than or equal to the first number;
or, acquiring a preset number of images from a preset image database as training set images.
3. The dispensing detection method of claim 1, wherein the label file comprises: the method comprises the following steps that the type information p of a dispensing region, the confidence coefficient C of the type of the dispensing region and the boundary frame parameters of a boundary frame where the dispensing region is located are represented as (x, y, w, h), wherein x and y are the position offset of the center of the boundary frame relative to the upper left corner of a grid where the boundary frame is located, and w and h are the width and the height of the boundary frame;
the inputting the training samples into a neural network for training comprises:
inputting the training sample into a YOLO convolution neural network for training; the output of the YOLO convolutional neural network is the grids of S × S size, and each grid predicts B bounding boxes that may include the dispensing region.
4. The dispensing detection method of claim 3, wherein the inputting the training samples into a neural network for training to obtain a dispensing detection model comprises:
inputting the training sample into a YOLO convolutional neural network for training to obtain a predicted label file actually output by the Nth iterative training;
comparing the predicted tag file with a corresponding original tag file by using a preset loss function;
when the comparison result meets a preset convergence condition, determining the network model obtained by the Nth iterative training as a trained dispensing detection model;
and when the comparison result does not meet the preset convergence condition, continuing to perform the (N + 1) th iterative training until the convergence condition is met.
5. The dispensing detection method of claim 4, wherein the loss function is expressed as:
the first addend term on the right side of the equation is an error term in the center of the bounding box, the second addend term is an error term in the width and the height of the bounding box, the third addend term is a confidence coefficient error term, and the fourth addend term is a category error term.
6. The dispensing detection method of claim 3, wherein the YOLO convolutional neural network comprises 24 convolutional layers and 2 fully connected layers, the activation functions of the convolutional layers and the fully connected layers are Leaky ReLU functions, the convolutional layers are used for extracting the image features of the dispensing regions in the training set images, and the fully connected layers are used for predicting the bounding box parameters of the dispensing regions.
7. The dispensing detection method according to any of claims 1 to 6, further comprising, after obtaining the dispensing detection model:
acquiring a preset test set image, and marking a dispensing area on the test set image to obtain an original label file;
inputting the test set image into the dispensing detection model to obtain a test output label file;
calculating the correlation degree of the test output label file and the corresponding original label file;
when the calculated correlation degree is larger than a preset correlation degree threshold value, determining the dispensing detection model obtained through training as an effective dispensing detection model;
the inputting of the power module image to be detected to the dispensing detection model comprises the following steps:
and inputting the image of the power module to be detected to the determined effective dispensing detection model.
8. A dispensing detection apparatus, characterized by comprising:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a preset training set image and an original label file obtained by marking a dispensing area on the training set image;
the training module is used for forming a training sample by adopting the training set image and the corresponding original label file, and inputting the training sample into a neural network for training to obtain a dispensing detection model;
and the detection module is used for inputting the image of the power module to be detected to the dispensing detection model and outputting a dispensing detection result corresponding to the power module to be detected.
9. An electronic device, comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the dispensing detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the method of dispensing detection as claimed in any one of claims 1 to 7.
CN201910789705.9A 2019-08-26 2019-08-26 Dispensing detection method and device and computer readable storage medium Pending CN110598761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910789705.9A CN110598761A (en) 2019-08-26 2019-08-26 Dispensing detection method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910789705.9A CN110598761A (en) 2019-08-26 2019-08-26 Dispensing detection method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110598761A true CN110598761A (en) 2019-12-20

Family

ID=68855641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910789705.9A Pending CN110598761A (en) 2019-08-26 2019-08-26 Dispensing detection method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110598761A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958456A (en) * 2017-12-08 2018-04-24 华霆(合肥)动力技术有限公司 Dispensing detection method, device and electronic equipment
US20190213734A1 (en) * 2018-01-09 2019-07-11 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for detecting a defect in a steel plate, as well as apparatus and server therefor
CN109003271A (en) * 2018-07-25 2018-12-14 江苏拙术智能制造有限公司 A kind of Wiring harness connector winding displacement quality determining method based on deep learning YOLO algorithm
CN109961029A (en) * 2019-03-15 2019-07-02 Oppo广东移动通信有限公司 A kind of dangerous goods detection method, device and computer readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JOSEPH REDMON, ET AL: "YOLO9000: Better, Faster, Stronger", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
JOSEPH REDMON, ET AL: "You Only Look Once: Unified, Real-Time Object Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
ZQNNN: "从YOLOV1到YOLOV3" [From YOLOv1 to YOLOv3], CSDN *
查广丰, 等 [et al.]: "基于深度学习的点胶缺陷检测" [Dispensing defect detection based on deep learning], 《图像与多媒体技术》 [Image and Multimedia Technology] *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111231251B (en) * 2020-01-09 2022-02-01 杭州电子科技大学 Product detection method, equipment and system of injection molding machine
CN111231251A (en) * 2020-01-09 2020-06-05 杭州电子科技大学 Product detection method, equipment and system of injection molding machine
CN111291482A (en) * 2020-01-21 2020-06-16 苏州尼特数据科技有限公司 Method, device, computer equipment and medium for determining dispensing parameters
CN111330871A (en) * 2020-03-31 2020-06-26 新华三信息安全技术有限公司 Quality classification method and device
CN111624199A (en) * 2020-05-18 2020-09-04 Oppo(重庆)智能科技有限公司 Detection method and system, and storage medium
CN111680749A (en) * 2020-06-08 2020-09-18 北京百度网讯科技有限公司 Method and device for obtaining output result of dispenser
CN111680749B (en) * 2020-06-08 2023-11-07 北京百度网讯科技有限公司 Method and device for obtaining output result of dispenser
CN112150436A (en) * 2020-09-23 2020-12-29 创新奇智(合肥)科技有限公司 Lipstick inner wall gluing detection method and device, electronic equipment and storage medium
CN112487707B (en) * 2020-11-13 2023-10-17 北京遥测技术研究所 LSTM-based intelligent dispensing pattern generation method
CN112487706B (en) * 2020-11-13 2023-10-17 北京遥测技术研究所 Automatic mounting parameter intelligent decision method based on ensemble learning
CN112487706A (en) * 2020-11-13 2021-03-12 北京遥测技术研究所 Automatic mounting parameter intelligent decision method based on ensemble learning
CN112487707A (en) * 2020-11-13 2021-03-12 北京遥测技术研究所 Intelligent dispensing graph generation method based on LSTM
CN112434738A (en) * 2020-11-24 2021-03-02 英业达(重庆)有限公司 Decision tree algorithm-based solder paste detection method, system, electronic device and medium
CN112634203A (en) * 2020-12-02 2021-04-09 富泰华精密电子(郑州)有限公司 Image detection method, electronic device and computer-readable storage medium
CN112634203B (en) * 2020-12-02 2024-05-31 富联精密电子(郑州)有限公司 Image detection method, electronic device, and computer-readable storage medium
CN113408631A (en) * 2021-06-23 2021-09-17 佛山缔乐视觉科技有限公司 Method and device for identifying style of ceramic sanitary appliance and storage medium
CN114152621A (en) * 2021-11-30 2022-03-08 联想(北京)有限公司 Processing method, processing device and processing system
CN114549454A (en) * 2022-02-18 2022-05-27 岳阳珞佳智能科技有限公司 Online monitoring method and system for chip glue-climbing height of production line
CN114798360A (en) * 2022-06-29 2022-07-29 深圳市欧米加智能科技有限公司 Real-time detection method for PCB dispensing and related device
CN117252822A (en) * 2023-09-05 2023-12-19 广东奥普特科技股份有限公司 Defect detection network construction and defect detection method, device and equipment
CN117193226A (en) * 2023-11-08 2023-12-08 深圳市艾姆克斯科技有限公司 Multifunctional intelligent industrial control system and control method
CN117193226B (en) * 2023-11-08 2024-01-26 深圳市艾姆克斯科技有限公司 Multifunctional intelligent industrial control system and control method

Similar Documents

Publication Publication Date Title
CN110598761A (en) Dispensing detection method and device and computer readable storage medium
CN110705598B (en) Intelligent model management method, intelligent model management device, computer equipment and storage medium
CN110826379B (en) Target detection method based on feature multiplexing and YOLOv3
TW202013248A (en) Method and apparatus for vehicle damage identification
CN107331118B (en) Fall detection method and device
CN110187334B (en) Target monitoring method and device and computer readable storage medium
CN112036249B (en) Method, system, medium and terminal for end-to-end pedestrian detection and attribute identification
US11966291B2 (en) Data communication
CN111985469B (en) Method and device for recognizing characters in image and electronic equipment
CN110969200A (en) Image target detection model training method and device based on consistency negative sample
CN113109816B (en) Echo block tracking method, device and storage medium of radar echo image
CN110969600A (en) Product defect detection method and device, electronic equipment and storage medium
CN109961030A (en) Pavement patching information detecting method, device, equipment and storage medium
CN111353440A (en) Target detection method
CN113657202A (en) Component identification method, training set construction method, device, equipment and storage medium
CN110222704B (en) Weak supervision target detection method and device
CN115937703A (en) Enhanced feature extraction method for remote sensing image target detection
CN117372424B (en) Defect detection method, device, equipment and storage medium
CN114241425A (en) Training method and device of garbage detection model, storage medium and equipment
CN113591645A (en) Power equipment infrared image identification method based on regional convolutional neural network
CN112528500B (en) Evaluation method and evaluation equipment for scene graph construction model
CN116486063A (en) Detection frame calibration method, device, equipment and computer readable storage medium
US20230022253A1 (en) Fast and accurate prediction methods and systems based on analytical models
CN110705633B (en) Target object detection method and device and target object detection model establishing method and device
CN114463544A (en) Irregular object semantic segmentation quick labeling method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20191220)