CN111862089A - Method and device for identifying defects of dust screen and electronic equipment

Info

Publication number
CN111862089A
CN111862089A (application number CN202010769188.1A)
Authority
CN
China
Prior art keywords
layer
picture
neural network
network model
branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010769188.1A
Other languages
Chinese (zh)
Inventor
于丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wentai Information Technology Co Ltd
Original Assignee
Shanghai Wentai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wentai Information Technology Co Ltd filed Critical Shanghai Wentai Information Technology Co Ltd
Priority to CN202010769188.1A
Publication of CN111862089A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for identifying dust screen flaws, and an electronic device. First, a target picture is preprocessed to generate simulated pictures; the target picture and the simulated pictures are then used as the input of a neural network model, so that several pictures containing the same object to be recognized are recognized, which increases the recognition base and reduces the probability of recognition errors. When the output of the neural network model indicates that the number of first results is greater than a set threshold, the dust screen in the target picture is determined to be defective, where a first result is a recognition result indicating a defect for the target picture or a simulated picture. Each simulated picture is an image obtained from the target picture by rotation, mirroring, contrast adjustment or brightness adjustment, which enriches the number of recognition samples; recognizing several pictures that contain the same object to be recognized increases the recognition base, reduces the probability of recognition errors, and safeguards the accuracy of the model's recognition results.

Description

Method and device for identifying defects of dust screen and electronic equipment
Technical Field
The application relates to the field of image processing, and in particular to a method and a device for identifying dust screen flaws and to an electronic device.
Background
Fully automating production lines in machine factories is now imperative, and production line automation has become a hot spot of current research. In mobile phone production, for example, dust screen defects need to be classified: phone shells with defective dust screens are reworked, while problem-free shells move on to the next production step. When this work is still classified manually, the large amount of mechanical, repetitive labor causes many problems: mental fatigue reduces classification accuracy and efficiency, and different workers classify with considerable subjectivity, so the stability and reliability of production are low.
How to automate dust screen inspection so that mobile phone shells are classified according to a predetermined procedure without human intervention, thereby freeing workers from heavy and mechanical manual labor, saving labor, reducing production costs, and avoiding the low accuracy, low efficiency and instability of manual inspection, is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The present application provides a method and an apparatus for identifying a dust screen defect, and an electronic device, so as to solve the above problems.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
in a first aspect, an embodiment of the present application provides a method for identifying a defect of a dust screen, where the method includes:
preprocessing a target picture to generate a simulated picture, wherein the preprocessing comprises random rotation processing, random mirror image processing, contrast processing and brightness processing, and the simulated picture is an image of the target picture after rotation or mirror image processing or after contrast adjustment or brightness adjustment;
taking the target picture and the simulation picture as the input of a neural network model;
and when the output of the neural network model indicates that the number of first results is greater than a set threshold value, determining that the dust screen in the target picture has defects, wherein the first results are the recognition results of the defects corresponding to the target picture or the simulation picture.
In a second aspect, an embodiment of the present application provides a device for identifying a dust screen defect, where the device includes:
the system comprises a preprocessing unit, a processing unit and a processing unit, wherein the preprocessing unit is used for preprocessing a target picture to generate a simulation picture, the preprocessing comprises random rotation processing, random mirror image processing, contrast processing and brightness processing, and the simulation picture is an image obtained by rotating or mirroring the target picture or adjusting the contrast or adjusting the brightness;
the recognition unit is used for taking the target picture and the simulation picture as the input of a neural network model; and the device is further configured to determine that the dust screen in the target picture has a defect when the output of the neural network model is that the number of first results is greater than a predetermined threshold, where the first results are recognition results of the defect corresponding to the target picture or the simulated picture.
In a third aspect, the present application provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a processor and memory for storing one or more programs; the one or more programs, when executed by the processor, implement the methods described above.
Compared with the prior art, the method, device and electronic device for identifying dust screen flaws provided by the present application have the following beneficial effects: first, a target picture is preprocessed to generate simulated pictures; the target picture and the simulated pictures are then used as the input of a neural network model, so that several pictures containing the same object to be recognized are recognized, which increases the recognition base and reduces the probability of recognition errors; and when the output of the neural network model indicates that the number of first results is greater than a set threshold, the dust screen in the target picture is determined to be defective, where a first result is a recognition result indicating a defect for the target picture or a simulated picture. Each simulated picture is an image obtained from the target picture by rotation, mirroring, contrast adjustment or brightness adjustment, which enriches the number of recognition samples; recognizing several pictures containing the same object to be recognized increases the recognition base, reduces the probability of recognition errors, and safeguards the accuracy of the model's recognition results.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and it will be apparent to those skilled in the art that other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment;
FIG. 2 is a flowchart illustrating a method for identifying defects in a dust screen according to an embodiment;
FIG. 3 is a flowchart illustrating a method for identifying defects in a dust screen according to another embodiment;
FIG. 4 is a diagram illustrating the substeps of S102 provided by an embodiment;
FIG. 5 is a schematic diagram of a neural network model according to an embodiment;
fig. 6 is a schematic block diagram of a device for identifying defects in a dust screen according to an embodiment.
In the figure: 10-a processor; 11-a memory; 12-a bus; 13-a communication interface; 201-a pre-processing unit; 202-an identification unit; 203-training unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that the terms "upper", "lower", "inner", "outer", and the like indicate orientations or positional relationships based on orientations or positional relationships shown in the drawings or orientations or positional relationships conventionally found in use of products of the application, and are used only for convenience in describing the present application and for simplification of description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present application.
In the description of the present application, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "disposed" and "connected" are to be interpreted broadly, e.g., as being either fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the application provides an electronic device which can be a computer device. Please refer to fig. 1, a schematic structural diagram of an electronic device. The electronic device comprises a processor 10, a memory 11, a bus 12. The processor 10 and the memory 11 are connected by a bus 12, and the processor 10 is configured to execute an executable module, such as a computer program, stored in the memory 11.
The processor 10 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the dust screen flaw identification method may be performed by hardware integrated logic circuits or software instructions in the processor 10. The processor 10 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 11 may comprise a high-speed random access memory (RAM) and may further comprise a non-volatile memory, such as at least one disk memory.
The bus 12 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Only one bidirectional arrow is shown in fig. 1, but this does not mean that there is only one bus 12 or only one type of bus 12.
The memory 11 is used for storing programs, such as the program corresponding to the dust screen flaw identification device. The dust screen flaw identification device includes at least one software function module that may be stored in the memory 11 in the form of software or firmware, or embedded in the operating system (OS) of the electronic device. After receiving an execution instruction, the processor 10 executes the program to realize the dust screen flaw identification method.
Optionally, the electronic device provided by the embodiment of the present application further includes a communication interface 13. The communication interface 13 is connected to the processor 10 via the bus. Through the communication interface 13, the electronic device can receive picture information and the like transmitted by other terminals or components.
It should be understood that the structure shown in fig. 1 is merely a structural schematic diagram of a portion of an electronic device, which may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
The method for identifying defects of a dust screen according to an embodiment of the present invention can be applied to, but is not limited to, the electronic device shown in fig. 1, and with reference to fig. 2, the method for identifying defects of a dust screen includes:
and S106, preprocessing the target picture to generate a simulation picture.
The preprocessing comprises random rotation processing, random mirror image processing, contrast processing and brightness processing, and the simulation picture is an image of the target picture after rotation or mirror image processing or contrast adjustment or brightness adjustment.
Specifically, the simulated pictures all include the object to be identified in the target picture, i.e., the dust screen.
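As an illustration of this preprocessing step, the following sketch (plain NumPy, with illustrative parameter ranges and an assumed function name generate_simulated_pictures) produces simulated pictures from a target picture by random rotation, mirroring, contrast adjustment and brightness adjustment; it is not the patent's reference implementation.

    import numpy as np

    def generate_simulated_pictures(target, num_simulated=8, rng=None):
        """target: H x W x 3 uint8 array; returns a list of simulated copies."""
        rng = rng if rng is not None else np.random.default_rng()
        simulated = []
        for _ in range(num_simulated):
            pic = target.astype(np.float32)
            # random rotation by a multiple of 90 degrees keeps the dust screen fully in frame
            pic = np.rot90(pic, k=int(rng.integers(0, 4)))
            # random horizontal / vertical mirroring
            if rng.random() < 0.5:
                pic = np.fliplr(pic)
            if rng.random() < 0.5:
                pic = np.flipud(pic)
            # contrast adjustment: scale deviations from the mean intensity
            contrast = rng.uniform(0.8, 1.2)
            pic = (pic - pic.mean()) * contrast + pic.mean()
            # brightness adjustment: add a constant offset
            pic = pic + rng.uniform(-20.0, 20.0)
            simulated.append(np.clip(pic, 0, 255).astype(np.uint8))
        return simulated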
S107: take the target picture and the simulated pictures as the input of the neural network model.
Specifically, when the neural network model determines whether the object to be identified in a picture has a defect, a certain error exists. To reduce this error, the target picture is preprocessed to obtain several simulated pictures containing the same object to be identified; increasing the recognition base in this way reduces the probability of a recognition error.
S108: judge whether the number of first results output by the neural network model is greater than a set threshold. If yes, execute S109; if not, execute S110.
A first result is a recognition result indicating a defect for the target picture or a simulated picture.
Specifically, when the output of the neural network model indicates that the number of first results is greater than the set threshold, the dust screen in the target picture is defective, and S109 is performed. When the number of first results output by the neural network model is smaller than or equal to the set threshold, the dust screen in the target picture is not defective, and S110 is performed. It should be noted that the set threshold is related to the total number of simulated pictures.
S109: determine that the dust screen in the target picture has defects.
S110: determine that the dust screen in the target picture has no defects.
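A minimal sketch of the decision logic in S107 to S110, assuming the trained model exposes a Keras-style predict method and returns one defective/non-defective prediction per picture. The majority-vote default used when no threshold is given is an assumption; the patent only states that the set threshold is related to the total number of simulated pictures.

    import numpy as np

    def dust_screen_is_defective(model, target, simulated_pictures, threshold=None):
        # Stack the target picture and its simulated copies into one batch (S107).
        batch = np.stack([target] + list(simulated_pictures)).astype(np.float32) / 255.0
        predictions = model.predict(batch)          # assumed Keras-style API, one row per picture
        labels = np.argmax(predictions, axis=-1)    # assumes class index 1 means "defective"
        first_result_count = int(np.sum(labels == 1))
        if threshold is None:
            threshold = len(batch) // 2             # assumed majority vote over all pictures
        # S108: compare the number of first results with the set threshold.
        return first_result_count > threshold       # True -> S109 (defective), False -> S110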
To sum up, in the method for identifying dust screen flaws provided by the embodiment of the application, a target picture is first preprocessed to generate simulated pictures; the target picture and the simulated pictures are then used as the input of the neural network model, so that several pictures containing the same object to be recognized are recognized, which increases the recognition base and reduces the probability of recognition errors; and when the output of the neural network model indicates that the number of first results is greater than the set threshold, the dust screen in the target picture is determined to be defective, where a first result is a recognition result indicating a defect for the target picture or a simulated picture. Each simulated picture is an image obtained from the target picture by rotation, mirroring, contrast adjustment or brightness adjustment, which enriches the number of recognition samples; recognizing several pictures containing the same object to be recognized increases the recognition base, reduces the probability of recognition errors, and safeguards the accuracy of the model's recognition results.
On the basis of fig. 2, regarding how to train the neural network model, a possible implementation manner is further provided in the embodiment of the present application, please refer to fig. 3, where the method for identifying a dust screen flaw further includes:
and S101, acquiring a sample picture and a corresponding supervision picture.
The sample picture and the supervision picture are provided with labels, and the labels represent whether the dust screen in the corresponding picture has flaws or not.
Optionally, the sample pictures comprise first-class training pictures and second-class training pictures. The first-class training pictures are the initially acquired pictures that include a dust screen, and the second-class training pictures are generated by preprocessing the first-class training pictures.
The preprocessing comprises random rotation, random mirroring, contrast adjustment and brightness adjustment, and a second-class training picture is an image obtained from a first-class training picture by rotation, mirroring, contrast adjustment or brightness adjustment.
Specifically, the richness of the training samples determines, to some extent, the accuracy of the model's recognition results: the richer the training samples, the higher the accuracy of the finally trained neural network model. In practice, the number of sample pictures (first-class training pictures) that can be acquired is limited. Preprocessing the first-class training pictures generates second-class training pictures; since both classes of training pictures contain images of the dust screen, the number of samples is enriched.
Optionally, the dust screen pictures are randomly divided into first-class training pictures and supervision pictures at a preset ratio, which may be 8:2.
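A small sketch of this optional 8:2 random split; the function name and the use of file paths are illustrative assumptions.

    import random

    def split_pictures(picture_paths, ratio=0.8, seed=0):
        paths = list(picture_paths)
        random.Random(seed).shuffle(paths)
        cut = int(len(paths) * ratio)
        # first-class training pictures, supervision pictures
        return paths[:cut], paths[cut:]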
S102: take the sample picture as the input of the neural network model to train the neural network model until it converges.
Specifically, both the first-class and second-class training pictures are used as input of the neural network model, which enriches the sample pictures used to train it. The neural network model is trained with a gradient descent optimization algorithm, which effectively improves training efficiency. When the neural network model converges, that is, when training is finished, the resulting dust screen flaw identification model can be used to identify dust screen flaws.
S103: take the supervision pictures as the input of the converged neural network model to perform a supervision test on the neural network model.
Specifically, the supervision pictures are used as input of the neural network model to verify the accuracy of its output. It should be noted that the supervision pictures carry labels: for example, when the dust screen in a supervision picture is defective, its label is 1; when the dust screen in a supervision picture has no flaw, its label is 0. This is only an example for ease of understanding; the content and form of the labels are not limited here.
S104: judge whether the accuracy of the test output results is smaller than a test threshold. If yes, execute S102; if not, execute S105.
The accuracy of the output results is obtained by comparing the output results with the labels of the corresponding supervision pictures. When the accuracy is greater than or equal to the preset test threshold, the dust screen flaw identification model recognizes well enough, training can end, and S105 is executed. When the accuracy is smaller than the test threshold, the dust screen flaw identification model is still deficient and needs further training, so S102 is executed to train the neural network model again.
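A short sketch of the supervision test in S103 and S104 under the same assumptions as above (Keras-style predict, 0/1 labels); the 0.95 test threshold is an illustrative value, not one stated in the patent.

    import numpy as np

    def supervision_test(model, supervision_pictures, labels, test_threshold=0.95):
        batch = np.stack(supervision_pictures).astype(np.float32) / 255.0
        predicted = np.argmax(model.predict(batch), axis=-1)   # 1 = defective, 0 = no flaw
        accuracy = float(np.mean(predicted == np.asarray(labels)))
        # True: accuracy >= test threshold, end training (S105); False: train again (S102)
        return accuracy >= test_threshold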
S105: finish the training.
Specifically, the training is completed and the neural network model is saved.
On the basis of fig. 3, as to how to obtain the sample picture and the target picture, the embodiment of the present application also provides a possible implementation manner, please refer to the following.
The device picture is cropped to obtain the dust screen picture.
Specifically, because the shooting range is wide, parts of the device and of the surrounding environment other than the dust screen are also captured. Image content outside the dust screen may interfere with training and testing of the neural network model, so to reduce interference and improve training efficiency and test accuracy, the device picture is cropped to remove the content outside the dust screen and obtain the dust screen picture accurately. When a label is added to the dust screen picture, it serves as a sample picture; when the dust screen picture is used directly for recognition, it is the target picture.
Optionally, the dust screen region can be accurately cropped from the device picture using graying, binarization and contour detection algorithms.
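A hedged OpenCV (4.x) sketch of this cropping step: graying, Otsu binarization and contour detection, then cropping to the bounding box of the largest contour. The Otsu threshold and the largest-contour heuristic are assumptions rather than the patent's exact procedure.

    import cv2

    def crop_dust_screen(device_picture_bgr):
        gray = cv2.cvtColor(device_picture_bgr, cv2.COLOR_BGR2GRAY)                     # graying
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # binarization
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)                                    # contour detection
        x, y, w, h = cv2.boundingRect(largest)
        return device_picture_bgr[y:y + h, x:x + w]                                     # dust screen picture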
On the basis of fig. 3, regarding the content in S102, the embodiment of the present application further provides a possible implementation manner, please refer to fig. 4, where S102 includes:
S102-1: take the sample picture as the input of the neural network model, and train the neural network model with a gradient descent optimization algorithm.
Specifically, the neural network model is trained by adopting a gradient descent optimization algorithm, so that the training speed is increased.
S102-2: calculate the loss function of the neural network model.
The loss function may be a cross-entropy loss function. It should be noted that each sample picture carries a label indicating whether the dust screen in it is defective, so the loss function can be calculated from the training results. The loss function represents the error level of the neural network model's recognition results.
S102-3: when the loss function is smaller than a preset loss threshold, the neural network model is considered to have converged.
Specifically, a loss function smaller than the preset loss threshold indicates that the neural network model has stably converged; training is then stopped and S103 may be performed.
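A Keras-style sketch of S102-1 to S102-3: train with a gradient descent optimizer and a cross-entropy loss, and stop once the training loss falls below a preset loss threshold. The threshold value, epoch and batch settings, and the class/function names are illustrative assumptions; the hyper-parameters described in the next paragraphs can be plugged into the optimizer.

    import tensorflow as tf

    class ConvergenceStop(tf.keras.callbacks.Callback):
        """Stop training once the loss drops below a preset loss threshold (S102-3)."""
        def __init__(self, loss_threshold=0.05):
            super().__init__()
            self.loss_threshold = loss_threshold

        def on_epoch_end(self, epoch, logs=None):
            if logs and logs.get("loss", float("inf")) < self.loss_threshold:
                self.model.stop_training = True     # model considered converged

    def train_until_convergence(model, train_x, train_y, loss_threshold=0.05):
        model.compile(optimizer="sgd",                            # gradient descent optimization (S102-1)
                      loss="sparse_categorical_crossentropy",     # cross-entropy loss (S102-2)
                      metrics=["accuracy"])
        return model.fit(train_x, train_y, epochs=200, batch_size=32,
                         callbacks=[ConvergenceStop(loss_threshold)])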
On the basis of fig. 2, regarding how to construct a neural network model, a possible implementation manner is further provided in the embodiment of the present application, and the method for identifying a dust screen flaw further includes:
the neural network model is constructed from parameters, wherein the parameters include learning power, momentum, and weight decay.
The learning rate (lr) refers to the magnitude of the network weight updates in the optimization algorithm. It may be constant, decreasing, momentum-based or adaptive, depending on the optimization algorithm. If the learning rate is too high, the model may fail to converge and the loss will keep oscillating; if it is too low, the model converges slowly and needs a longer training time. In the embodiment of the present application, the learning rate is set to 0.01.
Momentum is inspired by Newton's laws of motion: the basic idea is to add "inertia" to the optimization so that gradient descent (SGD) can learn faster across flat regions of the error surface. In the embodiment of the present application, the momentum is set to 0.9.
Weight decay (L2 regularization) helps avoid model overfitting to some extent; in the embodiment of the present application, it is set to 0.0005.
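One way these hyper-parameters could be configured in Keras is sketched below; expressing weight decay as L2 kernel regularization on the convolution kernels is an implementation assumption, and the filter count is illustrative.

    import tensorflow as tf

    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)   # lr = 0.01, momentum = 0.9
    l2_decay = tf.keras.regularizers.l2(0.0005)                             # weight decay = 0.0005

    # Example of attaching the decay to a convolutional layer's kernel.
    example_conv = tf.keras.layers.Conv2D(
        filters=16, kernel_size=3, padding="same",
        kernel_regularizer=l2_decay)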
Optionally, an embodiment of the present application provides a possible neural network model structure. Referring to fig. 5, the neural network model includes: a 3D integration layer [abbreviated Conv(3 × 3)/BN/Relu], a 5D integration layer [abbreviated Conv(5 × 5)/BN/Relu], a splicing layer (abbreviated Concat), a max pooling layer [abbreviated MaxPool(2 × 2)], a 9D integration layer [abbreviated Conv(9 × 9)/BN/Relu], a 1D integration layer [abbreviated Conv(1 × 1)/BN], a flattening layer (abbreviated Reduce_mean/Squeeze), and a fully connected layer (abbreviated FullConnect).
The 3D integration layer comprises a 3 x 3 convolution layer, a batch normalization layer and an excitation layer;
the 5D integrated layer comprises a 5 x 5 convolution layer, a batch normalization layer and an excitation layer;
the 9D integrated layer comprises a 9 × 9 convolution layer, a batch normalization layer and an excitation layer;
the 1D integrated layer comprises 1 × 1 convolution layers and a batch normalization layer.
Referring to fig. 5, the neural network model includes a first branch, a second branch, a third branch and a fourth branch, and the first branch, the second branch, the third branch and the fourth branch are connected in sequence.
The first branch includes a first 3D integration layer, a first 5D integration layer, a first splicing layer and a first max pooling layer.
The second branch comprises two second 3D integration layers, a second 5D integration layer, a second splicing layer and a second max pooling layer.
The third branch comprises four third 3D integration layers, a third splicing layer and a third max pooling layer.
The fourth branch comprises the 9D integration layer, the 1D integration layer, a fourth max pooling layer, the flattening layer and the fully connected layer.
With continued reference to fig. 5, Input is used to input data, whose dimensions may be (n × 178 × 178 × 3), where n represents the number of input pictures and 178 × 178 represents the size of each input picture.
Input is followed by the first 3D integration layer, which applies a 3 × 3 convolution, batch normalization and a ReLU activation function. The convolutional layer extracts features, normalization after convolution helps the features converge, and the activation function adds a non-linear factor that enriches the dimensionality of the features.
The first 3D integration layer is connected with the first 5D integration layer and the first splicing layer respectively.
The first 5D integration layer applies a 5 × 5 convolution, batch normalization and a ReLU activation function to the output of the first 3D integration layer.
The first 5D integration layer is also connected to the first splicing layer.
The first splicing layer tensor-splices the output of the first 5D integration layer with the output of the first 3D integration layer. Tensor splicing passes the convolution features of an earlier layer forward, so network accuracy is improved by a more comprehensive set of features; it also allows fewer convolution kernels to be used, which reduces network redundancy and increases running speed without reducing precision.
The first splicing layer is connected to the first max pooling layer, whose stride is 2. The max pooling layer reduces the deviation of the estimated mean caused by convolutional layer parameter errors and retains more texture information.
Convolution kernels of different sizes provide different receptive fields, which helps improve accuracy, but oversized kernels make the computation much more complex, so kernel sizes of 3 and 5 are a suitable choice.
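Continuing the sketch above, the first branch could be assembled as follows (integration_layer as defined earlier; filter counts remain illustrative): a 3 × 3 integration layer, a 5 × 5 integration layer, tensor splicing of both outputs, then 2 × 2 max pooling with stride 2.

    from tensorflow.keras import layers

    def first_branch(inputs):
        c3 = integration_layer(inputs, 3, 16)                         # first 3D integration layer
        c5 = integration_layer(c3, 5, 16)                             # first 5D integration layer
        merged = layers.Concatenate()([c3, c5])                       # first splicing layer (Concat)
        return layers.MaxPooling2D(pool_size=2, strides=2)(merged)    # first max pooling layer, stride 2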
The first max pooling layer is connected to the second branch.
After the first max pooling layer, two second 3D integration layers are connected in series; [Conv(3 × 3)/BN/Relu] × 2 denotes this series of two second 3D integration layers.
The output of the first max pooling layer is fed both into the series of two second 3D integration layers and into the second splicing layer.
The series of two second 3D integration layers is followed, in sequence, by the second 5D integration layer and the second splicing layer.
The second splicing layer tensor-splices the output of the second 5D integration layer with the output of the first max pooling layer.
The second splicing layer is connected to the second max pooling layer.
The second max pooling layer is connected to the third branch.
The second max pooling layer is connected to a series of four third 3D integration layers and to the third splicing layer.
The third splicing layer tensor-splices the output of the series of four third 3D integration layers with the output of the second max pooling layer.
The third splicing layer is connected to the third max pooling layer, which performs 2 × 2 max pooling.
The third max pooling layer is connected to the 9D integration layer.
Because the earlier part of the network stacks small convolution kernels and uses relatively few of them, its complexity is low; the 9 × 9 convolution kernel then enlarges the receptive field, strengthens the network's ability to characterize features, and further improves accuracy.
The 9D integration layer is connected, in sequence, to the 1D integration layer and the fourth max pooling layer, which perform dimensionality reduction.
The fourth max pooling layer is connected, in sequence, to the flattening layer and the fully connected layer.
The flattening layer computes the mean of the tensor along the specified axis and removes dimensions whose size is 1. Finally, the fully connected layer outputs the prediction result (Output).
BN denotes batch normalization, Conv denotes a convolutional layer, ReLu denotes a ReLU activation layer, MaxPool denotes a max pooling layer, Reduce_mean denotes computing the mean of a tensor along a specified axis (a certain dimension of the tensor), Squeeze denotes deleting tensor dimensions whose size is 1, FullConnect denotes a fully connected layer, and Concat denotes tensor splicing.
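Putting the pieces together, the following sketch shows the tail of the network and a model skeleton under the same assumptions (the second and third branches are elided for brevity; GlobalAveragePooling2D stands in for Reduce_mean/Squeeze, and filter counts are illustrative).

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_model(input_shape=(178, 178, 3), num_classes=2):
        inputs = tf.keras.Input(shape=input_shape)                        # Input: (n, 178, 178, 3)
        x = first_branch(inputs)                                          # first branch (sketched above)
        # ... the second and third branches would follow the same splice-and-pool pattern ...
        x = integration_layer(x, 9, 64)                                   # 9D integration layer
        x = integration_layer(x, 1, 64, relu=False)                       # 1D integration layer
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)                # fourth max pooling layer
        x = layers.GlobalAveragePooling2D()(x)                            # Reduce_mean + Squeeze (flattening layer)
        outputs = layers.Dense(num_classes, activation="softmax")(x)      # FullConnect -> Output
        return tf.keras.Model(inputs, outputs)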
The neural network model in the embodiment of the application has a simple, easy-to-understand structure well suited to the dust screen flaw identification task. Batch normalization layers are applied several times in the network, which greatly increases training speed and improves the gradient flow of the whole network. In addition, the features of deeper convolutional layers are spliced with the features of earlier convolutional layers before being fed into the next network layer, so higher accuracy can be reached with fewer convolution kernels, which means fewer training parameters and a faster running speed. The final network structure was determined through repeated training and debugging. The recognition accuracy during training can reach 99%, and the test recognition accuracy is about 97%. On an Intel Core i7-7800X CPU @ 3.5 GHz × 12, each picture is processed in about 30 ms, which fully meets real-time requirements while keeping recognition accuracy high. The model can also be accelerated with OpenVINO to meet the real-time detection requirements of a factory.
Optionally, the method for identifying dust screen flaws in the embodiment of the application can also be used to train models for tasks such as glue overflow port defect classification, sound hole defect classification and shell defect classification.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a device for identifying defects in a dust screen according to an embodiment of the present disclosure, where the device is optionally applied to an electronic apparatus as described above.
The dust screen flaw recognition device includes: a preprocessing unit 201 and a recognition unit 202.
The preprocessing unit 201 is configured to preprocess the target picture to generate a simulated picture, where the preprocessing includes random rotation processing, random mirror processing, contrast processing, and brightness processing, and the simulated picture is an image of the target picture after rotation or mirror image or after contrast adjustment or brightness adjustment. Specifically, the preprocessing unit 201 may execute the above S106.
The recognition unit 202 is used for taking the target picture and the simulation picture as the input of the neural network model; and the device is further used for determining that the dust screen in the target picture has a defect when the output of the neural network model is that the number of the first results is greater than a set threshold value, wherein the first results are the recognition results of the defect corresponding to the target picture or the simulation picture. Specifically, the recognition unit 202 may perform the above-described S107 to S110.
Further, the dust screen flaw identification device further comprises: a training unit 203.
The preprocessing unit 201 is further configured to obtain a sample picture and a corresponding supervised picture, where the sample picture and the supervised picture both have tags, and the tags represent whether the dust screen in the corresponding picture has a defect. Specifically, the preprocessing unit 201 may perform the above S101.
The training unit 203 is configured to use the sample picture as an input of a neural network model to train the neural network model until the neural network model converges; taking the supervision picture as the input of the converged neural network model to carry out supervision test on the neural network model; when the accuracy of the test output result is smaller than the test threshold, repeatedly using the sample picture as the input of the neural network model to train the neural network model until the neural network model is converged; and when the accuracy of the output result of the test is greater than or equal to the test threshold, ending the training. Specifically, the training unit 203 may perform the above S102 to S105.
Further, the training unit 203 is specifically configured to use the sample picture as the input of the neural network model and train the neural network model with a gradient descent optimization algorithm; to calculate the loss function of the neural network model, where the loss function represents the error level of the neural network model's recognition results; and to consider the neural network model converged when the loss function is smaller than a preset loss threshold. Specifically, the training unit 203 may perform the above S102-1 to S102-3.
It should be noted that the device for identifying a defect in a dust screen according to the present embodiment may execute the method shown in the above method flow embodiments to achieve the corresponding technical effects. For the sake of brevity, the corresponding contents in the above embodiments may be referred to where not mentioned in this embodiment.
An embodiment of the invention also provides a storage medium storing computer instructions and programs which, when read and run, perform the dust screen flaw identification method of the above embodiments. The storage medium may include memory, flash memory, registers, a combination thereof, or the like.
The following provides an electronic device, which may be a computer device, and as shown in fig. 1, the electronic device may implement the above-mentioned dust screen flaw identification method; specifically, the electronic device includes: processor 10, memory 11, bus 12. The processor 10 may be a CPU. The memory 11 is used for storing one or more programs, and when the one or more programs are executed by the processor 10, the dust screen flaw identification method of the above embodiment is performed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A method for identifying dust screen flaws, the method comprising:
preprocessing a target picture to generate a simulated picture, wherein the preprocessing comprises random rotation processing, random mirror image processing, contrast processing and brightness processing, and the simulated picture is an image of the target picture after rotation or mirror image processing or after contrast adjustment or brightness adjustment;
taking the target picture and the simulation picture as the input of a neural network model;
and when the output of the neural network model indicates that the number of first results is greater than a set threshold value, determining that the dust screen in the target picture has defects, wherein the first results are the recognition results of the defects corresponding to the target picture or the simulation picture.
2. The dust screen imperfection identification method of claim 1, further comprising:
acquiring a sample picture and a corresponding supervision picture, wherein the sample picture and the supervision picture are provided with labels, and the labels represent whether a dust screen in the corresponding picture has flaws or not;
taking the sample picture as the input of a neural network model to train the neural network model until the neural network model converges;
taking the supervision picture as the input of the converged neural network model to carry out supervision test on the neural network model;
when the accuracy of the test output result is smaller than the test threshold value, repeatedly using the sample picture as the input of the neural network model to train the neural network model until the neural network model is converged;
and when the accuracy of the output result of the test is greater than or equal to the test threshold, ending the training.
3. The method for identifying dust screen flaws of claim 2, wherein the step of training the neural network model by using the sample picture as an input of the neural network model until the neural network model converges comprises:
taking the sample picture as the input of the neural network model, and training the neural network model by adopting a gradient descent optimization algorithm;
calculating a loss function of the neural network model, wherein the loss function represents the error level of the recognition result of the neural network model;
and when the loss function is smaller than a preset loss threshold value, the neural network model is considered to be converged.
4. The dust screen flaw identification method of claim 1, wherein the neural network model comprises: a 3D integration layer, a 5D integration layer, a splicing layer, a maximum pooling layer, a 9D integration layer, a 1D integration layer, a flattening layer and a full connection layer;
the 3D integration layer comprises a 3 x 3 convolution layer, a batch normalization layer and an excitation layer;
the 5D integration layer comprises a 5 x 5 convolution layer, a batch normalization layer and an excitation layer;
the 9D integration layer comprises a 9 x 9 convolution layer, a batch normalization layer and an excitation layer;
the 1D integration layer comprises 1 × 1 convolution layers and batch normalization layers.
5. The dust screen flaw identification method of claim 4, wherein the neural network model includes a first branch, a second branch, a third branch, and a fourth branch, the first branch, the second branch, the third branch, and the fourth branch being connected in sequence;
the first branch comprises a first 3D integration layer, a first 5D integration layer, a first splicing layer and a first maximum pooling layer;
the second branch comprises two second 3D integrated layers, a second 5D integrated layer, a second splicing layer and a second maximum pooling layer;
the third branch comprises four third 3D integrated layers, a third splicing layer and a third maximum pooling layer;
the fourth branch comprises the 9D integration layer, the 1D integration layer, a fourth largest pooling layer, the flattening layer and the full connection layer.
6. The dust screen flaw identification method of claim 5, wherein the first branch is connected to the second branch;
the first 3D integration layer is connected with the first 5D integration layer and the first splicing layer respectively;
the first 5D integration layer is used for performing convolution layer of 5 x 5 convolution kernel, batch normalization and ReLu activation function processing on the output of the first 3D integration layer;
the first 5D integration layer is connected with the first splicing layer;
the first splicing layer is used for tensor splicing the output of the first 5D integration layer and the output of the first 3D integration layer;
the first splicing layer is connected with the first maximum pooling layer;
the first plurality of maximum pooling layers is connected to the second branch.
7. The dust screen flaw identification method of claim 6, wherein the second branch is connected to the third branch;
the two second 3D integrated layers are connected in series in structure, and the output of the first largest pooling layer is respectively connected with the two second 3D integrated layers in series in structure and the second splicing layer in connection;
the two second 3D integrated layers are connected in series in structure, and the second 5D integrated layers and the second splicing layer are connected in sequence;
the second batch of splicing layers are used for carrying out tensor splicing on the output of the second batch of 5D integration layers and the output of the first batch of maximum pooling layers;
the second batch of spliced layers are connected with the second batch of maximum pooling layers;
the second largest pooling layer is connected to the third branch.
8. A dust screen blemish identification device, the device comprising:
the system comprises a preprocessing unit, a processing unit and a processing unit, wherein the preprocessing unit is used for preprocessing a target picture to generate a simulation picture, the preprocessing comprises random rotation processing, random mirror image processing, contrast processing and brightness processing, and the simulation picture is an image obtained by rotating or mirroring the target picture or adjusting the contrast or adjusting the brightness;
the recognition unit is used for taking the target picture and the simulation picture as the input of a neural network model; and the device is further configured to determine that the dust screen in the target picture has a defect when the output of the neural network model is that the number of first results is greater than a predetermined threshold, where the first results are recognition results of the defect corresponding to the target picture or the simulated picture.
9. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
10. An electronic device, comprising: a processor and memory for storing one or more programs; the one or more programs, when executed by the processor, implement the method of any of claims 1-7.
CN202010769188.1A 2020-08-03 2020-08-03 Method and device for identifying defects of dust screen and electronic equipment Pending CN111862089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010769188.1A CN111862089A (en) 2020-08-03 2020-08-03 Method and device for identifying defects of dust screen and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010769188.1A CN111862089A (en) 2020-08-03 2020-08-03 Method and device for identifying defects of dust screen and electronic equipment

Publications (1)

Publication Number Publication Date
CN111862089A true CN111862089A (en) 2020-10-30

Family

ID=72953117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010769188.1A Pending CN111862089A (en) 2020-08-03 2020-08-03 Method and device for identifying defects of dust screen and electronic equipment

Country Status (1)

Country Link
CN (1) CN111862089A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016145887A (en) * 2015-02-06 2016-08-12 株式会社ニューフレアテクノロジー Inspection device and method
CN107133943A (en) * 2017-04-26 2017-09-05 贵州电网有限责任公司输电运行检修分公司 A kind of visible detection method of stockbridge damper defects detection
CN107220640A (en) * 2017-05-23 2017-09-29 广州绿怡信息科技有限公司 Character identifying method, device, computer equipment and computer-readable recording medium
CN110378305A (en) * 2019-07-24 2019-10-25 中南民族大学 Tealeaves disease recognition method, equipment, storage medium and device
CN111093140A (en) * 2019-12-11 2020-05-01 上海闻泰信息技术有限公司 Method, device, equipment and storage medium for detecting defects of microphone and earphone dust screen
CN111161243A (en) * 2019-12-30 2020-05-15 华南理工大学 Industrial product surface defect detection method based on sample enhancement

Similar Documents

Publication Publication Date Title
WO2019051941A1 (en) Method, apparatus and device for identifying vehicle type, and computer-readable storage medium
CN110619618A (en) Surface defect detection method and device and electronic equipment
JP6584250B2 (en) Image classification method, classifier configuration method, and image classification apparatus
CN114155244B (en) Defect detection method, device, equipment and storage medium
CN111680750B (en) Image recognition method, device and equipment
CN111353580B (en) Training method of target detection network, electronic equipment and storage medium
CN113421192B (en) Training method of object statistical model, and statistical method and device of target object
CN111401343B (en) Method for identifying attributes of people in image and training method and device for identification model
CN111598084B (en) Defect segmentation network training method, device, equipment and readable storage medium
CN114841974B (en) Nondestructive testing method, nondestructive testing system, nondestructive testing electronic equipment and nondestructive testing medium for internal structure of fruit
CN110490058B (en) Training method, device and system of pedestrian detection model and computer readable medium
CN117710756B (en) Target detection and model training method, device, equipment and medium
CN113592859B (en) Deep learning-based classification method for defects of display panel
CN115565020A (en) Tooth surface damage identification method and device based on improved neural network
CN116109627B (en) Defect detection method, device and medium based on migration learning and small sample learning
CN111862089A (en) Method and device for identifying defects of dust screen and electronic equipment
CN113052244B (en) Classification model training method and classification model training device
CN115620083A (en) Model training method, face image quality evaluation method, device and medium
CN114443878A (en) Image classification method, device, equipment and storage medium
CN113076169A (en) User interface test result classification method and device based on convolutional neural network
CN112183523A (en) Text detection method and device
CN111832629A (en) FPGA-based fast-RCNN target detection method
CN111986274A (en) Watermelon maturity state detection method, equipment and medium
CN111242449A (en) Enterprise information loss prediction method
CN117173538A (en) Model training method and device, nonvolatile storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination