CN113838015A - Electric appliance product appearance defect detection method based on network cooperation - Google Patents

Electric appliance product appearance defect detection method based on network cooperation

Info

Publication number
CN113838015A
CN113838015A (application CN202111079409.3A)
Authority
CN
China
Prior art keywords
appearance
defect
electric appliance
deep learning
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111079409.3A
Other languages
Chinese (zh)
Other versions
CN113838015B (en)
Inventor
聂佳
杜鹏飞
刘传忠
高文祥
薛吉
杨剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Electrical Apparatus Research Institute Group Co Ltd
Original Assignee
Shanghai Electrical Apparatus Research Institute Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Electrical Apparatus Research Institute Group Co Ltd filed Critical Shanghai Electrical Apparatus Research Institute Group Co Ltd
Priority to CN202111079409.3A priority Critical patent/CN113838015B/en
Publication of CN113838015A publication Critical patent/CN113838015A/en
Application granted granted Critical
Publication of CN113838015B publication Critical patent/CN113838015B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an electric appliance product appearance defect detection method based on network cooperation. In the invention, human-machine cooperation technology is used to collect detection samples during appearance detection of electric appliance products, and a deep learning mechanism is superimposed to dynamically optimize the appliance appearance detection model, finally forming an expert library of appliance appearance detection models. At present, traditional appearance detection systems in the electric appliance industry place high demands on operators, who can only operate them after professional training.

Description

Electric appliance product appearance defect detection method based on network cooperation
Technical Field
The invention relates to a method for detecting appearance defects of an electric appliance product based on network cooperation, and belongs to the technical field of artificial intelligence and industrial control automation.
Background
With the in-depth development and promotion of intelligent manufacturing, the industrial internet and artificial intelligence in China, the application of various kinds of electrical equipment is increasing day by day, and the accompanying problems of product quality and safe use cannot be ignored. At present, the intelligence of inspection means and tools in the production and manufacturing of user-side electrical equipment in China is still at a preliminary stage and cannot meet the needs of the industry's rapid development.
At present, product appearance defect detection systems based on deep learning and their devices generally comprise an image display device, a server, a controller and the like. In most detection systems with a server, the server side includes a deep learning module that performs iterative computation on the acquired image data to obtain an updated detection model. However, the defect types of such detection models are mostly fixed and cannot be expanded with new types; meanwhile, for the model in use and the product detection results it produces, the user on the edge side cannot inform the machine learning module of new defect types and product characteristics, so a new detection mechanism and an updated product detection expert library cannot be formed for the new situation.
The invention patent application with application number 202010107000.7 discloses an expert-database-based intelligent control system for an assembly production line, which comprises an expert database, an upper computer, a robot, a screw machine, a detection unit, a driving unit, a visual identification unit, a nail feeding machine, an intelligent assembly unit and the like. That application does not iteratively optimize the detection model through deep machine learning; meanwhile, the generated expert library cannot be updated under a distributed edge-side/remote-end system, which is not conducive to forming a new detection mechanism.
The invention patent application with application number 202010888429.4 discloses a deep learning device and a deep learning application method comprising a model base, an operating device and an execution device. The user selects and operates page visualization components according to the application requirements and calls the corresponding deep learning model to process the input data, so that the required deep learning task can be performed on the input data. The method disclosed in that application does not involve iterative training of the model, which limits the breadth of deep learning and the accuracy of the output model.
The invention patent application with application number 202010829978.4 discloses an intelligent steel belt visual inspection device, which mainly uses an already trained and deployed model for inspection, has no steps based on network cooperation and machine learning, and belongs to the traditional visual identification inspection solutions.
Disclosure of Invention
The purpose of the invention is to introduce a human-machine cooperation mechanism into a deep-learning-based product appearance defect detection method.
In order to achieve the above object, the technical solution of the present invention is to provide a method for detecting appearance defects of an electrical product based on network cooperation, which is characterized by comprising the following steps:
step 1, establishing an electrical appliance field appearance detection expert model base in a far-end training server, training a machine deep learning algorithm by the far-end training server by utilizing training data sets corresponding to different electrical appliance models uploaded by edge side human-computer interaction equipment to form a plurality of appearance detection deep learning models corresponding to different electrical appliance models, storing all the appearance detection deep learning models in the electrical appliance field appearance detection expert model base, and establishing a mapping relation between each appearance detection deep learning model and the corresponding electrical appliance model;
when the edge side production equipment needs to produce an electric appliance product with a new electric appliance equipment model, the remote training server trains to obtain a new appearance detection deep learning model and stores the new appearance detection deep learning model in the electric appliance field appearance detection expert model base, and then the remote training server informs the edge side human-computer interaction equipment of the model information of the newly added appearance detection deep learning model, so that a user can select the newly added appearance detection deep learning model in the electric appliance field appearance detection expert model base through the edge side human-computer interaction equipment based on the electric appliance equipment model;
when the appearance detection deep learning model is used for detecting the appearance defects of the electric appliance products, firstly, the received image data of any size is standardized and unified into the input size of a CNN convolutional neural network, so that standardized image data is obtained; the standardized image data is input into the CNN convolutional neural network and convolved by convolution kernels of different dimensions and types to respectively generate a small target feature map, a medium target feature map and a large target feature map, wherein the sampling receptive field of the small target feature map is smaller than that of the medium target feature map, and the sampling receptive field of the medium target feature map is smaller than that of the large target feature map; the appearance detection deep learning model detects small target defects based on the small target feature map, medium target defects based on the medium target feature map and large target defects based on the large target feature map; during detection, a frame regression algorithm and a multi-classification algorithm are used to perform frame regression prediction and classification of the target detection objects under the frame on the vector groups corresponding to the small, medium and large target feature maps; if defects exist, small target defect frames and/or medium target defect frames and/or large target defect frames marking the positions of the small, medium and/or large target defects, together with the defect categories, are obtained, and output parameters x, y, w, h and a confidence coefficient are obtained after detection, wherein x and y represent the X-axis offset and the Y-axis offset of the center position of the small, medium or large target defect frame relative to the upper left corner position of the current defect, w and h represent the ratio of the width and the height of the small, medium or large target defect frame to the width and the height of the whole picture respectively, and the value of the confidence coefficient is not more than 1;
step 2, generating model calling information related to the electrical equipment model by utilizing edge side human-computer interaction equipment according to the electrical equipment model of the electrical equipment actually produced by the edge side production equipment by a user; after the human-computer interaction equipment uploads the model calling information to a remote training server, the remote training server calls an appearance detection deep learning model corresponding to the current electric appliance model, which is stored in an electric appliance field appearance detection expert model base, according to the model calling information;
step 3, after the edge side production equipment obtains a real-time appearance picture of the current electric appliance product through the image acquisition equipment, uploading the real-time appearance picture to a far-end training server, inputting the received real-time appearance picture into a called appearance detection deep learning model by the far-end training server, judging whether the current electric appliance product has defects by using the real-time appearance picture through the appearance detection deep learning model, if the current electric appliance product has the defects, outputting a predicted small target defect frame and/or a medium target defect frame and/or a large target defect frame and defect types, and outputting corresponding output parameters x, y, w, h, confidence and defect types;
step 4, the edge side human-computer interaction equipment frames a corresponding defect area on the real-time appearance picture by using the received output parameters x, y, w and h, and displays a corresponding confidence coefficient and a corresponding defect type;
step 5, if the confidence coefficient displayed in the step 4 is lower than a preset threshold value, or a new defect type is judged to appear according to the real-time appearance picture, drawing a defect frame on the real-time appearance picture by utilizing edge side human-computer interaction equipment, inputting the corresponding defect type, obtaining corresponding defect frame parameters x, y, w and h by utilizing the drawn defect frame by utilizing the edge side human-computer interaction equipment, and meanwhile, setting the confidence coefficient to be 1 by the edge side human-computer interaction equipment; the edge side human-computer interaction equipment stores the previously obtained parameters x, y, w and h of the defect frame, the corresponding defect type, the confidence value and the real-time appearance picture as new training data;
if the confidence degree displayed in the step 4 is not lower than the preset threshold value and no new defect type appears, the edge side human-computer interaction equipment stores the defect frame parameters x, y, w and h obtained in the step 4, the corresponding defect type, the confidence degree value and the real-time appearance picture as a new piece of training data;
and 6, uploading all new training data collected by each period step length to a training server by the edge side human-computer interaction equipment according to a set period, forming a new training data set by the training server by using all new training data collected by each period step length, retraining the appearance detection deep learning model used in the steps 2 to 5 based on the training data set to obtain an updated and optimized appearance detection deep learning model, and replacing the existing appearance detection deep learning model and storing the updated and optimized appearance detection deep learning model into an appearance detection expert model base in the field of electric appliances.
Preferably, in step 1, the training data set of the current appliance model is obtained by the following method:
the edge side human-computer interaction equipment obtains an appearance picture of an electric appliance product of the current electric appliance type through image acquisition equipment, then judges whether the appearance picture of the electric appliance product has appearance defects or not, and carries out manual marking on the appearance picture of the electric appliance product with the appearance defects; when manual marking is carried out, marking is carried out on the sampling picture based on the standard of the electric appliance industry field for appearance detection, and a defect frame and a defect category are marked on the appearance picture of the electric appliance product; the method comprises the steps that edge side human-computer interaction equipment uploads an electrical product appearance picture which is subjected to manual labeling and an electrical product appearance picture which does not need manual labeling to a far-end training server, the far-end training server constructs a training data set based on the electrical product appearance picture received within a certain time period, and a machine deep learning algorithm is trained by using the training data set, so that an appearance detection deep learning model corresponding to the current electrical equipment model is obtained.
According to the invention, human-computer cooperation and deep learning are combined in the visual detection technology, so that a product detection model expert database is jointly formed and the detection quality is improved. The invention collects field data through visual detection technology and human-computer interaction technology, and realizes deep learning of the traditional appliance appearance detection model on the server side through cloud platform technology. Finally, the invention stores the models optimized on the basis of a large amount of edge-side appliance appearance sample data into a dedicated database to form an expert model base for appliance appearance detection, and in practical application a highly intelligent detection standard for the electric appliance industry is formed through the models stored in the expert base.
In the invention, human-machine cooperation technology is used to collect detection samples during appearance detection of electric appliance products, and a deep learning mechanism is superimposed to dynamically optimize the appliance appearance detection model, finally forming an expert library of appliance appearance detection models. At present, traditional appearance detection systems in the electric appliance industry place high demands on operators, who can only operate them after professional training.
Drawings
FIG. 1 is a schematic diagram of a product appearance defect detection method based on network coordination;
FIG. 2 is a schematic diagram of application of a product appearance defect detection method based on network coordination;
FIG. 3 is a flowchart of storing the trained model into the model library after training of the deep machine learning algorithm is completed;
FIG. 4 is a flowchart of a process of performing human-computer cooperation detection on a human-computer interaction interface;
FIG. 5 is a flow chart of controlling the deep machine learning service through the visual WEB interface.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
The invention provides a method for detecting appearance defects of an electric appliance product based on network cooperation, which comprises the following steps:
step 1, an electrical appliance field appearance detection expert model base is established in a far-end training server, the far-end training server trains a machine deep learning algorithm by utilizing training data sets corresponding to different electrical appliance models uploaded by edge side human-computer interaction equipment to form a plurality of appearance detection deep learning models corresponding to different electrical appliance models, all the appearance detection deep learning models are stored in the electrical appliance field appearance detection expert model base, and a mapping relation is established between each appearance detection deep learning model and the corresponding electrical appliance model.
In this step, the training data set of the current electrical equipment model is obtained by the following method:
the edge side human-computer interaction equipment obtains the appearance picture of the electric appliance product of the current electric appliance equipment model through the image acquisition equipment, then judges whether the appearance picture of the electric appliance product has appearance defects or not, and carries out manual marking on the appearance picture of the electric appliance product with the appearance defects. When manual marking is carried out, the sampling picture is marked based on the standard of appearance detection in the field of the electric appliance industry, and a defect frame and a defect category are marked on the appearance picture of the electric appliance product. The method comprises the steps that edge side human-computer interaction equipment uploads an electrical product appearance picture which is subjected to manual labeling and an electrical product appearance picture which does not need manual labeling to a far-end training server, the far-end training server constructs a training data set based on the electrical product appearance picture received within a certain time period, and a machine deep learning algorithm is trained by using the training data set, so that an appearance detection deep learning model corresponding to the current electrical equipment model is obtained.
When the edge side production equipment needs to produce an electrical product with a new electrical equipment model, the far-end training server trains based on the method to obtain a new appearance detection deep learning model and stores the new appearance detection deep learning model in the electrical equipment field appearance detection expert model base, and then the far-end training server informs the edge side human-computer interaction equipment of the model information of the newly added appearance detection deep learning model, so that a user can select the newly added appearance detection deep learning model in the electrical equipment field appearance detection expert model base through the edge side human-computer interaction equipment based on the electrical equipment model.
In this embodiment, when the appearance detection deep learning model detects the appearance defects of an electrical product, a normalization operation is first performed on the received image data of any size: the image is unified into the input size of the CNN convolutional neural network by scaling and padding (with pixel value 0), so as to obtain the normalized image data. The normalized image data is then input into the CNN convolutional neural network and convolved by convolution kernels of different dimensions and types to respectively generate a small target feature map with a 32-fold down-sampling receptive field of 13×13 pixels, a medium target feature map with a 16-fold down-sampling receptive field of 26×26 pixels, and a large target feature map with an 8-fold down-sampling receptive field of 52×52 pixels. Targets of different sizes are detected on feature maps of these three scales, for example: small target defects such as scratches, flaws and paint peeling are detected based on the small target feature map; medium target defects, such as nameplate defects and defects of knobs on the appliance surface, are detected based on the medium target feature map; and large target defects, such as abnormal wiring terminals, are detected based on the large target feature map. Finally, the appearance detection deep learning model uses a frame regression algorithm (BBoxReg) and a multi-classification algorithm (SVMs) to perform frame regression prediction of the target detection objects and classification of the targets under the frame on the vector groups corresponding to the small, medium and large target feature maps at the three scales. If defects are judged to exist, small target defect frames and/or medium target defect frames and/or large target defect frames marking the positions of the small, medium and/or large target defects, together with the defect categories, are obtained, and output parameters x, y, w, h and a confidence coefficient are obtained, wherein x and y represent the X-axis offset and the Y-axis offset of the center position of the small, medium or large target defect frame relative to the upper left corner position of the current defect, w and h represent the ratio of the width and the height of the small, medium or large target defect frame to the width and the height of the whole picture respectively, and the value of the confidence coefficient is not more than 1. Because the appearance detection deep learning model normalizes the input image, the identified parameters x, y, w and h are output as proportions of the image pixels based on the normalized center-point coordinates of the defect frame, so a restoration operation is required when the defect frame is displayed on the terminal device.
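The normalization and restoration steps described above can be illustrated with a short sketch. The following Python code is only a minimal illustration under assumed conditions (a 416×416 network input, nearest-neighbour resizing, and box restoration that ignores the letterbox padding offset); the function names letterbox and restore_box are illustrative and not part of the patented system.

```python
import numpy as np

def letterbox(img: np.ndarray, target: int = 416):
    """Scale an H x W x C image to fit inside target x target and pad the
    remainder with zeros, as in the normalization step described above."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resize, kept dependency-free for the sketch
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.zeros((target, target, img.shape[2]), dtype=img.dtype)
    top, left = (target - new_h) // 2, (target - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas, scale, (left, top)

def restore_box(x, y, w, h, img_w, img_h):
    """Map a normalized (center x, center y, width, height) prediction back to
    pixel corner coordinates on the original picture for display on the HMI."""
    bw, bh = w * img_w, h * img_h
    cx, cy = x * img_w, y * img_h
    return int(cx - bw / 2), int(cy - bh / 2), int(cx + bw / 2), int(cy + bh / 2)
```

For example, a prediction of (x, y, w, h) = (0.5, 0.5, 0.1, 0.2) on a 1280×960 picture would be restored by restore_box(0.5, 0.5, 0.1, 0.2, 1280, 960) to the pixel box (576, 384, 704, 576).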
And 2, generating model calling information related to the electrical equipment model by utilizing the edge side human-computer interaction equipment according to the electrical equipment model of the electrical product actually produced by the edge side production equipment by a user. After the human-computer interaction equipment uploads the model calling information to a remote training server, the remote training server calls an appearance detection deep learning model corresponding to the current electric appliance model, which is stored in an electric appliance field appearance detection expert model base, according to the model calling information.
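As a sketch of the model-call mechanism in step 2, the mapping between an appliance model and its stored appearance detection model could be kept in a simple registry such as the one below. The storage path, file naming and function names are assumptions for illustration, not details given in the patent.

```python
from pathlib import Path

# Hypothetical index of the expert model base: appliance model -> stored weights file.
MODEL_BASE = Path("/srv/expert_model_base")
_registry: dict[str, Path] = {}

def register_model(appliance_model: str, weights_file: Path) -> None:
    """Store a newly trained appearance-detection model and map it to its appliance model."""
    target = MODEL_BASE / f"{appliance_model}.weights"
    target.write_bytes(weights_file.read_bytes())
    _registry[appliance_model] = target

def call_model(appliance_model: str) -> Path:
    """Resolve the model-call information sent by the HMI device to a stored model."""
    try:
        return _registry[appliance_model]
    except KeyError:
        raise KeyError(f"no appearance-detection model registered for {appliance_model!r}")
```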
And 3, after the edge side production equipment obtains a real-time appearance picture of the current electric appliance product through the image acquisition equipment, uploading the real-time appearance picture to a far-end training server, inputting the received real-time appearance picture into the called appearance detection deep learning model by the far-end training server, judging whether the current electric appliance product has defects or not by using the appearance detection deep learning model, if the current electric appliance product has the defects, outputting a predicted small target defect frame and/or a medium target defect frame and/or a large target defect frame and defect types, and outputting corresponding output parameters x, y, w, h, confidence and defect types.
And 4, framing the corresponding defect area on the real-time appearance picture by the edge side human-computer interaction equipment by using the received output parameters x, y, w and h, and displaying the corresponding confidence coefficient and the defect type.
And 5, if the confidence coefficient displayed in the step 4 is lower than a preset threshold value or a new defect type is judged to appear according to the real-time appearance picture, drawing a defect frame on the real-time appearance picture by utilizing the edge side human-computer interaction equipment and inputting the corresponding defect type, obtaining corresponding defect frame parameters x, y, w and h by utilizing the drawn defect frame by utilizing the edge side human-computer interaction equipment, and meanwhile, setting the confidence coefficient to be 1 by utilizing the edge side human-computer interaction equipment. And the edge side human-computer interaction equipment stores the previously obtained parameters x, y, w and h of the defect frame, the corresponding defect type, the confidence value and the real-time appearance picture as new training data.
And if the confidence degree displayed in the step 4 is not lower than the preset threshold value and no new defect type appears, the edge side human-computer interaction equipment saves the defect frame parameters x, y, w and h obtained in the step 4, the corresponding defect type, the confidence degree value and the real-time appearance picture as a new piece of training data.
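Steps 4 and 5 amount to a human-in-the-loop labelling rule: trust the model's defect frame when its confidence is high enough and no new defect type is seen, otherwise take the operator's hand-drawn frame with the confidence forced to 1, and buffer either result as training data. The sketch below assumes a confidence threshold of 0.6, a JSON-lines buffer file and hypothetical prediction/operator interfaces; none of these specifics come from the patent.

```python
import json
from dataclasses import dataclass, asdict

CONF_THRESHOLD = 0.6                          # assumed value; the patent only speaks of a preset threshold
BUFFER_FILE = "pending_training_data.jsonl"   # hypothetical local buffer on the HMI device

@dataclass
class TrainingRecord:
    image_path: str
    x: float
    y: float
    w: float
    h: float
    defect_type: str
    confidence: float

def collect_feedback(image_path, prediction, operator):
    """Steps 4-5: keep the model's defect frame when it is trusted, otherwise
    take the operator's hand-drawn frame with the confidence forced to 1."""
    if prediction.confidence < CONF_THRESHOLD or operator.sees_new_defect_type():
        # the operator draws a frame and enters the defect type on the HMI screen
        x, y, w, h, defect_type = operator.draw_defect_frame()
        record = TrainingRecord(image_path, x, y, w, h, defect_type, confidence=1.0)
    else:
        p = prediction
        record = TrainingRecord(image_path, p.x, p.y, p.w, p.h, p.defect_type, p.confidence)
    with open(BUFFER_FILE, "a") as f:          # buffered until the periodic upload of step 6
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```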
And 6, uploading all new training data collected by each period step length to a training server by the edge side human-computer interaction equipment according to a set period, forming a new training data set by the training server by using all new training data collected by each period step length, retraining the appearance detection deep learning model used in the steps 2 to 5 based on the training data set to obtain an updated and optimized appearance detection deep learning model, and replacing the existing appearance detection deep learning model and storing the updated and optimized appearance detection deep learning model into an appearance detection expert model base in the field of electric appliances.
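On the edge side, step 6 can be pictured as a periodic upload loop like the following sketch. The endpoint URL, the one-day period and the buffer file name are assumptions; the retraining itself happens on the remote training server after it receives the batch.

```python
import json
import time
import requests  # assumed to be available on the HMI device

UPLOAD_URL = "http://training-server:5000/api/training-data"  # hypothetical endpoint
PERIOD_S = 24 * 3600                                          # assumed one-day period step

def periodic_upload(buffer_path="pending_training_data.jsonl"):
    """Step 6 on the edge side: once per period, push every record collected
    during that period to the remote training server, then clear the buffer."""
    while True:
        time.sleep(PERIOD_S)
        try:
            with open(buffer_path) as f:
                records = [json.loads(line) for line in f if line.strip()]
        except FileNotFoundError:
            continue
        if not records:
            continue
        resp = requests.post(UPLOAD_URL, json={"records": records}, timeout=30)
        if resp.ok:
            open(buffer_path, "w").close()  # server accepted the batch; start a new period
```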
In this embodiment, the built-in hardware standard of the human-computer interaction device needs to carry an Intel Celeron J1800 processor as the main chip, a 2 GB DDR3 memory, an SSD hard disk, an operating system of Ubuntu 16.04 or later, and a display screen with a touch function. As shown in fig. 2, the human-computer interaction device needs to have a configuration interface for completing the necessary configuration options, and a display interface for displaying the detected images, uploading data sets and updating detection models.
The built-in hardware standard of the training server needs to carry an Intel Xeon 5128 processor as the main chip, a built-in 32 GB DDR3 memory and an SSD hard disk; a GPU unit needs to be configured, and an operating system of Ubuntu 16.04 or later needs to be pre-installed. In order to realize model optimization, the operating system must be pre-loaded with a relational database, a deep machine learning framework and a Web server framework.
In this embodiment, a visual WEB operation interface is used to implement the operation on the training server, and the operation process is as follows:
1. The visual operation interface is provided by a WEB service built into the server and can be deployed in the cloud.
2. The visual operation interface connects to the human-computer interaction device to check its parameters, and connects to the server to specify training parameters, start training of the appliance appearance detection model, and send information of the appliance appearance detection model library to the human-computer interaction device.
In the system, the Web interface is developed based on the Flask framework and is responsible for communication with the human-computer interaction device and the server. As shown in FIG. 1, the Web interface needs to be connected with the server, and training can be started by issuing a training instruction from the visual interface to the training server.
As shown in fig. 1, the Web interface also needs to be connected to the human-computer interaction device, so that the user can query the parameters of the detected product and change the settings of the human-computer interaction device in the visual interface; cloud-based configuration makes it convenient to monitor and manage the whole system.
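Assuming the Web interface is indeed Flask-based as stated above, a minimal sketch of the two interactions (starting model training, and exposing the model library to the human-computer interaction device) could look like this. The routes, payload fields and the start_training_job placeholder are illustrative assumptions rather than the patent's actual interface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
MODEL_LIBRARY = {}  # appliance model -> model metadata; hypothetical in-memory stand-in

def start_training_job(appliance_model, epochs):
    # placeholder: in the real system this would launch the deep-learning training task
    return f"{appliance_model}-job"

@app.route("/api/train", methods=["POST"])
def start_training():
    """Visual interface -> training server: specify training parameters and start training."""
    params = request.get_json(force=True)
    job_id = start_training_job(params["appliance_model"], params.get("epochs", 50))
    return jsonify({"job_id": job_id}), 202

@app.route("/api/models", methods=["GET"])
def list_models():
    """Visual interface -> HMI device: expose the appliance appearance-detection model library."""
    return jsonify(MODEL_LIBRARY)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```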

Claims (2)

1. An electric appliance product appearance defect detection method based on network cooperation is characterized by comprising the following steps:
step 1, establishing an electrical appliance field appearance detection expert model base in a far-end training server, training a machine deep learning algorithm by the far-end training server by utilizing training data sets corresponding to different electrical appliance models uploaded by edge side human-computer interaction equipment to form a plurality of appearance detection deep learning models corresponding to different electrical appliance models, storing all the appearance detection deep learning models in the electrical appliance field appearance detection expert model base, and establishing a mapping relation between each appearance detection deep learning model and the corresponding electrical appliance model;
when the edge side production equipment needs to produce an electric appliance product with a new electric appliance equipment model, the remote training server trains to obtain a new appearance detection deep learning model and stores the new appearance detection deep learning model in the electric appliance field appearance detection expert model base, and then the remote training server informs the edge side human-computer interaction equipment of the model information of the newly added appearance detection deep learning model, so that a user can select the newly added appearance detection deep learning model in the electric appliance field appearance detection expert model base through the edge side human-computer interaction equipment based on the electric appliance equipment model;
when the appearance detection deep learning model is used for detecting the appearance defects of the electric appliance products, firstly, the received image data of any size is standardized and unified into the input size of a CNN convolutional neural network, so that standardized image data is obtained; the standardized image data is input into the CNN convolutional neural network and convolved by convolution kernels of different dimensions and types to respectively generate a small target feature map, a medium target feature map and a large target feature map, wherein the sampling receptive field of the small target feature map is smaller than that of the medium target feature map, and the sampling receptive field of the medium target feature map is smaller than that of the large target feature map; the appearance detection deep learning model detects small target defects based on the small target feature map, medium target defects based on the medium target feature map and large target defects based on the large target feature map; during detection, a frame regression algorithm and a multi-classification algorithm are used to perform frame regression prediction and classification of the target detection objects under the frame on the vector groups corresponding to the small, medium and large target feature maps; if defects exist, small target defect frames and/or medium target defect frames and/or large target defect frames marking the positions of the small, medium and/or large target defects, together with the defect categories, are obtained, and output parameters x, y, w, h and a confidence coefficient are obtained after detection, wherein x and y represent the X-axis offset and the Y-axis offset of the center position of the small, medium or large target defect frame relative to the upper left corner position of the current defect, w and h represent the ratio of the width and the height of the small, medium or large target defect frame to the width and the height of the whole picture respectively, and the value of the confidence coefficient is not more than 1;
step 2, generating model calling information related to the electrical equipment model by utilizing edge side human-computer interaction equipment according to the electrical equipment model of the electrical equipment actually produced by the edge side production equipment by a user; after the human-computer interaction equipment uploads the model calling information to a remote training server, the remote training server calls an appearance detection deep learning model corresponding to the current electric appliance model, which is stored in an electric appliance field appearance detection expert model base, according to the model calling information;
step 3, after the edge side production equipment obtains a real-time appearance picture of the current electric appliance product through the image acquisition equipment, uploading the real-time appearance picture to a far-end training server, inputting the received real-time appearance picture into a called appearance detection deep learning model by the far-end training server, judging whether the current electric appliance product has defects by using the real-time appearance picture through the appearance detection deep learning model, if the current electric appliance product has the defects, outputting a predicted small target defect frame and/or a medium target defect frame and/or a large target defect frame and defect types, and outputting corresponding output parameters x, y, w, h, confidence and defect types;
step 4, the edge side human-computer interaction equipment frames a corresponding defect area on the real-time appearance picture by using the received output parameters x, y, w and h, and displays a corresponding confidence coefficient and a corresponding defect type;
step 5, if the confidence coefficient displayed in the step 4 is lower than a preset threshold value, or a new defect type is judged to appear according to the real-time appearance picture, drawing a defect frame on the real-time appearance picture by utilizing edge side human-computer interaction equipment, inputting the corresponding defect type, obtaining corresponding defect frame parameters x, y, w and h by utilizing the drawn defect frame by utilizing the edge side human-computer interaction equipment, and meanwhile, setting the confidence coefficient to be 1 by the edge side human-computer interaction equipment; the edge side human-computer interaction equipment stores the previously obtained parameters x, y, w and h of the defect frame, the corresponding defect type, the confidence value and the real-time appearance picture as new training data;
if the confidence degree displayed in the step 4 is not lower than the preset threshold value and no new defect type appears, the edge side human-computer interaction equipment stores the defect frame parameters x, y, w and h obtained in the step 4, the corresponding defect type, the confidence degree value and the real-time appearance picture as a new piece of training data;
and 6, uploading all new training data collected by each period step length to a training server by the edge side human-computer interaction equipment according to a set period, forming a new training data set by the training server by using all new training data collected by each period step length, retraining the appearance detection deep learning model used in the steps 2 to 5 based on the training data set to obtain an updated and optimized appearance detection deep learning model, and replacing the existing appearance detection deep learning model and storing the updated and optimized appearance detection deep learning model into an appearance detection expert model base in the field of electric appliances.
2. The method for detecting the appearance defects of the electric appliance based on the network coordination as claimed in claim 1, wherein in the step 1, the training data set of the current electric appliance model is obtained by adopting the following method:
the edge side human-computer interaction equipment obtains an appearance picture of an electric appliance product of the current electric appliance type through image acquisition equipment, then judges whether the appearance picture of the electric appliance product has appearance defects or not, and carries out manual marking on the appearance picture of the electric appliance product with the appearance defects; when manual marking is carried out, marking is carried out on the sampling picture based on the standard of the electric appliance industry field for appearance detection, and a defect frame and a defect category are marked on the appearance picture of the electric appliance product; the method comprises the steps that edge side human-computer interaction equipment uploads an electrical product appearance picture which is subjected to manual labeling and an electrical product appearance picture which does not need manual labeling to a far-end training server, the far-end training server constructs a training data set based on the electrical product appearance picture received within a certain time period, and a machine deep learning algorithm is trained by using the training data set, so that an appearance detection deep learning model corresponding to the current electrical equipment model is obtained.
CN202111079409.3A 2021-09-15 2021-09-15 Electrical product appearance defect detection method based on network cooperation Active CN113838015B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111079409.3A CN113838015B (en) 2021-09-15 2021-09-15 Electrical product appearance defect detection method based on network cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111079409.3A CN113838015B (en) 2021-09-15 2021-09-15 Electrical product appearance defect detection method based on network cooperation

Publications (2)

Publication Number Publication Date
CN113838015A true CN113838015A (en) 2021-12-24
CN113838015B CN113838015B (en) 2023-09-22

Family

ID=78959322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111079409.3A Active CN113838015B (en) 2021-09-15 2021-09-15 Electrical product appearance defect detection method based on network cooperation

Country Status (1)

Country Link
CN (1) CN113838015B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445344A (en) * 2021-12-29 2022-05-06 广州瑞松视觉技术有限公司 Battery box appearance detection method and device
CN114994046A (en) * 2022-04-19 2022-09-02 深圳格芯集成电路装备有限公司 Defect detection system based on deep learning model
CN115138598A (en) * 2022-05-16 2022-10-04 格力电器(武汉)有限公司 PCB welding production line intelligence letter sorting system
WO2024187356A1 (en) * 2023-03-14 2024-09-19 广州盛创文化发展有限公司 Defect detection method and apparatus for silicone product, and terminal device and medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544256A (en) * 1993-10-22 1996-08-06 International Business Machines Corporation Automated defect classification system
WO2018165753A1 (en) * 2017-03-14 2018-09-20 University Of Manitoba Structure defect detection using machine learning algorithms
CN109598287A (en) * 2018-10-30 2019-04-09 中国科学院自动化研究所 The apparent flaws detection method that confrontation network sample generates is generated based on depth convolution
CN109993094A (en) * 2019-03-26 2019-07-09 苏州富莱智能科技有限公司 Fault in material intelligent checking system and method based on machine vision
CN110378869A (en) * 2019-06-05 2019-10-25 北京交通大学 A kind of rail fastening method for detecting abnormality of sample automatic marking
WO2020007096A1 (en) * 2018-07-02 2020-01-09 北京百度网讯科技有限公司 Method and device for detecting quality of display screen, electronic device, and storage medium
CA3128957A1 (en) * 2019-03-04 2020-03-03 Bhaskar Bhattacharyya Near real-time detection and classification of machine anomalies using machine learning and artificial intelligence
CN110910353A (en) * 2019-11-06 2020-03-24 成都数之联科技有限公司 Industrial false failure detection method and system
CN111179223A (en) * 2019-12-12 2020-05-19 天津大学 Deep learning-based industrial automatic defect detection method
CN111223093A (en) * 2020-03-04 2020-06-02 武汉精立电子技术有限公司 AOI defect detection method
US20200175673A1 (en) * 2018-11-30 2020-06-04 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for detecting defect of meal box, server, and storage medium
GB202007344D0 (en) * 2020-03-17 2020-07-01 Apical Ltd Model-based machine-learning and inferencing
CN111489326A (en) * 2020-01-13 2020-08-04 杭州电子科技大学 Copper foil substrate surface defect detection method based on semi-supervised deep learning
CN111754456A (en) * 2020-05-15 2020-10-09 清华大学 Two-dimensional PCB appearance defect real-time automatic detection technology based on deep learning
CN111798419A (en) * 2020-06-27 2020-10-20 上海工程技术大学 Metal paint spraying surface defect detection method
CN112132776A (en) * 2020-08-11 2020-12-25 苏州跨视科技有限公司 Visual inspection method and system based on federal learning, storage medium and equipment
CN113096098A (en) * 2021-04-14 2021-07-09 大连理工大学 Casting appearance defect detection method based on deep learning
WO2021140483A1 (en) * 2020-01-10 2021-07-15 Everseen Limited System and method for detecting scan and non-scan events in a self check out process

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544256A (en) * 1993-10-22 1996-08-06 International Business Machines Corporation Automated defect classification system
WO2018165753A1 (en) * 2017-03-14 2018-09-20 University Of Manitoba Structure defect detection using machine learning algorithms
WO2020007096A1 (en) * 2018-07-02 2020-01-09 北京百度网讯科技有限公司 Method and device for detecting quality of display screen, electronic device, and storage medium
CN109598287A (en) * 2018-10-30 2019-04-09 中国科学院自动化研究所 The apparent flaws detection method that confrontation network sample generates is generated based on depth convolution
US20200175673A1 (en) * 2018-11-30 2020-06-04 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for detecting defect of meal box, server, and storage medium
CA3128957A1 (en) * 2019-03-04 2020-03-03 Bhaskar Bhattacharyya Near real-time detection and classification of machine anomalies using machine learning and artificial intelligence
CN109993094A (en) * 2019-03-26 2019-07-09 苏州富莱智能科技有限公司 Fault in material intelligent checking system and method based on machine vision
CN110378869A (en) * 2019-06-05 2019-10-25 北京交通大学 A kind of rail fastening method for detecting abnormality of sample automatic marking
CN110910353A (en) * 2019-11-06 2020-03-24 成都数之联科技有限公司 Industrial false failure detection method and system
CN111179223A (en) * 2019-12-12 2020-05-19 天津大学 Deep learning-based industrial automatic defect detection method
WO2021140483A1 (en) * 2020-01-10 2021-07-15 Everseen Limited System and method for detecting scan and non-scan events in a self check out process
CN111489326A (en) * 2020-01-13 2020-08-04 杭州电子科技大学 Copper foil substrate surface defect detection method based on semi-supervised deep learning
CN111223093A (en) * 2020-03-04 2020-06-02 武汉精立电子技术有限公司 AOI defect detection method
GB202007344D0 (en) * 2020-03-17 2020-07-01 Apical Ltd Model-based machine-learning and inferencing
CN111754456A (en) * 2020-05-15 2020-10-09 清华大学 Two-dimensional PCB appearance defect real-time automatic detection technology based on deep learning
CN111798419A (en) * 2020-06-27 2020-10-20 上海工程技术大学 Metal paint spraying surface defect detection method
CN112132776A (en) * 2020-08-11 2020-12-25 苏州跨视科技有限公司 Visual inspection method and system based on federal learning, storage medium and equipment
CN113096098A (en) * 2021-04-14 2021-07-09 大连理工大学 Casting appearance defect detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. KIM, H. JO, M. RA AND W. -Y. KIM: "Weakly-Supervised Defect Segmentation on Periodic Textures Using CycleGAN", 《IEEE ACCESS》 *
S. NIU, B. LI, X. WANG AND H. LIN: "Defect Image Sample Generation With GAN for Improving Defect Recognition", 《IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445344A (en) * 2021-12-29 2022-05-06 广州瑞松视觉技术有限公司 Battery box appearance detection method and device
CN114994046A (en) * 2022-04-19 2022-09-02 深圳格芯集成电路装备有限公司 Defect detection system based on deep learning model
CN115138598A (en) * 2022-05-16 2022-10-04 格力电器(武汉)有限公司 PCB welding production line intelligence letter sorting system
WO2024187356A1 (en) * 2023-03-14 2024-09-19 广州盛创文化发展有限公司 Defect detection method and apparatus for silicone product, and terminal device and medium

Also Published As

Publication number Publication date
CN113838015B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN113838015A (en) Electric appliance product appearance defect detection method based on network cooperation
US11175790B2 (en) System and method for providing real-time product interaction assistance
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN107679475B (en) Store monitoring and evaluating method and device and storage medium
CN106650795B (en) Hotel room type image sorting method
TW202001795A (en) Labeling system and method for defect classification
CN111507357B (en) Defect detection semantic segmentation model modeling method, device, medium and equipment
US20050135667A1 (en) Method and apparatus for labeling images and creating training material
CN113222913A (en) Circuit board defect detection positioning method and device and storage medium
CN111507325B (en) Industrial visual OCR recognition system and method based on deep learning
CN114235837A (en) LED packaging surface defect detection method, device, medium and equipment based on machine vision
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN108052918A (en) A kind of person's handwriting Compare System and method
CN114972246A (en) Die-cutting product surface defect detection method based on deep learning
KR102366396B1 (en) RGB-D Data and Deep Learning Based 3D Instance Segmentation Method and System
CN117788444A (en) SMT patch offset detection method, SMT patch offset detection device and SMT patch offset detection system
CN117474886A (en) Ceramic cup defect detection method and system
CN113145473A (en) Intelligent fruit sorting system and method
US20200065631A1 (en) Produce Assessment System
CN117197479A (en) Image analysis method, device, computer equipment and storage medium applying corn ear outer surface
CN111583225A (en) Defect detection method, device and storage medium
CN117237701A (en) Classified evaluation method, system, equipment and storage medium for flotation foam
CN117011216A (en) Defect detection method and device, electronic equipment and storage medium
CN113158632B (en) Table reconstruction method for CAD drawing and computer readable storage medium
CN112581472B (en) Target surface defect detection method facing human-computer interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant