CN111986178A - Product defect detection method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN111986178A (application number CN202010849926.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- defect
- information
- detected
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
The application discloses a product defect detection method and apparatus, an electronic device, and a storage medium, relating to the fields of computer vision, image processing, and deep learning. The specific implementation scheme is as follows: acquire an image to be detected of a target product; determine a color-feature difference between the image to be detected and a template image according to the color feature values of the image to be detected and the pre-stored color feature values of the template image; use the color-feature difference as the feature values of a first channel to obtain an input image comprising the first channel; and obtain defect information of the target product from the input image and a target detection model. Embodiments of the application can improve defect detection for small, weakly textured products in industrial scenarios.
Description
Technical Field
The application relates to the technical field of computers, in particular to the fields of computer vision, image processing, deep learning and the like.
Background
In industrial manufacturing scenarios, such as the manufacture of components for consumer electronics, appearance defect detection is an important step before products are shipped. Traditionally, appearance defects are detected by manual visual inspection, which suffers from high labor costs, difficulty in unifying quality-inspection standards, and difficulty in storing detection data for later reuse and mining. Compared with manual visual inspection, automatic detection schemes based on computer vision offer stable performance and sustainable iterative optimization, and have therefore attracted wide attention in the field of defect detection.
Disclosure of Invention
The application provides a product defect detection method and device, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided a product defect detecting method, including:
acquiring an image to be detected of a target product;
determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and a pre-stored color characteristic value of the template image;
taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel;
and obtaining the defect information of the target product according to the input image and the target detection model.
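The four claimed steps can be sketched as a minimal, illustrative pipeline. All function names are invented for illustration, and the thresholding "model" is a stub standing in for a trained target detection model, not the patent's actual detector:

```python
import numpy as np

def color_feature_difference(image, template):
    """Step 2: per-pixel color-feature difference (cast up from uint8 so it can be signed)."""
    return image.astype(np.int16) - template.astype(np.int16)

def build_input_image(diff):
    """Step 3: use the difference as the first channel(s) of the model input."""
    return diff  # single- or multi-channel, depending on the source images

def detect_defects(input_image, model):
    """Step 4: run the target detection model; `model` is any callable detector."""
    return model(input_image)

# Toy run with a stub "model" that flags pixels whose difference exceeds a threshold
image    = np.array([[10, 10], [10, 200]], dtype=np.uint8)   # image to be detected
template = np.array([[10, 10], [10, 10]], dtype=np.uint8)    # pre-stored template
inp = build_input_image(color_feature_difference(image, template))
defect_mask = detect_defects(inp, lambda x: np.abs(x) > 50)
print(int(defect_mask.sum()))  # 1 defective pixel found
```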
According to another aspect of the present application, there is provided a product defect detecting apparatus including:
the image acquisition module is used for acquiring an image to be detected of a target product;
the difference determining module is used for determining the color characteristic difference between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
the first channel processing module is used for taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel;
and the defect detection module is used for obtaining the defect information of the target product according to the input image and the target detection model.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method provided by any of the embodiments of the present application.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method provided by any of the embodiments of the present application.
According to the technical scheme of the application, the feature values of the first channel of the input image are the color-feature differences between the image to be detected of the target product and the template image. Therefore, when defect information of the target product is obtained from the input image and the target detection model, deep features can be extracted specifically for the differences between the image to be detected and the template image. By focusing the model's attention on these differences, defect detection for small, weakly textured products in industrial scenarios is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram of a product defect detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a product defect detection method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of an HRNet model in an embodiment of the present application;
FIG. 4 is a schematic diagram of an application example of a product defect detection method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a product defect detection apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a product defect detection apparatus according to another embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing the product defect detection method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic diagram illustrating a product defect detection method according to an embodiment of the present application, and as shown in fig. 1, the method includes:
step S11, acquiring an image to be detected of a target product;
illustratively, the target product may include a product to be detected for defects in industrial manufacturing, such as parts of consumer electronics, home appliances, and the like. The image to be detected of the target product may include an original image obtained by shooting the target product by an image acquisition device on the production line, such as a camera, a video camera, or the like, or an image obtained by processing the original image.
Step S12, determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
illustratively, the template image may include a pre-shot image of a qualified product or a good product of the same type or model as the target product, or may include a simulated design drawing of the product, etc.
The color feature values of an image may include the feature values of its pixels in one or more color channels. For example, for an RGB (red-green-blue) color image, the color feature values may include the R-channel, G-channel, and/or B-channel feature values of each pixel. For a grayscale image, the color feature values may include the feature values of the grayscale channel, i.e., the grayscale values.
The color-feature difference may include, for each pixel of the image to be detected, the difference between its color feature value and that of the corresponding pixel in the template image. If the image to be detected and the template image are RGB color images, the difference can be computed separately for the R, G, and B channels. If they are grayscale images, the color-feature difference comprises the grayscale difference of each pixel.
For example, if the image to be detected and the template image are images with the same resolution, for example, images acquired by the same image acquisition device, the difference between the color feature values may be calculated pixel by pixel to obtain the color feature difference. If the image to be detected and the template image are not images with the same resolution, the resolution of the image to be detected or the template image can be adjusted to ensure that the resolution of the image to be detected and the resolution of the template image are the same, and then the color characteristic difference value is determined.
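A minimal NumPy sketch of this step, assuming aligned images; the nearest-neighbour resize is a stand-in for whatever resampling (e.g. `cv2.resize`) a real pipeline would use when the resolutions differ:

```python
import numpy as np

def nearest_resize(img, out_h, out_w):
    """Nearest-neighbour resize so both images share one resolution before differencing."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def color_difference(image, template):
    """Pixel-by-pixel difference per colour channel; resize the template to the
    detected image's resolution first if the two differ."""
    if image.shape[:2] != template.shape[:2]:
        template = nearest_resize(template, *image.shape[:2])
    return image.astype(np.int16) - template.astype(np.int16)

# Template at half resolution: it is upsampled before the difference is taken
image    = np.full((4, 4, 3), 120, dtype=np.uint8)
template = np.full((2, 2, 3), 100, dtype=np.uint8)
diff = color_difference(image, template)
print(diff.shape, int(diff[0, 0, 0]))  # (4, 4, 3) 20
```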
Step S13, using the color feature difference as the feature value of the first channel to obtain the input image including the first channel;
the input image obtained based on step S13 may be a single-channel image or a multi-channel image including a first channel whose feature values are color feature values. Illustratively, the input image may include one or more first channels, for example, the input image may include 3 first channels corresponding to R, G, and B channels of the image to be detected, respectively; the input image may also include 1 first channel corresponding to a grayscale channel.
For example, the input image may also include other channels, such as a second channel and a third channel. The feature values of these other channels may include the color feature values of the image to be detected, the color feature values of the template image, or other feature information of the target product, such as depth or surface gradient identified from the image to be detected.
And step S14, obtaining the defect information of the target product according to the input image and the target detection model.
For example, the target detection model takes the input image and outputs defect information of the product it shows. The target detection model may be trained based on deep convolutional neural networks (Deep CNNs), such as U-Net, FCN (Fully Convolutional Network), or Mask R-CNN (Mask Region-based Convolutional Neural Network). After the input image is fed to the target detection model, the model can output the defect information of the target product.
Illustratively, the defect information of the target product may include a defect location, a defect size, a defect type, and the like.
In the embodiment of the application, the feature values of the first channel of the input image are the color-feature differences between the image to be detected of the target product and the template image. Therefore, when defect information is obtained from the input image and the target detection model, deep features can be extracted specifically for the differences between the two images. Focusing the model's attention on these differences improves defect detection for small, weakly textured products in industrial scenarios, yielding detection that is accurate, stable, and robust.
In an alternative exemplary embodiment, the input image further comprises a second channel and/or a third channel. The product defect detection method may further include:
the color characteristic value of the image to be detected is taken as the characteristic value of the second channel, and/or,
and taking the color characteristic value of the template image as the characteristic value of the third channel.
For example, the input image may be a 9-channel image comprising 3 first channels, 3 second channels, and 3 third channels. The 3 first channels contain the color-feature differences for the R, G, and B channels; the 3 second channels contain the RGB feature values of the image to be detected; the 3 third channels contain the RGB feature values of the template image. The target detection model can extract features from each channel, fuse the extracted feature information, and output the defect information of the target product in the image to be detected.
As another example, the input image may be a 3-channel image including 1 first channel, 1 second channel, and 1 third channel. The first channel comprises a gray value difference value between an image to be detected and a template image; the second channel comprises a gray value of the image to be detected; the third channel includes the grayscale values of the template image.
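The 9-channel composition above can be sketched in a few lines of NumPy, assuming both images are already aligned RGB arrays (the stacking order — difference, detected image, template — is illustrative):

```python
import numpy as np

def compose_input(image, template):
    """Concatenate the difference (first channels), the image to be detected
    (second channels), and the template (third channels) into one 9-channel input."""
    diff = image.astype(np.int16) - template.astype(np.int16)
    return np.concatenate(
        [diff, image.astype(np.int16), template.astype(np.int16)], axis=-1)

image    = np.zeros((8, 8, 3), dtype=np.uint8)   # image to be detected
template = np.ones((8, 8, 3), dtype=np.uint8)    # template image
x = compose_input(image, template)
print(x.shape)  # (8, 8, 9)
```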
According to this exemplary embodiment, the target detection model can simultaneously learn the defect image, the template image, and the differences between them during training. Based on an attention mechanism, the model automatically focuses on the more important channels or spatial positions of the image, greatly improving defect detection for small, weakly textured products in industrial scenarios.
Illustratively, as shown in fig. 2, in an alternative implementation of the step S11, acquiring the image to be detected of the target product may include:
step S121, acquiring an original image of a target product;
step S122, determining the position information of the identification point of the target product in the original image;
and S123, correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected aligned with the template image.
Illustratively, the identification points may include fixed, easily recognizable markers on the target product, such as screws or part corner points. The positions of the identification points in the original image can be obtained by image recognition, for example by detecting them with a keypoint detection model. The positions of the identification points in the template image can be determined by manual annotation or image recognition.
For example, an image acquisition device on the production line captures the target product to obtain an original image, and a keypoint detection model then determines the positions of the identification points in the original image. Next, from the positions of the identification points in the original image and in the template image, the differences in rotation angle, position, size, and other properties between the product in the two images are determined, and the original image is corrected accordingly; for example, it is rotated according to the difference in rotation angle, so that the corrected image is aligned with the template image and can serve as the image to be detected. The keypoint detection model can be trained on a deep convolutional neural network, such as FCN or HRNet (High-Resolution Net). Alignment here means that one or more of the rotation angle, position, and size of the product are consistent between the corrected image and the template image.
According to the embodiment, the image to be detected and the template image can be aligned, so that each pixel in the image to be detected corresponds to each pixel in the template image one to one, the accuracy of the color characteristic difference value is improved, and the accuracy of product defect detection is improved.
Optionally, the step S122 of determining the position information of the identification point of the target product in the original image may include:
and inputting the original image into a high-resolution network HRNet model to obtain the position information of the identification point output by the HRNet model in the original image.
This embodiment uses the HRNet model to detect identification points in the original image; its processing is sketched in fig. 3. As shown there, the network structure of HRNet includes convolution layers, strided convolution layers, and upsampling (Upsample) layers. After the original image is input to the model, it is processed by these layers to output feature maps; a loss function is computed on the feature maps, and when a preset condition is reached the feature maps undergo operations such as concatenation and deconvolution, finally producing a feature map in which the identification points are enhanced.
A convolution layer scans the original image or a feature map with convolution kernels of different weights, extracts meaningful image features, and outputs them to the next feature map. A strided convolution layer enlarges the receptive field of the kernels without increasing the number of parameters, improving the model's expressiveness. An upsampling layer upsamples the original image or feature map; for example, a feature map of width w and height h becomes one of width 2w and height 2h after the upsampling layer, so that more detailed information is retained.
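The w, h → 2w, 2h behaviour of an upsampling layer can be shown concretely with nearest-neighbour upsampling in NumPy (real HRNet implementations typically use learned or bilinear upsampling; this is only a sketch of the shape change):

```python
import numpy as np

def upsample2x(feature_map):
    """Nearest-neighbour 2x upsampling: height h, width w -> 2h, 2w."""
    return feature_map.repeat(2, axis=0).repeat(2, axis=1)

fm = np.arange(6, dtype=np.float32).reshape(2, 3)  # h=2, w=3 feature map
up = upsample2x(fm)
print(fm.shape, "->", up.shape)  # (2, 3) -> (4, 6)
```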
HRNet is a deep neural network model with convolution and upsampling layers. It is robust to original images with varying brightness and inclination angles, and when used for the identification-point detection task it generalizes well.
Optionally, in step S123, the correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected aligned with the template image may include:
determining an affine transformation matrix according to the position information of the identification point in the original image and the position information of the identification point in the template image;
and correcting the original image according to the affine transformation matrix to obtain an image to be detected aligned with the template image.
For example, a matrix is used to represent an original image, each element in the matrix represents a feature value of each pixel in the original image, and the original image is multiplied by an affine transformation matrix to obtain a matrix corresponding to the image to be detected, thereby obtaining the image to be detected.
The affine transformation matrix can represent transformation operations such as rotation, translation, and scaling between the identification points in the original image and those in the template image. Correcting the original image with the affine transformation matrix therefore applies an accurate transformation to the target product in the original image, making it consistent with the product in the template image in inclination angle, position, size, and other respects. This improves the accuracy of the color-feature difference and thus the accuracy of product defect detection.
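Estimating the affine matrix from identification-point correspondences can be sketched as a least-squares fit; in practice a library routine such as OpenCV's `cv2.estimateAffine2D`/`cv2.warpAffine` would typically be used, so treat this pure-NumPy version as illustrative:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix mapping identification points in the
    original image (src) onto their positions in the template image (dst)."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3: [x, y, 1] rows
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M = dst
    return M.T                                      # 2 x 3 affine matrix

def apply_affine(M, pts):
    """Transform points with a 2x3 affine matrix (linear part + translation)."""
    pts = np.asarray(pts, dtype=np.float64)
    return pts @ M[:, :2].T + M[:, 2]

# Three identification points related by a pure translation of (+5, -2)
src = [(0, 0), (10, 0), (0, 10)]
dst = [(5, -2), (15, -2), (5, 8)]
M = estimate_affine(src, dst)
print(np.round(apply_affine(M, [(3, 4)]), 6))  # approximately [[8. 2.]]
```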
For example, in an alternative implementation manner of the step S14, obtaining the defect information of the target product according to the input image and the target detection model may include:
inputting the input image into a target detection model to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
and determining the defect type corresponding to the defect position according to the corresponding relation between the mask information and the defect type.
In this embodiment, the defect information of the target product may be determined using a model whose output includes defect locations and mask information. The defect type can then be obtained from the mask information, which helps apply tailored handling to different types of defective products.
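The correspondence between mask information and defect type can be as simple as a lookup table; the label values and type names below are hypothetical and depend entirely on how the detection model was trained:

```python
# Hypothetical mapping from mask label values to defect types
MASK_TO_DEFECT = {1: "scratch", 2: "dent", 3: "stain"}

def defect_type_for(mask_value):
    """Resolve a mask label to a defect type, with a fallback for unseen labels."""
    return MASK_TO_DEFECT.get(mask_value, "unknown")

print(defect_type_for(2))  # dent
```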
For example, a Mask R-CNN model is used as the above-described target detection model. Its network structure includes convolution layers, pooling layers, fully connected layers, and so on. A convolution layer scans the image or feature map with convolution kernels of different weights, extracts meaningful image features, and outputs them to the next feature map; a pooling layer reduces the dimensionality of the feature map while retaining its main features. Specifically, Mask R-CNN obtains feature maps using the convolution operations of a classification backbone, then uses a Region Proposal Network (RPN) to decide whether a given region of interest (ROI) of the image contains a defect. If it does, features are extracted with a convolutional neural network and the defect's bounding box and mask information are predicted; if not, no further computation is performed for that region. During training, the losses of the various predicted outputs can be combined for joint training to optimize the model parameters, and training stops when the error between the model output and the ground truth falls below a threshold.
Because Mask R-CNN adopts a deep neural network structure with convolution and pooling operations, it is robust to original images with varying brightness and inclination angles and generalizes well in the defect-localization task. Its output includes the defect location, mask information, confidence, and so on, where the mask information corresponds to the defect type. The loss on predicted mask information during training can be combined with the losses on other outputs to optimize the model parameters, so determining the defect type from the model's mask output is accurate and robust.
Illustratively, the product defect detecting method may further include:
and determining a processing mode of the target product according to the defect information of the target product.
For example, based on the type, location or size of the defect of the target product and the quality and safety requirements of the production line, it is determined whether the robot arm is to be operated to remove the target product from the production line or whether an alarm message is to be issued.
According to the exemplary embodiment, the processing mode of the target product can be determined based on the detected defect information, the corresponding business decision can be automatically made, the automation level of the production line is improved, and the labor cost is saved.
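A business decision of this kind reduces to a rule function over the detected defect information. The rules and thresholds below are invented for illustration; a real production line would set them from its own quality and safety requirements:

```python
def decide_action(defect):
    """Hypothetical rule: map detected defect info to a production-line action."""
    if defect["type"] in ("crack", "dent") or defect["size_mm"] > 5.0:
        return "remove_from_line"   # operate the robot arm to remove the product
    if defect["size_mm"] > 1.0:
        return "raise_alarm"        # issue an alarm message
    return "pass"                   # defect within tolerance

print(decide_action({"type": "scratch", "size_mm": 2.5}))  # raise_alarm
```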
Illustratively, the product defect detecting method may further include:
storing the defect information of the target product and the marking information corresponding to the defect information in a training database;
and calling the defect information and the marking information corresponding to the defect information from the training database, and updating the target detection model.
For example, after each product is inspected, its defect information is stored in the training database. After the method has run for some time, the stored defect information and detection accuracy can be manually reviewed and annotated, with the annotations also stored in the training database. When an update instruction is received, the defect information and annotations are retrieved from the database and the target detection model is retrained. In this way the model is dynamically extended and generalized along with the business, improving the accuracy of defect detection.
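The store-and-retrieve cycle above can be sketched with an in-memory SQLite table; the schema and serialized defect format are assumptions, not the patent's design:

```python
import sqlite3

# In-memory sketch of the training database: detected defect info plus the
# annotation added during manual review, retrievable later for retraining.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE training_samples (
    id          INTEGER PRIMARY KEY,
    defect_info TEXT NOT NULL,
    annotation  TEXT NOT NULL)""")

def store_sample(defect_info, annotation):
    """Persist one detection result and its manual annotation."""
    db.execute(
        "INSERT INTO training_samples (defect_info, annotation) VALUES (?, ?)",
        (defect_info, annotation))

def fetch_training_data():
    """Retrieve all samples for retraining the target detection model."""
    return db.execute(
        "SELECT defect_info, annotation FROM training_samples").fetchall()

store_sample('{"type": "scratch", "bbox": [10, 20, 30, 40]}', "confirmed")
print(len(fetch_training_data()))  # 1
```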
Fig. 4 is a schematic diagram of an application example of the product defect detection method according to the embodiment of the present application. As shown in fig. 4, in practical application, the method for detecting product defects can be implemented by using several main modules, such as an image acquisition system, a console, a correction module, a detection module, a training engine, a control module, a database, a business-related system, and the like.
The image acquisition system uses an image acquisition device on the production line to capture images of the part to be inspected from all directions.
The console converts the images acquired by the image acquisition system into detection requests (queries), performs load balancing and scheduling in real time according to how the online prediction models are deployed, and sends each detection request to the optimal server carrying a prediction model.
The server runs the correction module and the detection module, both of which are trained by the training engine. After performing preset image preprocessing on a received detection request, the server uses the correction module to rectify the image, correcting the acquired image to a field of view consistent with the template image. The correction module outputs the corrected image to the detection module, which performs the target detection computation and produces information such as the defect position and confidence; the result is then returned to the control module.
The control module is designed around the business scenario. According to the business requirements, it applies a processing mode suited to the production environment, such as raising an alarm or storing a log, to the prediction result given by the model, and outputs response information. The control module also stores the prediction result and the corresponding processing mode in the database.
The business-related system performs the corresponding business operation according to the response information output by the control module; for example, it operates the robot arm to remove from the production line a part in which the detection module has found a defect.
The database is used for storing the prediction result of the product image, the corresponding template image and the processing mode generated by the control module. After the system has been in operation for a period of time, the accuracy of defect detection and localization can be reviewed manually, and the database can then be updated.
The training engine trains the deep learning models used in the correction module and the detection module, and the resulting models are deployed to the production environment. The training engine can retrain the target detection model using the data in the database as training data, so as to improve defect detection accuracy. Each newly trained model can gradually replace the old model running online through a small-traffic rollout, so that the model dynamically expands and generalizes along with the business.
In the product defect detection method described above, the feature value of the first channel of the input image is the color feature difference between the image to be detected of the target product and the template image. Therefore, when the defect information of the target product is obtained from the input image and the target detection model, deep feature extraction can be performed on the difference between the image to be detected and the template image. Focusing the model's attention on this difference improves the defect detection effect for products with small, weakly textured defects in industrial scenes.
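Illustratively, constructing the multi-channel input image described above can be sketched as follows. Single-channel (grayscale) feature maps are assumed here for simplicity; the function name is hypothetical.

```python
import numpy as np

def build_input_image(image, template):
    """Stack a three-channel input image: the first channel is the absolute
    color-feature difference between the image to be detected and the
    template image; the second and third channels are the two images' own
    feature values (the optional channels described above)."""
    image = image.astype(np.float32)
    template = template.astype(np.float32)
    diff = np.abs(image - template)  # first channel: color feature difference
    return np.stack([diff, image, template], axis=-1)

# Tiny stand-in feature maps for the image to be detected and the template.
img = np.array([[100, 120], [130, 140]], dtype=np.uint8)
tpl = np.array([[100, 100], [130, 140]], dtype=np.uint8)
x = build_input_image(img, tpl)  # shape (2, 2, 3)
```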
Fig. 5 is a schematic diagram of a product defect detecting apparatus according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
an image obtaining module 510, configured to obtain an image to be detected of a target product;
a difference determining module 520, configured to determine a color characteristic difference between the image to be detected and the template image according to the color characteristic value of the image to be detected and a pre-stored color characteristic value of the template image;
a first channel processing module 530, configured to use the color feature difference as a feature value of a first channel, to obtain an input image including the first channel;
and the defect detection module 540 is configured to obtain defect information of the target product according to the input image and the target detection model.
Illustratively, as shown in fig. 6, the image acquisition module 510 includes:
an acquisition unit 511, configured to acquire an original image of a target product;
a first determining unit 512 for determining position information of the identification point of the target product in the original image;
and a correcting unit 513, configured to correct the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image, so as to obtain an image to be detected aligned with the template image.
Illustratively, the correcting unit 513 includes:
the determining subunit is used for determining an affine transformation matrix according to the position information of the identification point in the original image and the position information of the identification point in the template image;
and the alignment subunit is used for correcting the original image according to the affine transformation matrix to obtain the image to be detected aligned with the template image.
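Illustratively, estimating the affine transformation matrix from the identification-point correspondences can be sketched with a least-squares fit. This is a non-limiting example using numpy; a production system might instead use a computer-vision library, and the point coordinates below are hypothetical.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Estimate the 2x3 affine transformation matrix mapping identification
    points in the original image (src_pts) onto the corresponding points in
    the template image (dst_pts). At least three non-collinear point pairs
    are required."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    # Build the [x, y, 1] design matrix and solve for the 6 affine parameters.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # 2x3 matrix: [[a, b, tx], [c, d, ty]]

def apply_affine(M, pts):
    """Map points through the affine transformation (used here on points;
    correcting a whole image applies the same transform to every pixel)."""
    pts = np.asarray(pts, dtype=np.float64)
    return pts @ M[:, :2].T + M[:, 2]

# Identification points in the original image and their template positions
# (here related by a pure translation of (5, 5)).
src = [[0, 0], [10, 0], [0, 10]]
dst = [[5, 5], [15, 5], [5, 15]]
M = estimate_affine(src, dst)
aligned = apply_affine(M, src)  # points after correction
```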
Illustratively, the first determining unit is configured to input the original image to the high-resolution network HRNet model, and obtain position information of the identification point output by the HRNet model in the original image.
Illustratively, as shown in fig. 6, the defect detecting module 540 includes:
an input unit 541, configured to input an input image to a target detection model, so as to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
the second determining unit 542 is configured to determine a defect type corresponding to the defect position according to the correspondence between the mask information and the defect type.
Illustratively, as shown in fig. 6, the input image further includes a second channel and/or a third channel;
the device also includes:
a second channel processing module 550, configured to use the color feature value of the image to be detected as a feature value of a second channel, and/or,
and a third channel processing module 560, configured to use the color feature value of the template image as the feature value of the third channel.
Illustratively, as shown in fig. 6, the apparatus further includes:
the third determining unit 570 is configured to determine a processing manner for the target product according to the defect information of the target product.
Illustratively, as shown in fig. 6, the apparatus further includes:
the storage module 580 is configured to store the defect information of the target product and the label information corresponding to the defect information in the training database;
the updating module 590 is configured to invoke the defect information and the label information corresponding to the defect information from the training database, and update the target detection model.
The device provided by the embodiment of the application can realize the method provided by the embodiment of the application and has corresponding beneficial effects.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the product defect detection method provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the product defect detection method provided by the present application.
The memory 702, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the product defect detection method in the embodiments of the present application (e.g., the image acquisition module 510, the difference determination module 520, the first channel processing module 530, and the defect detection module 540 shown in fig. 5). The processor 701 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 702, that is, implements the product defect detection method in the above-described method embodiment.
The memory 702 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to use of the electronic device for product defect detection, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701, which may be connected to the electronic device for product defect detection via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the product defect detecting method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703 and the output device 704 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for product defect detection, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, and the like. The output devices 704 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak business scalability in traditional physical hosts and Virtual Private Server (VPS) services.
According to the technical solution of the present application, the feature value of the first channel of the input image is the color feature difference between the image to be detected of the target product and the template image. Therefore, when the defect information of the target product is obtained from the input image and the target detection model, deep feature extraction can be performed on the difference between the image to be detected and the template image. Focusing the model's attention on this difference improves the defect detection effect for products with small, weakly textured defects in industrial scenes.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; this is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (18)
1. A method of product defect detection, comprising:
acquiring an image to be detected of a target product;
determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and a pre-stored color characteristic value of the template image;
taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel;
and obtaining the defect information of the target product according to the input image and the target detection model.
2. The method of claim 1, wherein the acquiring of the image to be detected of the target product comprises:
acquiring an original image of the target product;
determining the position information of the identification point of the target product in the original image;
and correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected aligned with the template image.
3. The method according to claim 2, wherein said rectifying the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected aligned with the template image comprises:
determining an affine transformation matrix according to the position information of the identification point in the original image and the position information of the identification point in the template image;
and correcting the original image according to the affine transformation matrix to obtain the image to be detected aligned with the template image.
4. The method of claim 2, wherein the determining the location information of the identification point of the target product in the original image comprises:
and inputting the original image into a high-resolution network HRNet model to obtain the position information of the identification point output by the HRNet model in the original image.
5. The method of claim 1, wherein the obtaining defect information of the target product according to the input image and a target detection model comprises:
inputting the input image into the target detection model to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
and determining the defect type corresponding to the defect position according to the corresponding relation between the mask information and the defect type.
6. The method of claim 1, wherein the input image further comprises a second channel and/or a third channel;
the method further comprises the following steps:
using the color characteristic value of the image to be detected as the characteristic value of the second channel, and/or,
and taking the color characteristic value of the template image as the characteristic value of a third channel.
7. The method of any of claims 1 to 6, further comprising:
and determining a processing mode of the target product according to the defect information of the target product.
8. The method of any of claims 1 to 6, further comprising:
storing the defect information of the target product and the marking information corresponding to the defect information in a training database;
and calling the defect information and the marking information corresponding to the defect information from the training database, and updating the target detection model.
9. A product defect detection apparatus, comprising:
the image acquisition module is used for acquiring an image to be detected of a target product;
the difference value determining module is used for determining a color characteristic difference value between the image to be detected and the template image according to the color characteristic value of the image to be detected and the color characteristic value of the pre-stored template image;
the first channel processing module is used for taking the color characteristic difference value as a characteristic value of a first channel to obtain an input image comprising the first channel;
and the defect detection module is used for obtaining the defect information of the target product according to the input image and the target detection model.
10. The apparatus of claim 9, wherein the image acquisition module comprises:
the acquisition unit is used for acquiring an original image of the target product;
the first determining unit is used for determining the position information of the identification point of the target product in the original image;
and the correcting unit is used for correcting the original image according to the position information of the identification point in the original image and the position information of the identification point in the template image to obtain the image to be detected aligned with the template image.
11. The apparatus of claim 10, wherein the correcting unit comprises:
the determining subunit is used for determining an affine transformation matrix according to the position information of the identification point in the original image and the position information of the identification point in the template image;
and the alignment subunit is used for correcting the original image according to the affine transformation matrix to obtain the image to be detected aligned with the template image.
12. The apparatus of claim 10, wherein the first determining unit is configured to input the original image into a high-resolution network HRNet model, and obtain position information of the identified point output by the HRNet model in the original image.
13. The apparatus of claim 9, wherein the defect detection module comprises:
the input unit is used for inputting the input image into the target detection model to obtain a defect position output by the target detection model and mask information corresponding to the defect position;
and the second determining unit is used for determining the defect type corresponding to the defect position according to the corresponding relation between the mask information and the defect type.
14. The apparatus of claim 9, wherein the input image further comprises a second channel and/or a third channel;
the device further comprises:
a second channel processing module for using the color characteristic value of the image to be detected as the characteristic value of the second channel, and/or,
and the third channel processing module is used for taking the color characteristic value of the template image as the characteristic value of a third channel.
15. The apparatus of any of claims 9 to 14, further comprising:
and the third determining unit is used for determining a processing mode of the target product according to the defect information of the target product.
16. The apparatus of any of claims 9 to 14, further comprising:
the storage module is used for storing the defect information of the target product and the marking information corresponding to the defect information in a training database;
and the updating module is used for calling the defect information and the marking information corresponding to the defect information from the training database and updating the target detection model.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010849926.3A CN111986178A (en) | 2020-08-21 | 2020-08-21 | Product defect detection method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010849926.3A CN111986178A (en) | 2020-08-21 | 2020-08-21 | Product defect detection method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111986178A true CN111986178A (en) | 2020-11-24 |
Family
ID=73442796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010849926.3A Pending CN111986178A (en) | 2020-08-21 | 2020-08-21 | Product defect detection method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986178A (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112365491A (en) * | 2020-11-27 | 2021-02-12 | 上海市计算技术研究所 | Method for detecting welding seam of container, electronic equipment and storage medium |
CN112446865A (en) * | 2020-11-25 | 2021-03-05 | 创新奇智(广州)科技有限公司 | Flaw identification method, flaw identification device, flaw identification equipment and storage medium |
CN112598627A (en) * | 2020-12-10 | 2021-04-02 | 广东省大湾区集成电路与系统应用研究院 | Method, system, electronic device and medium for detecting image defects |
CN112884743A (en) * | 2021-02-22 | 2021-06-01 | 深圳中科飞测科技股份有限公司 | Detection method and device, detection equipment and storage medium |
CN113252678A (en) * | 2021-03-24 | 2021-08-13 | 上海万物新生环保科技集团有限公司 | Appearance quality inspection method and equipment for mobile terminal |
CN113344094A (en) * | 2021-06-21 | 2021-09-03 | 梅卡曼德(北京)机器人科技有限公司 | Image mask generation method and device, electronic equipment and storage medium |
CN113591569A (en) * | 2021-06-28 | 2021-11-02 | 北京百度网讯科技有限公司 | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium |
CN113609897A (en) * | 2021-06-23 | 2021-11-05 | 阿里巴巴新加坡控股有限公司 | Defect detection method and defect detection system |
CN113689397A (en) * | 2021-08-23 | 2021-11-23 | 湖南视比特机器人有限公司 | Workpiece circular hole feature detection method and workpiece circular hole feature detection device |
CN113870225A (en) * | 2021-09-28 | 2021-12-31 | 广州市华颉电子科技有限公司 | Method for detecting content and pasting quality of artificial intelligent label of automobile domain controller |
CN113933294A (en) * | 2021-11-08 | 2022-01-14 | 中国联合网络通信集团有限公司 | Concentration detection method and device |
CN114354623A (en) * | 2021-12-30 | 2022-04-15 | 苏州凌云视界智能设备有限责任公司 | Weak mark extraction algorithm, device, equipment and medium |
CN114419035A (en) * | 2022-03-25 | 2022-04-29 | 北京百度网讯科技有限公司 | Product identification method, model training device and electronic equipment |
CN114612469A (en) * | 2022-05-09 | 2022-06-10 | 武汉中导光电设备有限公司 | Product defect detection method, device and equipment and readable storage medium |
CN114782445A (en) * | 2022-06-22 | 2022-07-22 | 深圳思谋信息科技有限公司 | Object defect detection method and device, computer equipment and storage medium |
CN114998097A (en) * | 2022-07-21 | 2022-09-02 | 深圳思谋信息科技有限公司 | Image alignment method, device, computer equipment and storage medium |
CN115661161A (en) * | 2022-12-29 | 2023-01-31 | 成都数联云算科技有限公司 | Method, device, storage medium, equipment and program product for detecting defects of parts |
CN115690101A (en) * | 2022-12-29 | 2023-02-03 | 摩尔线程智能科技(北京)有限责任公司 | Defect detection method, defect detection apparatus, electronic device, storage medium, and program product |
CN115937629A (en) * | 2022-12-02 | 2023-04-07 | 北京小米移动软件有限公司 | Template image updating method, template image updating device, readable storage medium and chip |
CN116046790A (en) * | 2023-01-31 | 2023-05-02 | 北京百度网讯科技有限公司 | Defect detection method, device, system, electronic equipment and storage medium |
CN116228746A (en) * | 2022-12-29 | 2023-06-06 | 摩尔线程智能科技(北京)有限责任公司 | Defect detection method, device, electronic apparatus, storage medium, and program product |
CN116883416A (en) * | 2023-09-08 | 2023-10-13 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for detecting defects of industrial products |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871895A (en) * | 2019-02-22 | 2019-06-11 | 北京百度网讯科技有限公司 | The defect inspection method and device of circuit board |
US20200134800A1 (en) * | 2018-10-29 | 2020-04-30 | International Business Machines Corporation | Precision defect detection based on image difference with respect to templates |
CN111369545A (en) * | 2020-03-10 | 2020-07-03 | 创新奇智(重庆)科技有限公司 | Edge defect detection method, device, model, equipment and readable storage medium |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200134800A1 (en) * | 2018-10-29 | 2020-04-30 | International Business Machines Corporation | Precision defect detection based on image difference with respect to templates |
CN109871895A (en) * | 2019-02-22 | 2019-06-11 | 北京百度网讯科技有限公司 | The defect inspection method and device of circuit board |
CN111369545A (en) * | 2020-03-10 | 2020-07-03 | 创新奇智(重庆)科技有限公司 | Edge defect detection method, device, model, equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
SHUANG Kai: "Computer Vision", vol. 1, 31 January 2020, Beijing University of Posts and Telecommunications Press, pages 131-133 *
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112446865A (en) * | 2020-11-25 | 2021-03-05 | 创新奇智(广州)科技有限公司 | Flaw identification method, flaw identification device, flaw identification equipment and storage medium |
CN112365491A (en) * | 2020-11-27 | 2021-02-12 | 上海市计算技术研究所 | Method for detecting welding seam of container, electronic equipment and storage medium |
CN112598627A (en) * | 2020-12-10 | 2021-04-02 | 广东省大湾区集成电路与系统应用研究院 | Method, system, electronic device and medium for detecting image defects |
CN112884743A (en) * | 2021-02-22 | 2021-06-01 | 深圳中科飞测科技股份有限公司 | Detection method and device, detection equipment and storage medium |
CN112884743B (en) * | 2021-02-22 | 2024-03-05 | 深圳中科飞测科技股份有限公司 | Detection method and device, detection equipment and storage medium |
CN113252678A (en) * | 2021-03-24 | 2021-08-13 | 上海万物新生环保科技集团有限公司 | Appearance quality inspection method and equipment for mobile terminal |
CN113344094A (en) * | 2021-06-21 | 2021-09-03 | 梅卡曼德(北京)机器人科技有限公司 | Image mask generation method and device, electronic equipment and storage medium |
CN113609897A (en) * | 2021-06-23 | 2021-11-05 | 阿里巴巴新加坡控股有限公司 | Defect detection method and defect detection system |
CN113591569A (en) * | 2021-06-28 | 2021-11-02 | 北京百度网讯科技有限公司 | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium |
CN113689397A (en) * | 2021-08-23 | 2021-11-23 | 湖南视比特机器人有限公司 | Workpiece circular hole feature detection method and workpiece circular hole feature detection device |
CN113870225A (en) * | 2021-09-28 | 2021-12-31 | 广州市华颉电子科技有限公司 | Method for detecting content and pasting quality of artificial intelligent label of automobile domain controller |
CN113870225B (en) * | 2021-09-28 | 2022-07-19 | 广州市华颉电子科技有限公司 | Method for detecting content and pasting quality of artificial intelligent label of automobile domain controller |
CN113933294A (en) * | 2021-11-08 | 2022-01-14 | 中国联合网络通信集团有限公司 | Concentration detection method and device |
CN114354623A (en) * | 2021-12-30 | 2022-04-15 | 苏州凌云视界智能设备有限责任公司 | Weak mark extraction algorithm, device, equipment and medium |
CN114419035A (en) * | 2022-03-25 | 2022-04-29 | 北京百度网讯科技有限公司 | Product identification method, model training device and electronic equipment |
CN114419035B (en) * | 2022-03-25 | 2022-06-17 | 北京百度网讯科技有限公司 | Product identification method, model training device and electronic equipment |
CN114612469A (en) * | 2022-05-09 | 2022-06-10 | 武汉中导光电设备有限公司 | Product defect detection method, device and equipment and readable storage medium |
CN114612469B (en) * | 2022-05-09 | 2022-08-12 | 武汉中导光电设备有限公司 | Product defect detection method, device and equipment and readable storage medium |
CN114782445A (en) * | 2022-06-22 | 2022-07-22 | 深圳思谋信息科技有限公司 | Object defect detection method and device, computer equipment and storage medium |
CN114998097A (en) * | 2022-07-21 | 2022-09-02 | 深圳思谋信息科技有限公司 | Image alignment method, device, computer equipment and storage medium |
CN115937629A (en) * | 2022-12-02 | 2023-04-07 | 北京小米移动软件有限公司 | Template image updating method, template image updating device, readable storage medium and chip |
CN115937629B (en) * | 2022-12-02 | 2023-08-29 | 北京小米移动软件有限公司 | Template image updating method, updating device, readable storage medium and chip |
CN115690101A (en) * | 2022-12-29 | 2023-02-03 | 摩尔线程智能科技(北京)有限责任公司 | Defect detection method, defect detection apparatus, electronic device, storage medium, and program product |
CN116228746A (en) * | 2022-12-29 | 2023-06-06 | 摩尔线程智能科技(北京)有限责任公司 | Defect detection method, device, electronic apparatus, storage medium, and program product |
CN115661161A (en) * | 2022-12-29 | 2023-01-31 | 成都数联云算科技有限公司 | Method, device, storage medium, equipment and program product for detecting defects of parts |
CN116046790A (en) * | 2023-01-31 | 2023-05-02 | 北京百度网讯科技有限公司 | Defect detection method, device, system, electronic equipment and storage medium |
CN116046790B (en) * | 2023-01-31 | 2023-10-27 | 北京百度网讯科技有限公司 | Defect detection method, device, system, electronic equipment and storage medium |
CN116883416A (en) * | 2023-09-08 | 2023-10-13 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for detecting defects of industrial products |
CN116883416B (en) * | 2023-09-08 | 2023-11-24 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for detecting defects of industrial products |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111986178A (en) | Product defect detection method and device, electronic equipment and storage medium | |
CN111523468B (en) | Human body key point identification method and device | |
CN108229343B (en) | Target object key point detection method, deep learning neural network and device | |
CN111693534B (en) | Surface defect detection method, model training method, device, equipment and medium | |
CN111598164B (en) | Method, device, electronic equipment and storage medium for identifying attribute of target object | |
CN111291885A (en) | Near-infrared image generation method, network generation training method and device | |
CN112529073A (en) | Model training method, attitude estimation method and apparatus, and electronic device | |
CN112949767B (en) | Sample image increment, image detection model training and image detection method | |
CN112966742A (en) | Model training method, target detection method and device and electronic equipment | |
CN111881908B (en) | Target detection model correction method, detection device, equipment and medium | |
CN113537374B (en) | Method for generating countermeasure sample | |
CN112330730B (en) | Image processing method, device, equipment and storage medium | |
CN113436100B (en) | Method, apparatus, device, medium, and article for repairing video | |
CN112241716B (en) | Training sample generation method and device | |
CN111767853A (en) | Lane line detection method and device | |
US20210374977A1 (en) | Method for indoor localization and electronic device | |
CN113643260A (en) | Method, apparatus, device, medium and product for detecting image quality | |
CN111524113A (en) | Lifting chain abnormity identification method, system, equipment and medium | |
CN113642471A (en) | Image identification method and device, electronic equipment and storage medium | |
CN116245193A (en) | Training method and device of target detection model, electronic equipment and medium | |
CN113516697B (en) | Image registration method, device, electronic equipment and computer readable storage medium | |
CN111709428A (en) | Method and device for identifying key point positions in image, electronic equipment and medium | |
CN111523467A (en) | Face tracking method and device | |
CN111275827A (en) | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment | |
CN112749701B (en) | License plate offset classification model generation method and license plate offset classification method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||