CN111652852A - Method, device and equipment for detecting surface defects of product - Google Patents


Info

Publication number: CN111652852A (granted as CN111652852B)
Authority: CN (China)
Application number: CN202010382866.9A
Other languages: Chinese (zh)
Inventors: 崔浩 (Cui Hao), 黄虎 (Huang Hu)
Assignee: Zhejiang Huaray Technology Co Ltd
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30124 - Fabrics; textile; paper
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Abstract

The invention provides a method, a device and equipment for detecting surface defects of products. The method comprises: acquiring an image and determining whether it is an image to be detected for a specified type of defect; if so, performing sliding sampling on the image to be detected with a sliding window through a pre-trained defect detection model, performing defect detection on each sampled window region, and outputting position identifiers of the window regions in which defects are found; segmenting each defective window region with an image segmentation algorithm to obtain the contour region of each defect within it; and connecting adjacent contour regions with a region growing algorithm to obtain the form and number of defects in the image to be detected. By sliding-sampling the image, the invention addresses the prior-art problems that manual inspection is time-consuming and labor-intensive, fine surface defects go undetected, and localization is poor.

Description

Method, device and equipment for detecting surface defects of product
Technical Field
The invention relates to the technical field of computers, in particular to a method, a device and equipment for detecting surface defects of products.
Background
With the rapid development of China's manufacturing industry, both the quantity and the variety of industrial products grow daily, and quality expectations rise with them. Surface quality affects not only a product's appearance; more serious functional defects directly depreciate its commercial value. In chemical fiber production, equipment and process factors often introduce extremely fine defects, some only a single pixel wide. When such a fine object moves, the human eye cannot resolve its form and may not perceive it at all. The traditional approach of manually picking out defective products is time-consuming and labor-intensive, limited in resolution, and prone to missed and false detections as inspectors tire. These problems inevitably reduce product quality and raise enterprise operating costs.
With the development of deep learning, computer image processing has advanced by leaps. Non-contact machine vision now solves a range of production line problems and enables factory automation, eliminating the labor cost of manual inspection and the misjudgments and missed detections caused by worker subjectivity. How to detect product surface defects quickly, so as to improve line efficiency and product quality, has therefore become an urgent problem. Because surface defects on chemical fiber products are fine and subject to heavy interference, traditional machine learning and image processing methods struggle to extract effective features.
Existing processes detect surface defects on chemical fiber products by manual selection, which is time-consuming and labor-intensive and, owing to factors such as the limits of human visual acuity and the small size of the defects, also causes false and missed detections.
Disclosure of Invention
The invention provides a method, a device and equipment for detecting surface defects of products, to solve the prior-art problems that detecting surface defects of chemical fiber products is time-consuming and labor-intensive, fine surface defects cannot be detected, and localization is poor.
According to a first aspect of embodiments of the present application, there is provided a method for detecting surface defects of a product, the method including:
acquiring an image and determining whether the image is an image to be detected for detecting the specified type of defects;
if yes, sliding sampling is carried out on the image to be detected through a pre-trained defect detection model by using a sliding window, defect detection is carried out on a sampled window area, and a window area position mark with defects in the image to be detected is output;
carrying out image segmentation on the window area with the defects by using an image segmentation algorithm to obtain the outline area of each defect in the window area with the defects;
and connecting the adjacent contour areas through a region growing algorithm to obtain the form and the number of the defects in the image to be detected.
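The final step above, connecting adjacent contour regions to recover whole defects, can be sketched as a union-find merge over contour bounding boxes. This is a simplified stand-in for the patent's region growing connection; the box representation and the `gap` adjacency threshold are illustrative assumptions, not from the text:

```python
def merge_adjacent_regions(boxes, gap=2):
    """Merge contour bounding boxes (x1, y1, x2, y2) whose borders lie
    within `gap` pixels of each other; each merged group is one defect,
    so the result gives both the form (extent) and the number of defects."""
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def adjacent(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        # boxes overlap or nearly touch in both axes
        return (ax1 - gap <= bx2 and bx1 - gap <= ax2 and
                ay1 - gap <= by2 and by1 - gap <= ay2)

    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if adjacent(boxes[i], boxes[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i, box in enumerate(boxes):
        groups.setdefault(find(i), []).append(box)
    return [(min(b[0] for b in g), min(b[1] for b in g),
             max(b[2] for b in g), max(b[3] for b in g))
            for g in groups.values()]
```

For example, two touching fragments of one tripwire merge into a single defect while a distant region stays separate, so the returned list length is the defect count.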
Optionally, the pre-trained defect detection model is generated by the following training method:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by using a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, wherein each sample comprises an image and an annotated defect position;
and inputting the images in the plurality of samples into an initialization network model, adjusting parameters of the initialization network model according to the defect positions output by the initialization network model and the labeled defect positions, and ending parameter adjustment when a training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the sliding sampling of the image by using the sliding window includes at least one of the following steps:
performing sliding sampling on the image in the horizontal direction by using a sliding window;
utilizing a sliding window to perform sliding sampling on the image in the vertical direction;
and when the sliding window slides in the horizontal direction/the vertical direction, performing sliding sampling on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
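The sliding-sampling rules above can be sketched as a generator of window coordinates, where the step is a fixed ratio of the window's side length in the sliding direction. The 0.5 overlap ratio and the assumption that the image is at least one window wide are illustrative choices, not from the patent:

```python
def slide_windows(img_w, img_h, win_w, win_h, ratio=0.5, direction="horizontal"):
    """Yield (x, y, win_w, win_h) window positions over an img_w x img_h
    image, assuming img_w >= win_w and img_h >= win_h. The step equals
    `ratio` times the window side length along the sliding direction,
    so consecutive windows overlap when ratio < 1."""
    if direction == "horizontal":
        step = max(1, int(win_w * ratio))
        xs = list(range(0, img_w - win_w + 1, step))
        if xs[-1] != img_w - win_w:      # make sure the right border is covered
            xs.append(img_w - win_w)
        for y in range(0, img_h - win_h + 1, win_h):
            for x in xs:
                yield (x, y, win_w, win_h)
    else:  # vertical sliding
        step = max(1, int(win_h * ratio))
        ys = list(range(0, img_h - win_h + 1, step))
        if ys[-1] != img_h - win_h:      # make sure the bottom border is covered
            ys.append(img_h - win_h)
        for x in range(0, img_w - win_w + 1, win_w):
            for y in ys:
                yield (x, y, win_w, win_h)
```

Each yielded window is then passed to the detection part of the model, so no window crosses the image border.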
Optionally, initializing a network model comprising a sampling part and a detection part comprises at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of the sampling part moving in a unit time while sliding in a horizontal direction/a vertical direction using a sliding window, wherein the fixed length is determined by a fixed ratio of a side length in the horizontal direction/the vertical direction of the sliding window.
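The three initialization steps above can be collected into a small configuration object for the sampling part; the field names and default values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class SamplerConfig:
    """Initial parameters of the sampling part: sliding window height and
    width, sliding direction, and the stride expressed as a fixed ratio of
    the window's side length in the sliding direction."""
    win_h: int = 64
    win_w: int = 64
    direction: str = "horizontal"   # or "vertical"
    stride_ratio: float = 0.5       # stride = stride_ratio * side length

    def stride(self) -> int:
        side = self.win_w if self.direction == "horizontal" else self.win_h
        return max(1, int(side * self.stride_ratio))
```

During training, adjusting the window height and width then amounts to rewriting `win_h`/`win_w` between epochs while `stride()` stays consistent with them.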
Optionally, adjusting parameters of the current network model according to the defect position output by the network model and the labeled defect position includes:
adjusting the height and/or width of a sliding window adopted by a sampling part in the current network model;
and adjusting parameters of a neural network layer of a detected part in the current network model.
Optionally, adjusting parameters of the current network model, and ending the parameter adjustment when a training end condition is reached, includes at least one of the following steps:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and finishing the parameter adjustment when the detection precision meets the requirement;
determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of an image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and finishing the parameter adjustment when the weighted sum value meets requirements.
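The second stopping rule above, a weighted sum of detection precision and detection speed, can be sketched as follows. The weights, the normalization of speed by pixels processed per second, and the threshold are all illustrative assumptions; the patent only states that a weighted sum is compared against a requirement:

```python
def should_stop(precision, image_pixels, detect_seconds,
                w_precision=0.7, w_speed=0.3,
                speed_norm=2_000_000, threshold=0.9):
    """End parameter adjustment when the weighted sum of detection
    precision and normalized detection speed meets the requirement.
    Speed is derived from image size and detection time, as in the text."""
    speed = image_pixels / detect_seconds        # pixels processed per second
    speed_score = min(speed / speed_norm, 1.0)   # clamp to [0, 1]
    score = w_precision * precision + w_speed * speed_score
    return score >= threshold
```

A model with 0.95 precision processing 4 MP per second passes, while a slow low-precision model keeps training.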
Optionally, the image segmentation of the window region with the defect by using an image segmentation algorithm includes:
when performing the output operation at the skip connection layer of the U-Net framework, using an attention mechanism to add a weight coefficient to the input x_up coming from the up-sampling layer, so as to obtain the skip connection layer output x_final; the U-Net framework comprises a down-sampling layer, an up-sampling layer, and a skip connection layer connecting the down-sampling layer and the up-sampling layer.
Optionally, adding a weight coefficient to the input x_up from the up-sampling layer through the attention mechanism includes:
performing a convolution with kernel size 1 on the correlation between x_conv and x_up, and computing the weight coefficient through a first activation function, with the formula:
W_att = Sigmoid(Conv_1×1(R))
where W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution with kernel size 1, x_conv is the feature from the skip connection layer, and R is the correlation between x_conv and x_up.
Determining the correlation between x_conv and x_up includes the following steps:
applying a convolution with kernel size 1 to x_conv and x_up respectively, summing the results, and passing the sum through a second activation function to obtain the correlation R of x_conv and x_up, with the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
where ReLU is the second activation function.
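The two formulas above define an attention gate on the U-Net skip connection. Below is a minimal single-channel sketch in which each 1×1 convolution reduces to an element-wise scale; the gating of x_conv by W_att to produce x_final follows the common attention U-Net pattern and, like the scalar weights, is an assumption beyond the literal text:

```python
import math

def conv1x1(x, w, b=0.0):
    """On a single-channel feature map, a 1x1 convolution is just an
    element-wise scale-and-shift."""
    return [[w * v + b for v in row] for row in x]

def attention_gate(x_conv, x_up, w_conv=1.0, w_up=1.0, w_att=1.0):
    """R = ReLU(Conv_1x1(x_conv) + Conv_1x1(x_up))
    W_att = Sigmoid(Conv_1x1(R))
    x_final = W_att * x_conv   (weighted skip-connection output)"""
    a = conv1x1(x_conv, w_conv)
    b = conv1x1(x_up, w_up)
    r = [[max(0.0, av + bv) for av, bv in zip(ra, rb)]      # ReLU
         for ra, rb in zip(a, b)]
    watt = [[1.0 / (1.0 + math.exp(-w_att * v)) for v in row]  # Sigmoid
            for row in r]
    x_final = [[w * v for w, v in zip(rw, rx)]
               for rw, rx in zip(watt, x_conv)]
    return watt, x_final
```

Because W_att lies in (0, 1), the gate suppresses skip-layer activations that do not agree with the up-sampled features, which is what lets the segmentation focus on fine defect contours.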
Optionally, acquiring an image and determining whether the image is an image to be detected for detecting a specified type of defect, includes:
acquiring an image and inputting it into a classification prediction model, wherein the classification prediction model is obtained by training a classification network on training samples comprising a plurality of images with labeled defect types, taking each image as input and its labeled defect type as the training target;
and determining whether the image is an image to be detected needing to detect the specified type of defects according to the classification result of the classification prediction model.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for detecting surface defects of a product, the apparatus including:
the determining module is used for acquiring an image and determining whether the image is an image to be detected for detecting the specified type of defects;
the detection module is used for performing sliding sampling on the image to be detected by utilizing a sliding window through a pre-trained defect detection model if the image is determined to be the image to be detected needing to detect the specified type of defects, performing defect detection on a sampled window area, and outputting a window area position identifier with the defects in the image to be detected;
the segmentation module is used for carrying out image segmentation on the window area with the defects by utilizing an image segmentation algorithm to obtain the outline area of each defect in the window area with the defects;
and the connecting module is used for connecting the adjacent contour regions through a region growing algorithm to obtain the form and the number of the defects in the image to be detected.
Optionally, the detection module is configured to generate the pre-trained defect detection model by the following training method:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by using a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, wherein each sample comprises an image and an annotated defect position;
and inputting the images in the plurality of samples into an initialization network model, adjusting parameters of the initialization network model according to the defect positions output by the initialization network model and the labeled defect positions, and ending parameter adjustment when a training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the detection module is configured to perform sliding sampling on the image by using a sliding window, and includes at least one of the following steps:
performing sliding sampling on the image in the horizontal direction by using a sliding window;
utilizing a sliding window to perform sliding sampling on the image in the vertical direction;
and when the sliding window slides in the horizontal direction/the vertical direction, performing sliding sampling on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
Optionally, the detection module is configured to initialize a network model including a sampling portion and a detection portion, and includes at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of the sampling part moving in a unit time while sliding in a horizontal direction/a vertical direction using a sliding window, wherein the fixed length is determined by a fixed ratio of a side length in the horizontal direction/the vertical direction of the sliding window.
Optionally, the detecting module is configured to adjust a parameter of the current network model according to the defect position output by the network model and the labeled defect position, and includes:
adjusting the height and/or width of a sliding window adopted by a sampling part in the current network model;
and adjusting parameters of a neural network layer of a detected part in the current network model.
Optionally, the detection module is configured to adjust parameters of the current network model, and end the parameter adjustment when a training end condition is reached, and includes at least one of the following steps:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and finishing the parameter adjustment when the detection precision meets the requirement;
determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of an image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and finishing the parameter adjustment when the weighted sum value meets requirements.
Optionally, the segmentation module is configured to perform image segmentation on the window region with the defect by using an image segmentation algorithm, and includes:
when performing the output operation at the skip connection layer of the U-Net framework, use an attention mechanism to add a weight coefficient to the input x_up coming from the up-sampling layer, so as to obtain the skip connection layer output x_final; the U-Net framework comprises a down-sampling layer, an up-sampling layer, and a skip connection layer connecting the down-sampling layer and the up-sampling layer.
Optionally, the segmentation module is configured to add a weight coefficient to the input x_up from the up-sampling layer through an attention mechanism, including:
performing a convolution with kernel size 1 on the correlation between x_conv and x_up, and computing the weight coefficient through a first activation function, with the formula:
W_att = Sigmoid(Conv_1×1(R))
where W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution with kernel size 1, x_conv is the feature from the skip connection layer, and R is the correlation between x_conv and x_up.
Optionally, the segmentation module is configured to determine the correlation between x_conv and x_up by the following steps:
applying a convolution with kernel size 1 to x_conv and x_up respectively, summing the results, and passing the sum through a second activation function to obtain the correlation R of x_conv and x_up, with the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
where ReLU is the second activation function.
Optionally, the determining module is configured to acquire an image and determine whether the image is an image to be detected for detecting a specified type of defect, and includes:
acquiring an image and inputting it into a classification prediction model, wherein the classification prediction model is obtained by training a classification network on training samples comprising a plurality of images with labeled defect types, taking each image as input and its labeled defect type as the training target;
and determining whether the image is an image to be detected needing to detect the specified type of defects according to the classification result of the classification prediction model.
According to a third aspect of embodiments of the present application, there is provided a product surface defect detecting apparatus, including: a processor and a memory, wherein the memory is configured to store a program;
the processor is configured to execute the program in the memory, so as to cause the computer to execute the above aspects of the embodiments of the present application and any method related to the aspects.
Optionally, the apparatus further comprises:
the conveying device comprises a base and a product fixing module, wherein the product fixing module is positioned above the base and connected with the base, the base is used for keeping the conveying module to slide stably, and the product fixing module is used for fixing a product;
the conveying module is positioned below the base and used for conveying products;
the device comprises an image acquisition module and an optical module, wherein the image acquisition module is positioned at the top or the bottom of the optical module and used for acquiring the surface image of the product, and the optical module is used for emitting illumination and assisting the image acquisition module in image acquisition.
Optionally, the image acquisition module and the optical module comprise:
the upper image acquisition module and the upper optical module are positioned at the top of the product fixing module, and the lower image acquisition module and the lower optical module are positioned at the bottom of the product fixing device.
According to a fourth aspect of the embodiments of the present application, there is provided a chip, the chip is coupled with a memory in a user equipment, so that the chip invokes program instructions stored in the memory when running, thereby implementing the above aspects of the embodiments of the present application and any method that may be involved in the aspects.
According to a fifth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing program instructions, which, when executed on a computer, cause the computer to perform the above aspects of the embodiments of the present application and any of the methods that the aspects relate to.
According to a sixth aspect of embodiments of the present application, there is provided a computer program product, which, when run on an electronic device, causes the electronic device to perform a method of implementing the various aspects of embodiments of the present application and any possible ones of the various aspects.
In addition, for technical effects brought by any one implementation manner of the second aspect to the sixth aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
The method, the device and the equipment for detecting the surface defects of the products have the following beneficial effects that:
according to the method, the device and the equipment for detecting the surface defects of the product, after the image to be detected is input into a defect detection model, the window area with the defects is detected by means of sliding sampling and defect detection on the sampled window area, and then the shape and the number of the defects in the image to be detected are obtained through an image segmentation algorithm and an area growth algorithm, so that the calculation resources of a computer can be saved, the limitation of a deep neural network on the size of the input image is avoided, the tiny defects are better identified, and the detection precision and the detection speed of the defect detection model are increased.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a method for detecting surface defects of a product according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a pre-training method of a classification detection model according to an embodiment of the present invention;
fig. 3 is a schematic network structure diagram of a defect detection model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a method for pre-training a defect detection model according to an embodiment of the present invention;
FIG. 5 is a schematic view of a device for detecting surface defects of a product according to an embodiment of the present invention;
FIG. 6 is a schematic view of a device for detecting surface defects of a product according to an embodiment of the present invention;
fig. 7 is a schematic view of a product surface defect detecting apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" in the embodiments of the present invention describes an association relationship of associated objects, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application scenario described in the embodiment of the present invention is for more clearly illustrating the technical solution of the embodiment of the present invention, and does not form a limitation on the technical solution provided in the embodiment of the present invention, and it can be known by a person skilled in the art that with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present invention is also applicable to similar technical problems. In the description of the present invention, the term "plurality" means two or more unless otherwise specified.
How to detect product surface defects quickly, so as to improve line efficiency and product quality, has become an urgent problem. Because surface defects on chemical fiber products are fine and subject to heavy interference, traditional machine learning and image processing methods struggle to extract effective features, and no major breakthrough has been achieved in this area. Tripwire (stray or broken filament) is a common defect in chemical fiber production, and its formation is inseparable from the process, the mechanical state, and the production management of the line. Tripwire not only mars the appearance of chemical fiber products but also degrades their unwinding performance, affecting downstream manufacturers. Existing methods for detecting surface defects of chemical fiber products have the following problems:
1) the method for manually selecting the surface defects of the chemical fiber products is time-consuming and labor-consuming, and can also cause the problems of false detection and missed detection due to factors such as human eye resolution, small defects and the like;
2) when the traditional machine learning and image processing method is used for detecting the fine defects on the surface of a chemical fiber product, effective fine features cannot be extracted, and the algorithm has poor adaptability and robustness;
3) the traditional machine learning and image processing method cannot quickly locate the tiny defects on the surface of the high-resolution image of the chemical fiber product.
Specifically, one existing approach performs rectangular scanning classification over the whole area with a convolutional neural network to determine whether a given rectangle contains tripwire, and then obtains the tripwire position through linear feature extraction and a traditional image segmentation algorithm. It has the following problems:
1) detection efficiency drops sharply on high-resolution images;
2) locating the spinning-cake region by threshold segmentation and shape fitting imposes high demands on ambient light, and uneven illumination or illumination changes seriously affect the localization;
3) with the sliding-rectangle classification method based on a convolutional neural network (VGG), the rectangle size is difficult to set: too large a rectangle hinders the classification of fine defects and the later splicing of tripwire segments, while too small a rectangle slows the overall algorithm and makes normal filament texture indistinguishable from tripwire texture.
In the prior art, nonlinear gray level co-occurrence matrix features of a defective image and a defect-free image are also constructed, and the defect area is then located by measuring the similarity between the features of the defective image and those of the defect-free image, but the following problems also exist:
1) global features cannot well describe the tiny defects on the fabric surface, which inevitably leads to missed detection of tiny defects;
2) the traditional image processing method based on the gray level co-occurrence matrix places high requirements on the environment and has poor robustness.
To solve the above problems, the application provides a method for detecting the surface defects of a product: for an image in which a specified type of defect is to be detected, the defect detection model provided by the application performs sliding sampling on the image to be detected with a sliding window and performs defect detection on each sampled window; the forms and the number of the defects existing in the image to be detected are then obtained with an image segmentation algorithm and a region growing algorithm.
The method provided by the embodiment of the application can effectively detect tripwire defects and carry out pixel-level segmentation and number statistics. The tripwire statistics so obtained can be used to adjust the process of the production line and the running state of the equipment and to reduce the occurrence of tripwire defects, thereby improving product quality and production-line efficiency. The method overcomes the problems of the prior art that manually selecting tiny tripwire defects is time-consuming and labor-intensive with serious missed and false detection, and that existing methods have poor anti-interference capability, low detection efficiency, and miss or falsely detect tiny tripwire defects.
As shown in fig. 1, a method for detecting surface defects of a product provided by an embodiment of the present application includes:
step S101, acquiring an image and determining whether the image is an image to be detected for detecting the specified type of defects;
the method for detecting the surface defects of the product provided by the embodiment of the application can be used for detecting various defects on the surface of a chemical fiber product, and is optionally mainly applied to the detection of the tripwire defects, wherein the specified type of defects are tripwire type defects;
Whether an image is an image to be detected for the specified type of defect is determined by examining the content of the acquired image. When acquiring images, images of the upper bottom surface, the lower bottom surface or the side surface of the paper tube of the chemical fiber product may be acquired. Tripwire defects mainly appear on the upper and lower bottom surfaces of the chemical fiber product, so an acquired image of the upper or lower bottom surface is an image to be detected for the specified type of defect. Specifically, the chemical fiber product appears circular or semicircular in images of the upper and lower bottom surfaces, and rectangular in images of the side surface, so whether an acquired image is an image to be detected for the specified type of defect can be determined according to the shape of the product in the image.
Whether the acquired image is an image to be detected may be determined by presetting image features of the upper bottom surface, the lower bottom surface and the side surface of the chemical fiber product and comparing the acquired image with these features. Optionally, a classification prediction model obtained by training in advance is used: the acquired image is input into the classification prediction model, and whether the image is an image to be detected for the specified type of defect is determined from the classification result. The classification prediction model is obtained by model training with a training sample comprising a plurality of images and labeled defect types, taking the images as the input of a network classification model and the labeled defect types of the images as the training target, where a labeled defect type is the tripwire defect or another defect type labeled according to the image content. In the embodiment of the application, ResNet50 is used as the classification prediction model, which is pre-trained based on gradient descent and a segmented learning-rate method.
Step S102, if yes, sliding sampling is carried out on the image to be detected through a pre-trained defect detection model by using a sliding window, defect detection is carried out on a sampled window area, and a window area position mark with defects in the image to be detected is output;
If the acquired image is determined to be an image to be detected for the tripwire defect type, the image to be detected is input into a pre-trained defect detection model, sliding sampling is performed on it, defect detection is performed on each sampling window area, and the position identifiers of window areas with defects in the image to be detected are output.
The defect detection model comprises a sampling part and a detection part, the sampling part utilizes a sliding window to perform sliding sampling on an image to be detected input into the defect detection model, and the detection part performs defect detection on a window area obtained by the sliding sampling.
The defect detection model needs to be pre-trained before sliding sampling and detection are carried out. In the pre-training process, a network model comprising a sampling part and a detection part is first initialized; then, training samples each comprising an image and a labeled defect position are used, the parameters of the current network model are adjusted according to the defect positions output by the network model and the labeled defect positions, and parameter adjustment ends when the training end condition is met, yielding the defect detection model.
The parameter adjustment comprises adjustment of both the sampling part and the detection part. Specifically, adjusting the sampling part comprises adjusting the height and/or width of the sliding window; when the detection part detects a window area obtained by sliding sampling, its parameters are adjusted with better defect detection as the target. The training end condition is met when the detection speed or detection precision of the current network model, or another index that characterizes the performance of the defect detection model, meets the preset requirement; parameter adjustment then ends and the defect detection model is obtained.
The position identifiers of defective window areas in the image to be detected, output in the embodiment of the application, are determined by the sliding lengths of the sliding window in the horizontal and vertical directions.
In the prior art, the input of a large-resolution image into a deep neural network mainly has the following three problems:
1) the limitation of the deep neural network model on the size of the image size;
2) the large-resolution image input to the deep neural network model easily causes the exhaustion of computer computing resources;
3) the deep network cannot effectively extract fine target features from a large-resolution image.
In view of the above problems, the sliding detection method of the present application avoids the limitation of the deep neural network model on image size, avoids exhausting computer computing resources on large-resolution images, can extract fine target features more effectively, and better detects the defect features in the image.
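As an illustration of the sliding-sampling idea described above, the following hypothetical sketch crops fixed-size windows from a large image instead of feeding the whole image to the network; the image size, window size and step are assumptions chosen for illustration, not values from the application.

```python
# Sketch: enumerate the top-left corners of every sampling window so that
# each small crop, rather than the full high-resolution image, is detected.
def slide_sample(img_w, img_h, win_w, win_h, step_x, step_y):
    """Return the top-left corners of all sliding-sampling windows."""
    corners = []
    y = 0
    while y + win_h <= img_h:
        x = 0
        while x + win_w <= img_w:
            corners.append((x, y))
            x += step_x
        y += step_y
    return corners

# A hypothetical 4096x4096 image sampled with 512x512 windows and a
# 256-pixel step (50% overlap between adjacent windows):
windows = slide_sample(4096, 4096, 512, 512, 256, 256)
```

Each window is small enough for a fixed-input network, and the 50% overlap keeps a defect lying on a window border fully visible in a neighboring window.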
Step S103, carrying out image segmentation on the window area with the defects by using an image segmentation algorithm to obtain the outline area of each defect in the window area with the defects;
Wherein, when the output operation is carried out on the jump transmission layer of the U-Net network framework, a weight coefficient is added by an attention mechanism to the input x_up from the up-sampling layer to obtain the output x_final of the jump transmission layer; the U-Net network framework comprises a down-sampling layer, an up-sampling layer and a jump transmission layer connecting the down-sampling layer and the up-sampling layer.
The attention mechanism (Attention Mechanism) in a neural network is a resource allocation scheme that allocates computing resources to more important tasks and solves the problem of information overload under limited computing power. In neural network learning, generally speaking, the more parameters a model has, the stronger its expression ability and the larger the amount of information it stores, but this may cause information overload. By introducing the attention mechanism, information more critical to the current task is focused on among the many inputs, the attention paid to other information is reduced, and irrelevant information is even filtered out, which solves the information overload problem and improves the efficiency and accuracy of task processing. By introducing an attention mechanism into the existing U-Net network framework, the problem that tiny defect features are not obvious can be solved, so that the output of the jump transmission layer better captures the features of the defects in the image.
And S104, connecting adjacent contour regions through a region growing algorithm to obtain the form and the number of defects in the image to be detected.
During image segmentation, because the middle of a defect may be indistinct, only the outline areas on the two sides of the defect may be obtained, or only a part of the defect may fall inside a window area during sliding sampling; adjacent outline areas therefore need to be connected through a region growing algorithm to obtain the form and number of the defects in the image to be detected.
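The connection step above can be sketched as a toy merge of adjacent bounding boxes; this is an illustrative stand-in for the region-growing algorithm, and the tolerance value and coordinates are assumptions for illustration only.

```python
# Sketch: merge contour fragments whose bounding boxes touch (within tol
# pixels) into one defect, so two ends of a faint tripwire count as one.
def boxes_adjacent(a, b, tol=2):
    """True if boxes a and b overlap or lie within tol pixels of each other."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return not (ax1 + tol < bx0 or bx1 + tol < ax0 or
                ay1 + tol < by0 or by1 + tol < ay0)

def merge_adjacent(boxes, tol=2):
    """Repeatedly merge adjacent boxes until no further merge is possible."""
    boxes = list(boxes)
    changed = True
    while changed:
        changed = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if boxes_adjacent(boxes[i], boxes[j], tol):
                    a, b = boxes[i], boxes[j]
                    boxes[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[j]
                    changed = True
                    break
            if changed:
                break
    return boxes

# Two fragments of one defect plus one separate defect -> two defects total.
defects = merge_adjacent([(0, 0, 10, 3), (11, 0, 25, 3), (60, 40, 70, 43)])
```

The number of boxes remaining after merging gives the defect count, and each merged box describes one defect's extent.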
The processes of the embodiments of the present application will be described in detail below with reference to specific embodiments.
In the embodiment of the application, whether an image is an image to be detected for a specified type of defect is determined through a classification prediction model. The classification prediction model needs to be pre-trained, and the specific implementation process is shown in fig. 2:
step S201, acquiring images and inputting them into the classification prediction model, wherein the model is trained with a training sample comprising a plurality of images and labeled defect types, the images being the input of the network classification model and the labeled defect types of the images being the training target;
the labeled defect type indicates whether the image content is of the tripwire defect type or another defect type;
The embodiment of the application adopts ResNet50 as the network model of the classification prediction model. Classification prediction is carried out with residual connections in the ResNet network, as shown in the following formula:
x_cur = F(x_pre, W_conv) + W_sam · x_pre

wherein x_pre is the feature map (Feature Map) of the previous layer; W_conv denotes the learnable parameters of the next layer; F represents the operation of the next layer (including convolution, pooling, ReLU, BN, etc.); W_sam is a sampling operation (convolution or pooling) applied to the previous layer so that the dimension of W_sam · x_pre stays consistent with the output dimension of F(x_pre, W_conv); x_cur is the output feature map of the target layer.
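The residual connection can be illustrated with a toy numeric example; here the "feature maps" are 1-D lists, F stands in for the next layer's conv/ReLU pipeline as a simple elementwise scaling followed by ReLU, and W_sam is the identity since the dimensions already match. All numbers are illustrative assumptions.

```python
# Sketch of x_cur = F(x_pre, W_conv) + W_sam * x_pre with 1-D toy features.
def relu(v):
    return [max(0.0, x) for x in v]

def F(x_pre, w_conv):
    # stand-in for the next layer: elementwise scaling ("1x1 conv") + ReLU
    return relu([w * x for w, x in zip(w_conv, x_pre)])

def residual_block(x_pre, w_conv, w_sam=1.0):
    fx = F(x_pre, w_conv)
    # shortcut: add the (sampled) previous feature map back onto F's output
    return [f + w_sam * x for f, x in zip(fx, x_pre)]

x_cur = residual_block([1.0, -2.0, 3.0], [0.5, 0.5, 0.5])
# F gives [0.5, 0.0, 1.5]; adding the identity shortcut gives [1.5, -2.0, 4.5]
```

The shortcut lets the gradient flow past F unchanged, which is why deeper ResNet-style classifiers remain trainable.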
The network model adopts the cross-entropy loss function of a softmax classifier, and the parameters of the classification prediction model are adjusted based on gradient descent and a segmented learning rate. Suppose the output of the network is x = [x_1, x_2, …, x_i] and the label information of the sample is y = [y_1, y_2, …, y_i]. Each dimension of the softmax classifier outputs:

s_i = e^(x_i) / Σ_j e^(x_j)

so the output of the softmax classifier is s = [s_1, s_2, …, s_i].

The final loss function L of the model is:

L = − Σ_i y_i · ln(s_i)

After obtaining the loss function, the parameters of the model are adjusted by a gradient descent method, and the parameter W of the network model is updated as:

W = W − η · ∂L/∂W

wherein W is a parameter of the network model and η represents the learning rate; L is related to s_i, s_i is related to x_i, and x_i is related to W, so ∂L/∂W is obtained through the chain rule.
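The softmax, cross-entropy loss and one gradient-descent step above can be checked numerically; the sketch below uses the standard softmax cross-entropy gradient dL/dx_i = s_i − y_i, and the network output, one-hot label and learning rate are illustrative assumptions.

```python
import math

def softmax(x):
    m = max(x)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in x]
    total = sum(e)
    return [v / total for v in e]

def cross_entropy(s, y):
    # L = -sum_i y_i * ln(s_i); skip zero label entries
    return -sum(yi * math.log(si) for si, yi in zip(s, y) if yi > 0)

x = [2.0, 1.0, 0.1]                  # toy network outputs (logits)
y = [1.0, 0.0, 0.0]                  # one-hot label: class 0
s = softmax(x)
loss = cross_entropy(s, y)
grad = [si - yi for si, yi in zip(s, y)]   # dL/dx_i = s_i - y_i
eta = 0.5
x_new = [xi - eta * gi for xi, gi in zip(x, grad)]   # one descent step
```

One step pushes the logit of the true class up and the others down, exactly the behavior the loss derivation predicts.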
And step S202, finishing training to obtain a classification prediction model when the condition for finishing model parameter adjustment is met.
After it is determined that the acquired image is an image to be detected, the image to be detected is input into the defect detection model for defect detection. Fig. 3 is a schematic diagram of the network structure of the defect detection model, which includes a sampling part 301 and a detection part 302; the sampling part performs sliding sampling on the image with a sliding window, and the detection part performs defect detection on the window regions. Specifically, during sliding sampling, only horizontal sliding sampling or only vertical sliding sampling may be performed on the image to be detected. As an optional implementation, the sliding window may first sample the image in the horizontal direction and then in the vertical direction, or first in the vertical direction and then in the horizontal direction; alternatively, two sliding windows may sample the image in the horizontal and vertical directions simultaneously;
When the sliding window slides in the horizontal/vertical direction, the image is sampled at the speed of moving a fixed length per unit time, wherein the fixed length is determined by a fixed ratio of the side length of the sliding window in the horizontal/vertical direction. Specifically, the fixed ratio is a pre-specified value between [0, 1], which a person skilled in the art can set according to actual requirements; the overlapping area of two adjacent sliding-window areas is the area of the sliding window multiplied by the fixed ratio.
Before the defect detection model performs sliding sampling on the image to be detected with a sliding window and performs defect detection on the sampled window regions, the defect detection model first needs to be pre-trained, as shown in fig. 4, including:
step S401, initializing a network model comprising a sampling part and a detection part;
wherein, initializing the parameters of the sampling part comprises:
initializing the height and width of the sliding window adopted by the sampling part, wherein the initialized height and width of the sliding window are the input height and width of the network model;
initializing the sliding direction of a sliding window adopted by the sampling part, wherein the sliding direction is a horizontal direction and a vertical direction;
initializing a fixed length of the sampling part moving in a unit time while sliding in a horizontal direction/a vertical direction using a sliding window, wherein the fixed length is determined by a fixed ratio of a side length in the horizontal direction/the vertical direction of the sliding window. The fixed ratio is a preset value and is set by a person skilled in the art according to actual requirements.
Step S402, obtaining a sample set comprising a plurality of samples, wherein each sample comprises an image and a labeled defect position;
the samples are surface images of chemical fiber products for detecting the specified defect type, with the positions of the defects labeled on each image.
During training, more samples are generated by adjusting the image angle, saturation, exposure and hue, so as to improve the detection precision of the defect detection model.
Step S403, inputting images in the multiple samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the labeled defect positions, and ending parameter adjustment when a training ending condition is reached to obtain the pre-trained defect detection model;
The parameter adjustment comprises adjusting the height and/or width of the sliding window adopted by the sampling part in the initialized network model, and adjusting the parameters of the neural network layers of the detection part in the initialized network model.
When adjusting the parameters of the initialized network model, for the sampling part, only the width (horizontal direction) of a sliding window sliding in the horizontal direction may be adjusted, or only the height (vertical direction) of a sliding window sliding in the vertical direction may be adjusted; for the detection part, the parameters of its neural network layers are adjusted.
The method for achieving the training end condition in the embodiment of the application specifically comprises the following two modes:
1) determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and finishing the parameter adjustment when the detection precision meets the requirement;
The network model can detect the position of a defect in an image, and the detection precision of the model is determined from the output defect positions and the labeled defect positions. The detection precision may be obtained by averaging the detection precision over a plurality of sliding windows, or may be the detection precision corresponding to any single sliding-window area, and is a value between [0, 1];
Detection precision meeting the condition may mean that, after the parameters are adjusted multiple times with the detection precision changing correspondingly, the parameters corresponding to the highest detection precision are selected as the parameters of the defect detection model. As an optional implementation, a detection precision threshold is preset, and when the detection precision of the network model exceeds the threshold, the corresponding parameters are selected as the parameters of the defect detection model.
2) Determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of an image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and finishing the parameter adjustment when the weighted sum value meets requirements.
As an optional implementation, the detection speed is normalized and mapped to a value between [0, 1], so that the sliding-window size giving both a higher detection speed and a higher detection precision is optimal.
Similarly, when judging whether the weighted sum meets the requirement, the parameters may be adjusted multiple times with the detection precision and detection speed changing correspondingly, and the parameters corresponding to the highest weighted sum are selected as the parameters of the defect detection model. As an optional implementation, a threshold for the weighted sum is preset, and when the weighted sum exceeds the threshold, the corresponding parameters are selected as the parameters of the defect detection model.
The embodiment of the application adopts an overlapped sliding detection method with feedback information: the size of the sliding window is set automatically according to the detection precision and detection speed obtained in each training round, which settles the trade-off among the detection speed, the detection precision and the sliding-window setting of the network model.
The pre-training process of the defect detection model is described below with reference to specific embodiments:
1) initializing a network model comprising a sampling part and a detection part specifically comprises:
In the embodiment of the present application, the width and height of the initialized sliding window are denoted by W and H respectively (usually initialized to the width and height of the network input), and the width and height of the images in the acquired training samples are also recorded. The fixed ratios for each horizontal and vertical slide of the sliding window are denoted by Rx and Ry. The set of ratios for adjusting the sliding-window size is N = [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4]; the weighting factor of the network model effect is α; the detection speed of the model is V; the detection precision is P; the overall model performance obtained by the weighted sum of detection speed and detection precision is Perf; and the training round index is times ∈ [0, len(N)], where len(N) denotes the number of elements of N.
2) Acquiring a sample set comprising a plurality of samples, wherein each sample comprises an image and an annotated defect position;
3) inputting the image in the sample into the current network model, and adjusting the parameters of the current network model according to the defect position output by the network model and the labeled defect position specifically comprises:
inputting the images in the sample into the current network model to obtain the detection speed V and the detection precision P of the network model, and calculating the overall performance of the network model, which can be expressed as:
Perf=αP+(1-α)V
The width and height of the sliding window are adjusted to w and h according to the window-size adjustment ratios, giving:

w = W · N[times]

h = H · N[times]
and selecting the optimal size of the sliding window according to the maximum Perf principle, and adjusting the parameters of the current network model.
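The feedback-driven window sizing above can be sketched as follows; the network input size and the per-ratio precision/speed measurements below are made-up illustrative numbers (in practice they come from a training round), so only the selection logic Perf = α·P + (1 − α)·V is the point of the example.

```python
# Sketch: pick the sliding-window size that maximizes Perf = a*P + (1-a)*V.
N = [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4]   # candidate size ratios
alpha = 0.7                                          # assumed weighting factor
W0, H0 = 416, 416                                    # assumed network input size

# hypothetical (precision, normalized speed) measured for each candidate size
measured = [(0.80, 0.95), (0.84, 0.90), (0.88, 0.85), (0.90, 0.78),
            (0.91, 0.70), (0.90, 0.62), (0.89, 0.55), (0.87, 0.50),
            (0.85, 0.45)]

perf = [alpha * p + (1 - alpha) * v for p, v in measured]
best = max(range(len(N)), key=lambda i: perf[i])     # maximum-Perf principle
w_best, h_best = int(W0 * N[best]), int(H0 * N[best])
```

Larger windows here buy precision but cost speed, and the weighted sum lets α express how much precision matters relative to throughput.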
The position identifiers of defective window areas in the image to be detected, output by the obtained defect detection model, are computed as follows:
The step lengths of the horizontal and vertical sliding of the sliding window are calculated as follows. Horizontal step: Sx = (1 − Rx) · w; vertical step: Sy = (1 − Ry) · h. Denoting the i-th sliding window in the horizontal direction and the j-th in the vertical direction as (i, j), the sliding-window rectangular region is Rect = [Point(i·Sx, j·Sy), Width = w, Height = h], where Rect represents the output sliding-window region identifier, Point represents the start point of the rectangular window Rect, and Width and Height represent the width and height of the rectangular window Rect, respectively.
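The window-identifier arithmetic can be sketched in a few lines; the window size and overlap ratios below are assumed values for illustration.

```python
# Sketch: compute the Rect identifier of sliding window (i, j) from the
# steps Sx = (1 - Rx) * w and Sy = (1 - Ry) * h described above.
def window_rect(i, j, w, h, rx, ry):
    sx = int((1 - rx) * w)           # horizontal step length Sx
    sy = int((1 - ry) * h)           # vertical step length Sy
    return {"Point": (i * sx, j * sy), "Width": w, "Height": h}

# Assumed 512x512 window with 50% overlap in both directions:
rect = window_rect(2, 3, 512, 512, 0.5, 0.5)
# step is 256 in both directions, so window (2, 3) starts at (512, 768)
```

Given a detection hit in window (i, j), this Rect maps the hit back to absolute coordinates in the full image.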
In the embodiment of the present application, the target detection model Yolov3-tiny is used as the network model of the detection part. After the network model is pre-trained, the defect detection model is further adjusted by fine tuning, specifically using a gradient descent algorithm with momentum and a learning strategy of segmented learning rates, as follows:
v_i = β · v_{i−1} + ∂L/∂W

W = W − η · v_i

wherein v_i and v_{i−1} represent the current and previous gradient values respectively, β represents the momentum factor, η represents the learning rate, and W represents the parameters of the above network model;
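A minimal sketch of this fine-tuning update follows, combining heavy-ball momentum with a piecewise (segmented) learning-rate schedule; the schedule boundaries, the learning rates and the constant toy gradient are assumptions for illustration, not values from the application.

```python
# Sketch: v_i = beta * v_{i-1} + grad;  W = W - eta * v_i,
# with eta chosen by a segmented (piecewise-constant) schedule.
def piecewise_lr(step, schedule=((1000, 1e-3), (5000, 1e-4))):
    """Return the learning rate for the current step (assumed boundaries)."""
    for boundary, lr in schedule:
        if step < boundary:
            return lr
    return 1e-5                       # final segment

def momentum_step(w, v, grad, step, beta=0.9):
    eta = piecewise_lr(step)
    v = beta * v + grad               # accumulate the momentum term
    return w - eta * v, v

w, v = 1.0, 0.0
for step in range(3):                 # three updates with a constant toy gradient
    w, v = momentum_step(w, v, grad=0.5, step=step)
```

Momentum smooths the noisy per-window gradients, while the segmented schedule lowers η in later stages so the fine-tuned weights settle.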
The embodiment of the application solves the problem caused by large image resolution through sliding sampling by the added sampling part, and adds feedback information in the training process to adjust the size of the sliding window. In the prior art, manually labeling tiny tripwire defects consumes a large amount of time, and the subjectivity of annotators also affects labeling quality. In the application, the defect detection model is fine-tuned on the basis of a pre-trained network model, so that the model can label defects automatically, reducing the difficulty of manually labeling tripwire.
To describe the embodiments of the present application in detail, the existing U-Net network framework is first described: it comprises an encoder and a decoder, where the encoder is also called the down-sampling layer and the decoder the up-sampling layer. The encoder has four sub-modules, each containing two convolutional layers and followed by down-sampling through a max-pooling layer (max pool). If the resolution of the input image is 572x572, the resolutions of the 1st to 4th modules are 284x284, 140x140, 68x68 and 32x32, respectively. Since the convolutions use valid mode, the resolution of each sub-module here equals (resolution of the previous sub-module − 4)/2.
The decoder contains four sub-modules, and the resolution is increased step by step by up-sampling until it matches the resolution of the input image (the actual output is smaller than the input because of the valid mode used for convolution). The network also uses a jump transmission layer to connect the up-sampled result to the output of the encoder sub-module with the same resolution, as the input of the next sub-module in the decoder.
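The resolution arithmetic quoted above can be verified with a few lines: with valid-mode 3x3 convolutions each sub-module loses 4 pixels (two convolutions), and the max-pooling halves the result, giving the (prev − 4)/2 rule.

```python
# Sketch: reproduce the U-Net encoder resolutions for a 572x572 input.
def encoder_resolutions(inp, n_modules=4):
    """Apply (prev - 4) // 2 per sub-module: two valid 3x3 convs + 2x2 pool."""
    res = []
    cur = inp
    for _ in range(n_modules):
        cur = (cur - 4) // 2
        res.append(cur)
    return res

resolutions = encoder_resolutions(572)
# -> [284, 140, 68, 32], matching the figures quoted for a 572x572 input
```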
In the embodiment of the application, when the output operation is carried out on the jump transmission layer of the U-Net network framework, a weight coefficient is added by the attention mechanism to the input x_up from the up-sampling layer to obtain the output x_final of the jump transmission layer. The specific implementation process comprises the following steps:
1) the input of the jump transmission layer is processed by the attention mechanism algorithm to obtain the output of the jump transmission layer; specifically, a weight coefficient is added to the input x_up from the up-sampling layer, with the following formula:

x_final = ψ(x_conv, x_up) = W_att × x_up

wherein x_final is the output of said jump transmission layer, ψ(x_conv, x_up) is the attention mechanism algorithm, x_conv is the input of the jump transmission layer from the down-sampling layer, x_up is the input of the jump transmission layer from the up-sampling layer, and W_att is the weight coefficient.
2) a convolution operation with kernel size 1 is performed on the correlation of x_conv and x_up, and the weight coefficient is obtained through a first activation function:

W_att = Sigmoid(Conv_1×1(R))

wherein Sigmoid is the first activation function, Conv_1×1 is a convolution operation with kernel size 1, and R is the correlation of x_conv and x_up.
3) convolution operations with kernel size 1 are performed on x_conv and x_up respectively and summed, and the sum is passed through a second activation function to obtain the correlation R of x_conv and x_up:

R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))

wherein ReLU is the second activation function.
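The three attention-gate formulas can be illustrated elementwise; reducing each Conv_1×1 to a per-channel scalar weight is an assumption made purely for illustration, as are the input values.

```python
import math

# Sketch of the attention gate: R = ReLU(a*x_conv + b*x_up),
# W_att = Sigmoid(c*R), x_final = W_att * x_up, with the 1x1 convolutions
# reduced to scalar weights a, b, c (an illustrative assumption).
def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def attention_gate(x_conv, x_up, a=1.0, b=1.0, c=1.0):
    out = []
    for xc, xu in zip(x_conv, x_up):
        r = max(0.0, a * xc + b * xu)   # R = ReLU(Conv_1x1(x_conv)+Conv_1x1(x_up))
        w_att = sigmoid(c * r)          # W_att = Sigmoid(Conv_1x1(R))
        out.append(w_att * xu)          # x_final = W_att * x_up
    return out

x_final = attention_gate([2.0, -3.0], [1.0, 1.0])
# strong agreement at the first position keeps most of x_up; the second is damped
```

Positions where the encoder feature agrees with the up-sampled feature receive weights near 1, so faint but consistent defect evidence is preserved while background is suppressed.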
In the embodiment of the present application, the first and second activation functions are used between two layers of neurons in the neural network. In a multilayer neural network, the signal of a neuron in the previous layer, i.e. the result wx + b computed by the linear unit, needs to be input to the next layer; before being input, it is activated once, i.e. f = Sigmoid(wx + b) or f = ReLU(wx + b), and only after passing through the first or second activation function is the signal input to the next layer of neurons.
In the embodiment of the present application, when the U-Net network frame connects the upsampling result with the output of the sub-module with the same resolution in the encoder at the skip transmission layer, the weight coefficient is added as the input of the next sub-module in the decoder, and the calculation process and the specific implementation process are as described above and are not described herein again.
The method for detecting surface defects of a product according to the present invention is explained above, and the apparatus for detecting surface defects of a product is explained below.
Referring to fig. 5, an apparatus for detecting surface defects of a product according to an embodiment of the present invention includes:
a determining module 501, configured to acquire an image and determine whether the image is an image to be detected, where the specified type of defect needs to be detected;
a detection module 502, configured to, if it is determined that the image is an image to be detected that needs to detect a specified type of defect, perform sliding sampling on the image to be detected by using a sliding window through a pre-trained defect detection model, perform defect detection on a sampled window area, and output a window area position identifier where a defect exists in the image to be detected;
a segmentation module 503, configured to perform image segmentation on the window area with the defect by using an image segmentation algorithm, to obtain an outline area of each defect in the window area with the defect;
a connecting module 504, configured to connect adjacent contour regions through a region growing algorithm, so as to obtain a form and a number of defects in the image to be detected.
Optionally, the detection module is configured to generate the pre-trained defect detection model by the following training method:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by using a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, wherein each sample comprises an image and an annotated defect position;
and inputting the images in the plurality of samples into an initialization network model, adjusting parameters of the initialization network model according to the defect positions output by the initialization network model and the labeled defect positions, and ending parameter adjustment when a training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the detection module is configured to perform sliding sampling on the image by using a sliding window, and includes at least one of the following steps:
performing sliding sampling on the image in the horizontal direction by using a sliding window;
utilizing a sliding window to perform sliding sampling on the image in the vertical direction;
and when the sliding window slides in the horizontal direction/the vertical direction, performing sliding sampling on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
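The sampling rule above — a step equal to a fixed proportion of the sliding window's side length in the sliding direction — can be sketched as follows. The image size, window size, and 0.5 overlap ratio are illustrative assumptions, not values taken from the patent.

```python
def sliding_windows(img_w, img_h, win_w, win_h, ratio=0.5):
    """Yield top-left corners of sliding windows over an img_w x img_h image.

    The step in each direction is a fixed proportion (`ratio`) of the
    window's side length in that direction, so consecutive windows overlap
    when ratio < 1. Assumes the window fits inside the image.
    """
    step_x = max(1, int(win_w * ratio))
    step_y = max(1, int(win_h * ratio))
    ys = list(range(0, img_h - win_h + 1, step_y))
    xs = list(range(0, img_w - win_w + 1, step_x))
    # ensure the bottom and right edges of the image are still covered
    if ys[-1] != img_h - win_h:
        ys.append(img_h - win_h)
    if xs[-1] != img_w - win_w:
        xs.append(img_w - win_w)
    return [(x, y) for y in ys for x in xs]

# 100x60 image, 40x40 window, 50% step: 4 horizontal x 2 vertical positions.
windows = sliding_windows(100, 60, 40, 40, ratio=0.5)
```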
Optionally, the detection module is configured to initialize a network model including a sampling portion and a detection portion, and includes at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of the sampling part moving in a unit time while sliding in a horizontal direction/a vertical direction using a sliding window, wherein the fixed length is determined by a fixed ratio of a side length in the horizontal direction/the vertical direction of the sliding window.
Optionally, the detecting module is configured to adjust a parameter of the current network model according to the defect position output by the network model and the labeled defect position, and includes:
adjusting the height and/or width of a sliding window adopted by a sampling part in the current network model;
and adjusting parameters of a neural network layer of a detected part in the current network model.
Optionally, the detection module is configured to adjust parameters of the current network model, and end the parameter adjustment when a training end condition is reached, and includes at least one of the following steps:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and finishing the parameter adjustment when the detection precision meets the requirement;
determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of an image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and finishing the parameter adjustment when the weighted sum value meets requirements.
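One plausible reading of this stopping criterion is a scalar score that weights detection precision against a speed term normalized from image size and detection time. The weights, the throughput used for normalization, and the threshold below are illustrative assumptions, not values from the patent.

```python
def detection_speed(image_pixels, detect_seconds, target_pps=2_000_000):
    """Normalize detection speed (pixels per second) against an assumed
    target throughput, clamped to [0, 1]."""
    return min(image_pixels / detect_seconds / target_pps, 1.0)

def should_stop(precision, speed, w_p=0.7, w_s=0.3, threshold=0.9):
    """Weighted sum of detection precision and normalized detection speed;
    parameter adjustment ends once the score meets the threshold."""
    score = w_p * precision + w_s * speed
    return score >= threshold, score

# e.g. a 1920x1080 image detected in 1 s at 95% precision
stop, score = should_stop(0.95, detection_speed(1920 * 1080, 1.0))
```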
Optionally, the segmentation module is configured to perform image segmentation on the window region with the defect by using an image segmentation algorithm, and includes:
when an output operation is performed at the skip connection layer of the U-Net network framework, adding a weight coefficient, through an attention mechanism, to the input x_up from the up-sampling layer, to obtain the output x_final of the skip connection layer, wherein the U-Net network framework comprises a down-sampling layer, an up-sampling layer, and a skip connection layer connecting the down-sampling layer and the up-sampling layer.
Optionally, the segmentation module is configured to add a weight coefficient, through an attention mechanism, to the input x_up from the up-sampling layer, by:
performing a convolution with kernel size 1 on the correlation between x_conv and x_up, and computing the weight coefficient through a first activation function, according to the formula:
W_att = Sigmoid(Conv_1×1(R))
wherein W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution with kernel size 1, x_conv is the input to the skip connection layer from the down-sampling layer, and R is the correlation between x_conv and x_up.
Optionally, the segmentation module is configured to determine the correlation between x_conv and x_up by:
performing a convolution with kernel size 1 on each of x_conv and x_up, summing the results, and passing the sum through a second activation function to obtain the correlation R between x_conv and x_up, according to the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
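The two formulas above can be exercised end-to-end in a few lines of NumPy, treating a 1×1 convolution as a per-pixel linear map over channels. The final gating step (multiplying the skip features x_conv by the weight map W_att to obtain x_final) is one plausible reading of "adding a weight coefficient"; the tensor shapes and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear map over channels.
    x: (H, W, C_in), w: (C_in, C_out) -> (H, W, C_out)."""
    return x @ w

def attention_gate(x_conv, x_up, w_conv, w_up, w_att):
    """Attention-weighted skip connection following the formulas above:
    R     = ReLU(Conv_1x1(x_conv) + Conv_1x1(x_up))
    W_att = Sigmoid(Conv_1x1(R))
    and (as an assumed final step) x_final = W_att * x_conv."""
    r = np.maximum(conv1x1(x_conv, w_conv) + conv1x1(x_up, w_up), 0.0)  # ReLU
    w_map = 1.0 / (1.0 + np.exp(-conv1x1(r, w_att)))                   # Sigmoid
    return w_map * x_conv  # broadcast (H, W, 1) weights over channels

H, W, C = 4, 4, 8
x_conv = rng.standard_normal((H, W, C))
x_up = rng.standard_normal((H, W, C))
x_final = attention_gate(x_conv, x_up,
                         rng.standard_normal((C, C)),
                         rng.standard_normal((C, C)),
                         rng.standard_normal((C, 1)))
```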
Optionally, the determining module is configured to acquire an image and determine whether the image is an image to be detected for detecting a specified type of defect, and includes:
acquiring an image and inputting the image into a classification prediction model, wherein the classification prediction model is obtained by training a network classification model on training samples comprising a plurality of images and their labeled defect types, with the images as input and the labeled defect types as the training targets;
and determining whether the image is an image to be detected needing to detect the specified type of defects according to the classification result of the classification prediction model.
The above describes a product surface defect detection apparatus in the present embodiment from the perspective of a modular functional entity, and the following describes a product surface defect detection apparatus in the present embodiment from the perspective of hardware processing.
Referring to fig. 6, in an embodiment of the present application, an apparatus for detecting surface defects of a product includes at least one processor 601, at least one memory 602, and a bus system 609;
wherein the memory stores program code that, when executed by the processor, causes the processor to perform the following:
acquiring an image and determining whether the image is an image to be detected for detecting the specified type of defects;
if yes, sliding sampling is carried out on the image to be detected through a pre-trained defect detection model by using a sliding window, defect detection is carried out on a sampled window area, and a window area position mark with defects in the image to be detected is output;
carrying out image segmentation on the window area with the defects by using an image segmentation algorithm to obtain the outline area of each defect in the window area with the defects;
and connecting the adjacent contour areas through a region growing algorithm to obtain the form and the number of the defects in the image to be detected.
Fig. 6 is a schematic diagram of a product surface defect detecting apparatus according to an embodiment of the present disclosure. The apparatus 600 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 601, a memory 602, and one or more storage media 603 (e.g., one or more mass storage devices) for storing applications 604 or data 605. The memory 602 and the storage medium 603 may be transient or persistent storage. The program stored in the storage medium 603 may include one or more modules (not shown), and each module may include a series of instruction operations for the apparatus. Further, the processor 601 may be arranged to communicate with the storage medium 603 and to execute, on the apparatus 600, the series of instruction operations stored in the storage medium 603.
The device 600 may also include one or more wired or wireless network interfaces 607, one or more input-output interfaces 608, and/or one or more operating systems 606, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
Optionally, the pre-trained defect detection model is generated by the following training method:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by using a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, wherein each sample comprises an image and an annotated defect position;
and inputting the images in the plurality of samples into an initialization network model, adjusting parameters of the initialization network model according to the defect positions output by the initialization network model and the labeled defect positions, and ending parameter adjustment when a training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the processor is configured to perform sliding sampling on the image by using a sliding window, and includes at least one of the following steps:
performing sliding sampling on the image in the horizontal direction by using a sliding window;
utilizing a sliding window to perform sliding sampling on the image in the vertical direction;
and when the sliding window slides in the horizontal direction/the vertical direction, performing sliding sampling on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
Optionally, the processor is configured to initialize a network model including a sampling part and a detection part, and includes at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of the sampling part moving in a unit time while sliding in a horizontal direction/a vertical direction using a sliding window, wherein the fixed length is determined by a fixed ratio of a side length in the horizontal direction/the vertical direction of the sliding window.
Optionally, the processor is configured to adjust parameters of the current network model according to the defect position output by the network model and the labeled defect position, and includes:
adjusting the height and/or width of a sliding window adopted by a sampling part in the current network model;
and adjusting parameters of a neural network layer of a detected part in the current network model.
Optionally, the processor is configured to adjust parameters of the current network model, and end the parameter adjustment when a training end condition is reached, and the method includes at least one of the following steps:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and finishing the parameter adjustment when the detection precision meets the requirement;
determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of an image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and finishing the parameter adjustment when the weighted sum value meets requirements.
Optionally, the processor is configured to perform image segmentation on the window region with the defect by using an image segmentation algorithm, and includes:
when an output operation is performed at the skip connection layer of the U-Net network framework, adding a weight coefficient, through an attention mechanism, to the input x_up from the up-sampling layer, to obtain the output x_final of the skip connection layer, wherein the U-Net network framework comprises a down-sampling layer, an up-sampling layer, and a skip connection layer connecting the down-sampling layer and the up-sampling layer.
Optionally, the processor is configured to add a weight coefficient, through an attention mechanism, to the input x_up from the up-sampling layer, by:
performing a convolution with kernel size 1 on the correlation between x_conv and x_up, and computing the weight coefficient through a first activation function, according to the formula:
W_att = Sigmoid(Conv_1×1(R))
wherein W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution with kernel size 1, x_conv is the input to the skip connection layer from the down-sampling layer, and R is the correlation between x_conv and x_up.
Optionally, the processor is configured to determine the correlation between x_conv and x_up by:
performing a convolution with kernel size 1 on each of x_conv and x_up, summing the results, and passing the sum through a second activation function to obtain the correlation R between x_conv and x_up, according to the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
Optionally, the processor is configured to acquire an image and determine whether the image is an image to be detected for detecting a specified type of defect, and includes:
acquiring an image and inputting the image into a classification prediction model, wherein the classification prediction model is obtained by training a network classification model on training samples comprising a plurality of images and their labeled defect types, with the images as input and the labeled defect types as the training targets;
and determining whether the image is an image to be detected needing to detect the specified type of defects according to the classification result of the classification prediction model.
As shown in fig. 7, the apparatus further includes:
a base 702 and a product fixing module 705 which is positioned above the base and connected with the base, wherein the base is used for keeping the conveying module to slide stably, and the product fixing module is used for fixing a product;
a transfer module 701 located below the base for transferring the product;
the device comprises an image acquisition module 703 and an optical module 704, wherein the image acquisition module is positioned at the top or the bottom of the optical module and used for acquiring the surface image of the product, and the optical module is used for emitting illumination and assisting the image acquisition module in image acquisition.
The image acquisition module and the optical module comprise:
an upper image capture module 7031 and an upper optical module 7041 located at the top of the product fixture, and a lower image capture module 7032 and a lower optical module 7042 located at the bottom of the product fixture.
When detecting product defects, the embodiment of the application first conveys a product sample to be detected to the image acquisition area through the conveying module. When the sample passes the image acquisition module, the image acquisition module of the apparatus is triggered, in cooperation with the optical module, to acquire a high-resolution image of the defect; finally, the acquired image is processed by the processor to obtain the form and number of defects of the product sample under test.
The embodiment of the invention also provides a computer-readable storage medium, which comprises instructions, and when the computer-readable storage medium runs on a computer, the computer is enabled to execute the product surface defect detection method provided by the embodiment.
The embodiment of the present application further provides a computer program product, which includes a computer program, where the computer program includes program instructions, and when the program instructions are executed by an electronic device, the electronic device is caused to execute the method for detecting surface defects of a product provided in the foregoing embodiment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The technical solutions provided by the present application are introduced in detail, and the present application applies specific examples to explain the principles and embodiments of the present application, and the descriptions of the above examples are only used to help understand the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (15)

1. A method for detecting surface defects of a product, the method comprising:
acquiring an image and determining whether the image is an image to be detected for detecting the specified type of defects;
if yes, sliding sampling is carried out on the image to be detected through a pre-trained defect detection model by using a sliding window, defect detection is carried out on a sampled window area, and a window area position mark with defects in the image to be detected is output;
carrying out image segmentation on the window area with the defects by using an image segmentation algorithm to obtain the outline area of each defect in the window area with the defects;
and connecting the adjacent contour areas through a region growing algorithm to obtain the form and the number of the defects in the image to be detected.
2. The method of claim 1, wherein the pre-trained defect detection model is generated by training:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by using a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, wherein each sample comprises an image and an annotated defect position;
and inputting the images in the plurality of samples into an initialization network model, adjusting parameters of the initialization network model according to the defect positions output by the initialization network model and the labeled defect positions, and ending parameter adjustment when a training ending condition is reached to obtain the pre-trained defect detection model.
3. The method of claim 2, wherein the sliding sampling of the image using the sliding window comprises at least one of:
performing sliding sampling on the image in the horizontal direction by using a sliding window;
utilizing a sliding window to perform sliding sampling on the image in the vertical direction;
and when the sliding window slides in the horizontal direction/the vertical direction, performing sliding sampling on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
4. The method of claim 2, wherein initializing the network model including the sampling portion and the detection portion comprises at least one of:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of the sampling part moving in a unit time while sliding in a horizontal direction/a vertical direction using a sliding window, wherein the fixed length is determined by a fixed ratio of a side length in the horizontal direction/the vertical direction of the sliding window.
5. The method according to any one of claims 2 to 4, wherein adjusting parameters of the current network model according to the defect positions output by the network model and the labeled defect positions comprises:
adjusting the height and/or width of a sliding window adopted by a sampling part in the current network model;
and adjusting parameters of a neural network layer of a detected part in the current network model.
6. The method according to any one of claims 2 to 4, wherein the adjusting of the parameters of the current network model is performed, and the adjusting of the parameters is finished when the training end condition is reached, and the method comprises at least one of the following steps:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and finishing the parameter adjustment when the detection precision meets the requirement;
determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of an image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and finishing the parameter adjustment when the weighted sum value meets requirements.
7. The method of claim 1, wherein image segmentation of the defective window region using an image segmentation algorithm comprises:
when an output operation is performed at the skip connection layer of the U-Net network framework, adding a weight coefficient, through an attention mechanism, to the input x_up from the up-sampling layer, to obtain the output x_final of the skip connection layer, wherein the U-Net network framework comprises a down-sampling layer, an up-sampling layer, and a skip connection layer connecting the down-sampling layer and the up-sampling layer.
8. The method of claim 7, wherein adding a weight coefficient, through an attention mechanism, to the input x_up from the up-sampling layer comprises:
performing a convolution with kernel size 1 on the correlation between x_conv and x_up, and computing the weight coefficient through a first activation function, according to the formula:
W_att = Sigmoid(Conv_1×1(R))
wherein W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution with kernel size 1, x_conv is the input to the skip connection layer from the down-sampling layer, and R is the correlation between x_conv and x_up.
9. The method of claim 8, wherein determining the correlation between x_conv and x_up comprises:
performing a convolution with kernel size 1 on each of x_conv and x_up, summing the results, and passing the sum through a second activation function to obtain the correlation R between x_conv and x_up, according to the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
10. The method of claim 1, wherein acquiring an image and determining whether the image is to be detected for a specified type of defect comprises:
acquiring an image and inputting the image into a classification prediction model, wherein the classification prediction model is obtained by training a network classification model on training samples comprising a plurality of images and their labeled defect types, with the images as input and the labeled defect types as the training targets;
and determining whether the image is an image to be detected needing to detect the specified type of defects according to the classification result of the classification prediction model.
11. A product surface defect detecting apparatus, comprising:
the determining module is used for acquiring an image and determining whether the image is an image to be detected for detecting the specified type of defects;
the detection module is used for performing sliding sampling on the image to be detected by utilizing a sliding window through a pre-trained defect detection model if the image is determined to be the image to be detected needing to detect the specified type of defects, performing defect detection on a sampled window area, and outputting a window area position identifier with the defects in the image to be detected;
the segmentation module is used for carrying out image segmentation on the window area with the defects by utilizing an image segmentation algorithm to obtain the outline area of each defect in the window area with the defects;
and the connecting module is used for connecting the adjacent contour regions through a region growing algorithm to obtain the form and the number of the defects in the image to be detected.
12. A product surface defect detecting apparatus, comprising: a processor and a memory, wherein the memory is configured to store a program;
the processor is configured to execute the program in the memory to cause the computer to perform the method of any one of claims 1 to 10.
13. The apparatus of claim 12, further comprising:
the conveying device comprises a base and a product fixing module, wherein the product fixing module is positioned above the base and connected with the base, the base is used for keeping the conveying module to slide stably, and the product fixing module is used for fixing a product;
the conveying module is positioned below the base and used for conveying products;
the device comprises an image acquisition module and an optical module, wherein the image acquisition module is positioned at the top or the bottom of the optical module and used for acquiring the surface image of the product, and the optical module is used for emitting illumination and assisting the image acquisition module in image acquisition.
14. The apparatus of claim 13, wherein the image acquisition module and optical module comprise:
the upper image acquisition module and the upper optical module are positioned at the top of the product fixing module, and the lower image acquisition module and the lower optical module are positioned at the bottom of the product fixing module.
15. A computer-readable storage medium comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 10.
CN202010382866.9A 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment Active CN111652852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010382866.9A CN111652852B (en) 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010382866.9A CN111652852B (en) 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment

Publications (2)

Publication Number Publication Date
CN111652852A true CN111652852A (en) 2020-09-11
CN111652852B CN111652852B (en) 2024-03-29

Family

ID=72346817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010382866.9A Active CN111652852B (en) 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN111652852B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07147309A (en) * 1993-11-25 1995-06-06 Nikon Corp Detector for pattern defect
EP0742431A1 (en) * 1995-05-10 1996-11-13 Mahlo GmbH & Co. KG Method and apparatus for detecting flaws in moving fabrics or the like
WO2019104767A1 (en) * 2017-11-28 2019-06-06 Changzhou Campus of Hohai University Fabric defect detection method based on deep convolutional neural network and visual saliency
CN109448006A (en) * 2018-11-01 2019-03-08 Jiangxi University of Science and Technology U-shaped densely connected retinal blood vessel segmentation method with attention mechanism
CN109859171A (en) * 2019-01-07 2019-06-07 Beijing University of Technology Automatic floor defect detection method based on computer vision and deep learning
CN109978867A (en) * 2019-03-29 2019-07-05 Beijing Baidu Netcom Science and Technology Co., Ltd. Toy appearance quality determination method and related device
CN109993734A (en) * 2019-03-29 2019-07-09 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for outputting information
CN110175548A (en) * 2019-05-20 2019-08-27 Institute of Optics and Electronics, Chinese Academy of Sciences Building extraction method for remote sensing images based on attention mechanism and channel information
CN110865077A (en) * 2019-11-15 2020-03-06 Shanghai Electrical Apparatus Research Institute (Group) Co., Ltd. Visual inspection system for appearance defects in RFID antenna production

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JACOB KONIG ET AL.: "A Convolutional Neural Network for Pavement Surface Crack Segmentation Using Residual Connections and Attention Gating", ICIP 2019, pages 1460-1464 *
JUNWEN CHEN ET AL.: "Automatic Defect Detection of Fasteners on the Catenary Support Device Using Deep Convolutional Neural Network", IEEE Transactions on Instrumentation and Measurement, 4 December 2017 (2017-12-04) *
WU Liangbin, Beijing: Aviation Industry Press *
ZHANG Tao et al.: "A Survey of Inspection Robots for Utility Tunnels", Chinese Journal of Underground Space and Engineering, 31 December 2019 (2019-12-31) *
FANG Xin; SHI Zheng: "Wafer Defect Detection and Classification Algorithm Based on Convolutional Neural Network", Computer Engineering, no. 08, 15 August 2018 (2018-08-15) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365443A (en) * 2020-10-16 2021-02-12 Zhuhai Aodewei Technology Co., Ltd. Hexahedron defect detection method and medium based on deep learning
CN112964732A (en) * 2021-02-04 2021-06-15 Keda Smart IoT Technology Co., Ltd. Spinning cake defect visual detection system and method based on deep learning
CN113077454A (en) * 2021-04-19 2021-07-06 Luster LightTech Co., Ltd. Image defect fitting method, system and storage medium
CN113034502A (en) * 2021-05-26 2021-06-25 Shenzhen Investigation and Research Institute Co., Ltd. Drainage pipeline defect redundancy removing method
CN113034502B (en) * 2021-05-26 2021-08-24 Shenzhen Investigation and Research Institute Co., Ltd. Drainage pipeline defect redundancy removing method
CN113592787A (en) * 2021-07-13 2021-11-02 Suzhou Inovance Control Technology Co., Ltd. Light emitting component detection method and device, terminal equipment and storage medium
CN114529507A (en) * 2021-12-30 2022-05-24 Guangxi Huiyun Information Technology Co., Ltd. Particleboard surface defect detection method based on Vision Transformer
CN114529507B (en) * 2021-12-30 2024-05-17 Guangxi Huiyun Information Technology Co., Ltd. Vision Transformer-based particleboard surface defect detection method
CN115147348A (en) * 2022-05-05 2022-10-04 Hefei University of Technology Tire defect detection method and system based on improved YOLOv3
CN114789743B (en) * 2022-06-22 2022-09-16 Chengdu Tie'an Technology Co., Ltd. Method and system for monitoring abnormal running of train wheels
CN114789743A (en) * 2022-06-22 2022-07-26 Chengdu Tie'an Technology Co., Ltd. Method and system for monitoring abnormal operation of train wheels
CN115587989A (en) * 2022-10-21 2023-01-10 National Industrial Information Security Development Research Center Workpiece CT image defect detection and segmentation method and system
CN115587989B (en) * 2022-10-21 2023-08-18 National Industrial Information Security Development Research Center Workpiece CT image defect detection and segmentation method and system
CN115965816A (en) * 2023-01-05 2023-04-14 Wuxi Institute of Technology Glass defect classification and detection method and system based on deep learning
CN115965816B (en) * 2023-01-05 2023-08-22 Wuxi Institute of Technology Glass defect classification and detection method and system based on deep learning
CN115984268A (en) * 2023-03-20 2023-04-18 Hangzhou Baizijian Technology Co., Ltd. Target detection method and device based on machine vision, electronic equipment and medium
CN115984268B (en) * 2023-03-20 2023-06-30 Hangzhou Baizijian Technology Co., Ltd. Target detection method and device based on machine vision, electronic equipment and medium

Also Published As

Publication number Publication date
CN111652852B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN111652852A (en) Method, device and equipment for detecting surface defects of product
CN108764048B (en) Face key point detection method and device
CN110458095B (en) Effective gesture recognition method, control method and device and electronic equipment
CN110544258B (en) Image segmentation method and device, electronic equipment and storage medium
CN106845502B (en) Wearable auxiliary device for equipment maintenance and visual equipment maintenance guiding method
CN111797653B (en) Image labeling method and device based on high-dimensional image
CN110909611B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
CN108629306B (en) Human body posture recognition method and device, electronic equipment and storage medium
CN111709310B (en) Gesture tracking and recognition method based on deep learning
US10558844B2 (en) Lightweight 3D vision camera with intelligent segmentation engine for machine vision and auto identification
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN112040198A (en) Intelligent water meter reading identification system and method based on image processing
Huang et al. Obstacle distance measurement under varying illumination conditions based on monocular vision using a cable inspection robot
CN109920018A (en) Black-and-white photograph color recovery method, device and storage medium neural network based
CN111429424A (en) Heating furnace inlet abnormity identification method based on deep learning
Lin et al. A pointer type instrument intelligent reading system design based on convolutional neural networks
CN116071315A (en) Product visual defect detection method and system based on machine vision
CN115797846A (en) Wind power generation blade block defect comparison method and device and electronic equipment
CN116385430A (en) Machine vision flaw detection method, device, medium and equipment
CN111767826A (en) Timing fixed-point scene abnormity detection method
CN110956184A (en) Abstract diagram direction determination method based on HSI-LBP characteristics
CN117237681A (en) Image processing method, device and related equipment
Aguirre et al. Using a deep learning model on images to obtain a 2d laser people detector for a mobile robot
CN108446693B (en) Marking method, system, equipment and storage medium of target to be identified
CN113139540B (en) Backboard detection method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province
Applicant after: Zhejiang Huarui Technology Co.,Ltd.
Address before: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province
Applicant before: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd.

GR01 Patent grant