CN111652852B - Product surface defect detection method, device and equipment - Google Patents


Info

Publication number
CN111652852B
CN111652852B (application CN202010382866.9A)
Authority
CN
China
Prior art keywords
image
defect
detection
defects
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010382866.9A
Other languages
Chinese (zh)
Other versions
CN111652852A (en)
Inventor
崔浩
黄虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huaray Technology Co Ltd
Original Assignee
Zhejiang Huaray Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huaray Technology Co Ltd
Priority to CN202010382866.9A
Publication of CN111652852A
Application granted
Publication of CN111652852B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a device and equipment for detecting surface defects of a product. The method comprises: acquiring an image and determining whether it is an image to be detected, i.e. one that needs to be checked for a specified type of defect; if so, performing sliding sampling on the image to be detected with a sliding window through a pre-trained defect detection model, performing defect detection on each sampled window area, and outputting the position identifiers of the defective window areas in the image to be detected; performing image segmentation on the defective window areas with an image segmentation algorithm to obtain the contour regions of all defects in those areas; and connecting adjacent contour regions through a region growing algorithm to obtain the shape and number of defects in the image to be detected. By performing sliding sampling detection on the image, the invention solves the problems of the prior art of being time-consuming and labor-intensive, failing to detect fine surface defects, and poor defect localization.

Description

Product surface defect detection method, device and equipment
Technical Field
The invention relates to the technical field of computers, in particular to a method, a device and equipment for detecting surface defects of a product.
Background
With the rapid development of manufacturing in China, the number and variety of industrially produced products keep increasing, and quality requirements keep rising. The surface quality of a product not only affects its appearance; more serious functional defects directly devalue the product commercially. In the production of chemical fiber products in particular, fine defects often occur owing to the influence of equipment and processes, with some defects only one pixel wide. When such a small object moves, the human eye cannot reliably distinguish its form, or even perceive it at all. The traditional method of manually sorting out defective products is time-consuming and labor-intensive, limited by visual resolution, and prone to missed and false detections as the eyes tire. These problems inevitably lower product quality and raise the operating costs of the enterprise.
With the development of deep learning, computer image processing has advanced by leaps and bounds. Non-contact machine vision solves a series of problems on industrial production lines and enables factory automation, eliminating the labor cost and the misjudgments and missed judgments of product defects caused by the subjectivity of human workers. How to detect defects on the product surface quickly, so as to improve line efficiency and product quality, is therefore a problem to be solved. Because the surface defect targets of chemical fiber products are relatively fine and heavily interfered with, their features are difficult to extract effectively with traditional machine learning and image processing methods.
Existing processes for detecting surface defects of chemical fiber products rely on manual inspection, which is time-consuming and labor-intensive and also prone to missed and false detections owing to factors such as the resolution of the human eye and the fineness of the defects.
Disclosure of Invention
The invention provides a method, a device and equipment for detecting surface defects of a product, which are used for solving the problems of the prior art when detecting surface defects of chemical fiber products: being time-consuming and labor-intensive, failing to detect fine surface defects, and poor defect localization.
According to a first aspect of embodiments of the present application, there is provided a method for detecting a surface defect of a product, the method comprising:
acquiring an image and determining whether the image is an image to be detected, which needs to be detected for the specified type of defects;
if yes, performing sliding sampling on the image to be detected by utilizing a sliding window through a pre-trained defect detection model, performing defect detection on a sampled window area, and outputting a window area position mark with defects in the image to be detected;
image segmentation is carried out on the window area with the defects by utilizing an image segmentation algorithm, so that outline areas of all defects in the window area with the defects are obtained;
and connecting adjacent contour areas through an area growth algorithm to obtain the shape and the number of defects in the image to be detected.
Optionally, the pre-trained defect detection model is generated by the following training means:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by utilizing a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
inputting the images in the plurality of samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the sliding window is used for sliding sampling the image, which includes at least one step as follows:
sliding sampling is carried out on the image in the horizontal direction by utilizing a sliding window;
sliding sampling in the vertical direction is carried out on the image by utilizing a sliding window;
when the sliding window slides in the horizontal direction/the vertical direction, the sliding sampling is carried out on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
Optionally, initializing a network model including a sampling portion and a detection portion includes at least one of:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of movement per unit time when the sampling part slides in the horizontal/vertical direction by using a sliding window, wherein the fixed length is determined by a fixed ratio of the side lengths of the sliding window in the horizontal/vertical direction.
Optionally, adjusting parameters of the current network model according to the defect position output by the network model and the marked defect position, including:
the height and/or width of a sliding window adopted by a sampling part in the current network model are adjusted;
and adjusting the neural network layer parameters of the detection part in the current network model.
Optionally, adjusting parameters of the current network model, and ending parameter adjustment when the training ending condition is reached, including at least one step of:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and ending parameter adjustment when the detection precision meets the requirement;
And determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of the image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and ending parameter adjustment when the weighted sum value meets the requirement.
Optionally, image segmentation is performed on the window area with the defect by using an image segmentation algorithm, including:
input x from the upsampling layer is processed by the attention mechanism when output operations are performed at the skip transport layer of the U-Net network framework up Adding weight coefficient to obtain output x of jump transmission layer final The U-Net network framework comprises a downsampling layer, an upsampling layer and a jumping transmission layer connecting the downsampling layer and the upsampling layer.
Optionally, adding a weight coefficient to the input x_up from the upsampling layer through an attention mechanism comprises:
performing a convolution operation with a kernel size of 1 on the correlation of x_conv and x_up, and obtaining the weight coefficient through a first activation function, according to the formula:
W_att = Sigmoid(Conv_1×1(R))
wherein W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution operation with a kernel size of 1, x_conv is the input to the skip transport layer from the downsampling layer, and R is the correlation of x_conv and x_up.
Determining the correlation of x_conv and x_up comprises:
performing convolution operations with a kernel size of 1 on x_conv and x_up respectively, summing the results, and computing the sum through a second activation function to obtain the correlation R of x_conv and x_up, according to the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
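By way of illustration only, the following is a minimal PyTorch sketch of such an attention-gated skip connection built from the formulas above; the channel sizes, the choice of multiplying the gate into x_up, and the class name are assumptions, not part of the claimed method.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """R = ReLU(Conv_1x1(x_conv) + Conv_1x1(x_up)); W_att = Sigmoid(Conv_1x1(R));
    the gated skip output weights x_up by W_att (hypothetical combination step)."""
    def __init__(self, conv_ch, up_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(conv_ch, inter_ch, kernel_size=1)   # Conv_1x1 on x_conv
        self.phi = nn.Conv2d(up_ch, inter_ch, kernel_size=1)       # Conv_1x1 on x_up
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)           # Conv_1x1 on R
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x_conv, x_up):
        r = self.relu(self.theta(x_conv) + self.phi(x_up))         # correlation R
        w_att = torch.sigmoid(self.psi(r))                         # weight coefficient W_att
        return w_att * x_up                                        # weighted skip-layer input
```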
Optionally, acquiring an image and determining whether the image is an image to be detected for which a specified type of defect needs to be detected, including:
acquiring images and inputting a classification prediction model, wherein the images are input as a network classification model by utilizing training samples comprising a plurality of images and marked defect types, and model training is carried out by taking the marked defect types of the images as targets to obtain the classification prediction model;
and determining whether the image is an image to be detected, which needs to detect the defects of the specified type, according to the classification result of the classification prediction model.
According to a second aspect of embodiments of the present application, there is provided a product surface defect detection device, the device comprising:
the determining module is used for acquiring an image and determining whether the image is an image to be detected, which needs to be detected for the defect of the specified type;
The detection module is used for carrying out sliding sampling on the image to be detected by utilizing a sliding window through a pre-trained defect detection model, carrying out defect detection on a sampled window area and outputting a window area position mark with defects in the image to be detected if the image is determined to be the image to be detected with the defects of the specified type;
the segmentation module is used for carrying out image segmentation on the window area with the defects by utilizing an image segmentation algorithm to obtain outline areas of all defects in the window area with the defects;
and the connecting module is used for connecting the adjacent contour areas through an area growing algorithm to obtain the form and the number of the defects in the image to be detected.
Optionally, the detection module is configured to generate the pre-trained defect detection model by the following training method:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by utilizing a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
Inputting the images in the plurality of samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the detection module is configured to slide sample the image with a sliding window, and includes at least one step of:
sliding sampling is carried out on the image in the horizontal direction by utilizing a sliding window;
sliding sampling in the vertical direction is carried out on the image by utilizing a sliding window;
when the sliding window slides in the horizontal direction/the vertical direction, the sliding sampling is carried out on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
Optionally, the detection module is configured to initialize a network model including a sampling portion and a detection portion, and includes at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
Initializing a fixed length of movement per unit time when the sampling part slides in the horizontal/vertical direction by using a sliding window, wherein the fixed length is determined by a fixed ratio of the side lengths of the sliding window in the horizontal/vertical direction.
Optionally, the detecting module is configured to adjust parameters of the current network model according to the defect position output by the network model and the marked defect position, and includes:
the height and/or width of a sliding window adopted by a sampling part in the current network model are adjusted;
and adjusting the neural network layer parameters of the detection part in the current network model.
Optionally, the detecting module is configured to adjust parameters of the current network model, and end parameter adjustment when the training end condition is reached, including at least one step of:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and ending parameter adjustment when the detection precision meets the requirement;
and determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of the image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and ending parameter adjustment when the weighted sum value meets the requirement.
Optionally, the segmentation module is configured to perform image segmentation on the window area with the defect by using an image segmentation algorithm, and includes:
input x from the upsampling layer is processed by the attention mechanism when output operations are performed at the skip transport layer of the U-Net network framework up Adding weight coefficient to obtain output x of jump transmission layer final The U-Net network framework comprises a downsampling layer, an upsampling layer and a jumping transmission layer connecting the downsampling layer and the upsampling layer.
Optionally, the segmentation module is configured to add a weight coefficient to the input x_up from the upsampling layer through an attention mechanism, comprising:
performing a convolution operation with a kernel size of 1 on the correlation of x_conv and x_up, and obtaining the weight coefficient through a first activation function, according to the formula:
W_att = Sigmoid(Conv_1×1(R))
wherein W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution operation with a kernel size of 1, x_conv is the input to the skip transport layer from the downsampling layer, and R is the correlation of x_conv and x_up.
Optionally, the segmentation module is configured to determine the correlation of x_conv and x_up, comprising:
performing convolution operations with a kernel size of 1 on x_conv and x_up respectively, summing the results, and computing the sum through a second activation function to obtain the correlation R of x_conv and x_up, according to the formula:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
Optionally, the determining module is configured to acquire an image and determine whether the image is an image to be detected that needs to detect a defect of a specified type, and includes:
acquiring images and inputting a classification prediction model, wherein the images are input as a network classification model by utilizing training samples comprising a plurality of images and marked defect types, and model training is carried out by taking the marked defect types of the images as targets to obtain the classification prediction model;
and determining whether the image is an image to be detected, which needs to detect the defects of the specified type, according to the classification result of the classification prediction model.
According to a third aspect of embodiments of the present application, there is provided a product surface defect detection apparatus, comprising: a processor and a memory, wherein the memory is used for storing a program;
the processor is configured to execute the program in the memory, so that the computer executes the method that may be involved in each of the above aspects and any of the aspects related to the embodiments of the present application.
Optionally, the apparatus further comprises:
the device comprises a base and a product fixing module, wherein the product fixing module is positioned above the base and connected with the base, the base is used for keeping the conveying module to slide stably, and the product fixing module is used for fixing a product;
The conveying module is positioned below the base and is used for conveying products;
the image acquisition module is positioned at the top or the bottom of the optical module and is used for acquiring the surface image of the product, and the optical module is used for emitting illumination and assisting the image acquisition module to acquire the image.
Optionally, the image acquisition module and the optical module comprise:
the upper image acquisition module and the upper optical module are positioned at the top of the product fixing module, and the lower image acquisition module and the lower optical module are positioned at the bottom of the product fixing device.
According to a fourth aspect of embodiments of the present application, there is provided a chip coupled to a memory in a user equipment, such that the chip, when running, invokes program instructions stored in the memory, implementing the above aspects of embodiments of the present application and any possible related methods related to the aspects.
According to a fifth aspect of embodiments of the present application, there is provided a computer readable storage medium storing program instructions that, when run on a computer, cause the computer to perform the aspects of embodiments of the present application and any one of the possible related methods related to the aspects.
According to a sixth aspect of embodiments of the present application, there is provided a computer program product, which when run on an electronic device, causes the electronic device to perform any one of the possible related methods of implementing the above aspects and aspects of embodiments of the present application.
In addition, the technical effects caused by any implementation manner of the second aspect to the sixth aspect may refer to the technical effects caused by different implementation manners of the first aspect, which are not described herein.
The method, the device and the equipment for detecting the surface defects of the product have the following beneficial effects:
according to the method, the device and the equipment for detecting the surface defects of a product, after the image to be detected is input into the defect detection model, the defective window areas are found by sliding sampling and detecting defects in each sampled window area, and the shape and number of defects in the image to be detected are then obtained through the image segmentation algorithm and the region growing algorithm. This saves the computing resources of the computer, avoids the limitation of the deep neural network on input image size, identifies fine defects better, and improves both the detection precision and the detection speed of the defect detection model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it will be apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a method for detecting surface defects of a product according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a pre-training method of a classification detection model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network structure of a defect detection model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a pre-training method of a defect detection model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a device for detecting surface defects of a product according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a product surface defect detecting apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a product surface defect detecting apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the embodiment of the invention, the term "and/or" describes an association between associated objects and indicates that three relations can exist; for example, A and/or B can mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The application scenarios described in the embodiments of the present invention are intended to describe the technical solutions of the embodiments more clearly and do not limit them; as a person of ordinary skill in the art knows, the provided technical solutions are equally applicable to similar technical problems as new application scenarios appear. In the description of the present invention, unless otherwise indicated, "a plurality" means two or more.
How to quickly detect defects on the product surface to improve line efficiency and product quality has become an urgent problem. Because the surface defect targets of chemical fiber products are relatively tiny and heavily interfered with, their features are difficult to extract effectively with traditional machine learning and image processing methods, so no major breakthrough has been made in detecting them. The stumble wire (snagged filament) is a common defect in chemical fiber production, and its formation is inseparable from the process, the mechanical state and the production management of the line. Snagging affects not only the appearance of the chemical fiber product but also its unwinding performance, and hence the production of downstream manufacturers. The existing process for detecting surface defects of chemical fiber products has the following problems:
1) By adopting a method for manually selecting the surface defects of chemical fiber products, the method is time-consuming and labor-consuming, and can also be possibly subject to the problem of false leakage detection caused by factors such as resolution of human eyes, finer defects and the like;
2) When detecting tiny defects on the surface of a chemical fiber product, the traditional machine learning and image processing method cannot extract effective tiny features, and has poor algorithm adaptability and robustness;
3) Conventional machine learning and image processing methods are unable to quickly locate fine defects on the high resolution image surface of chemical fiber products.
Specifically, a method that scans the whole area with rectangles classified by a convolutional neural network to determine whether a rectangle contains a stumble wire, and then obtains the stumble-wire position with linear feature extraction and a traditional image segmentation algorithm, has the following problems:
1) The detection efficiency drops greatly on high-resolution images;
2) Locating the spinning-cake area by threshold segmentation and shape fitting places high demands on the ambient light; uneven illumination and illumination changes seriously affect the localization of the spinning cake;
3) With rectangular sliding classification based on a convolutional neural network such as VGG (Visual Geometry Group), the rectangle size is difficult to set: too large a rectangle hampers the classification of tiny defects and the later stitching of stumble wires, while too small a rectangle lowers the overall speed of the algorithm and makes normal wire texture indistinguishable from stumble-wire texture.
In the prior art, gray-level co-occurrence matrix features are also constructed for defective and non-defective images, and the defective area is then located by measuring the feature similarity between the two, but the following problems remain:
1) Such global features cannot describe the fine defects on the fabric surface well, so fine defects are inevitably missed;
2) Traditional image processing based on the gray-level co-occurrence matrix places high demands on the environment and has poor robustness.
In view of the above problems, the application provides a product surface defect detection method that first determines the images to be detected for a specified type of defect, then uses the defect detection model provided by the application to slide-sample the image to be detected with a sliding window and detect defects in each sampled window, and finally obtains the shape and number of defects in the image to be detected with an image segmentation algorithm and a region growing algorithm.
The method provided by the embodiment of the application can effectively detect stumble wires and count their pixels and number. The statistics of the detected stumble defects are used to adjust the process and the running state of the production-line equipment, reducing the occurrence of stumble defects and thereby improving product quality and line efficiency. The method solves the problems in the prior art that manual selection of tiny stumble-wire defects is time-consuming and labor-intensive and their false detection is serious, and that existing methods have poor anti-interference ability, low detection efficiency, and miss tiny stumble-wire defects.
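As an informal illustration of how the four stages fit together, the sketch below wires them into one function; the helper names (classify, detect_windows, segment, merge_regions) are hypothetical placeholders for the models and algorithms described in this application, not actual identifiers from it.

```python
def detect_surface_defects(image, classify, detect_windows, segment, merge_regions):
    """classify(image)      -> True if the image must be checked for the specified defect type
    detect_windows(image)   -> defective window areas found by the sliding-sampling detection model
    segment(window)         -> binary mask of defect contour regions inside one window
    merge_regions(masks)    -> shape and number of defects after region growing"""
    if not classify(image):
        return None                           # not an image to be detected
    windows = detect_windows(image)           # sliding sampling + per-window defect detection
    masks = [segment(win) for win in windows]
    return merge_regions(masks)               # connect adjacent contour regions
```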
As shown in fig. 1, a method for detecting surface defects of a product according to an embodiment of the present application includes:
step S101, acquiring an image and determining whether the image is an image to be detected, which needs to detect a defect of a specified type;
the product surface defect detection method provided by the embodiment of the application can detect various defects on the surface of the chemical fiber product, and is optionally mainly applied to the aspect of detecting the stumble defects, wherein the specified type of defects are stumble type defects;
according to the method, whether an image is an image to be detected for the specified type of defect is determined from its content. During image acquisition, images of the upper bottom surface, lower bottom surface or side surface of the chemical fiber product paper tube are usually collected, and the stumble-wire defect appears mainly on the upper and lower bottom surfaces. An image is therefore determined to be an image to be detected for the specified type of defect when it shows the upper or lower bottom surface of the chemical fiber product. Specifically, the chemical fiber product appears roughly circular or semicircular in images of the upper and lower bottom surfaces and roughly rectangular in images of the side surface, and whether the acquired image is an image to be detected for the specified type of defect is determined accordingly.
Whether the acquired image is an image to be detected for the specified type of defect can be determined by presetting image features of the upper bottom surface, lower bottom surface and side surfaces of the chemical fiber product and comparing the acquired image with these features. Optionally, a pre-trained classification prediction model can be used: the acquired image is input into the classification prediction model, and whether it is an image to be detected for the specified type of defect is determined from the classification result. The classification prediction model is obtained by taking training samples comprising a plurality of images and annotated defect types, inputting the images into a network classification model, and training the model with the annotated defect type of each image as the target, where the annotated defect type indicates whether the image content shows a stumble-wire defect or another defect type. In the embodiment of the application, ResNet50 is adopted as the classification prediction model, and it is pre-trained based on gradient descent and a segmented (piecewise) learning rate.
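For illustration, a minimal PyTorch/torchvision sketch of such a ResNet50 classification prediction model trained with gradient descent and a piecewise ("segmented") learning rate is given below; the number of classes, learning rate, momentum and milestone epochs are assumed values.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                                            # e.g. stumble-wire defect vs. other (assumed)
model = models.resnet50()                                  # ResNet50 backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)    # replace the classification head

criterion = nn.CrossEntropyLoss()                          # softmax cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:                          # labels: annotated defect type per image
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                                       # segmented (piecewise) learning rate
```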
Step S102, if yes, sliding sampling is carried out on the image to be detected by utilizing a sliding window through a pre-trained defect detection model, defect detection is carried out on a sampled window area, and a window area position mark with defects in the image to be detected is output;
If the acquired image is determined to be the image to be detected of the tripwire defect type, inputting the image to be detected into a pre-trained defect detection model, performing sliding sampling on the image to be detected and performing defect detection on a sampling window area, and outputting a window area position identifier of the defect in the image to be detected.
The defect detection model comprises a sampling part and a detection part, wherein the sampling part utilizes a sliding window to carry out sliding sampling on an image to be detected input into the defect detection model, and the detection part carries out defect detection on a window area obtained by sliding sampling.
The method comprises the steps that the defect detection model needs to be pre-trained before sliding sampling and detection are carried out, a network model comprising a sampling part and a detection part needs to be initialized in the pre-training process, then defect positions comprising a plurality of images and labels are used as training samples, parameters of the current network model are adjusted according to the defect positions output by the network model and the defect positions of the labels, and parameter adjustment is finished when training finishing conditions are reached, so that the defect detection model is obtained.
The parameter adjustment covers both the sampling part and the detection part. Specifically, adjusting the sampling part includes adjusting the height and/or width of the sliding window, while the parameters of the detection part are adjusted with better defect detection on the sampled window areas as the objective. The training end condition is reached when the detection speed, the detection precision, or another index characterizing the performance of the defect detection model meets the preset requirement for the current network model; parameter adjustment then ends and the defect detection model is obtained.
The position mark of the window area with the defect in the image to be detected, which is output in the embodiment of the application, is determined by the sliding length of the sliding window when the sliding window slides horizontally and vertically.
In the prior art, the input of the large-resolution image into the deep neural network mainly has the following three problems:
1) The deep neural network model limits the size of the image;
2) Inputting the large-resolution image into the deep neural network model is easy to cause the exhaustion of computer computing resources;
3) The depth network cannot efficiently extract fine target features from the large resolution image.
Aiming at the problems, the sliding detection method is provided, the limitation of the depth neural network model on the size of the image is avoided, the problem that the computer computing resources are exhausted due to the large-resolution image is avoided, fine target features can be extracted more effectively, and the defect features in the image are detected better.
Step S103, image segmentation is carried out on the window area with the defects by utilizing an image segmentation algorithm, so as to obtain outline areas of all defects in the window area with the defects;
wherein, when the skip transport layer of the U-Net network framework performs its output operation, a weight coefficient is added to the input x_up from the upsampling layer through the attention mechanism to obtain the output x_final of the skip transport layer; the U-Net network framework comprises a downsampling layer, an upsampling layer and a skip transport layer connecting the downsampling layer and the upsampling layer.
The attention mechanism (Attention Mechanism) in neural networks is a resource allocation scheme that, when computing power is limited, allocates computing resources to the more important tasks while alleviating information overload. In neural network learning, the more parameters a model has, the stronger its expressive power and the more information it can store, but this also causes information overload. Introducing an attention mechanism focuses the network on the information most critical to the current task among the many inputs, lowers the attention paid to other information, and even filters out irrelevant information, which relieves information overload and improves the efficiency and accuracy of task processing. Introducing an attention model into the existing U-Net network framework alleviates the problem that fine defect features are not obvious, so the output of the skip transport layer captures the defect features in the image better.
Step S104, connecting adjacent contour areas through an area growing algorithm to obtain the form and the number of defects in the image to be detected.
During image segmentation, because the middle part of a defect may be inconspicuous, only the contour regions on the two sides of the defect may be obtained; likewise, during sliding sampling a window area may capture only part of a defect. Adjacent contour regions therefore need to be connected through a region growing algorithm to obtain the shape and number of defects in the image to be detected.
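As a rough sketch of this connection step (not the patented algorithm itself), adjacent contour fragments in a binary defect mask can be bridged and counted with SciPy; the gap size is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def merge_defect_contours(mask, gap=3):
    """mask: binary array marking segmented defect contour pixels, possibly split
    into nearby fragments. Bridge gaps up to `gap` pixels, then label each defect."""
    grown = ndimage.binary_dilation(mask, iterations=gap)   # grow regions so close fragments touch
    labels, num_defects = ndimage.label(grown)               # connected components = individual defects
    labels = labels * mask                                   # keep only the original defect pixels
    return num_defects, labels                               # number of defects and their shapes
```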
The process of the embodiments of the present application is described in detail below in connection with specific embodiments.
In the embodiment of the application, whether the image is the image to be detected for detecting the specified type of defect is determined through a classification prediction model, the classification prediction model needs to be pre-trained, and the specific implementation process is as shown in fig. 2:
step S201, obtaining images and inputting a classification prediction model, wherein the images are input as a network classification model by utilizing training samples comprising a plurality of images and labeled defect types, and model training is performed with the labeled defect types of the images as targets;
the annotated defect type indicates, according to the image content, whether the defect is a stumble-wire defect or another type of defect;
the embodiment of the application adopts ResNet50 as a network model of the classification prediction model. Classification prediction is performed through a residual connection mode in a ResNet network, and the formula is shown as follows:
x_cur = F(x_pre, W_conv) + W_sam·x_pre
wherein x_pre is the feature map of the previous layer; W_conv are the learnable parameters of the next layer; F denotes the operations of the next layer (including convolution, pooling, ReLU, BN, etc.); W_sam is a sampling operation (convolution or pooling) applied to x_pre by the upper-layer model so that its dimension stays consistent with the output of F(x_pre, W_conv); and x_cur is the output feature map of the target layer.
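A minimal PyTorch sketch of a residual block of this form is shown below for illustration; the 3x3 convolution sizes and the 1x1 projection used for W_sam are common ResNet choices assumed here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """x_cur = F(x_pre, W_conv) + W_sam * x_pre"""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.f = nn.Sequential(                               # F: conv-BN-ReLU-conv-BN
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # W_sam: sampling/projection so the shortcut matches the output of F
        self.w_sam = (nn.Identity() if in_ch == out_ch and stride == 1
                      else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))

    def forward(self, x_pre):
        return torch.relu(self.f(x_pre) + self.w_sam(x_pre))  # x_cur
```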
The network model adopts the cross-entropy loss function of a softmax classifier, and the parameters of the classification prediction model are adjusted based on gradient descent and a segmented learning rate. Let the output of the network be x = [x_1, x_2, …, x_i] and the label information of the sample be y = [y_1, y_2, …, y_i]. The output of each dimension through the softmax classifier is:
p_i = e^{x_i} / Σ_j e^{x_j}
so the output result of the softmax classifier is p = [p_1, p_2, …, p_i], and the final loss function L of the model is:
L = −Σ_i y_i·log(p_i)
After the loss function is obtained, the parameters of the model are adjusted by gradient descent, and the parameter W of the network model is updated as:
W = W − η·∂L/∂W
wherein W is a parameter of the network model; η denotes the learning rate; L is related to p_i, p_i is related to x_i, and x_i is related to W, so the gradient ∂L/∂W is obtained by the chain rule.
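The following NumPy sketch illustrates the softmax output, the cross-entropy loss and one gradient-descent update for a single linear layer; it is only a toy example of the formulas above, with the linear mapping W @ feat as an assumed model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())              # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(p, y):
    return -np.sum(y * np.log(p + 1e-12))

def sgd_step(W, feat, y, eta=0.01):
    """One step W <- W - eta * dL/dW for logits x = W @ feat; for softmax plus
    cross-entropy the gradient w.r.t. the logits is (p - y)."""
    p = softmax(W @ feat)
    grad_W = np.outer(p - y, feat)
    return W - eta * grad_W, cross_entropy(p, y)
```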
And S202, ending training to obtain a classification prediction model when the ending model parameter adjustment condition is met.
After the acquired image is determined to be an image to be detected, it is input into the defect detection model for defect detection. Fig. 3 is a schematic diagram of the network structure of the defect detection model, which includes a sampling part 301 and a detection part 302. The sampling part performs sliding sampling on the image with a sliding window, and the detection part performs defect detection on each window area. Specifically, during sliding sampling the image to be detected may be sampled only horizontally or only vertically; as an optional implementation, the sliding window may first sample the image in the horizontal direction and then in the vertical direction, first in the vertical direction and then in the horizontal direction, or in both directions simultaneously;
when the sliding window slides in the horizontal/vertical direction, the image is sampled at a speed of one fixed length per unit time, where the fixed length is determined by a fixed proportion of the side length of the sliding window in that direction. Specifically, the fixed proportion is a pre-specified value between [0-1] that can be set by those skilled in the art according to actual requirements; subtracting the product of the window side length and the fixed proportion from the side length gives the step length, and the remaining part is the overlap between two adjacent sliding window areas.
The image to be detected is subjected to sliding sampling by using a sliding window through a defect detection model, and before defect detection is performed on a sampled window area, the defect detection model needs to be pre-trained first, as shown in fig. 4, and the method comprises the following steps:
step S401, initializing a network model comprising a sampling part and a detection part;
wherein initializing parameters of the sampling portion includes:
initializing the height and width of a sliding window adopted by a sampling part, wherein the height and width of the initialized sliding window are the height and width input by the network model;
initializing the sliding direction of a sliding window adopted by the sampling part, wherein the sliding direction is a horizontal direction and a vertical direction;
Initializing a fixed length of movement per unit time when the sampling part slides in the horizontal/vertical direction by using a sliding window, wherein the fixed length is determined by a fixed ratio of the side lengths of the sliding window in the horizontal/vertical direction. The fixed ratio is a preset value, and is set by a person skilled in the art according to actual requirements.
Step S402, a sample set comprising a plurality of samples is obtained, and each sample comprises an image and a marked defect position;
the samples are used for determining the surface images of chemical fiber products for detecting the designated defect types and the positions of defects on each marked image.
Wherein, during training, more samples are generated by adjusting the angle, saturation, exposure and hue of the images, so as to improve the detection precision of the defect detection model.
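A possible torchvision sketch of this augmentation is shown below; the jitter ranges and rotation angle are assumed values, and for detection training any geometric transform would also have to be applied to the annotated defect positions.

```python
from torchvision import transforms

# Synthesise extra samples by perturbing angle, saturation, exposure (brightness) and hue.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.3, contrast=0.2, saturation=0.3, hue=0.05),
    transforms.ToTensor(),
])
```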
Step S403, inputting the images in the plurality of samples into an initialized network model, and adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model;
the parameter adjustment comprises the adjustment of the height and/or width of a sliding window adopted by a sampling part in the initialized network model; and adjusting the neural network layer parameters of the detection part in the initialized network model.
When adjusting the parameters of the initialized network model, for the sampling part only the width of the sliding window (for horizontal sliding) or only the height (for vertical sliding) may be adjusted. As an optional implementation, the height and width of the sliding window are adjusted simultaneously: several proportions are preset, and during adjustment the width and height of the sliding window are scaled by these proportions from the initialized width and height; when the height and width are adjusted simultaneously, the same proportion is used for both.
And adjusting the neural network layer parameters of the detection part in the initialized network model.
In this embodiment, the training ending condition is achieved in the following two modes:
1) Determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and ending parameter adjustment when the detection precision meets the requirement;
the position of a defect in the image is detected by the network model, and the detection precision of the model is determined from the output defect positions and the annotated defect positions; the detection precision may be the average over several sliding windows or the precision of any single sliding window area, and it is a value between [0-1];
the detection precision meeting the requirement may mean that, after multiple adjustments of several parameters, the parameters corresponding to the highest detection precision are selected as the parameters of the defect detection model; as an optional implementation, a detection precision threshold is preset, and when the detection precision of the network model exceeds this threshold, the corresponding parameters are selected as the parameters of the defect detection model.
2) And determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of the image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and ending parameter adjustment when the weighted sum value meets the requirement.
In the weighted summation of the detection precision and the detection speed, the weights corresponding to the two are preset values that can be set by those skilled in the art according to actual needs, and they sum to 1. As an optional implementation, the detection speed is normalized, preferably mapped to a value between [0-1], so that the sliding window size corresponding to a faster detection speed and a higher detection precision is selected as optimal.
Similarly, when the weighted sum meets the requirement, several parameters may have been adjusted multiple times, and the detection precision and detection speed change accordingly; the parameters corresponding to the highest weighted sum are selected as the parameters of the defect detection model.
According to the method, the size of the sliding window is automatically set according to the detection precision and the detection speed obtained by each training, so that the problems of network model detection speed, detection precision and sliding window setting are solved.
The pre-training process of the defect detection model is described below in connection with specific embodiments:
1) The initializing of the network model including the sampling portion and the detecting portion specifically includes:
in this embodiment of the present application, the width and the height of the initialized sliding window are denoted w and h respectively (usually initialized as the width and height of the network input), and the width and the height of the images in the obtained training samples are denoted W and H respectively. The fixed proportions for each horizontal and vertical slide of the window are denoted Rx and Ry, the set of proportions for adjusting the sliding window size is N = [0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4], the weighting factor for the network model effect is α, the detection speed of the model is V, the detection precision is P, the overall performance of the model obtained by the weighted summation of detection speed and precision is Perf, and the number of training rounds is times ∈ [0, len(N)), where len(N) is the number of elements of N.
2) Acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
3) Inputting the image in the sample into the current network model, and adjusting the parameters of the current network model according to the defect position output by the network model and the marked defect position specifically comprises the following steps:
inputting the image in the sample into the current network model to obtain the detection speed V and the detection precision P of the network model, and calculating the overall performance of the network model, wherein the overall performance can be expressed as follows:
Perf=αP+(1-α)V
and the width w and the height h of the sliding window used in each training round are obtained by scaling the initialized values by the size-adjustment proportion, as follows:
w = w·N[times]
h = h·N[times]
and selecting the optimal sliding window size according to the principle of Perf maximization, and adjusting the parameters of the current network model.
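For illustration, the selection loop sketched below scores each candidate window size by Perf = αP + (1−α)V and keeps the best one; train_and_eval is a hypothetical helper standing in for retraining and evaluating the network at a given window size, and α = 0.7 is an assumed default.

```python
def select_window_size(w0, h0, train_and_eval, alpha=0.7,
                       N=(0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4)):
    """train_and_eval(w, h) -> (P, V): detection precision and normalised speed in [0, 1]."""
    best = None
    for ratio in N:                                # one training round per element of N
        w, h = int(w0 * ratio), int(h0 * ratio)    # scale width and height by the same ratio
        P, V = train_and_eval(w, h)
        perf = alpha * P + (1 - alpha) * V         # overall performance Perf
        if best is None or perf > best[0]:
            best = (perf, w, h)
    return best                                    # (best Perf, chosen window width and height)
```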
The method for identifying the position of the window area with the defect in the image to be detected output by the obtained defect detection model is as follows:
the step lengths of the horizontal sliding and the vertical sliding of the sliding window are calculated as follows: step length in the horizontal direction Sx = (1 − Rx) × w; step length in the vertical direction Sy = (1 − Ry) × h. Denoting the i-th sliding window in the horizontal direction and the j-th sliding window in the vertical direction as (i, j), the sliding window rectangular region is Rect = [Point(i × Sx, j × Sy), Width = w, Height = h], where Rect represents the output sliding window area identifier, Point represents the start point of the rectangular window Rect, and Width and Height represent the width and the height of the rectangular window Rect, respectively.
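To make the sampling and window-size search above concrete, the following is a minimal Python sketch. The evaluate_model() callback is hypothetical, a stand-in for one pre-training/evaluation round that returns the detection precision P and a normalized detection speed V; the window enumeration and the Perf = α·P + (1 − α)·V selection follow the formulas given above.

```python
# Minimal sketch of the sliding-window enumeration and the Perf-based
# window-size search described above. evaluate_model(w, h) is a hypothetical
# callback; it is assumed to return (P, V) with precision P and normalized speed V.

def sliding_windows(img_w, img_h, w, h, rx, ry):
    """Enumerate window rectangles (x, y, w, h) over a W x H image.

    rx and ry are the fixed sliding proportions, so the horizontal step is
    Sx = (1 - rx) * w and the vertical step is Sy = (1 - ry) * h.
    """
    sx = max(1, int((1 - rx) * w))
    sy = max(1, int((1 - ry) * h))
    rects = []
    for j in range(0, max(1, img_h - h + 1), sy):       # j-th window vertically
        for i in range(0, max(1, img_w - w + 1), sx):   # i-th window horizontally
            rects.append((i, j, w, h))
    return rects

def select_window_size(base_w, base_h, alpha, evaluate_model,
                       scales=(0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4)):
    """Try each candidate proportion N[times] and keep the window size with the
    best overall performance Perf = alpha * P + (1 - alpha) * V."""
    best_size, best_perf = None, float("-inf")
    for times, n in enumerate(scales):                  # times in [0, len(N))
        w, h = int(base_w * n), int(base_h * n)
        p, v = evaluate_model(w, h)
        perf = alpha * p + (1 - alpha) * v
        if perf > best_perf:
            best_size, best_perf = (w, h), perf
    return best_size, best_perf
```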
In the embodiment of the application, the target detection model Yolov3-tiny is adopted as the network model of the detection part. After the network model is pre-trained, the parameters of the defect detection model further need to be adjusted through fine-tuning; specifically, a gradient descent algorithm with momentum is adopted together with a staged (piecewise) learning rate strategy, as follows:
W = W − η·v_i
where v_i and v_(i−1) represent the current and past gradient values, respectively; β represents the momentum factor; η represents the learning rate; and W represents the network model parameters;
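A minimal Python sketch of this fine-tuning update is given below. The learning-rate boundaries and the momentum accumulation form v_i = β·v_(i−1) + (1 − β)·g are assumptions made only for illustration; the text above specifies only the update W = W − η·v_i with β as the momentum factor.

```python
# Hedged sketch of momentum gradient descent with a staged (piecewise-constant)
# learning rate. The schedule values and the momentum accumulation
# v_i = beta * v_{i-1} + (1 - beta) * grad are assumptions, not taken from the
# text, which only gives W = W - eta * v_i.
import numpy as np

def staged_lr(epoch, schedule=((0, 1e-3), (30, 1e-4), (60, 1e-5))):
    """Piecewise-constant learning rate: the last boundary <= epoch applies."""
    lr = schedule[0][1]
    for start, value in schedule:
        if epoch >= start:
            lr = value
    return lr

def momentum_step(weights, grad, v_prev, epoch, beta=0.9):
    """One update W = W - eta * v_i using a momentum-smoothed gradient v_i."""
    eta = staged_lr(epoch)
    v = beta * v_prev + (1.0 - beta) * grad   # assumed momentum form
    return weights - eta * v, v

# usage with dummy numpy arrays:
# w, v = np.zeros(10), np.zeros(10)
# w, v = momentum_step(w, some_gradient, v, epoch=5)
```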
According to the embodiment of the application, the problem of large image resolution is addressed by adding the sampling part, which performs sliding sampling, and feedback information is added in the training process so that the size of the sliding window is adjusted. In the prior art, manually marking tiny tripwire (broken-filament) defects consumes a great deal of time, and the subjectivity of the marker also affects the marking quality. According to the embodiment of the application, the defect detection model is fine-tuned on the basis of the pre-trained network model, so that the model can mark the defects automatically, reducing the difficulty of manually marking tripwire defects.
For a detailed description of the embodiments of the present application, the existing U-Net network framework is first described. It includes an encoder, i.e., the down-sampling layer, and a decoder, i.e., the up-sampling layer. The encoder has four sub-modules, each containing two convolutional layers and each followed by one downsampling through a max pooling layer. If the resolution of the input image is 572x572, the resolutions of the 1st to 4th sub-modules are 284x284, 140x140, 68x68 and 32x32, respectively. Since the convolution uses valid mode, the resolution of the next sub-module equals (resolution of the previous sub-module − 4)/2.
The decoder also contains four sub-modules; the resolution is successively increased by up-sampling operations until it matches the resolution of the input image (since the convolution uses valid mode, the actual output is smaller than the input image). The network also uses a skip transmission layer to connect the up-sampling result with the output of the encoder sub-module having the same resolution, as the input of the next sub-module in the decoder.
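The resolution progression quoted above can be checked with a few lines of Python; this is only an arithmetic illustration of the (resolution of the previous sub-module − 4)/2 rule for valid convolutions, not part of the detection method itself.

```python
# Each encoder sub-module applies two 3x3 valid convolutions (-4 pixels in each
# dimension) followed by 2x2 max pooling (halving the resolution), so:
res = 572
for block in range(1, 5):
    res = (res - 4) // 2
    print(f"sub-module {block}: {res}x{res}")   # 284, 140, 68, 32
```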
In the embodiment of the application, when the skip transmission layer of the U-Net network framework performs the output operation, a weight coefficient is added through the attention mechanism to the input x_up from the up-sampling layer, so as to obtain the output x_final of the skip transmission layer. The specific implementation process is as follows:
1) The input of the skip transmission layer is calculated by using the attention mechanism algorithm to obtain the output of the skip transmission layer; specifically, a weight coefficient is added to the input x_up from the up-sampling layer, calculated by the following formula:
x_final = ψ(x_conv, x_up) = W_att × x_up
where x_final is the output of the skip transmission layer, ψ(x_conv, x_up) is the attention mechanism algorithm, x_conv is the input of the skip transmission layer from the down-sampling layer, x_up is the input of the skip transmission layer from the up-sampling layer, and W_att is the weight coefficient.
2) A convolution operation with a convolution kernel size of 1 is performed on the correlation of x_conv and x_up, and the weight coefficient is obtained through calculation with a first activation function; the calculation formula is as follows:
W_att = Sigmoid(Conv_1×1(R))
where Sigmoid is the first activation function, Conv_1×1 is the convolution operation with a convolution kernel size of 1, and R is the correlation of x_conv and x_up.
3) A convolution operation with a convolution kernel size of 1 is performed on x_conv and x_up respectively, the results are summed, and the sum is passed through a second activation function to obtain the correlation R of x_conv and x_up; the formula is as follows:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
where ReLU is the second activation function.
In this embodiment of the present application, the first activation function and the second activation function are used between two layers of neurons in the neural network. In a multi-layer neural network, the neuron signal of an upper layer, i.e., the result calculated by the linear unit wx + b, needs to be passed to the next layer, but the signal must be activated once before being input to the next layer, e.g., f = Sigmoid(wx + b) or f = ReLU(wx + b); that is, the signal is input to the next layer of neurons only after passing through the first activation function or the second activation function.
In the embodiment of the present application, when the U-Net network framework connects the up-sampling result with the output of the encoder sub-module of the same resolution at the skip transmission layer, a weight coefficient is added to serve as the input of the next sub-module in the decoder; the calculation and the specific implementation are as shown above and are not repeated herein.
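The attention-weighted skip connection described above can be sketched in a few lines of PyTorch. This is a hedged illustration rather than the authoritative implementation: the channel sizes, module name, and how the weighted x_up is later fused with x_conv are assumptions; only the formulas R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up)), W_att = Sigmoid(Conv_1×1(R)) and x_final = W_att × x_up are taken from the text.

```python
# Sketch of the attention gate on the U-Net skip transmission layer.
# Assumed: PyTorch, equal spatial sizes of x_conv and x_up, and the chosen
# intermediate channel count; the three formulas follow the description above.
import torch
import torch.nn as nn

class AttentionSkip(nn.Module):
    def __init__(self, conv_channels, up_channels, inter_channels):
        super().__init__()
        self.proj_conv = nn.Conv2d(conv_channels, inter_channels, kernel_size=1)
        self.proj_up = nn.Conv2d(up_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)      # second activation function
        self.sigmoid = nn.Sigmoid()            # first activation function

    def forward(self, x_conv, x_up):
        # correlation R of the down-sampling and up-sampling inputs
        r = self.relu(self.proj_conv(x_conv) + self.proj_up(x_up))
        # weight coefficient W_att
        w_att = self.sigmoid(self.psi(r))
        # output of the skip transmission layer: x_final = W_att * x_up
        return w_att * x_up

# usage with assumed shapes:
# gate = AttentionSkip(conv_channels=64, up_channels=64, inter_channels=32)
# x_final = gate(x_conv, x_up)   # both (B, 64, H, W) tensors
```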
The above description is given of a product surface defect detection method in the present invention, and the following description is given of an implementation of the product surface defect detection device.
Referring to fig. 5, a product surface defect detecting device according to an embodiment of the present invention includes:
a determining module 501, configured to acquire an image and determine whether the image is an image to be detected that needs to detect a defect of a specified type;
the detection module 502 is configured to, if it is determined that the image is an image to be detected that needs to detect a defect of a specified type, slide sample the image to be detected by using a sliding window through a pre-trained defect detection model, detect a defect in a sampled window area, and output a position identifier of the window area where the defect exists in the image to be detected;
a segmentation module 503, configured to perform image segmentation on a window area with a defect by using an image segmentation algorithm, so as to obtain a contour area of each defect in the window area with the defect;
and the connection module 504 is configured to connect adjacent contour areas through an area growing algorithm, so as to obtain the form and the number of defects in the image to be detected.
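As a rough illustration of the connection module, the sketch below shows one simple way to connect adjacent contour areas in a binary defect mask and count the merged defects. Using dilation plus connected-component labelling as the region growing step is an assumption made for illustration; defect_mask and the kernel size are hypothetical inputs, not taken from the text.

```python
# Hedged sketch for the connection module: merge adjacent defect contour areas
# in a binary mask (0/255, dtype uint8) and count the merged defects. Dilation
# followed by connected-component labelling stands in for the region growing
# algorithm; the kernel size is an assumed tuning parameter.
import cv2
import numpy as np

def connect_defect_contours(defect_mask: np.ndarray, grow_kernel: int = 5):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (grow_kernel, grow_kernel))
    grown = cv2.dilate(defect_mask, kernel)      # bridge small gaps between contours
    num_labels, labels = cv2.connectedComponents(grown)
    defect_count = num_labels - 1                # label 0 is the background
    return labels, defect_count                  # per-pixel defect labels and count
```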
Optionally, the detection module is configured to generate the pre-trained defect detection model by the following training method:
Initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by utilizing a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
inputting the images in the plurality of samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the detection module is configured to slide sample the image with a sliding window, and includes at least one step of:
sliding sampling is carried out on the image in the horizontal direction by utilizing a sliding window;
sliding sampling in the vertical direction is carried out on the image by utilizing a sliding window;
when the sliding window slides in the horizontal direction/the vertical direction, the sliding sampling is carried out on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
Optionally, the detection module is configured to initialize a network model including a sampling portion and a detection portion, and includes at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of movement per unit time when the sampling part slides in the horizontal/vertical direction by using a sliding window, wherein the fixed length is determined by a fixed ratio of the side lengths of the sliding window in the horizontal/vertical direction.
Optionally, the detecting module is configured to adjust parameters of the current network model according to the defect position output by the network model and the marked defect position, and includes:
the height and/or width of a sliding window adopted by a sampling part in the current network model are adjusted;
and adjusting the neural network layer parameters of the detection part in the current network model.
Optionally, the detecting module is configured to adjust parameters of the current network model, and end parameter adjustment when the training end condition is reached, including at least one step of:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and ending parameter adjustment when the detection precision meets the requirement;
And determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of the image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and ending parameter adjustment when the weighted sum value meets the requirement.
Optionally, the segmentation module is configured to perform image segmentation on the window area with the defect by using an image segmentation algorithm, and includes:
when the output operation is performed at the skip transmission layer of the U-Net network framework, a weight coefficient is added through the attention mechanism to the input x_up from the up-sampling layer, so as to obtain the output x_final of the skip transmission layer, where the U-Net network framework comprises a down-sampling layer, an up-sampling layer and a skip transmission layer connecting the down-sampling layer and the up-sampling layer.
Optionally, the segmentation module being configured to add a weight coefficient through the attention mechanism to the input x_up from the up-sampling layer comprises:
performing a convolution operation with a convolution kernel size of 1 on the correlation of x_conv and x_up, and obtaining the weight coefficient through calculation with a first activation function; the calculation formula is as follows:
W_att = Sigmoid(Conv_1×1(R))
where W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution operation with a convolution kernel size of 1, x_conv is the input of the skip transmission layer from the down-sampling layer, and R is the correlation of x_conv and x_up.
Optionally, the segmentation module being configured to determine the correlation of x_conv and x_up comprises:
performing a convolution operation with a convolution kernel size of 1 on x_conv and x_up respectively, summing the results, and passing the sum through a second activation function to obtain the correlation R of x_conv and x_up, calculated as follows:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
Optionally, the determining module is configured to acquire an image and determine whether the image is an image to be detected that needs to detect a defect of a specified type, and includes:
acquiring an image and inputting it into a classification prediction model, wherein the classification prediction model is obtained by using training samples comprising a plurality of images and marked defect types, inputting the images into a classification network model, and performing model training with the marked defect types of the images as targets;
and determining whether the image is an image to be detected, which needs to detect the defects of the specified type, according to the classification result of the classification prediction model.
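A very small sketch of the determining module's gating step follows. The classifier object and the index of the specified defect type are hypothetical; the only point illustrated is that an image is passed to the defect detection model only when the classification prediction model assigns it the specified defect type.

```python
# Hedged sketch: decide whether an acquired image needs defect detection.
# `classifier` is assumed to return per-class scores over the defect types;
# `specified_type_idx` is a hypothetical index for the specified defect type.
def needs_detection(image, classifier, specified_type_idx=1):
    scores = classifier(image)                 # e.g. softmax scores per defect type
    return int(scores.argmax()) == specified_type_idx
```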
A product surface defect detection device in the embodiment of the present application is described above from the point of view of a modularized functional entity, and a product surface defect detection apparatus in the embodiment of the present application is described below from the point of view of hardware processing.
Referring to FIG. 6, a product surface defect detection apparatus in an embodiment of the present application includes at least one processor 601, at least one memory 602, and a bus system 609;
wherein the memory stores program code that, when executed by the processor, causes the processor to perform the following:
acquiring an image and determining whether the image is an image to be detected, which needs to be detected for the specified type of defects;
if yes, performing sliding sampling on the image to be detected by utilizing a sliding window through a pre-trained defect detection model, performing defect detection on a sampled window area, and outputting a window area position mark with defects in the image to be detected;
image segmentation is carried out on the window area with the defects by utilizing an image segmentation algorithm, so that outline areas of all defects in the window area with the defects are obtained;
and connecting adjacent contour areas through an area growth algorithm to obtain the shape and the number of defects in the image to be detected.
Fig. 6 is a schematic diagram of a product surface defect detection apparatus according to an embodiment of the present application. The apparatus 600 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPU) 601 (for example, one or more processors), a memory 602, and one or more storage media 603 (for example, one or more mass storage devices) storing application programs 604 or data 605. The memory 602 and the storage medium 603 may be transitory or persistent storage. The program stored in the storage medium 603 may include one or more modules (not shown), each of which may include a series of instruction operations on the apparatus. Further, the processor 601 may be arranged to communicate with the storage medium 603 and execute the series of instruction operations in the storage medium 603 on the apparatus 600.
The device 600 may also include one or more wired or wireless network interfaces 607, one or more input/output interfaces 608, and/or one or more operating systems 606, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
Optionally, the pre-trained defect detection model is generated by the following training means:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by utilizing a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
inputting the images in the plurality of samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model.
Optionally, the processor is configured to slide sample the image with a sliding window, including at least one of the following steps:
sliding sampling is carried out on the image in the horizontal direction by utilizing a sliding window;
Sliding sampling in the vertical direction is carried out on the image by utilizing a sliding window;
when the sliding window slides in the horizontal direction/the vertical direction, the sliding sampling is carried out on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
Optionally, the processor is configured to initialize a network model including a sampling portion and a detection portion, including at least one of the following steps:
initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of movement per unit time when the sampling part slides in the horizontal/vertical direction by using a sliding window, wherein the fixed length is determined by a fixed ratio of the side lengths of the sliding window in the horizontal/vertical direction.
Optionally, the processor is configured to adjust parameters of the current network model according to the defect position output by the network model and the marked defect position, and includes:
the height and/or width of a sliding window adopted by a sampling part in the current network model are adjusted;
And adjusting the neural network layer parameters of the detection part in the current network model.
Optionally, the processor is configured to adjust parameters of the current network model, and end the parameter adjustment when the training end condition is reached, including at least one step of:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and ending parameter adjustment when the detection precision meets the requirement;
and determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of the image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and ending parameter adjustment when the weighted sum value meets the requirement.
Optionally, the processor is configured to perform image segmentation on the window area with the defect by using an image segmentation algorithm, including:
when the output operation is performed at the skip transmission layer of the U-Net network framework, a weight coefficient is added through the attention mechanism to the input x_up from the up-sampling layer, so as to obtain the output x_final of the skip transmission layer, where the U-Net network framework comprises a down-sampling layer, an up-sampling layer and a skip transmission layer connecting the down-sampling layer and the up-sampling layer.
Optionally, the processor being configured to add a weight coefficient through the attention mechanism to the input x_up from the up-sampling layer comprises:
performing a convolution operation with a convolution kernel size of 1 on the correlation of x_conv and x_up, and obtaining the weight coefficient through calculation with a first activation function; the calculation formula is as follows:
W_att = Sigmoid(Conv_1×1(R))
where W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution operation with a convolution kernel size of 1, x_conv is the input of the skip transmission layer from the down-sampling layer, and R is the correlation of x_conv and x_up.
Optionally, the processor being configured to determine the correlation of x_conv and x_up comprises:
performing a convolution operation with a convolution kernel size of 1 on x_conv and x_up respectively, summing the results, and passing the sum through a second activation function to obtain the correlation R of x_conv and x_up, calculated as follows:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
Optionally, the processor is configured to acquire an image and determine whether the image is an image to be detected that needs to detect a defect of a specified type, including:
acquiring an image and inputting it into a classification prediction model, wherein the classification prediction model is obtained by using training samples comprising a plurality of images and marked defect types, inputting the images into a classification network model, and performing model training with the marked defect types of the images as targets;
And determining whether the image is an image to be detected, which needs to detect the defects of the specified type, according to the classification result of the classification prediction model.
As shown in fig. 7, the apparatus further includes:
a base 702, used for keeping the conveying module sliding stably, and a product fixing module, used for fixing the product;
a conveying module 701, positioned below the base and used for conveying the product;
an image acquisition module 703 and an optical module 704, the image acquisition module being positioned at the top or the bottom of the optical module and used for acquiring the surface image of the product, and the optical module being used for emitting illumination to assist the image acquisition module in image acquisition.
The image acquisition module and the optical module comprise:
an upper image acquisition module 7031 and an upper optical module 7041 positioned at the top of the product fixing module, and a lower image acquisition module 7032 and a lower optical module 7042 positioned at the bottom of the product fixing module.
When product defect detection is carried out, the product sample to be detected is first conveyed to the image acquisition area by the conveying module; when the sample to be detected passes the image acquisition module, the image acquisition module of the device is triggered to cooperate with the optical module to complete the acquisition of high-resolution tripwire defect pictures; finally, the acquired images are processed by the processor to obtain the defect form and number of the product sample to be detected.
The embodiment of the invention also provides a computer readable storage medium, which comprises instructions that when run on a computer, cause the computer to execute the product surface defect detection method provided by the embodiment.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program comprises program instructions, and when the program instructions are executed by electronic equipment, the electronic equipment is caused to execute the product surface defect detection method provided by the embodiment.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The foregoing has described in detail the technical solutions provided herein, and specific examples have been used to illustrate the principles and embodiments of the present application, where the above examples are only used to help understand the methods and core ideas of the present application; meanwhile, as those skilled in the art will have modifications in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.

Claims (12)

1. A method for detecting surface defects of a product, the product comprising a chemical fiber product, the method comprising:
acquiring an image and determining whether the image is an image to be detected, which needs to be detected for the specified type of defects;
if so, performing sliding sampling on the image to be detected by using a sliding window through a sampling part of the pre-trained defect detection model, performing defect detection on a sampled window area through a detection part of the pre-trained defect detection model, and outputting a window area position identifier of the defect in the image to be detected;
image segmentation is carried out on the window area with the defects by utilizing an image segmentation algorithm, so that outline areas of all defects in the window area with the defects are obtained;
Connecting adjacent contour areas through an area growth algorithm to obtain the shape and the number of defects in the image to be detected;
image segmentation is carried out on the window area with the defect by utilizing an image segmentation algorithm, and the method comprises the following steps:
when the output operation is performed at the skip transmission layer of the U-Net network framework, a weight coefficient is added through the attention mechanism to the input x_up from the up-sampling layer, so as to obtain the output x_final of the skip transmission layer, where the U-Net network framework comprises a down-sampling layer, an up-sampling layer and a skip transmission layer connecting the down-sampling layer and the up-sampling layer;
the pre-trained defect detection model is generated by the following training mode:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by utilizing a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
inputting the images in the plurality of samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model;
According to the defect position output by the network model and the marked defect position, the parameters of the current network model are adjusted, including:
the height and/or width of a sliding window adopted by a sampling part in the current network model are adjusted; when the height and the width of a sliding window adopted by a sampling part in the current network model are simultaneously adjusted, the height and the width are adjusted in the same proportion;
and adjusting the neural network layer parameters of the detection part in the current network model.
2. The method of claim 1, wherein the sliding sampling of the image using a sliding window comprises at least one of:
sliding sampling is carried out on the image in the horizontal direction by utilizing a sliding window;
sliding sampling in the vertical direction is carried out on the image by utilizing a sliding window;
when the sliding window slides in the horizontal direction/the vertical direction, the sliding sampling is carried out on the image according to the speed of moving a fixed length in unit time, wherein the fixed length is determined by the fixed proportion of the side length of the sliding window in the horizontal direction/the vertical direction.
3. The method of claim 1, wherein initializing a network model comprising a sampling portion and a detection portion comprises at least one of:
Initializing the height and width of a sliding window adopted by the sampling part;
initializing the sliding direction of a sliding window adopted by the sampling part;
initializing a fixed length of movement per unit time when the sampling part slides in the horizontal/vertical direction by using a sliding window, wherein the fixed length is determined by a fixed ratio of the side lengths of the sliding window in the horizontal/vertical direction.
4. The method according to claim 1, wherein the adjusting of the parameters of the current network model is performed and the parameter adjustment is ended when the training end condition is reached, comprising at least one of the steps of:
determining detection precision according to the defect position output by the network model and the marked defect position, performing parameter adjustment according to the detection precision, and ending parameter adjustment when the detection precision meets the requirement;
and determining detection precision according to the defect position output by the network model and the marked defect position, determining detection speed according to the size of the image and detection time, performing parameter adjustment according to a weighted sum value of the detection precision and the detection speed, and ending parameter adjustment when the weighted sum value meets the requirement.
5. The method of claim 1, wherein adding a weight coefficient through the attention mechanism to the input x_up from the up-sampling layer comprises:
performing a convolution operation with a convolution kernel size of 1 on the correlation of x_conv and x_up, and obtaining the weight coefficient through calculation with a first activation function; the calculation formula is as follows:
W_att = Sigmoid(Conv_1×1(R))
where W_att is the weight coefficient, Sigmoid is the first activation function, Conv_1×1 is the convolution operation with a convolution kernel size of 1, x_conv is the input of the skip transmission layer from the down-sampling layer, and R is the correlation of x_conv and x_up.
6. The method of claim 5, wherein determining the correlation of x_conv and x_up comprises:
performing a convolution operation with a convolution kernel size of 1 on x_conv and x_up respectively, summing the results, and passing the sum through a second activation function to obtain the correlation R of x_conv and x_up, calculated as follows:
R = ReLU(Conv_1×1(x_conv) + Conv_1×1(x_up))
wherein ReLU is the second activation function.
7. The method of claim 1, wherein acquiring an image and determining whether the image is an image to be detected for which a specified type of defect needs to be detected comprises:
acquiring an image and inputting it into a classification prediction model, wherein the classification prediction model is obtained by using training samples comprising a plurality of images and marked defect types, inputting the images into a classification network model, and performing model training with the marked defect types of the images as targets;
And determining whether the image is an image to be detected, which needs to detect the defects of the specified type, according to the classification result of the classification prediction model.
8. A product surface defect detection device, wherein the product comprises a chemical fiber product, the device comprising:
the determining module is used for acquiring an image and determining whether the image is an image to be detected, which needs to be detected for the defect of the specified type;
the detection module is used for carrying out sliding sampling on the image to be detected by utilizing a sliding window through a sampling part of a pre-trained defect detection model, carrying out defect detection on a sampled window area through a detection part of the pre-trained defect detection model, and outputting a window area position identifier of the defect in the image to be detected if the image is determined to be the image to be detected of the defect of the specified type;
the segmentation module is used for carrying out image segmentation on the window area with the defects by utilizing an image segmentation algorithm to obtain outline areas of all defects in the window area with the defects;
the connecting module is used for connecting the adjacent contour areas through an area growing algorithm to obtain the form and the number of the defects in the image to be detected;
The segmentation module is used for carrying out image segmentation on the window area with the defect by utilizing an image segmentation algorithm, and comprises the following steps:
when the output operation is performed at the skip transmission layer of the U-Net network framework, a weight coefficient is added through the attention mechanism to the input x_up from the up-sampling layer, so as to obtain the output x_final of the skip transmission layer, where the U-Net network framework comprises a down-sampling layer, an up-sampling layer and a skip transmission layer connecting the down-sampling layer and the up-sampling layer;
the pre-trained defect detection model is generated by the following training mode:
initializing a network model comprising a sampling part and a detection part, wherein the sampling part is used for carrying out sliding sampling on an image by utilizing a sliding window, and the detection part is used for carrying out defect detection on the window area;
acquiring a sample set comprising a plurality of samples, each sample comprising an image and a marked defect location;
inputting the images in the plurality of samples into an initialized network model, adjusting parameters of the initialized network model according to the defect positions output by the initialized network model and the marked defect positions, and ending parameter adjustment when the training ending condition is reached to obtain the pre-trained defect detection model;
According to the defect position output by the network model and the marked defect position, the parameters of the current network model are adjusted, including:
the height and/or width of a sliding window adopted by a sampling part in the current network model are adjusted; when the height and the width of a sliding window adopted by a sampling part in the current network model are simultaneously adjusted, the height and the width are adjusted in the same proportion;
and adjusting the neural network layer parameters of the detection part in the current network model.
9. A product surface defect inspection apparatus, comprising: a processor and a memory, wherein the memory is used for storing a program;
the processor is configured to execute a program in the memory, causing a computer to perform the method of any one of claims 1 to 7.
10. The apparatus as recited in claim 9, further comprising:
the device comprises a base and a product fixing module, wherein the product fixing module is positioned above the base and connected with the base, the base is used for keeping the conveying module to slide stably, and the product fixing module is used for fixing a product;
the conveying module is positioned below the base and is used for conveying products;
The image acquisition module is positioned at the top or the bottom of the optical module and is used for acquiring the surface image of the product, and the optical module is used for emitting illumination and assisting the image acquisition module to acquire the image.
11. The apparatus of claim 10, wherein the image acquisition module and the optical module comprise:
the upper image acquisition module and the upper optical module are positioned at the top of the product fixing module, and the lower image acquisition module and the lower optical module are positioned at the bottom of the product fixing module.
12. A computer readable storage medium comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any of claims 1 to 7.
CN202010382866.9A 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment Active CN111652852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010382866.9A CN111652852B (en) 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010382866.9A CN111652852B (en) 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment

Publications (2)

Publication Number Publication Date
CN111652852A CN111652852A (en) 2020-09-11
CN111652852B true CN111652852B (en) 2024-03-29

Family

ID=72346817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010382866.9A Active CN111652852B (en) 2020-05-08 2020-05-08 Product surface defect detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN111652852B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365443B (en) * 2020-10-16 2021-10-12 珠海市奥德维科技有限公司 Hexahedron defect detection method and medium based on deep learning
CN112964732A (en) * 2021-02-04 2021-06-15 科大智能物联技术有限公司 Spinning cake defect visual detection system and method based on deep learning
CN113077454A (en) * 2021-04-19 2021-07-06 凌云光技术股份有限公司 Image defect fitting method, system and storage medium
CN113034502B (en) * 2021-05-26 2021-08-24 深圳市勘察研究院有限公司 Drainage pipeline defect redundancy removing method
CN113592787B (en) * 2021-07-13 2024-10-25 苏州汇川控制技术有限公司 Light emitting component detection method, device, terminal equipment and storage medium
CN113989183A (en) * 2021-09-17 2022-01-28 浙江省北大信息技术高等研究院 Wood board defect detection method, device, equipment and medium based on neural network
CN114529507B (en) * 2021-12-30 2024-05-17 广西慧云信息技术有限公司 Visual transducer-based particle board surface defect detection method
CN115147348B (en) * 2022-05-05 2023-06-06 合肥工业大学 Tire defect detection method and system based on improved YOLOv3
CN114789743B (en) * 2022-06-22 2022-09-16 成都铁安科技有限责任公司 Method and system for monitoring abnormal running of train wheels
CN115587989B (en) * 2022-10-21 2023-08-18 国家工业信息安全发展研究中心 Workpiece CT image defect detection segmentation method and system
CN115965816B (en) * 2023-01-05 2023-08-22 无锡职业技术学院 Glass defect classification and detection method and system based on deep learning
CN115984268B (en) * 2023-03-20 2023-06-30 杭州百子尖科技股份有限公司 Target detection method and device based on machine vision, electronic equipment and medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07147309A (en) * 1993-11-25 1995-06-06 Nikon Corp Detector for pattern defect
EP0742431A1 (en) * 1995-05-10 1996-11-13 Mahlo GmbH & Co. KG Method and apparatus for detecting flaws in moving fabrics or the like
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN109448006A (en) * 2018-11-01 2019-03-08 江西理工大学 A kind of U-shaped intensive connection Segmentation Method of Retinal Blood Vessels of attention mechanism
CN109859171A (en) * 2019-01-07 2019-06-07 北京工业大学 A kind of flooring defect automatic testing method based on computer vision and deep learning
CN109978867A (en) * 2019-03-29 2019-07-05 北京百度网讯科技有限公司 Toy appearance quality determining method and its relevant device
CN109993734A (en) * 2019-03-29 2019-07-09 北京百度网讯科技有限公司 Method and apparatus for output information
CN110175548A (en) * 2019-05-20 2019-08-27 中国科学院光电技术研究所 Remote sensing images building extracting method based on attention mechanism and channel information
CN110865077A (en) * 2019-11-15 2020-03-06 上海电器科学研究所(集团)有限公司 Visual inspection system for appearance defects in RFID antenna production

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Convolutional Neural Network for Pavement Surface Crack Segmentation Using Residual Connections and Attention Gating; Jacob Konig et al.; ICIP 2019; pp. 1460-1464 *
Automatic Defect Detection of Fasteners on the Catenary Support Device Using Deep Convolutional Neural Network; Junwen Chen et al.; IEEE Transactions on Instrumentation and Measurement; 20171204; full text *
Wu Liangbin. SAR Image Processing and Target Recognition. Beijing: Aviation Industry Press, 2013, p. 157. *
Wafer defect detection and classification algorithm based on convolutional neural networks; Fang Xin; Shi Zheng; Computer Engineering; 20180815(08); full text *
A review of inspection robots for utility tunnels; Zhang Tao et al.; Chinese Journal of Underground Space and Engineering; 20191231; full text *

Also Published As

Publication number Publication date
CN111652852A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN111652852B (en) Product surface defect detection method, device and equipment
CN113705478B (en) Mangrove single wood target detection method based on improved YOLOv5
CN108009515B (en) Power transmission line positioning and identifying method of unmanned aerial vehicle aerial image based on FCN
CN107123131B (en) Moving target detection method based on deep learning
CN111815564B (en) Method and device for detecting silk ingots and silk ingot sorting system
CN108520273A (en) A kind of quick detection recognition method of dense small item based on target detection
CN111754498A (en) Conveyor belt carrier roller detection method based on YOLOv3
CN109191255B (en) Commodity alignment method based on unsupervised feature point detection
CN111398291A (en) Flat enameled electromagnetic wire surface flaw detection method based on deep learning
CN112396035A (en) Object detection method and device based on attention detection model
CN116385430A (en) Machine vision flaw detection method, device, medium and equipment
CN114973116A (en) Method and system for detecting foreign matters embedded into airport runway at night by self-attention feature
CN111429424A (en) Heating furnace inlet abnormity identification method based on deep learning
CN116152685B (en) Pedestrian detection method and system based on unmanned aerial vehicle visual field
CN115311632A (en) Vehicle weight recognition method and device based on multiple cameras
CN113379603A (en) Ship target detection method based on deep learning
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN111767826A (en) Timing fixed-point scene abnormity detection method
Midwinter et al. Unsupervised defect segmentation with pose priors
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN113763471B (en) Bullet hole detection method and system based on vision
CN114913129A (en) System for identifying and positioning laying defects of composite material
CN117710756B (en) Target detection and model training method, device, equipment and medium
CN110765900B (en) Automatic detection illegal building method and system based on DSSD
CN116046790B (en) Defect detection method, device, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Huarui Technology Co.,Ltd.

Address before: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant