CN114926675A - Method and device for detecting shell stain defect, computer equipment and storage medium - Google Patents
- Publication number
- CN114926675A CN114926675A CN202210351179.XA CN202210351179A CN114926675A CN 114926675 A CN114926675 A CN 114926675A CN 202210351179 A CN202210351179 A CN 202210351179A CN 114926675 A CN114926675 A CN 114926675A
- Authority
- CN
- China
- Prior art keywords
- image data
- template image
- defect
- detection
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/94—Investigating contamination, e.g. dust
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a method and a device for detecting shell stain defects, a computer device and a storage medium, wherein the method comprises the following steps: acquiring original sampling image data and standard template image data; performing a perspective transformation on the standard template image data to obtain first template image data; preprocessing the first template image data and the original sampling image data respectively to obtain second template image data and first sampling image data; and inputting the second template image data and the first sampling image data into a multi-scale differential RCNN model for detection so as to determine the stain and defect condition of the shell. This improves the degree of automation, the detection efficiency and the detection accuracy of shell stain defect detection, and allows stains and defects on the shell to be detected simultaneously.
Description
Technical Field
The invention relates to the technical field of product appearance inspection, and in particular to a shell stain defect detection method, a computer-readable storage medium, a computer device, and a shell stain defect detection device.
Background
With the improvement of production efficiency on modern assembly lines, how to quickly and effectively inspect the appearance quality of products has become a major technical difficulty. In the related art, the methods for detecting stains and scratches on product appearance on an assembly line fall mainly into three types: manual inspection, inspection by equipment based on traditional algorithms, and inspection by single-function equipment.
Manual inspection suffers from low efficiency and a high risk of missed detections; equipment based on traditional algorithms is easily affected by factors such as the production-line environment and illumination, leading to false detections, false alarms and a low degree of automation; and single-function inspection equipment covers only a single defect type, so that one set of equipment can detect only one kind of defect.
Disclosure of Invention
The present invention is directed to solving, at least in part, one of the technical problems in the related art. Therefore, a first objective of the present invention is to provide a method for detecting shell stain defects, in which second template image data and first sampling image data are input into a multi-scale differential RCNN model for detection so as to determine the stain and defect condition of the shell. This not only improves the degree of automation, the detection efficiency and the detection accuracy of shell stain defect detection, but also allows stains and defects on the shell to be detected simultaneously.
A second object of the invention is to propose a computer-readable storage medium.
A third object of the invention is to propose a computer device.
A fourth object of the present invention is to provide a shell stain defect detection device.
In order to achieve the above object, a first aspect of the present invention provides a method for detecting shell stain defects, the method comprising: acquiring original sampling image data and standard template image data; performing a perspective transformation on the standard template image data to obtain first template image data; preprocessing the first template image data and the original sampling image data respectively to obtain second template image data and first sampling image data; and inputting the second template image data and the first sampling image data into a multi-scale differential RCNN model for detection so as to determine the stain and defect condition of the shell.
According to the shell stain defect detection method, the acquired standard template image data is perspective-transformed to obtain the first template image data; the original sampling image data and the first template image data are preprocessed respectively to obtain the first sampling image data and the second template image data; and the second template image data and the first sampling image data are input into the multi-scale differential RCNN model for detection so as to determine the shell stain defect condition. This improves the degree of automation, the detection efficiency and the detection accuracy of shell stain defect detection, and allows stains and defects on the shell to be detected simultaneously.
In addition, the shell stain defect detection method according to the above embodiment of the present invention may further have the following additional features:
according to one embodiment of the invention, inputting the second template image data and the first sampling image data into the multi-scale differential RCNN model for detection comprises: performing feature extraction on the second template image data to obtain a first feature map, and performing feature extraction on the first sampling image data to obtain a second feature map; performing difference and fusion processing on the first feature map and the second feature map to obtain feature fusion data; and inputting the feature fusion data into a plurality of detectors for detection.
According to one embodiment of the invention, a VGG16 network is used to perform feature extraction on the second template image data and the first sampling image data, and the first feature map and the second feature map are subjected to difference and fusion processing to obtain a differential FPN.
According to one embodiment of the present invention, weights are shared between the first VGG16 network used for feature extraction on the second template image data and the second VGG16 network used for feature extraction on the first sampling image data.
According to an embodiment of the present invention, performing difference and fusion processing on the first feature map and the second feature map to obtain feature fusion data comprises: differencing the first feature map and the second feature map to obtain a plurality of difference maps; upsampling and summing the plurality of difference maps to obtain a differential FPN; and obtaining the feature fusion data according to the differential FPN.
According to one embodiment of the invention, inputting the feature fusion data into a plurality of detectors for detection comprises: inputting the feature fusion data into the RPN of each detector respectively to obtain a plurality of candidate boxes; inputting the plurality of candidate boxes into the prediction network of each detector to obtain a plurality of prediction results; and screening the plurality of prediction results to determine the shell stain defect condition.
According to one embodiment of the invention, inputting the plurality of candidate boxes into the prediction network of each detector comprises: inputting the plurality of candidate boxes into a first pooling layer to obtain candidate boxes of the same dimensions; inputting the candidate boxes of the same dimensions into the fully-connected branch corresponding to the first pooling layer for classification and regression to obtain first class information and first identification-box information; after the first identification-box information is input into a second pooling layer, classifying and regressing through the fully-connected branch corresponding to the second pooling layer to obtain second class information and second identification-box information; after the second identification-box information is input into a third pooling layer, classifying and regressing through the fully-connected branch corresponding to the third pooling layer to obtain third class information and third identification-box information; and taking the third class information and the third identification-box information as a prediction result.
According to one embodiment of the invention, performing a perspective transformation on the standard template image data comprises: calculating a perspective transformation matrix based on a deep learning algorithm model, and performing the perspective transformation on the standard template image data according to the perspective transformation matrix.
In order to achieve the above object, a second aspect of the present invention provides a computer-readable storage medium on which a shell stain defect detection program is stored; when the shell stain defect detection program is executed by a processor, the shell stain defect detection method described in the above embodiments is implemented.
According to the computer-readable storage medium of the embodiment of the invention, when the stored shell stain defect detection program is executed by the processor, the shell stain defect detection method is carried out, which improves the degree of automation, the detection efficiency and the detection accuracy of shell stain defect detection, and allows stains and defects on the shell to be detected simultaneously.
In order to achieve the above object, a third aspect of the present invention provides a computer device comprising a memory, a processor, and a shell stain defect detection program stored in the memory and operable on the processor; when the processor executes the shell stain defect detection program, the shell stain defect detection method described in the above embodiments is implemented.
According to the computer device of the embodiment of the invention, when the stored shell stain defect detection program is executed by the processor, the shell stain defect detection method is carried out, which improves the degree of automation, the detection efficiency and the detection accuracy of shell stain defect detection, and allows stains and defects on the shell to be detected simultaneously.
In order to achieve the above object, a fourth aspect of the present invention provides a shell stain defect detection device, comprising: an acquisition module for acquiring original sampling image data and standard template image data; a transformation module for performing a perspective transformation on the standard template image data to obtain first template image data; a preprocessing module for preprocessing the first template image data and the original sampling image data respectively to obtain second template image data and first sampling image data; and a detection module for inputting the second template image data and the first sampling image data into a multi-scale differential RCNN model for detection so as to determine the shell stain defect condition.
According to the shell stain defect detection device, the transformation module performs a perspective transformation on the standard template image data acquired by the acquisition module to obtain the first template image data; the preprocessing module preprocesses the first template image data and the original sampling image data acquired by the acquisition module to obtain the second template image data and the first sampling image data; and the detection module inputs the second template image data and the first sampling image data into the multi-scale differential RCNN model for detection so as to determine the shell stain defect condition. This improves the degree of automation, the detection efficiency and the detection accuracy of shell stain defect detection, and allows stains and defects on the shell to be detected simultaneously.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of a shell stain defect detection method according to one embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a deep learning algorithm model for computing a perspective transformation matrix according to an embodiment of the invention;
FIG. 3 is a flow chart of inputting second template image data and first sampled image data into a multi-scale differential RCNN model for detection according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a multi-scale differential RCNN model according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of the structure of a VGG16 model in accordance with one embodiment of the present invention;
FIG. 6 is a schematic diagram of a differential FPN according to one embodiment of the present invention;
FIG. 7 is a flow diagram of feature fusion data input to multiple detectors for detection according to one embodiment of the present invention;
FIG. 8 is a schematic view of an anchor frame according to one embodiment of the present invention;
FIG. 9 is a schematic diagram of a fully connected branch according to one embodiment of the present invention;
FIG. 10 is a block schematic diagram of a computer device according to one embodiment of the invention;
FIG. 11 is a block diagram of a shell stain defect detection device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and intended to explain the present invention and should not be construed as limiting the present invention.
Before describing the shell stain defect detection method according to the embodiment of the present invention, an application scenario of the method will be described.
First, the detection environment of the application scenario is described.
An area-array camera is mounted on each of the two inner side faces and on the bottom of the detection box to photograph the detection surfaces of the product. Two adjustable strip light sources are also installed at the bottom to regulate the illumination brightness, and white diffuse-reflection light source panels are arranged around the detection box to ensure that the light inside the box is sufficient and uniform and to avoid light pollution when photographing the detection surfaces of the product.
During detection, the object to be inspected is placed into the above detection box by a robotic arm, which not only avoids interference from ambient light but also gives the target a relatively simple background environment, effectively improving the segmentation accuracy of the region to be inspected. Meanwhile, thanks to the adjusted light sources and the added diffuse-reflection panels, the surface texture of the product shell is suppressed to a certain extent, and the uniformity and reasonableness of the lighting effect on reflective surfaces are guaranteed. Compared with other existing devices, the detection box not only guarantees the lighting effect but also highlights the key regional features of the image while weakening and fading its background.
Next, a specific application scenario is described.
On the product production line, when a shell to be inspected reaches the detection point, it pauses for one second while a robotic arm grabs it and places it into the detection box. At the same time, a Programmable Logic Controller (PLC) immediately sends a detection signal to the software system of the intelligent defect detection system; the software system sends corresponding instructions to the three area-array cameras in the detection box, triggering them to photograph the shell simultaneously so as to complete the acquisition of the original sampling image data (left/front detection-surface image data, right/rear detection-surface image data and lower detection-surface image data). After acquisition, the original sampling image data is sent to a server in real time through an Application Programming Interface (API); the server calls the shell stain defect detection method to process the original sampling image data in real time, displays the detection result (including the category of the stain defect and its position in the image) on a User Interface (UI) shown on a display, and automatically sends the detection result to the PLC control system of the intelligent defect detection system. The PLC control system then sends a corresponding instruction to the robotic arm, which places the shell into a waste bin or into the next process according to the detection result.
In addition, in order to reduce the time and storage overhead of data saving, the current original sampling image data can be kept in memory as a soft copy before it is processed; after the intelligent defect detection system has detected the shell stain defects, the copy of the original sampling image data in memory and the detection result are saved to the local disk for archiving and future reference.
It should be noted that, in the detection process, placing the object to be inspected into the detection box by the robotic arm not only avoids interference from ambient light but also gives the object a relatively simple background environment, effectively improving the segmentation accuracy of the region to be inspected. The adjustable light sources and the diffuse-reflection light source panels suppress the surface texture of the product shell to a certain extent and guarantee the uniformity and reasonableness of the lighting effect on reflective surfaces.
Moreover, vibration of the production-line equipment may interfere with the area-array cameras. To prevent camera vibration from cutting off the boundary in the original sampling image data and thereby missing flaws at the boundary, the capture equipment in this application scenario has a Region of Interest (ROI) function, and the ROI is set with a certain redundancy so as to obtain original sampling image data with little background interference and complete information. The intelligent defect detection system in this application scenario mainly adopts the MFC (Microsoft Foundation Classes) framework, which simplifies the development process, shortens the development cycle, and maximizes the neatness and friendliness of the interface.
Next, the method for detecting a defect of a contamination of a housing according to the embodiment of the present invention will be described in detail.
Fig. 1 is a flow chart of a shell stain defect detection method according to an embodiment of the present invention. Referring to fig. 1, the shell stain defect detection method comprises the following steps:
In step S1, the original sampling image data and the standard template image data are acquired.
Specifically, the original sampling image data (e.g., lower detection-surface image data) is acquired by an image capture device (e.g., an area-array camera), and the standard template image data of the corresponding position (the bottom face) stored in a storage medium is retrieved. Both the original sampling image data and the standard template image data are RGB image data.
Step S2 is to perform perspective transformation on the standard template image data to obtain first template image data.
Specifically, a perspective transformation matrix is calculated and applied to the standard template image data to complete the perspective transformation and obtain first template image data.
Further, in some embodiments of the invention, a perspective transformation matrix may be calculated based on the deep learning algorithm model, and the standard template image data may be perspective transformed according to the perspective transformation matrix.
Specifically, referring to fig. 2, the deep learning algorithm model comprises 8 convolutional layers, 3 pooling layers and 2 fully-connected layers. Image data 1 and image data 2 are input into the model and pass in sequence through two Conv3-64 layers (3 × 3 convolution kernels, 64 channels), a first Max-Pool (max pooling) layer, two further Conv3-64 layers, a second Max-Pool layer, two Conv3-32 layers (3 × 3 convolution kernels, 32 channels), a third Max-Pool layer, two Conv3-16 layers (3 × 3 convolution kernels, 16 channels), an FC-1024 fully-connected layer (1 × 1, 1024 channels) and a final FC-8 × 21 fully-connected layer (8 × 8, 21 channels), after which the perspective transformation matrix is output. Image data 2 is generated by randomly perturbing image data 1 (to simulate the various deformations that may occur in the real world relative to image data 1), and four points at the same positions in image data 1 and image data 2 are selected at random as transformation corner points. Image data 1 is the standard template image data.
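To make the layer stack concrete, the following sketch tracks how the feature dimensions flow through the 8 convolutional and 3 pooling layers described above; the 128 × 128 grayscale input size and the "same" convolution padding are assumptions, since the patent does not state them.

```python
# Dimension bookkeeping for the homography-regression CNN (sketch; sizes assumed).
def conv3_same(h, w, c_out):
    # 3x3 convolution, stride 1, "same" padding: spatial size unchanged
    return h, w, c_out

def maxpool2(h, w, c):
    # 2x2 max pooling, stride 2: spatial size halved
    return h // 2, w // 2, c

shape = (128, 128, 1)  # assumed grayscale input; not specified in the patent
for c_out in (64, 64):           # two Conv3-64 layers
    shape = conv3_same(shape[0], shape[1], c_out)
shape = maxpool2(*shape)         # first Max-Pool
for c_out in (64, 64):           # two further Conv3-64 layers
    shape = conv3_same(shape[0], shape[1], c_out)
shape = maxpool2(*shape)         # second Max-Pool
for c_out in (32, 32):           # two Conv3-32 layers
    shape = conv3_same(shape[0], shape[1], c_out)
shape = maxpool2(*shape)         # third Max-Pool
for c_out in (16, 16):           # two Conv3-16 layers
    shape = conv3_same(shape[0], shape[1], c_out)
flat = shape[0] * shape[1] * shape[2]  # features entering the FC-1024 layer
```

Under these assumptions, the 128 × 128 input is reduced to a 16 × 16 × 16 feature volume, i.e. 4096 features feeding the first fully-connected layer.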
The perspective transformation matrix is multiplied with the standard template image data to complete the perspective transformation and obtain the first template image data, thereby eliminating the deviation between the original sampling image data and the standard template image data caused by robotic-arm grabbing and achieving close alignment between the two.
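As a concrete illustration of this step, the 3 × 3 perspective (homography) matrix can be solved in plain NumPy by direct linear transformation from four corner correspondences and then applied to image coordinates. This is a minimal sketch, not code from the patent; the corner coordinates below are purely illustrative.

```python
import numpy as np

def find_homography(src, dst):
    """Solve the 3x3 perspective matrix mapping src -> dst (4 point pairs, DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence yields two linear equations in the 8 unknowns
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)  # fix the scale with h22 = 1

def warp_point(H, x, y):
    """Apply the homography to a single (x, y) coordinate."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative corner correspondences (template corners -> sampled-image corners)
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(5, 3), (98, 2), (102, 97), (2, 101)]
H = find_homography(src, dst)
corner = warp_point(H, 0, 0)
```

In practice an image-warping routine (e.g., bilinear resampling over the whole template) would apply `H` to every pixel; the point form above shows the geometry of the alignment.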
Step S3, pre-processing the first template image data and the original sample image data, respectively, to obtain second template image data and first sample image data.
Optionally, in some embodiments of the present invention, graying, histogram equalization, median filtering, and Kirsch edge enhancement processing may be performed on the first template image data and the original sampled image data in sequence.
Specifically, the first template image data and the original sampling image data are first converted from RGB to grayscale, which reduces the computational cost on the server. Histogram equalization then gives the grayscale image data a larger dynamic range, higher contrast and richer image detail; median filtering reduces the noise of the grayscale image data while preserving its edge information as far as possible; and finally Kirsch edge enhancement is applied to highlight the edge features of defect regions in the grayscale image data, yielding the second template image data and the first sampling image data.
In step S4, the second template image data and the first sampling image data are input into the multi-scale differential RCNN model for detection so as to determine the shell stain defect condition.
Specifically, the second template image data and the first sampling image data are input into a pre-trained multi-scale differential RCNN model, and the stain and defect conditions of the shell (including worn edges and corners, scratches and surface stains) are determined through inference.
In some embodiments of the present invention, referring to fig. 3, inputting the second template image data and the first sample image data into the multi-scale differential RCNN model for detection includes the following steps:
step S41, performing feature extraction on the second template image data to obtain a first feature map, and performing feature extraction on the first sample image data to obtain a second feature map.
Specifically, feature extraction may be performed on the second template image data and the first sample image data through a feature network (e.g., a VGG (Visual Geometry Group) network), and a first feature map and a second feature map are obtained accordingly.
And step S42, carrying out difference and fusion processing on the first feature map and the second feature map to obtain feature fusion data.
Specifically, after the first feature map and the second feature map are differentiated to obtain a plurality of different-dimension difference maps, feature fusion data can be obtained by unifying the dimensions of adjacent difference maps and fusing the difference maps with the same dimension.
It should be noted that, by performing perspective transformation on the standard template image data, the influence caused by the consistency problem between the standard template image and the original sampling image can be effectively reduced, so that the difference image contains as many defective features as possible, which is beneficial to the subsequent feature extraction.
In some embodiments of the present invention, a VGG16 network (a VGG network with 16 weight layers) may be used to perform feature extraction on the second template image data and the first sampling image data, and the first feature map and the second feature map are differenced and fused to obtain a differential FPN (Feature Pyramid Network).
Specifically, referring to fig. 4, the second template image data may be input to a first VGG16 network for feature extraction, so as to obtain first feature maps of 5 different dimensions, and the first sampled image data may be input to a second VGG16 network for feature extraction, so as to obtain second feature maps of 5 different dimensions. And (3) performing difference on the first characteristic diagram and the second characteristic diagram with the same dimension to obtain a plurality of difference diagrams (not marked in the diagrams), and performing fusion processing on the difference diagrams to obtain a difference FPN. Wherein the weights of the first and second VGG16 networks in feature extraction can be shared.
As shown in fig. 5, the feature extraction of the second template image data by the first VGG16 network includes: inputting the second template image into the first VGG16 network and passing it through Conv1-1 and Conv1-2 (the two convolution layers of the first block) and a pooling layer to obtain first feature map 1; then through Conv2-1 and Conv2-2 (the two convolution layers of the second block) and a pooling layer to obtain first feature map 2; then through Conv3-1, Conv3-2 and Conv3-3 (the three convolution layers of the third block) and a pooling layer to obtain first feature map 3; then through Conv4-1, Conv4-2 and Conv4-3 (the three convolution layers of the fourth block) and a pooling layer to obtain first feature map 4; and finally through Conv5-1, Conv5-2 and Conv5-3 (the three convolution layers of the fifth block) and a pooling layer to obtain first feature map 5. Here ConvX-Y denotes the Y-th convolution layer of the X-th block.
The first sampled image data is input into a second VGG16 network, and is subjected to the convolution and pooling processes similar to those described above, so as to obtain a second feature map 1, a second feature map 2, a second feature map 3, a second feature map 4 and a second feature map 5, respectively.
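The pyramid of five feature-map scales produced by the two backbones can be illustrated with a toy sketch. The convolutions of VGG16 are deliberately omitted here; only the 2×2 pooling after each block is modeled, which is what determines the five decreasing spatial dimensions. Weight sharing between the two VGG16 networks corresponds to applying the same function (same parameters) to both images.

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling with stride 2 (assumes even height and width)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def backbone_maps(img, n_stages=5):
    """Illustrative stand-in for a VGG16 backbone: each block ends in a
    2x2 pool that halves the spatial size, yielding 5 feature maps of
    decreasing dimension (convolutions omitted for brevity)."""
    maps, x = [], img
    for _ in range(n_stages):
        x = max_pool2(x)
        maps.append(x)
    return maps

# Weight sharing: the identical backbone processes template and sample.
template_maps = backbone_maps(np.zeros((224, 224)))
sample_maps = backbone_maps(np.zeros((224, 224)))
```

For a 224×224 input, the five maps have side lengths 112, 56, 28, 14, and 7, matching the five-scale structure described above.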
Optionally, in some embodiments of the present invention, the differentiating and fusing the first feature map and the second feature map to obtain feature fusion data includes: carrying out difference on the first characteristic diagram and the second characteristic diagram to obtain a plurality of difference diagrams; carrying out up-sampling and summation processing on the plurality of difference images to obtain difference FPN; and obtaining feature fusion data according to the difference FPN.
Referring to fig. 6, obtaining a plurality of difference maps includes: the difference between the first feature diagram 1 and the second feature diagram 1 is obtained as a difference diagram 1, the difference between the first feature diagram 2 and the second feature diagram 2 is obtained as a difference diagram 2, the difference between the first feature diagram 3 and the second feature diagram 3 is obtained as a difference diagram 3, the difference between the first feature diagram 4 and the second feature diagram 4 is obtained as a difference diagram 4, and the difference between the first feature diagram 5 and the second feature diagram 5 is obtained as a difference diagram 5.
Obtaining the differential FPN includes: taking difference map 5 directly as feature fusion map 1; expanding difference map 5 by upsampling to the same dimension as difference map 4 and adding difference map 4 (element-wise addition) to obtain feature fusion map 2; expanding difference map 4 by upsampling to the same dimension as difference map 3 and adding difference map 3 to obtain feature fusion map 3; expanding difference map 3 by upsampling to the same dimension as difference map 2 and adding difference map 2 to obtain feature fusion map 4; and expanding difference map 2 by upsampling to the same dimension as difference map 1 and adding difference map 1 to obtain feature fusion map 5. Feature fusion maps 1 through 5 together may be regarded as the differential FPN.
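The upsample-and-add construction above can be sketched as follows, assuming nearest-neighbour upsampling (the patent does not specify the interpolation mode) and difference maps whose sides halve from one scale to the next:

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x upsampling to match the next larger map."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def build_diff_fpn(diff_maps):
    """diff_maps: difference maps ordered large -> small (map 1 .. map 5).
    The smallest map is taken directly as fusion map 1; each subsequent
    fusion map is the upsampled smaller difference map plus the next
    larger difference map, element-wise."""
    fused = [diff_maps[-1]]  # fusion map 1 = difference map 5
    for bigger, smaller in zip(diff_maps[-2::-1], diff_maps[:0:-1]):
        fused.append(upsample2(smaller) + bigger)
    return fused
```

Feeding in difference maps of sides 16, 8, 4, 2, 1 returns fusion maps of sides 1, 2, 4, 8, 16, mirroring the five-level differential FPN.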
It should be noted that, among the difference maps of 5 different dimensions, the smaller-dimension maps carry deep features with a large receptive field and rich semantic information, suited to detecting large target defects, while the larger-dimension maps carry shallow features with a small receptive field and fine spatial detail, suited to detecting small target defects.
Specifically, since the feature information of adjacent difference maps is similar, the fusion maps whose feature information differs most (for example, feature fusion map 1, feature fusion map 3, and feature fusion map 5) can be selected from the differential FPN as the feature fusion data.
In step S43, the feature fusion data is input to a plurality of detectors and detected.
Specifically, each detector includes an RPN (Region Proposal Network) and a prediction network. To limit computational cost, feature fusion map 1, feature fusion map 3, and feature fusion map 5 in the feature fusion data may each be input into the RPN and prediction network of its corresponding detector.
In some embodiments of the present invention, as illustrated with reference to FIG. 7, inputting feature fusion data to a plurality of detectors for detection includes:
Step S81, respectively inputting the feature fusion data into the RPN (Region Proposal Network) of each detector to obtain a plurality of candidate boxes.
Specifically, the feature fusion map 1, the feature fusion map 3, and the feature fusion map 5 are input to the RPN of each detector, respectively, to obtain 2000 candidate boxes B0, respectively.
Taking feature fusion map 1 and its corresponding RPN as an example, and referring to fig. 8: each element in feature fusion map 1 is taken as an anchor point, and 9 anchor boxes with different aspect ratios (e.g., 1:1, 1:2, 2:1) are generated at each anchor point. The anchor boxes in feature fusion map 1 are classified by a softmax classifier in the RPN into those containing flaw information and those not containing flaw information; regression offsets of the anchor boxes are then calculated to obtain refined anchor boxes that closely match the ground-truth boxes, and these refined anchor boxes are finally taken as the candidate boxes B0.
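Anchor-box generation at a single anchor point can be sketched as below. The three aspect ratios come from the text; the three scale values are an illustrative assumption (the patent does not state them), chosen so that 3 scales × 3 ratios gives the 9 anchors mentioned.

```python
import numpy as np

def anchors_at(cx, cy, scales=(32, 64, 128), ratios=(1.0, 0.5, 2.0)):
    """Generate 9 anchor boxes (3 scales x 3 aspect ratios) centred on
    an anchor point, as [x1, y1, x2, y2]. At a fixed scale s, width and
    height are chosen so the box area stays s*s for every ratio."""
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)
```

Repeating this at every element of the feature fusion map yields the dense set of anchors the softmax classifier then labels as flaw or background.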
In step S82, a plurality of candidate blocks are input to the prediction network of each detector, and a plurality of prediction results are obtained.
Specifically, the three sets of 2000 candidate boxes B0 obtained in step S81 (corresponding to feature fusion map 1, feature fusion map 3, and feature fusion map 5, respectively) are input into the prediction network of each detector to obtain a plurality of prediction results. Each prediction network comprises 3 pooling layers and 6 fully connected branches.
Optionally, in some embodiments of the present invention, a plurality of candidate boxes are input into the first pooling layer to obtain candidate boxes of the same dimension; inputting the candidate frames with the same dimensionality into the full-connection branches corresponding to the first pooling layer for classification and regression to obtain first class information and first identification frame information; after the first identification frame information is input into the second pooling layer, classifying and regressing through the full-connection branch corresponding to the second pooling layer to obtain second category information and second identification frame information; after the second identification frame information is input into the third pooling layer, classifying and regressing through the full-connection branch corresponding to the third pooling layer to obtain third category information and third identification frame information; and taking the third category information and the third identification frame information as a prediction result.
Specifically, taking the candidate boxes B0 corresponding to feature fusion map 1 as an example, as shown in fig. 4 and 9: the candidate boxes B0 are input into a first pooling layer (roipool_1) to obtain candidate boxes B0 of the same dimension, which are classified and regressed through two fully connected branches (the H1 module) to obtain first class information C1 and first identification box information B1; the first identification box information B1 is input into a second pooling layer (roipool_2) to obtain first identification box information B1 of the same dimension, which is classified and regressed through two fully connected branches (the H2 module) to obtain second class information C2 and second identification box information B2; and the second identification box information B2 is input into a third pooling layer (roipool_3) to obtain second identification box information B2 of the same dimension, which is classified and regressed through two fully connected branches (the H3 module) to obtain third class information C3 and third identification box information B3.
When passing through the H1 module, the IoU (Intersection over Union) threshold may be set to 0.2 to perform preliminary screening of the candidate boxes B0, retaining those with an IoU greater than 0.2 as the first identification box information B1; when passing through the H2 module, the IoU threshold is set to 0.5 to filter the first identification box information B1, retaining identification boxes with an IoU greater than 0.5 as the second identification box information B2; and when passing through the H3 module, the IoU threshold is set to 0.7 to filter the second identification box information B2, retaining identification boxes with an IoU greater than 0.7 as the third identification box information B3.
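The IoU computation and the three-stage threshold cascade can be sketched as follows. The box regression performed by each H module is omitted; only the progressive 0.2 / 0.5 / 0.7 screening against a ground-truth box is shown, and `cascade_filter` is an illustrative name.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def cascade_filter(boxes, gt, thresholds=(0.2, 0.5, 0.7)):
    """Keep only boxes whose IoU with the ground-truth box exceeds each
    successive stage threshold (H1 -> H2 -> H3); returns the surviving
    boxes after every stage."""
    stages = []
    for t in thresholds:
        boxes = [b for b in boxes if iou(b, gt) > t]
        stages.append(list(boxes))
    return stages
```

Each stage therefore works on an increasingly well-aligned set of boxes, which is what makes the later regressions more precise.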
It should be noted that the class information (C1/C2/C3) may include defect types such as stains, notches, and cracks, and the identification box information (B1/B2/B3) includes the coordinates of the identification box in feature fusion map 1. For example, [C3, B3] = [0.1, 0.2, 0.7, x, y, w, h], where the confidence of a stain is 0.1, the confidence of a notch is 0.2, and the confidence of a crack is 0.7. The confidence of the crack is highest, so this [C3, B3] indicates a crack within the identification box [x, y, w, h] of feature fusion map 1.
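Decoding such a prediction vector reduces to an argmax over the class confidences; the following snippet works the example through with illustrative class names and box values:

```python
# Hypothetical prediction vector [C3, B3]: three class confidences
# (stain, notch, crack) followed by box coordinates [x, y, w, h].
pred = [0.1, 0.2, 0.7, 12, 30, 8, 8]
classes = ["stain", "notch", "crack"]

confs, box = pred[:3], pred[3:]
# The highest-confidence class is taken as the detected defect type.
label = classes[max(range(len(confs)), key=lambda i: confs[i])]
```

Here `label` resolves to "crack", matching the worked example above.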
In this example, the repeated classification and regression of the cascade amount to a progression from coarse to precise localization of the flaw on the shell, yielding a more accurate detection result.
And step S83, screening the plurality of prediction results to determine the shell stain defect condition.
Since the anchor points are close to one another, multiple identification boxes overlap on the same defect. To obtain a more accurate identification box, the third identification boxes among the plurality of prediction results may be screened by non-maximum suppression: the identification boxes in the third identification box information B3 are sorted by the confidence of their corresponding classes from large to small, overlapping boxes with a high IoU against a higher-ranked box are suppressed, and the shell contamination defect condition is determined from the top-ranked third identification boxes.
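Greedy non-maximum suppression as described above can be sketched as follows; the 0.5 overlap threshold is an illustrative default, not a value stated in the text.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box overlapping it beyond iou_thresh, repeat.
    boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        # Vectorised IoU of the best box against all remaining boxes.
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        union = area(boxes[[best]])[0] + area(boxes[rest]) - inter
        order = rest[inter / union <= iou_thresh]
    return keep
```

Applying `nms` to the B3 boxes and their class confidences leaves one box per defect, from which the contamination condition is read off.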
It should be noted that the above embodiment describes only the prediction process of inputting feature fusion map 1 into its corresponding detector; the prediction processes for feature fusion map 3 and feature fusion map 5 are the same and are not repeated here.
Through a large number of experiments, the applicant verifies that the whole detection process can be completed within 1.5s by the shell stain defect detection method of the embodiment.
According to the method for detecting the shell contamination defect of the embodiment of the present invention, the acquired standard template image data is subjected to perspective transformation to obtain first template image data; the first template image data and the original sampling image data are respectively preprocessed to obtain second template image data and first sampling image data; and the second template image data and the first sampling image data are input into a multi-scale differential RCNN model for detection so as to determine the shell contamination defect condition. The degree of intelligence, the detection efficiency, and the detection accuracy of shell contamination defect detection are thereby improved, and the stain and defect conditions of the shell can be detected simultaneously.
In accordance with the above embodiments, the present invention provides a computer-readable storage medium on which a shell contamination defect detection program is stored; when the program is executed by a processor, the shell contamination defect detection method described in the above embodiments is implemented.
According to the computer readable storage medium of the embodiment of the invention, when the stored shell fouling defect detection program is executed by the processor, the shell fouling defect detection method is executed, so that the intelligent degree, the detection efficiency and the detection accuracy of shell fouling defect detection are improved, and the fouling and defect conditions of the shell can be detected simultaneously.
Referring to fig. 10, the third embodiment of the present invention further provides a computer apparatus, where the computer apparatus 10 includes a memory 11, a processor 12, and a casing contamination defect detection program stored in the memory 11 and operable on the processor 12, and when the processor 12 executes the casing contamination defect detection program, the casing contamination defect detection method described in the foregoing embodiment is implemented.
According to the computer equipment provided by the embodiment of the invention, when the stored shell stain defect detection program is executed by the processor, the shell stain defect detection method is executed, so that the intelligent degree, the detection efficiency and the detection accuracy of shell stain defect detection are improved, and the stain and defect conditions of the shell can be detected simultaneously.
Fig. 11 is a block diagram illustrating an apparatus for detecting a contamination defect of a housing according to an embodiment of the present invention.
Referring to fig. 11, the apparatus 120 for detecting a shell contamination defect includes an acquisition module 121, a transformation module 122, a preprocessing module 123, and a detection module 124. The acquisition module 121 is configured to acquire original sampling image data and standard template image data; the transformation module 122 is configured to perform perspective transformation on the standard template image data to obtain first template image data; the preprocessing module 123 is configured to preprocess the first template image data and the original sampling image data, respectively, to obtain second template image data and first sampling image data; and the detection module 124 is configured to input the second template image data and the first sampling image data into the multi-scale differential RCNN model for detection, so as to determine the shell contamination defect condition.
In some embodiments of the invention, the detection module 124 is configured to: performing feature extraction on the second template image data to obtain a first feature map, and performing feature extraction on the first sampling image data to obtain a second feature map; carrying out difference and fusion processing on the first characteristic diagram and the second characteristic diagram to obtain characteristic fusion data; the feature fusion data is input to a plurality of detectors for detection.
In some embodiments of the present invention, the detection module 124 includes a VGG16 network (not shown), and the VGG16 network is used to perform feature extraction on the second template image data and the first sample image data, and perform difference and fusion processing on the first feature map and the second feature map to obtain a difference FPN.
In some embodiments of the present invention, weight sharing is performed between a first network of VGG16 employed in feature extraction of the second template image data and a second network of VGG16 employed in feature extraction of the first sampled image data.
In some embodiments of the invention, the detection module 124 is configured to: differentiating the first characteristic diagram and the second characteristic diagram to obtain a plurality of differential diagrams; carrying out up-sampling and summation processing on the plurality of difference images to obtain difference FPN; and obtaining feature fusion data according to the difference FPN.
In some embodiments of the invention, the detection module 124 is configured to: respectively inputting the feature fusion data into the RPN of each detector to obtain a plurality of candidate frames; inputting a plurality of candidate boxes into a prediction network of each detector to obtain a plurality of prediction results; and screening the plurality of prediction results to determine the shell fouling defect condition.
In some embodiments of the invention, the detection module 124 is configured to: inputting a plurality of candidate frames into a first pooling layer to obtain candidate frames with the same dimensionality; inputting the candidate boxes with the same dimensionality into full-connection branches corresponding to the first pooling layer to be classified and regressed so as to obtain first class information and first identification box information; after the first identification frame information is input into the second pooling layer, classifying and regressing through the full-connection branch corresponding to the second pooling layer to obtain second category information and second identification frame information; after the second identification frame information is input into the third pooling layer, classifying and regressing through the full-connection branch corresponding to the third pooling layer to obtain third category information and third identification frame information; and taking the third category information and the third identification frame information as a prediction result.
In some embodiments of the invention, the transformation module 122 is configured to: and calculating a perspective transformation matrix based on the deep learning algorithm model, and performing perspective transformation on the standard template image data according to the perspective transformation matrix.
It should be noted that, regarding the description of the apparatus for detecting a housing contamination defect, reference is made to the foregoing description of the method for detecting a housing contamination defect, and details thereof are not described here.
According to the device for detecting the shell stain defect, the standard template image data acquired by the acquisition module are subjected to perspective transformation through the transformation module to acquire the first template image data, the first template image data and the original sampling image data acquired by the acquisition module are respectively preprocessed through the preprocessing module to acquire the second template image data and the first sampling image data, and the second template image data and the first sampling image data are input into the multi-scale differential RCNN model through the detection module to be detected so as to determine the shell stain defect condition. Therefore, the intelligent degree, the detection efficiency and the detection accuracy of the shell fouling defect detection are improved, and the fouling and defect conditions of the shell can be detected simultaneously.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first", "second", and the like, used in the embodiments of the present invention, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated in the embodiments. Thus, a feature of an embodiment of the present invention that is defined by the terms "first," "second," etc. may explicitly or implicitly indicate that at least one of the feature is included in the embodiment. In the description of the present invention, the word "plurality" means at least two or two and more, such as two, three, four, etc., unless specifically limited otherwise in the examples.
In the present invention, unless otherwise explicitly stated or limited by the relevant description or limitation, the terms "mounted," "connected," and "fixed" in the embodiments are to be understood in a broad sense, for example, the connection may be a fixed connection, a detachable connection, or an integrated connection, and it may be understood that the connection may also be a mechanical connection, an electrical connection, etc.; of course, they may be directly connected or indirectly connected through an intermediate medium, or they may be interconnected or in mutual relationship. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to specific implementation situations.
In the present invention, unless expressly stated or limited otherwise, the first feature "on" or "under" the second feature may be directly contacting the second feature or the first and second features may be indirectly contacting each other through intervening media. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "beneath," and "under" a second feature may be directly under or obliquely under the second feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
It should be noted that, the technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, the combinations should be considered as the scope of the present description.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (12)
1. A method for detecting a contamination defect of a housing, comprising:
acquiring original sampling image data and standard template image data;
carrying out perspective transformation on the standard template image data to obtain first template image data;
respectively preprocessing the first template image data and the original sampling image data to obtain second template image data and first sampling image data;
and inputting the second template image data and the first sampling image data into a multi-scale differential RCNN model for detection so as to determine the defect condition of the fouling of the shell.
2. The method for detecting a contamination defect of a housing according to claim 1, wherein the inputting the second template image data and the first sampling image data into a multi-scale differential RCNN model for detection comprises:
performing feature extraction on the second template image data to obtain a first feature map, and performing feature extraction on the first sampling image data to obtain a second feature map;
carrying out difference and fusion processing on the first feature map and the second feature map to obtain feature fusion data;
inputting the feature fusion data to a plurality of detectors for detection.
3. The method for detecting a contamination defect of a housing according to claim 2, wherein a VGG16 network is used to perform feature extraction on the second template image data and the first sampling image data, and the first feature map and the second feature map are differenced and fused to obtain a differential FPN.
4. The housing contamination defect detection method according to claim 3, wherein a first VGG16 network used when performing feature extraction on the second template image data and a second VGG16 network used when performing feature extraction on the first sampled image data are shared by weight.
5. The method for detecting a contamination defect of a housing according to claim 3, wherein differencing and fusing the first feature map and the second feature map to obtain feature fusion data comprises:
differentiating the first characteristic diagram and the second characteristic diagram to obtain a plurality of differential diagrams;
after the plurality of difference images are subjected to upsampling and summing processing, the difference FPN is obtained;
and obtaining the feature fusion data according to the difference FPN.
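The upsample-and-sum fusion in claim 5 can be sketched as a top-down pass over multi-scale difference maps, FPN-style. This is a hedged sketch: nearest-neighbour 2x upsampling, three scales, and the constant-valued toy maps are all assumptions, not details fixed by the patent.

```python
import numpy as np

def upsample2x(fmap):
    # Nearest-neighbour 2x upsampling of an (H, W) feature map.
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def differential_fpn(diff_maps):
    # Fuse coarse-to-fine difference maps top-down: upsample the
    # coarser fused level and add it to the next finer difference map,
    # yielding one fused map per scale (a stand-in for the difference FPN).
    fused = [diff_maps[-1]]                 # start from the coarsest map
    for finer in reversed(diff_maps[:-1]):
        fused.append(finer + upsample2x(fused[-1]))
    return list(reversed(fused))            # finest-first, like FPN levels

# three difference maps at full, 1/2 and 1/4 resolution (hypothetical sizes)
maps = [np.ones((8, 8)), np.ones((4, 4)) * 2, np.ones((2, 2)) * 4]
levels = differential_fpn(maps)
print([l.shape for l in levels])   # [(8, 8), (4, 4), (2, 2)]
print(levels[0][0, 0])             # 1 + 2 + 4 = 7.0
```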
6. The housing contamination defect detection method according to claim 2, wherein inputting the feature fusion data into the plurality of detectors for detection comprises:
inputting the feature fusion data into the RPN of each detector to obtain a plurality of candidate boxes;
inputting the candidate boxes into a prediction network of each detector to obtain a plurality of prediction results;
and screening the plurality of prediction results to determine the housing contamination defect condition.
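The screening step in claim 6 is not specified further; one plausible reading is a confidence threshold followed by non-maximum suppression across the detectors' outputs. The sketch below assumes that interpretation, with hypothetical boxes, scores, and thresholds.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def screen_predictions(preds, score_thr=0.5, iou_thr=0.5):
    # Merge predictions from several detectors: drop low-confidence
    # boxes, then greedy NMS so overlapping duplicates collapse to
    # the highest-scoring one.
    preds = sorted((p for p in preds if p["score"] >= score_thr),
                   key=lambda p: p["score"], reverse=True)
    kept = []
    for p in preds:
        if all(iou(p["box"], k["box"]) < iou_thr for k in kept):
            kept.append(p)
    return kept

preds = [
    {"box": (10, 10, 50, 50), "score": 0.9},  # detector 1
    {"box": (12, 11, 51, 49), "score": 0.8},  # detector 2, same stain
    {"box": (80, 80, 90, 90), "score": 0.3},  # below threshold
]
final = screen_predictions(preds)
print(len(final))          # 1
print(final[0]["score"])   # 0.9
```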
7. The method according to claim 6, wherein inputting the plurality of candidate boxes into the prediction network of each detector comprises:
inputting the candidate boxes into a first pooling layer to obtain candidate boxes of the same dimensionality;
inputting the candidate boxes of the same dimensionality into the fully connected branch corresponding to the first pooling layer for classification and regression to obtain first category information and first identification box information;
inputting the first identification box information into a second pooling layer, then classifying and regressing through the fully connected branch corresponding to the second pooling layer to obtain second category information and second identification box information;
inputting the second identification box information into a third pooling layer, then classifying and regressing through the fully connected branch corresponding to the third pooling layer to obtain third category information and third identification box information;
and taking the third category information and the third identification box information as the prediction result.
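Structurally, claim 7 chains three pool-then-predict stages, each refining the previous stage's boxes (as in cascade detectors). The control flow can be sketched as below; the box-tightening regressor and the single "stain" class are hypothetical placeholders for the real pooled features and fully connected heads.

```python
def refine_stage(boxes, shrink):
    # One cascade stage: a stand-in for "pooling layer -> fully
    # connected branch -> classification + regression". Here the
    # "regression" simply tightens each box by `shrink` pixels per side.
    refined = [(x1 + shrink, y1 + shrink, x2 - shrink, y2 - shrink)
               for (x1, y1, x2, y2) in boxes]
    classes = ["stain"] * len(refined)  # hypothetical single-class output
    return classes, refined

boxes = [(0, 0, 100, 100)]
for stage in range(3):          # three chained stages, as in claim 7
    classes, boxes = refine_stage(boxes, shrink=5)
print(classes, boxes)           # ['stain'] [(15, 15, 85, 85)]
```

Only the third stage's category and box information is kept as the prediction result, matching the final step of the claim.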
8. The housing contamination defect detection method according to any one of claims 1 to 7, wherein performing the perspective transformation on the standard template image data comprises:
calculating a perspective transformation matrix based on a deep learning algorithm model, and performing the perspective transformation on the standard template image data according to the perspective transformation matrix.
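Once a 3x3 perspective (homography) matrix is available, applying it follows the standard homogeneous-coordinate recipe, independent of how the matrix was estimated. A minimal point-warping sketch, with a hypothetical pure-scaling matrix standing in for the learned one:

```python
import numpy as np

def warp_points(points, H):
    # Apply a 3x3 perspective transformation matrix to Nx2 points.
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]                 # de-homogenise

# hypothetical matrix: uniform 2x scaling with no perspective distortion
H = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
print(warp_points(corners, H))  # corners scaled: [[0,0],[20,0],[20,10],[0,10]]
```

In practice a library routine such as OpenCV's `warpPerspective` would resample the whole template image with the same matrix; the point form above just shows the underlying math.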
9. The housing contamination defect detection method according to claim 1, wherein respectively preprocessing the first template image data and the original sampling image data comprises:
sequentially performing graying, histogram equalization, median filtering and Kirsch edge enhancement on the first template image data and the original sampling image data.
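Of the four preprocessing steps in claim 9, Kirsch edge enhancement is the least standard, so a sketch may help: convolve with the eight compass kernels and keep the maximum response per pixel. The 5/-3 kernel values and the edge-replicated border are one common formulation, assumed here since the patent does not spell out the masks.

```python
import numpy as np

def kirsch_kernels():
    # Build the 8 Kirsch compass kernels by rotating the 5/-3 ring
    # around the 3x3 border (common formulation, assumed here).
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    values = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=float)
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring, np.roll(values, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_edges(gray):
    # Maximum compass response per pixel, with edge-replicated padding.
    padded = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    best = np.full((h, w), -np.inf)
    for k in kirsch_kernels():
        resp = np.zeros((h, w))
        for dr in range(3):
            for dc in range(3):
                resp += k[dr, dc] * padded[dr:dr + h, dc:dc + w]
        best = np.maximum(best, resp)
    return best

# vertical step edge: the response peaks along the intensity jump
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = kirsch_edges(img)
print(edges[2, 2] > edges[2, 0])  # True: edge column responds more strongly
```

Graying, histogram equalization, and median filtering are standard and would typically come from an image library; Kirsch enhancement then sharpens the defect boundaries before detection.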
10. A computer-readable storage medium, having stored thereon a housing contamination defect detection program that, when executed by a processor, implements the housing contamination defect detection method according to any one of claims 1 to 9.
11. A computer device comprising a memory, a processor, and a housing contamination defect detection program stored in the memory and executable on the processor, wherein the processor implements the housing contamination defect detection method according to any one of claims 1 to 9 when executing the housing contamination defect detection program.
12. An apparatus for detecting a contamination defect of a housing, comprising:
the acquisition module is used for acquiring original sampling image data and standard template image data;
the transformation module is used for carrying out perspective transformation on the standard template image data to obtain first template image data;
the preprocessing module is used for respectively preprocessing the first template image data and the original sampling image data to obtain second template image data and first sampling image data;
and the detection module is used for inputting the second template image data and the first sampling image data into a multi-scale differential RCNN model for detection, so as to determine the housing contamination defect condition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210351179.XA CN114926675A (en) | 2022-04-02 | 2022-04-02 | Method and device for detecting shell stain defect, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210351179.XA CN114926675A (en) | 2022-04-02 | 2022-04-02 | Method and device for detecting shell stain defect, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114926675A true CN114926675A (en) | 2022-08-19 |
Family
ID=82804468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210351179.XA Pending CN114926675A (en) | 2022-04-02 | 2022-04-02 | Method and device for detecting shell stain defect, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926675A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117058141A (en) * | 2023-10-11 | 2023-11-14 | 福建钜鸿百纳科技有限公司 | Glass edging defect detection method and terminal |
CN117058141B (en) * | 2023-10-11 | 2024-03-01 | 福建钜鸿百纳科技有限公司 | Glass edging defect detection method and terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7569479B2 (en) | DEFECT DETECTION METHOD, APPARATUS AND SYSTEM | |
CN111325713A (en) | Wood defect detection method, system and storage medium based on neural network | |
CN113592845A (en) | Defect detection method and device for battery coating and storage medium | |
CN111257341B (en) | Underwater building crack detection method based on multi-scale features and stacked full convolution network | |
CN110060237A (en) | A kind of fault detection method, device, equipment and system | |
CN109671071B (en) | Underground pipeline defect positioning and grade judging method based on deep learning | |
Yu et al. | Detecting gear surface defects using background‐weakening method and convolutional neural network | |
AU2020272936B2 (en) | Methods and systems for crack detection using a fully convolutional network | |
CN116665095B (en) | Method and system for detecting motion ship, storage medium and electronic equipment | |
CN115775236A (en) | Surface tiny defect visual detection method and system based on multi-scale feature fusion | |
CN115829995A (en) | Cloth flaw detection method and system based on pixel-level multi-scale feature fusion | |
CN115457415A (en) | Target detection method and device based on YOLO-X model, electronic equipment and storage medium | |
CN113313678A (en) | Automatic sperm morphology analysis method based on multi-scale feature fusion | |
CN115829942A (en) | Electronic circuit defect detection method based on non-negative constraint sparse self-encoder | |
CN114926675A (en) | Method and device for detecting shell stain defect, computer equipment and storage medium | |
CN114170168A (en) | Display module defect detection method, system and computer readable storage medium | |
CN117830210A (en) | Defect detection method, device, electronic equipment and storage medium | |
CN117078608A (en) | Double-mask guide-based high-reflection leather surface defect detection method | |
CN116596866A (en) | Defect detection method based on high-resolution image and storage medium | |
CN116152191A (en) | Display screen crack defect detection method, device and equipment based on deep learning | |
CN113283429B (en) | Liquid level meter reading method based on deep convolutional neural network | |
CN115330705A (en) | Skin paint surface defect detection method based on adaptive weighting template NCC | |
CN118096649B (en) | Method, equipment and storage medium for identifying apparent defects of steel bridge weld joints | |
CN116993653B (en) | Camera lens defect detection method, device, equipment, storage medium and product | |
CN118397072B (en) | PVC pipe size detection method and device based on high-resolution semantic segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||