CN113298798A - Main journal defect detection method based on feature fusion - Google Patents
- Publication number
- CN113298798A (application CN202110646995.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- features
- main journal
- gray
- main
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
The invention provides a main journal defect detection method based on feature fusion, which comprises the following steps: S1: acquiring an original image of the main journal; S2: selecting a region of interest (ROI) of the main journal image to obtain a main journal preprocessed image; S3: converting the preprocessed image into a gray scale image; S4: carrying out image enhancement on the gray scale image; S5: extracting image features from the enhanced image; S6: normalizing the image features; S7: fusing the feature vectors of the image features; S8: performing Relief feature selection; S9: building an SVM model, training it and optimizing its parameters with a whale optimization algorithm; S10: detecting and classifying defects with the trained SVM model. The main journal defect detection method based on feature fusion meets the requirements of high defect classification accuracy and good real-time detection performance.
Description
Technical Field
The invention relates to the technical field of machine vision detection, in particular to a main journal defect detection method based on feature fusion.
Background
At present, ocean development and utilization are increasing day by day, and large ships frequently travel at sea. The marine crankshaft, as a core component of the marine engine, plays an important role in the sailing safety of large ships. The main journal, an important part of the marine crankshaft, requires a very high surface quality. The surface defects of the main journal are mainly divided into scratches, pits and spots, caused by linear friction, mold damage and external erosion, respectively. For surface defect detection of the main journal, the methods commonly used in China at present are as follows:
(1) The surface defects of the main journal are confirmed by human eyes, judging whether the product meets production requirements.
(2) The defects of the main journal are detected by image processing, using template matching.
Detection by inspectors depends on the skill of the inspectors, and is inefficient and insufficiently objective for main journal inspection, so product quality is difficult to guarantee.
Template matching wastes detection time, and the defects on the main journal are random in nature, which affects the detection precision of the method.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a main journal defect detection method based on feature fusion, which can meet the requirements of high defect classification accuracy and good real-time detection performance.
In order to achieve the above object, the present invention provides a method for detecting defects of a main journal based on feature fusion, comprising the steps of:
s1: acquiring an original image of the main journal;
s2: selecting an ROI (region of interest) region of the main journal image to obtain a main journal preprocessing image;
s3: converting the collected main journal preprocessing image into a gray scale image;
s4: carrying out image enhancement on the gray level image to obtain an enhanced image;
s5: extracting image features from the enhancement map, wherein the image features comprise gray scale features, geometric features and texture features; carrying out PCA principal component dimension reduction on the texture features;
s6: normalizing the image features;
s7: fusing feature vectors of the image features;
s8: evaluating the classification capability of the image features by adopting a Relief algorithm, and selecting the top 80% of strong correlation features according to a preset threshold;
s9: building an SVM model, training the SVM model and optimizing parameters of the SVM model by utilizing a whale optimization algorithm;
s10: and carrying out image processing on the newly acquired original images of the main journal, extracting the image characteristics, and detecting and classifying defects by using the trained SVM model.
Preferably, in the step S2, the ROI selection step uses Canny edge detection to determine the edge of the main journal image according to the image histogram, and segments the main journal surface region image from the original main journal image.
Preferably, in the step S3, the acquired main journal image is converted into a gray-scale map by using an image adaptive gray-scale optimization algorithm.
Preferably, in the step S4, the image is enhanced by using a Retinex algorithm.
Preferably, in the step S5, the geometric features include degree of dispersion, area and centroid; the gray features comprise an average gray value and a gray variance; a Gabor filter is employed to extract the texture features.
Preferably, in the step S6, the image features are normalized by using a Min-Max standard.
Due to the adoption of the technical scheme, the invention has the following beneficial effects:
the method utilizes Canny edge detection to reduce an image calculation area, utilizes an image self-adaptive gray optimization algorithm to complete graying of the image, and utilizes a whale optimization algorithm to calculate the weight of RGB three channels, so that more characteristics of the image are kept as far as possible in the graying process. Because the image has the phenomenon of uneven illumination in the image acquisition process, the Retinex algorithm is used for enhancing the image. In the feature extraction stage, in order to mine more features from the image, the feature parameters with accurate defects are extracted from three aspects of geometric features, gray features and textural features, and because the textural features use Gabor filters, the feature dimension is too high, a PCA principal component dimension reduction method is adopted for the textural features to reduce the feature dimension. Although the extracted features can accurately describe the features of the defects, the difference of the values of different features is large, so all feature vectors are connected by using a Min-Max normalization process. But redundant information still exists in the feature at this time, so the dimensionality of the feature is further reduced by adopting a Relief algorithm. And finally, optimizing parameters of the SVM model by using a whale algorithm, and improving the performance of the classifier.
The method collects main journal surface images through machine vision and establishes an optimized SVM model through a series of preprocessing, feature extraction and feature dimension reduction steps. In research on surface quality defect classification, many feature extraction methods exist; combining feature extraction means can improve the classification performance of a classifier, but it also increases the feature dimension. To prevent the curse of dimensionality and overfitting from harming classifier accuracy, a method is provided in which the Relief algorithm is used after feature normalization to extract a more effective feature subset. This eliminates a large number of low-correlation features, so classifier performance is improved.
Drawings
Fig. 1 is a flowchart of a method for detecting a defect of a main journal based on feature fusion according to an embodiment of the present invention.
Detailed Description
The following description of a preferred embodiment of the present invention, with reference to fig. 1 of the accompanying drawings, will provide a better understanding of the function and features of the invention.
Referring to fig. 1, a method for detecting a defect of a main journal based on feature fusion according to an embodiment of the present invention includes:
s1: acquiring an original image I (x, y) of the main journal, wherein the size of the image is M x N;
s2: selecting a region of interest (ROI) of the main journal image to obtain a main journal preprocessed image;
In the step S2, the ROI selection uses Canny edge detection to determine the edge of the main journal image according to the image histogram, and segments the main journal surface region image from the original main journal image.
In order to adapt to defect detection of main journals of various sizes, the field of view (FOV) needs to be as large as possible, but this greatly increases the calculation amount of subsequent image processing. Therefore, the main journal surface region needs to be segmented from the original image, namely the ROI is selected. Here, the Canny edge detection method is used to determine the edge of the main journal image according to the image histogram; the resulting ROI image size is m × n.
S3: converting the collected main journal preprocessing image into a gray scale image;
in the step S3, the acquired main journal image is converted into a gray scale image by using an image adaptive gray scale optimization algorithm.
The acquired main journal image is converted into a gray level image; in order to keep the characteristics of the original image as far as possible during gray level conversion, an image self-adaptive gray level optimization algorithm is adopted. The acquired image is split into its R, G and B channels, the weight of each channel is determined, and gray level conversion is performed. The conversion formula is:
Gray(x,y) = W_B*B(x,y) + W_G*G(x,y) + W_R*R(x,y)
where
W_R = count_R / (count_R + count_G + count_B);
W_G = count_G / (count_R + count_G + count_B);
W_B = count_B / (count_R + count_G + count_B);
W_R, W_G and W_B are the weights of the R, G and B components, and count_R, count_G and count_B are the numbers of bins in the corresponding R, G and B channel histograms whose counts exceed a threshold. A mean square error MSE = MSE_R + MSE_G + MSE_B is established, and the minimum fitness value of the MSE is calculated by the whale optimization algorithm.
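A sketch of this histogram-weighted graying in Python (the histogram threshold is an illustrative assumption, and the whale-optimization refinement of the weights via the MSE fitness is omitted here):

```python
import numpy as np

def adaptive_gray(img_bgr, threshold=0):
    """Histogram-weighted graying.

    Each channel weight is the number of its histogram bins whose count
    exceeds `threshold` (an assumed value), normalized so the three
    weights sum to 1.
    """
    counts = []
    for ch in range(3):  # B, G, R channel order
        hist, _ = np.histogram(img_bgr[..., ch], bins=256, range=(0, 256))
        counts.append(np.count_nonzero(hist > threshold))
    w = np.asarray(counts, dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(3, 1 / 3)  # guard empty case
    # weighted sum over the channel axis gives the gray image
    return (img_bgr.astype(float) @ w).astype(np.uint8)
```

The patent then treats the three weights as variables and minimizes the MSE fitness with the whale optimization algorithm; the function above only shows the histogram-based initialization.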
S4: carrying out image enhancement on the gray level image to obtain an enhanced image;
in step S4, the image is enhanced by using a Retinex algorithm.
To correct the uneven illumination of the converted image, a Retinex algorithm is adopted to enhance the image. The image model is:
Gray(x,y) = R(x,y) * L(x,y)
where Gray(x,y) is the original image, R(x,y) is the reflectance and L(x,y) is the illumination. The Retinex algorithm estimates the illumination L from the original image so as to recover R, thereby eliminating the uneven illumination of the image. In practice the image is transferred to the logarithmic domain, i.e. Gray' = log(Gray), R' = log(R), L' = log(L), so that R'(x,y) = Gray'(x,y) - L'(x,y), where L(x,y) is an estimate of the illumination.
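A single-scale Retinex sketch consistent with the log-domain model above (the Gaussian surround used to estimate the illumination, and its sigma, are common implementation choices rather than values given in the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(gray, sigma=30.0):
    """Estimate illumination with a Gaussian surround and subtract it
    in the log domain: R' = log(Gray) - log(L)."""
    img = gray.astype(float) + 1.0                 # avoid log(0)
    illumination = gaussian_filter(img, sigma=sigma)
    r = np.log(img) - np.log(illumination)         # reflectance, log domain
    # stretch the reflectance back to 0..255 for display
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    return (r * 255).round().astype(np.uint8)
```

Multi-scale Retinex (averaging several sigmas) is a common extension when a single surround scale leaves halo artifacts.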
S5: extracting image features from the enhanced image, wherein the image features comprise gray scale features, geometric features and texture features; carrying out PCA principal component dimension reduction on the texture features;
in the step S5, the geometric features include degree of dispersion, area, and centroid; the gray features comprise average gray values and gray variances; a Gabor filter is used to extract texture features.
According to the characteristics of the surface defects of the main journal, features are extracted in three aspects: geometric, gray and texture. Because the sizes of the defect connected domains differ, the degree of dispersion, the area and the centroid are used as the initial geometric feature parameters. In addition, the defect regions differ in their gray level histograms, so the average gray value and the gray variance are used as the initial gray feature parameters. Finally, Gabor filters are adopted to extract texture features, with 5 frequency scales and 8 directions selected for the main journal surface image. This selection improves the chance of capturing the dominant spectral content and extracts finer texture features, but such high-dimensional features degrade classifier performance, so PCA principal component dimension reduction is adopted to resolve the excessive feature redundancy.
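A hand-rolled sketch of the Gabor texture stage (kernel size, sigma and the frequency ladder are illustrative assumptions; the mean and variance of each filter response serve as the texture descriptor):

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, freq, sigma=4.0, size=15):
    """Real part of a Gabor kernel, built with plain numpy."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * freq * xr)

def gabor_features(gray, n_scales=5, n_orientations=8):
    """Mean/variance of responses over scales x orientations
    (5 x 8 x 2 = 80 dimensions with the patent's settings)."""
    feats = []
    for s in range(n_scales):
        freq = 0.1 * (s + 1)                       # illustrative frequency ladder
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            resp = convolve(gray.astype(float), gabor_kernel(theta, freq))
            feats.extend([resp.mean(), resp.var()])
    return np.asarray(feats)

# Reduce the 80-D texture vector batch with PCA (target dimension assumed):
#   from sklearn.decomposition import PCA
#   X_low = PCA(n_components=10).fit_transform(X)   # X: one row per sample
```

With the patent's 5 scales and 8 directions this yields an 80-dimensional texture vector, which is then compressed by PCA as described above.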
S6: normalizing the image features;
in step S6, the image features are normalized by the Min-Max standard.
Because the values of different features differ greatly, and to avoid these large numerical differences affecting defect image detection, the values of the different features are mapped to [0, 1] using Min-Max normalization:
v = (u - u_min) / (u_max - u_min)
where u is the original value, u_min is its minimum and u_max is its maximum.
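This normalization can be sketched column-wise over a feature matrix (the epsilon guard against constant columns is an added safeguard, not part of the formula above):

```python
import numpy as np

def min_max(u):
    """Map each feature column of u to [0, 1]: v = (u - min) / (max - min)."""
    u = np.asarray(u, dtype=float)
    umin, umax = u.min(axis=0), u.max(axis=0)
    return (u - umin) / (umax - umin + 1e-12)  # epsilon guards constant columns
```

After normalization, the geometric, gray and texture vectors of step S7 are fused by simple concatenation, e.g. `np.concatenate([a, b, c])`.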
S7: fusing feature vectors of image features;
let the feature vectors of the geometric, gray and texture features of the defect image be [ a ] respectively1,a2,a3],[b1,b2,b3],[c1,c2,....,cn]And fusing the characteristic vectors to obtain a fused characteristic vector T:
T=[a1,a2,a3,b1,b2,b3,c1,c2,....,cn]
s8: evaluating the classification capability of the image features by adopting a Relief algorithm, and selecting the top 80% of strong correlation features according to a preset threshold;
typically, weakly correlated features negatively impact the performance of the classifier because the features have different correlations with the classification target. In order to eliminate the influence of the weak correlation characteristics, the classification capability of the characteristics is evaluated by adopting a Relief algorithm, and the top 80% of the strong correlation characteristics are selected according to a preset threshold value. The weight of each feature in the feature set is calculated from the difference between the distance between each sample and the nearest similar sample, and the specific calculation is as follows:
Wi+1=Wi-diff((x,H(x))+diff(x,M(x))
where x is the sample, find its relative class from the nearest neighbor of the same class and each sampling instance, denoted as H (x) and M (x), respectively.
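A basic two-class Relief sketch matching the update rule above (L1 feature differences and random sampling are standard implementation choices; only the 80% cut comes from the text):

```python
import numpy as np

def relief_weights(X, y, n_iters=None, seed=0):
    """Basic Relief weights for a two-class problem.

    For each sampled instance x, find the nearest hit H(x) (same class)
    and nearest miss M(x) (other class); per feature, the weight is
    decreased by |x - H(x)| and increased by |x - M(x)|.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    n_iters = n_iters or n
    w = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)
        x, cls = X[i], y[i]
        dists = np.abs(X - x).sum(axis=1)        # L1 distance to every sample
        dists[i] = np.inf                        # exclude the sample itself
        same, other = (y == cls), (y != cls)
        hit = X[np.where(same)[0][np.argmin(dists[same])]]
        miss = X[np.where(other)[0][np.argmin(dists[other])]]
        w += np.abs(x - miss) - np.abs(x - hit)  # W <- W - diff(x,H) + diff(x,M)
    return w / n_iters

# keep the top 80% strongest features, per the preset threshold:
#   keep = np.argsort(w)[::-1][: int(0.8 * len(w))]
```

Features whose class-discriminating power is low receive small or negative weights, so the 80% cut discards them before the SVM stage.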
S9: building an SVM model, training the SVM model and optimizing parameters of the SVM model by utilizing a whale optimization algorithm;
and constructing an SVM model, and mapping the sample data in the n-dimensional vector space to a high-dimensional feature space. And taking the classified main journal surface defects as image training samples to obtain a multi-dimensional characteristic data set, calculating key parameters (penalty factor C and nuclear parameter g) in the SVM model by using a Whale Optimization Algorithm (WOA), and building the SVM model by using the linear classification.
S10: and carrying out image processing on the newly acquired original images of the main journal, extracting image characteristics, and detecting and classifying defects by using the trained SVM model.
The embodiment of the invention discloses a method for detecting defects of a main journal based on feature fusion, with the following characteristics:
First, graying is carried out through the adaptive gray optimization algorithm, retaining as much image detail as possible.
Second, the defect image is described through geometric, gray and texture features; because the values of different features differ greatly, Min-Max normalization adjusts them to [0, 1], and finally the Relief method reduces feature redundancy.
Third, the whale optimization algorithm tunes the classifier's key parameters, the penalty factor C and the kernel parameter g, improving classifier accuracy.
While the present invention has been described in detail and with reference to the embodiments thereof as illustrated in the accompanying drawings, it will be apparent to one skilled in the art that various changes and modifications can be made therein. Therefore, certain details of the embodiments are not to be interpreted as limiting, and the scope of the invention is to be determined by the appended claims.
Claims (6)
1. A main journal defect detection method based on feature fusion comprises the following steps:
s1: acquiring an original image of the main journal;
s2: selecting an ROI (region of interest) region of the main journal image to obtain a main journal preprocessing image;
s3: converting the collected main journal preprocessing image into a gray scale image;
s4: carrying out image enhancement on the gray level image to obtain an enhanced image;
s5: extracting image features from the enhancement map, wherein the image features comprise gray scale features, geometric features and texture features; carrying out PCA principal component dimension reduction on the texture features;
s6: normalizing the image features;
s7: fusing feature vectors of the image features;
s8: evaluating the classification capability of the image features by adopting a Relief algorithm, and selecting the top 80% of strong correlation features according to a preset threshold;
s9: building an SVM model, training the SVM model and optimizing parameters of the SVM model by utilizing a whale optimization algorithm;
s10: and carrying out image processing on the newly acquired original images of the main journal, extracting the image characteristics, and detecting and classifying defects by using the trained SVM model.
2. The method for detecting defects of main journals based on feature fusion of claim 1, wherein in the step of S2, the step of selecting ROI regions segments the images of main journal surface regions from the original images of main journals by determining the edges of the images of main journals according to image histograms by using Canny edge detection.
3. The method for detecting defects of main journals based on feature fusion as claimed in claim 1, wherein in step S3, an image adaptive gray scale optimization algorithm is used to convert the acquired main journal images into gray scale images.
4. The method for detecting defects of main journals based on feature fusion as claimed in claim 1, wherein in step S4, the image is enhanced by Retinex algorithm.
5. The method for detecting defects of main journals based on feature fusion of claim 1, wherein in said step of S5, said geometric features comprise degree of dispersion, area and centroid; the gray features comprise an average gray value and a gray variance; a Gabor filter is employed to extract the texture features.
6. The method for detecting defects of main journals based on feature fusion of claim 1, wherein in said step of S6, said image features are normalized by Min-Max standard.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110646995.9A CN113298798A (en) | 2021-06-10 | 2021-06-10 | Main journal defect detection method based on feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110646995.9A CN113298798A (en) | 2021-06-10 | 2021-06-10 | Main journal defect detection method based on feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113298798A true CN113298798A (en) | 2021-08-24 |
Family
ID=77327782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110646995.9A Pending CN113298798A (en) | 2021-06-10 | 2021-06-10 | Main journal defect detection method based on feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298798A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593670A (en) * | 2013-10-14 | 2014-02-19 | 浙江工业大学 | Copper sheet and strip surface defect detection method based on-line sequential extreme learning machine |
CN111109417A (en) * | 2019-12-23 | 2020-05-08 | 重庆大学 | Route is from planning sugar-painter based on image information |
CN111145165A (en) * | 2019-12-30 | 2020-05-12 | 北京工业大学 | Rubber seal ring surface defect detection method based on machine vision |
CN112036296A (en) * | 2020-08-28 | 2020-12-04 | 合肥工业大学 | Motor bearing fault diagnosis method based on generalized S transformation and WOA-SVM |
Non-Patent Citations (2)
Title |
---|
CHAO YUAN: "Research on Key Technologies for Online Detection of Semiconductor Chip Surface Defects Based on Machine Vision", China Doctoral Dissertations Full-text Database, Information Science and Technology *
CAO XIAOJIE: "Research on Defect Recognition in Ultrasonic Nondestructive Testing Images Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115147427A (en) * | 2022-09-06 | 2022-10-04 | 苏州鼎纳自动化技术有限公司 | Visual detection method and system for resistance defects on PCB and computing device |
CN116109638A (en) * | 2023-04-13 | 2023-05-12 | 中铁四局集团有限公司 | Rail break detection method and system |
CN116109638B (en) * | 2023-04-13 | 2023-07-04 | 中铁四局集团有限公司 | Rail break detection method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210824 |