CN113298798A - Main journal defect detection method based on feature fusion - Google Patents


Info

Publication number: CN113298798A
Authority: CN (China)
Prior art keywords: image, features, main journal, gray, main
Legal status: Pending (assumed; Google has not performed a legal analysis)
Application number: CN202110646995.9A
Other languages: Chinese (zh)
Inventors: 朱振坤 (Zhu Zhenkun), 孙渊 (Sun Yuan)
Current Assignee / Original Assignee: Shanghai Dianji University
Application filed by Shanghai Dianji University; priority and filing date: 2021-06-10
Publication of CN113298798A

Links

Images

Classifications

    • G06T 7/0004: Industrial image inspection
    • G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2411: Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06N 3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06T 5/92: Dynamic range modification of images based on global image properties
    • G06T 7/13: Edge detection
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/20104: Interactive definition of region of interest [ROI]


Abstract

The invention provides a main journal defect detection method based on feature fusion, which comprises the following steps: S1: acquiring an original image of the main journal; S2: selecting an ROI (region of interest) of the main journal image to obtain a preprocessed main journal image; S3: converting the preprocessed image into a gray-scale image; S4: enhancing the gray-scale image; S5: extracting image features from the enhanced image; S6: normalizing the image features; S7: fusing the feature vectors of the image features; S8: performing Relief feature selection; S9: building an SVM model, training it, and optimizing its parameters with the whale optimization algorithm; S10: detecting and classifying defects with the trained SVM model. The method meets the requirements of high defect classification accuracy and good real-time detection performance.

Description

Main journal defect detection method based on feature fusion
Technical Field
The invention relates to the technical field of machine vision detection, in particular to a main journal defect detection method based on feature fusion.
Background
At present, ocean development and utilization are growing rapidly, and large ships frequently travel at sea. The marine crankshaft, as a core component of the marine engine, plays an important role in the sailing safety of large ships. The main journal, an important part of the marine crankshaft, requires very high surface quality. Main journal surface defects fall mainly into scratches, pits and spots, caused by linear friction, mold damage and external erosion respectively. For main journal surface defect detection, the methods commonly used in China at present are:
(1) Confirming surface defects by human eye, so as to judge whether the product meets production requirements.
(2) Detecting main journal defects by image processing, using template matching.
Manual inspection depends on the skill of the inspectors, is inefficient and insufficiently objective for main journal detection, so product quality is difficult to guarantee.
Template matching wastes detection time, and the randomness of defects on the main journal limits the detection precision of that method.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides the main journal defect detection method based on feature fusion, which can meet the requirements of high defect classification accuracy and good detection real-time performance.
In order to achieve the above object, the present invention provides a method for detecting defects of a main journal based on feature fusion, comprising the steps of:
s1: acquiring an original image of the main journal;
s2: selecting an ROI (region of interest) region of the main journal image to obtain a main journal preprocessing image;
s3: converting the collected main journal preprocessing image into a gray scale image;
s4: carrying out image enhancement on the gray level image to obtain an enhanced image;
s5: extracting image features from the enhancement map, wherein the image features comprise gray scale features, geometric features and texture features; carrying out PCA principal component dimension reduction on the texture features;
s6: normalizing the image features;
s7: fusing feature vectors of the image features;
s8: evaluating the classification capability of the image features by adopting a Relief algorithm, and selecting the top 80% of strong correlation features according to a preset threshold;
s9: building an SVM model, training the SVM model and optimizing parameters of the SVM model by utilizing a whale optimization algorithm;
s10: and carrying out image processing on the newly acquired original images of the main journal, extracting the image characteristics, and detecting and classifying defects by using the trained SVM model.
Preferably, in the step S2, the ROI selection step uses a Canny edge detection method, guided by the image histogram, to determine the edges of the main journal image and segment the journal surface region image from the original main journal image.
Preferably, in the step S3, the acquired main journal image is converted into a gray-scale map by using an image adaptive gray-scale optimization algorithm.
Preferably, in the step S4, the image is enhanced by using a Retinex algorithm.
Preferably, in the step S5, the geometric features include degree of dispersion, area and centroid; the gray features comprise an average gray value and a gray variance; a Gabor filter is employed to extract the texture features.
Preferably, in the step S6, the image features are normalized by using a Min-Max standard.
Due to the adoption of the technical scheme, the invention has the following beneficial effects:
the method utilizes Canny edge detection to reduce an image calculation area, utilizes an image self-adaptive gray optimization algorithm to complete graying of the image, and utilizes a whale optimization algorithm to calculate the weight of RGB three channels, so that more characteristics of the image are kept as far as possible in the graying process. Because the image has the phenomenon of uneven illumination in the image acquisition process, the Retinex algorithm is used for enhancing the image. In the feature extraction stage, in order to mine more features from the image, the feature parameters with accurate defects are extracted from three aspects of geometric features, gray features and textural features, and because the textural features use Gabor filters, the feature dimension is too high, a PCA principal component dimension reduction method is adopted for the textural features to reduce the feature dimension. Although the extracted features can accurately describe the features of the defects, the difference of the values of different features is large, so all feature vectors are connected by using a Min-Max normalization process. But redundant information still exists in the feature at this time, so the dimensionality of the feature is further reduced by adopting a Relief algorithm. And finally, optimizing parameters of the SVM model by using a whale algorithm, and improving the performance of the classifier.
The method collects main journal surface images by machine vision and establishes an optimized SVM model through a series of preprocessing, feature extraction and feature dimension reduction steps. In research on surface-quality defect classification, no single feature extraction method suffices; combining several feature extraction means improves classifier performance, but it also increases the feature dimension. To keep the curse of dimensionality and overfitting from degrading classifier accuracy, the Relief algorithm is applied after feature normalization to extract a more effective feature subset. Eliminating a large number of weakly correlated features improves the performance of the classifier.
Drawings
Fig. 1 is a flowchart of a method for detecting a defect of a main journal based on feature fusion according to an embodiment of the present invention.
Detailed Description
The following description of a preferred embodiment of the present invention, with reference to Fig. 1, provides a better understanding of the functions and features of the invention.
Referring to fig. 1, a method for detecting a defect of a main journal based on feature fusion according to an embodiment of the present invention includes:
s1: acquiring an original image I (x, y) of the main journal, wherein the size of the image is M x N;
s2: selecting an ROI (region of interest) region of the main journal image to obtain a main journal preprocessing image;
in the step S2, the ROI region selection step divides the main journal surface region image from the main journal original image by determining the edge of the main journal image according to the image histogram by using the Canny edge detection method.
To handle defect detection for main journals of various sizes, the field of view (FOV) needs to be as large as possible, but a large FOV also greatly increases the computation of subsequent image processing. Therefore the main journal surface region is segmented from the original image, i.e. the ROI is selected. Here the Canny edge detection method determines the edge of the main journal image according to the image histogram; the resulting image size is m × n.
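A minimal sketch of the edge-based ROI step. It implements only the gradient stage (the full Canny pipeline adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding), and the threshold value and bounding-box ROI are illustrative assumptions, not details from the patent:

```python
import numpy as np

def gradient_roi(gray, thresh=50.0):
    """Locate the journal surface by edge strength. Only the gradient
    stage of Canny is shown; `thresh` is an assumed illustrative value.
    Returns the edge mask and the bounding box (ymin, ymax, xmin, xmax)
    of the edge pixels, used as a simple rectangular ROI."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # central differences, x direction
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # central differences, y direction
    edges = np.hypot(gx, gy) > thresh
    ys, xs = np.nonzero(edges)
    if ys.size == 0:
        return edges, None
    return edges, (ys.min(), ys.max(), xs.min(), xs.max())

# synthetic frame: a bright "journal" region on a dark background
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200
edges, roi = gradient_roi(img)
```

Cropping the image to `roi` before further processing is what reduces the computation area mentioned above.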
S3: converting the collected main journal preprocessing image into a gray scale image;
in the step S3, the acquired main journal image is converted into a gray scale image by using an image adaptive gray scale optimization algorithm.
The acquired main journal image is converted into a gray-scale image; to retain as many characteristics of the original image as possible during conversion, an image-adaptive gray optimization algorithm is adopted. The acquired image is split into its three RGB channels, the weight of each channel is determined, and gray-scale conversion is performed with the following formula:
Gray(x, y) = W_B·B(x, y) + W_G·G(x, y) + W_R·R(x, y)
where
W_R = count_R / (count_R + count_G + count_B);
W_G = count_G / (count_R + count_G + count_B);
W_B = count_B / (count_R + count_G + count_B);
W_R, W_G and W_B are the weights of the three RGB components, and count_R, count_G and count_B are the numbers of entries in the corresponding channel histograms that exceed the threshold. The mean square error MSE = MSE_R + MSE_G + MSE_B is established, and the minimum fitness value of the MSE is computed with the whale optimization algorithm.
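A sketch of the histogram-based channel weighting, assuming "greater than the threshold" means counting histogram bins whose count exceeds a threshold (the threshold value and this reading are assumptions; the patent's further MSE minimization via the whale optimization algorithm is omitted here):

```python
import numpy as np

def adaptive_gray(img, thresh=8):
    """Weighted grayscale conversion: W_c is proportional to count_c,
    the number of bins in channel c's histogram whose count exceeds
    `thresh` (an assumed illustrative value)."""
    counts = []
    for c in range(3):                      # channels taken as R, G, B
        hist, _ = np.histogram(img[..., c], bins=256, range=(0, 256))
        counts.append(np.count_nonzero(hist > thresh))
    w = np.asarray(counts, dtype=float)
    w /= w.sum()                            # W_R + W_G + W_B = 1
    gray = img[..., 0] * w[0] + img[..., 1] * w[1] + img[..., 2] * w[2]
    return gray, w

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
gray, w = adaptive_gray(img)
```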
S4: carrying out image enhancement on the gray level image to obtain an enhanced image;
in step S4, the image is enhanced by using a Retinex algorithm.
Aiming at the condition that the illumination of the transformed image is uneven, a Retinex algorithm is adopted to enhance the image, and an image model is as follows:
Gray(x,y)=R(x,y)*L(x,y)
where Gray(x, y) is the original image, R(x, y) the reflectance and L(x, y) the illumination. The Retinex algorithm aims to estimate the illumination L from the original image so as to recover R, thereby eliminating the uneven illumination of the image. In this process the image is typically transferred to the logarithmic domain, i.e. Gray = log(Gray), R = log(R), L = log(L), so that R(x, y) = Gray(x, y) − L(x, y), where L(x, y) is the estimate of the illumination.
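A minimal single-scale Retinex sketch in the log domain. The illumination estimate here is a plain box blur standing in for the Gaussian surround usually used, and the kernel size is an assumed illustrative value:

```python
import numpy as np

def single_scale_retinex(gray, ksize=15):
    """Log-domain Retinex sketch: estimate the illumination L(x, y) as a
    local mean of the image, then recover reflectance as
    R = log(Gray) - log(L).  `ksize` is an assumed value."""
    log_img = np.log1p(gray.astype(float))   # log1p avoids log(0)
    pad = ksize // 2
    padded = np.pad(log_img, pad, mode="edge")
    h, w = log_img.shape
    illum = np.empty_like(log_img)
    for i in range(h):                       # naive box blur; fine for a sketch
        for j in range(w):
            illum[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return log_img - illum

# a flat scene observed under a left-to-right illumination gradient
scene = np.full((40, 40), 100.0)
illum_gradient = np.linspace(0.2, 1.0, 40)[None, :]
observed = scene * illum_gradient
r = single_scale_retinex(observed)
```

After the subtraction, the large column-to-column brightness sweep caused by the illumination gradient is largely removed from `r`.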
S5: extracting image features from the enhanced image, wherein the image features comprise gray scale features, geometric features and texture features; carrying out PCA principal component dimension reduction on the texture features;
in the step S5, the geometric features include degree of dispersion, area, and centroid; the gray features comprise average gray values and gray variances; a Gabor filter is used to extract texture features.
According to the characteristics of main journal surface defects, features are extracted from the geometric, gray and texture aspects. Because the connected domains of defects differ in size, dispersion, area and centroid are used as the initial geometric feature parameters. The gray levels of defect regions also differ from one another on the gray histogram, so the average gray value and gray variance are used as the initial gray feature parameters. Finally, a Gabor filter bank with 5 frequency scales and 8 orientations is adopted to extract the texture features of the main journal surface image. This choice improves the chance of capturing the full frequency spectrum and extracts finer texture features, but such high-dimensional features degrade classifier performance, so PCA principal component dimension reduction is adopted to curb the excessive feature redundancy.
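The three feature families can be sketched as follows. The Gabor kernel size, sigma and frequency values are assumed illustrative choices (the patent only fixes 5 scales and 8 orientations), and PCA is done directly via SVD:

```python
import numpy as np

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    """Real part of a Gabor filter at spatial frequency `freq` (cycles
    per pixel) and orientation `theta`; size/sigma are assumed values."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# 5 frequency scales x 8 orientations = 40 filters, as in the patent
bank = [gabor_kernel(f, t)
        for f in (0.05, 0.1, 0.2, 0.3, 0.4)
        for t in np.linspace(0, np.pi, 8, endpoint=False)]

def region_features(gray, mask):
    """Initial geometric and gray features of one defect region:
    area, centroid, mean gray value and gray variance."""
    ys, xs = np.nonzero(mask)
    vals = gray[mask]
    return {"area": ys.size,
            "centroid": (ys.mean(), xs.mean()),
            "mean_gray": vals.mean(),
            "gray_var": vals.var()}

def pca_reduce(X, k):
    """Project the rows of X (samples x features) onto the first k
    principal components to shrink the Gabor feature dimension."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

# toy defect region and a toy Gabor-response matrix
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:6] = True
feats = region_features(np.full((10, 10), 7.0), mask)
Z = pca_reduce(np.random.default_rng(0).standard_normal((12, 40)), 5)
```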
S6: normalizing the image features;
in step S6, the image features are normalized by the Min-Max standard.
Because the values of different features differ greatly in scale, and to keep these large numeric differences from affecting defect image detection, the values of each feature are mapped to [0, 1] using Min-Max normalization:
u → v = (u − u_min) / (u_max − u_min)
where u is the original value, u_min its minimum and u_max its maximum.
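The normalization above is one line of numpy:

```python
import numpy as np

def min_max(u):
    """Min-Max normalization: v = (u - u_min) / (u_max - u_min),
    mapping a feature's values into [0, 1]."""
    u = np.asarray(u, dtype=float)
    return (u - u.min()) / (u.max() - u.min())

v = min_max([10.0, 20.0, 30.0, 50.0])
```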
S7: fusing feature vectors of image features;
let the feature vectors of the geometric, gray and texture features of the defect image be [ a ] respectively1,a2,a3],[b1,b2,b3],[c1,c2,....,cn]And fusing the characteristic vectors to obtain a fused characteristic vector T:
T=[a1,a2,a3,b1,b2,b3,c1,c2,....,cn]
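This serial fusion is a simple concatenation (the numeric values below are illustrative placeholders):

```python
import numpy as np

a = np.array([0.4, 0.7, 0.1])        # geometric features [a1, a2, a3]
b = np.array([0.9, 0.3, 0.5])        # gray features [b1, b2, b3]
c = np.array([0.2, 0.8, 0.6, 0.4])   # texture features [c1, ..., cn] after PCA

# serial fusion: concatenate into a single feature vector T
T = np.concatenate([a, b, c])
```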
s8: evaluating the classification capability of the image features by adopting a Relief algorithm, and selecting the top 80% of strong correlation features according to a preset threshold;
typically, weakly correlated features negatively impact the performance of the classifier because the features have different correlations with the classification target. In order to eliminate the influence of the weak correlation characteristics, the classification capability of the characteristics is evaluated by adopting a Relief algorithm, and the top 80% of the strong correlation characteristics are selected according to a preset threshold value. The weight of each feature in the feature set is calculated from the difference between the distance between each sample and the nearest similar sample, and the specific calculation is as follows:
Wi+1=Wi-diff((x,H(x))+diff(x,M(x))
where x is the sample, find its relative class from the nearest neighbor of the same class and each sampling instance, denoted as H (x) and M (x), respectively.
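A Relief sketch for binary labels, assuming Euclidean distance for neighbor search and per-feature absolute differences for the updates (reasonable readings of the description, not details fixed by the patent):

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Relief sketch: for each sampled instance x, find the nearest hit
    H(x) (same class) and nearest miss M(x) (other class), then update
    each feature weight by W += |x - M(x)| - |x - H(x)|, so features
    that separate the classes accumulate large weights."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        x = X[i]
        same = X[y == y[i]]
        other = X[y != y[i]]
        d_same = np.linalg.norm(same - x, axis=1)
        d_same[d_same == 0] = np.inf          # exclude x itself
        hit = same[np.argmin(d_same)]
        miss = other[np.argmin(np.linalg.norm(other - x, axis=1))]
        w += np.abs(x - miss) - np.abs(x - hit)
    return w / n_iter

# toy data: feature 0 tracks the class, feature 1 is pure noise
rng = np.random.default_rng(1)
n = 60
y = np.repeat([0, 1], n // 2)
informative = y + 0.05 * rng.standard_normal(n)
noise = rng.random(n)
X = np.column_stack([informative, noise])
w = relief_weights(X, y)
```

The informative feature ends up with a much larger weight than the noise feature, which is exactly the ranking used to keep the top 80%.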
S9: building an SVM model, training the SVM model and optimizing parameters of the SVM model by utilizing a whale optimization algorithm;
and constructing an SVM model, and mapping the sample data in the n-dimensional vector space to a high-dimensional feature space. And taking the classified main journal surface defects as image training samples to obtain a multi-dimensional characteristic data set, calculating key parameters (penalty factor C and nuclear parameter g) in the SVM model by using a Whale Optimization Algorithm (WOA), and building the SVM model by using the linear classification.
S10: and carrying out image processing on the newly acquired original images of the main journal, extracting image characteristics, and detecting and classifying defects by using the trained SVM model.
The embodiment of the invention discloses a main journal defect detection method based on feature fusion with the following advantages:
First, graying is carried out with an adaptive gray optimization algorithm, retaining as much image detail as possible.
Second, defect features are extracted in terms of geometric, gray and texture characteristics; because the values of different features differ greatly in scale, Min-Max normalization maps them to [0, 1], and the Relief method then reduces feature redundancy.
Third, the classifier's key parameters, penalty factor C and kernel parameter g, are optimized with the whale optimization algorithm, improving classifier accuracy.
While the present invention has been described in detail and with reference to the embodiments thereof as illustrated in the accompanying drawings, it will be apparent to one skilled in the art that various changes and modifications can be made therein. Therefore, certain details of the embodiments are not to be interpreted as limiting, and the scope of the invention is to be determined by the appended claims.

Claims (6)

1. A main journal defect detection method based on feature fusion comprises the following steps:
s1: acquiring an original image of the main journal;
s2: selecting an ROI (region of interest) region of the main journal image to obtain a main journal preprocessing image;
s3: converting the collected main journal preprocessing image into a gray scale image;
s4: carrying out image enhancement on the gray level image to obtain an enhanced image;
s5: extracting image features from the enhancement map, wherein the image features comprise gray scale features, geometric features and texture features; carrying out PCA principal component dimension reduction on the texture features;
s6: normalizing the image features;
s7: fusing feature vectors of the image features;
s8: evaluating the classification capability of the image features by adopting a Relief algorithm, and selecting the top 80% of strong correlation features according to a preset threshold;
s9: building an SVM model, training the SVM model and optimizing parameters of the SVM model by utilizing a whale optimization algorithm;
s10: and carrying out image processing on the newly acquired original images of the main journal, extracting the image characteristics, and detecting and classifying defects by using the trained SVM model.
2. The main journal defect detection method based on feature fusion of claim 1, wherein in the step of S2, the ROI selection step uses Canny edge detection, guided by the image histogram, to determine the edges of the main journal image and segment the journal surface region images from the original main journal images.
3. The method for detecting defects of main journals based on feature fusion as claimed in claim 1, wherein in step S3, an image adaptive gray scale optimization algorithm is used to convert the acquired main journal images into gray scale images.
4. The method for detecting defects of main journals based on feature fusion as claimed in claim 1, wherein in step S4, the image is enhanced by Retinex algorithm.
5. The method for detecting defects of main journals based on feature fusion of claim 1, wherein in said step of S5, said geometric features comprise degree of dispersion, area and centroid; the gray features comprise an average gray value and a gray variance; a Gabor filter is employed to extract the texture features.
6. The method for detecting defects of main journals based on feature fusion of claim 1, wherein in said step of S6, said image features are normalized by Min-Max standard.
Application CN202110646995.9A (priority date 2021-06-10, filing date 2021-06-10): Main journal defect detection method based on feature fusion, published as CN113298798A, status Pending.

Priority Applications (1)

CN202110646995.9A, priority/filing date 2021-06-10: Main journal defect detection method based on feature fusion

Publications (1)

CN113298798A, published 2021-08-24

Family

ID: 77327782
Family application: CN202110646995.9A (filed 2021-06-10), pending
Country status: CN, CN113298798A


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593670A (en) * 2013-10-14 2014-02-19 浙江工业大学 Copper sheet and strip surface defect detection method based on-line sequential extreme learning machine
CN111109417A (en) * 2019-12-23 2020-05-08 重庆大学 Route is from planning sugar-painter based on image information
CN111145165A (en) * 2019-12-30 2020-05-12 北京工业大学 Rubber seal ring surface defect detection method based on machine vision
CN112036296A (en) * 2020-08-28 2020-12-04 合肥工业大学 Motor bearing fault diagnosis method based on generalized S transformation and WOA-SVM


Non-Patent Citations (2)

Title
巢渊: "Research on Key Technologies for Online Detection of Semiconductor Chip Surface Defects Based on Machine Vision", China Doctoral Dissertations Full-text Database, Information Science and Technology *
曹晓杰: "Research on Defect Recognition in Ultrasonic Non-destructive Testing Images Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN115147427A (en) * 2022-09-06 2022-10-04 苏州鼎纳自动化技术有限公司 Visual detection method and system for resistance defects on PCB and computing device
CN116109638A (en) * 2023-04-13 2023-05-12 中铁四局集团有限公司 Rail break detection method and system
CN116109638B (en) * 2023-04-13 2023-07-04 中铁四局集团有限公司 Rail break detection method and system


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-08-24)