CN111915572A - Self-adaptive gear pitting quantitative detection system and method based on deep learning - Google Patents
- Publication number
- CN111915572A CN111915572A CN202010671243.3A CN202010671243A CN111915572A CN 111915572 A CN111915572 A CN 111915572A CN 202010671243 A CN202010671243 A CN 202010671243A CN 111915572 A CN111915572 A CN 111915572A
- Authority
- CN
- China
- Prior art keywords
- image
- module
- gear
- pitting
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
- G06T7/44—Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention belongs to the technical field of gear pitting detection and discloses a self-adaptive gear pitting quantitative detection system and method based on deep learning. The system comprises an image data acquisition module, an image preprocessing module, a main control module, a model construction module, a sample acquisition module, a model training module, an image recognition module, a result evaluation module, a data storage module and a display module. By detecting gear pitting quantitatively with a convolutional neural network model, the invention avoids the subjective interference and poor quantifiability of traditional manual visual inspection. The method can detect pitting on different tooth surfaces of different gears, overcoming the low precision and poor performance of traditional pitting detection methods; it grades gear pitting of different forms, can accurately and effectively prevent gear tooth breakage, and meets the working requirements of accurate, intelligent quantitative evaluation and detection of gear pitting.
Description
Technical Field
The invention belongs to the technical field of gear pitting detection, and particularly relates to a self-adaptive gear pitting quantitative detection system and method based on deep learning.
Background
At present, gear transmission is the most widely used form of mechanical transmission, offering accurate transmission, high efficiency, compact structure and reliable operation. The main failure mode of a gear transmission is failure of the gear itself, so prolonging its service life requires a deep study of gear failure modes. Pitting is one of the most common: under long-term loading, surface stress causes material to flake off the tooth surface, leaving point-like pits, the so-called initial pitting. Under repeated loading the initial pitting keeps expanding and can cause tooth breakage, resulting in irreparable loss. Therefore, to quantitatively characterize how pitting propagates and to effectively prevent tooth breakage, quantitative evaluation and detection of gear pitting is particularly important.
The traditional gear pitting detection method relies mainly on naked-eye observation; pits too small to see with the naked eye are examined under a microscope. Chinese patent 201910973345.8 discloses a deep-learning-based self-adaptive quantitative evaluation and detection device for gear pitting, comprising a gearbox platform, an integrated data acquisition system, an image processing system, a control system, a magnetic base and a mobile platform composed of a slide rail and an electric push rod. Analysis of the existing related technologies shows that traditional gear pitting detection only evaluates pitting qualitatively; its steps are complex, its efficiency and precision are low, and it consumes a large amount of human labor. Deep-learning-based quantitative evaluation and detection of gear pitting has rarely been reported, and no reliable commercial product for it has appeared on the market.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) The traditional gear pitting detection method relies on naked-eye observation, so very small pits are easily missed.
(2) The traditional method only evaluates gear pitting qualitatively; its steps are complex, its efficiency and precision are low, and it wastes a large amount of human resources.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a self-adaptive gear pitting quantitative detection system and method based on deep learning.
The invention is realized in such a way that a self-adaptive gear pitting quantitative detection method based on deep learning comprises the following steps:
Step one, acquiring high-quality tooth surface image data of a gear with a CCD (charge-coupled device) industrial camera through the image data acquisition module to obtain an image to be detected; the CCD industrial camera is mounted perpendicular to the tooth surface of the inspected gear.
Step two, mapping each pixel of the collected tooth surface image, originally described by three color channels, to a single-channel gray value using the graying unit of the preprocessing program in the image preprocessing module, thereby realizing grayscale processing.
And step three, converting the image signal processed by the graying unit in the step two into a binary image signal only with black and white through a binarization unit, and carrying out binarization processing on the image.
Step four, removing isolated white points on the image foreground (strokes), isolated black points outside it, and concave-convex points at its edges through an image smoothing unit, so as to smooth the edges.
Fifthly, carrying out gray processing, binarization processing and smoothing processing on the collected tooth surface image to obtain a meshing area binary image and a pit-like binary image; and obtaining a pit binary image according to the meshing area binary image and the similar pit binary image.
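The grayscale, binarization and smoothing pipeline of steps two through five can be sketched with plain NumPy. The BT.601 luminance weights, the fixed threshold of 128 and the 3 × 3 majority vote are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def to_gray(rgb):
    # Map each 3-channel (R, G, B) pixel to a single luminance value
    # using the common ITU-R BT.601 weights (an assumed choice).
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def binarize(gray, thresh=128):
    # Convert the grayscale image to black and white:
    # 1 (white) where intensity >= thresh, else 0 (black).
    return (gray >= thresh).astype(np.uint8)

def smooth_edges(binary):
    # Remove isolated white points and isolated black points with a
    # 3x3 majority vote over each pixel's neighborhood.
    padded = np.pad(binary, 1, mode='edge')
    out = binary.copy()
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            out[i, j] = 1 if window.sum() >= 5 else 0
    return out
```

A pit binary image would then be obtained by combining the meshing-area and pit-like binary images produced by this pipeline.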
And step six, the normal work of each module of the self-adaptive gear pitting quantitative detection system based on deep learning is coordinated and controlled by a main control module and a main control machine.
Step seven, constructing, through the model construction module and its model construction program, a convolutional neural network model for deep-learning-based gear pitting detection; in at least one convolutional layer the feature matrix is segmented and compressed, and the resulting sparse matrix is used in sparse matrix-vector multiplication.
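The compression and sparse multiplication inside the convolutional layer can be illustrated with a minimal CSR (compressed sparse row) encoding. The CSR format itself is an assumption on my part; the patent only states that the sparse matrix is compressed and multiplied.

```python
import numpy as np

def to_csr(dense):
    # Compress a (mostly zero) feature matrix into CSR form:
    # nonzero values, their column indices, and per-row pointers.
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    # Sparse matrix-vector product: only stored nonzeros are touched,
    # skipping the zero-value multiplications entirely.
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
    return y
```

This is the mechanism by which the claimed model reduces memory use and zero-value calculations.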
And step eight, acquiring a large number of binary images of the gear pitting pits by using sample acquisition equipment through a sample acquisition module to serve as sample data for training the convolutional neural network model.
And step nine, carrying out unification treatment on the sample data trained in the step eight to form an extracted feature matrix with consistent dimensionality, and inputting the extracted data feature matrix and a known data classification label into the convolutional neural network model constructed in the step seven after carrying out enhancement treatment on the extracted data feature matrix.
Step ten, training the convolutional neural network model by using a model training program through a model training module according to a large number of collected image samples to obtain a trained convolutional neural network model; the convolutional neural network model comprises an input layer, a convolutional layer, a pooling layer, a full-connection layer and an output layer which are sequentially connected.
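A minimal NumPy forward pass through the five named layers (input, convolutional, pooling, fully connected, output) might look as follows. The single-channel input, ReLU activation and softmax output are illustrative choices not fixed by the patent.

```python
import numpy as np

def conv2d(x, k):
    # Valid 2-D convolution (cross-correlation) of a single-channel image.
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool(x, s=2):
    # Non-overlapping s x s max pooling (ragged border discarded).
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def forward(x, kernel, w_fc, b_fc):
    # input -> conv -> ReLU -> pool -> flatten -> fully connected -> softmax
    a = np.maximum(conv2d(x, kernel), 0.0)
    p = maxpool(a)
    z = p.reshape(-1) @ w_fc + b_fc
    e = np.exp(z - z.max())
    return e / e.sum()
```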
And step eleven, inputting the preprocessed tooth surface image into a pre-trained convolutional neural network model by using an image recognition program through an image recognition module to perform intelligent recognition of the gear pitting state.
And step twelve, grading the pitting degree of the gear by using a result evaluation module according to the recognition result of the convolutional neural network model and a preset evaluation standard by using a result evaluation program, obtaining pitting grade data of the gear, and generating a result evaluation report.
And thirteen, storing the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the result evaluation report by using a storage server through a data storage module.
Fourteen, utilizing a display to display the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the real-time data of the result evaluation report through a display module;
in the process of performing coordination control on the normal work of each module of the self-adaptive gear pitting quantitative detection system based on deep learning, the data fusion process of each module by the main control module is as follows:
establishing data fusion training samples for various data transmitted by the image data acquisition module, the image preprocessing module, the model construction module, the sample acquisition module, the model training module, the image recognition module, the result evaluation module, the data storage module and the main control module;
preprocessing the data fusion training samples, extracting the corresponding feature values and feature means, and subtracting the feature mean from each feature value;
computing the covariance matrix of the mean-centered features and solving for its eigenvalues;
sorting the eigenvalues, selecting the N largest, and extracting the eigenvectors corresponding to them;
establishing the mapping defined by these eigenvectors, interpreting it, and fusing the data according to that interpretation;
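The fusion steps above (mean-centering, covariance matrix, largest eigenvalues, eigenvector mapping) describe a PCA-style projection. A hedged NumPy sketch follows; the function name `pca_fuse` and the use of `numpy.linalg.eigh` are my own choices.

```python
import numpy as np

def pca_fuse(samples, n_components):
    # samples: (n_samples, n_features) matrix assembled from the modules' data.
    # Subtract the feature mean, form the covariance matrix, take the
    # eigenvectors of the largest eigenvalues, and project (fuse) the data.
    centered = samples - samples.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # largest first
    basis = eigvecs[:, order]
    return centered @ basis
```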
in the process that the data storage module stores various types of data by using the storage server, the process of classifying the data comprises the following steps:
establishing a data classification training set from the collected tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample images, the image recognition results and the result evaluation reports, and building a similarity matrix;
transforming the similarity matrix to establish a Laplacian matrix and solving for its eigenvalues;
sorting the eigenvalues, solving for the corresponding eigenvectors, and constructing the eigenvector matrix;
classifying the data with k-means clustering on the constructed eigenvector matrix.
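These classification steps amount to spectral clustering: build a graph Laplacian from the similarity matrix, take the eigenvectors of its smallest eigenvalues as the feature matrix, and run k-means on its rows. A self-contained sketch follows; the unnormalized Laplacian and the minimal k-means with greedy farthest-point initialization are assumptions, since the patent does not name a variant.

```python
import numpy as np

def kmeans(points, k, iters=20):
    # Minimal k-means with greedy farthest-point initialization.
    centers = [points[0]]
    for _ in range(1, k):
        d = np.min([((points - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(points[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels

def spectral_cluster(similarity, k):
    # Build the (unnormalized) graph Laplacian L = D - W, take the
    # eigenvectors of its k smallest eigenvalues as the feature matrix,
    # then cluster the rows with k-means.
    degree = np.diag(similarity.sum(axis=1))
    laplacian = degree - similarity
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # ascending eigenvalues
    features = eigvecs[:, :k]
    return kmeans(features, k)
```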
When the image recognition module inputs the preprocessed tooth surface image into the pre-trained convolutional neural network model for intelligent recognition of the gear pitting state, it extracts image features as follows:
dividing the processed image into a plurality of small areas;
determining pixels in each small area, sequencing the pixels, and selecting an intermediate pixel value as a correction pixel;
when the surrounding pixel value is larger than the correction pixel value, the position of the pixel point is marked as 1, otherwise, the position is 0, and the LBP value of the pixel point in the center of the window is obtained;
calculating a histogram of each small region, namely the frequency of occurrence of each number; then, normalizing the histogram;
After normalization is completed, the statistical histograms of all small regions are concatenated into one feature vector, namely the LBP texture feature vector of the whole image.
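The median-referenced LBP variant described above (a "correction pixel" taken as the median of each window, rather than the classic center pixel) can be sketched as follows. The 8-neighbour ordering, the 2 × 2 grid and the 256-bin histograms are illustrative defaults.

```python
import numpy as np

def lbp_image(gray):
    # For every interior pixel: the correction pixel is the median of its
    # 3x3 window; each neighbour larger than it contributes a 1-bit,
    # giving an 8-bit LBP code per pixel.
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            ref = np.median(gray[i - 1:i + 2, j - 1:j + 2])
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if gray[i + di, j + dj] > ref:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

def lbp_histogram(gray, grid=(2, 2)):
    # Split the code image into grid cells, build a normalized 256-bin
    # histogram per cell, and concatenate them into one feature vector.
    codes = lbp_image(gray)
    feats = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist = np.bincount(cell.ravel(), minlength=256).astype(float)
            feats.append(hist / hist.sum())
    return np.concatenate(feats)
```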
Further, in the second step, the method for performing gray processing on the acquired tooth surface image by using the graying unit includes:
(I) acquiring the gray value of each pixel in the target tooth surface image to obtain a gray matrix;
(II) obtaining a balanced array according to the row/column gray distribution trend in the gray matrix;
and (III) correcting the gray matrix according to the balanced array to obtain the tooth surface image subjected to gray processing.
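The patent does not give a formula for the balanced array. One plausible reading, scaling each row so its mean matches the global mean (which lifts dark rows and damps bright ones), is sketched below purely as an assumption.

```python
import numpy as np

def balance_rows(gray_matrix):
    # Assumed "balanced array": the ratio of the global mean intensity to
    # each row's mean. Multiplying each row by its ratio evens out the
    # row-wise brightness trend. The column-wise case is analogous.
    row_means = gray_matrix.mean(axis=1, keepdims=True)
    balanced = gray_matrix.mean() / np.where(row_means == 0, 1, row_means)
    return np.clip(gray_matrix * balanced, 0, 255)
```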
Further, in the eighth step, the method for acquiring the binary image of the gear pitting pit as sample data by the sample acquisition module includes:
(a) generating gear pitting image samples with the sample image generating unit, using a gearbox platform and gear contact fatigue tests;
(b) repeatedly collecting gear pitting image samples through the sample image collecting unit with a CCD industrial camera mounted perpendicular to the gearbox gear;
(c) comparing and screening the collected gear pitting image samples through the sample screening unit, deleting sample images with high similarity.
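The screening step (c) can be illustrated with normalized cross-correlation as the similarity measure; both the measure and the 0.95 threshold are assumptions, since the patent only says highly similar samples are deleted.

```python
import numpy as np

def screen_samples(images, threshold=0.95):
    # Compare each candidate image against the already-kept ones using
    # normalized cross-correlation; drop candidates whose similarity to
    # any kept image exceeds the threshold.
    kept = []
    for img in images:
        a = img - img.mean()
        duplicate = False
        for ref in kept:
            b = ref - ref.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            sim = (a * b).sum() / denom if denom > 0 else 1.0
            if sim > threshold:
                duplicate = True
                break
        if not duplicate:
            kept.append(img)
    return kept
```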
Further, in step nine, the method for performing enhancement processing on the extracted data features includes:
During the forward pass of the convolutional neural network, each original convolution kernel is modulated by element-wise (dot) multiplication with a hand-designed modulation kernel; the modulated kernel then replaces the original kernel in the forward pass, thereby enhancing the data features.
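The kernel-modulation idea, element-wise multiplication of a hand-designed modulation kernel with each learned kernel before convolving, can be sketched as follows (function names are my own):

```python
import numpy as np

def conv_valid(x, k):
    # Plain valid 2-D convolution (cross-correlation).
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def modulated_forward(x, kernel, modulation):
    # The modulation kernel is multiplied element-wise with the learned
    # kernel; the modulated kernel replaces the original in the forward pass.
    return conv_valid(x, kernel * modulation)
```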
Further, in step ten, the method for training the convolutional neural network model by using the model training program through the model training module includes the following steps:
(1) collecting a large number of gear pitting images, manually marking the gear pitting images, and identifying the positions and sizes of the pitting so as to obtain training samples for training a convolutional neural network model;
(2) introducing gear pitting image samples into a convolutional neural network model, and performing optimization training on the convolutional neural network model by using an optimization algorithm in combination with artificial marking information;
(3) and storing the parameters of each target detection model after the training end condition is met, thereby obtaining the convolutional neural network model based on deep learning.
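The training loop in steps (1)-(3) follows the usual pattern: forward pass, loss, gradient update, repeat until the stopping condition, then save the parameters. As a compact stand-in for the full CNN optimization, the sketch below trains a single logistic unit by gradient descent on labeled feature vectors; the model, learning rate and epoch count are all illustrative, not the patent's.

```python
import numpy as np

def train_detector(features, labels, lr=0.1, epochs=500):
    # Gradient descent on binary cross-entropy: labels mark pitted (1)
    # vs. healthy (0) samples.
    rng = np.random.default_rng(0)
    w = rng.standard_normal(features.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = p - labels                    # d(cross-entropy)/dz
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    return (1.0 / (1.0 + np.exp(-(features @ w + b))) >= 0.5).astype(int)
```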
Further, in the eleventh step, the method for intelligently identifying the gear pitting state through the convolutional neural network model includes:
1) dividing the preprocessed image to be detected into multiple images of resolution m × m using a multi-target detection method;
2) inputting the separated images into the deep-learning-based convolutional neural network model, judging whether each input image contains a pitting defect, and if so, giving the position and size of the pit;
3) summarizing the data on the pitting defects and outputting the detection result for the degree of gear pitting.
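The tiling-and-aggregation flow of steps 1)-3) can be sketched as follows; the tile bookkeeping and the summary field names are my own, and the per-tile classifier is abstracted away as a list of (has_pit, pit_area) results.

```python
import numpy as np

def split_tiles(image, m):
    # Cut the preprocessed image into m x m tiles (discarding any ragged
    # border), keeping each tile's top-left coordinate so per-tile
    # detections can be mapped back onto the full tooth surface.
    h, w = image.shape[:2]
    tiles = []
    for i in range(0, h - m + 1, m):
        for j in range(0, w - m + 1, m):
            tiles.append(((i, j), image[i:i + m, j:j + m]))
    return tiles

def aggregate(detections):
    # detections: one (has_pit, pit_area) pair per tile, as produced by
    # the classifier; summarize into pit count and total pitted area.
    pits = [area for has_pit, area in detections if has_pit]
    return {"num_pits": len(pits), "total_area": float(sum(pits))}
```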
Another object of the present invention is to provide a deep learning-based adaptive gear pitting quantitative detection system applying the deep learning-based adaptive gear pitting quantitative detection method, the deep learning-based adaptive gear pitting quantitative detection system comprising:
the device comprises an image data acquisition module, an image preprocessing module, a main control module, a model construction module, a sample acquisition module, a model training module, an image recognition module, a result evaluation module, a data storage module and a display module.
the image data acquisition module is connected with the main control module and is used for acquiring high-quality image data of the gear pair tooth surfaces through a CCD industrial camera, the CCD industrial camera being mounted perpendicular to the tooth surface of the inspected gear;
the image preprocessing module is connected with the main control module and used for carrying out gray processing, binarization processing and smoothing processing on the collected tooth surface image through a preprocessing procedure to obtain a meshing area binary image and a pit-like binary image; obtaining a pit binary image according to the meshing area binary image and the similar pit binary image;
the main control module is connected with the image data acquisition module, the image preprocessing module, the model construction module, the sample acquisition module, the model training module, the image recognition module, the result evaluation module, the data storage module and the display module, and is used for coordinating and controlling the normal operation of each module of the deep-learning-based self-adaptive gear pitting quantitative detection system through the main control computer;
the model building module is connected with the main control module and used for building a convolution neural network model for carrying out pitting corrosion detection on the gear based on deep learning through a model building program;
the sample acquisition module is connected with the main control module and used for acquiring a large number of gear pitting corrosion images through sample acquisition equipment to serve as samples for convolutional neural network model training;
the model training module is connected with the main control module and used for training the convolutional neural network model by utilizing a large number of collected image samples through a model training program;
the image recognition module is connected with the main control module and used for inputting the preprocessed tooth surface image into a pre-trained convolutional neural network model through an image recognition program to carry out intelligent recognition on the gear pitting state;
the result evaluation module is connected with the main control module and used for grading the pitting degree of the gear according to the recognition result of the convolutional neural network model and a preset evaluation standard through a result evaluation program to obtain pitting grade data of the gear and generate a result evaluation report;
the data storage module is connected with the main control module and used for storing the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the result evaluation report through a storage server;
and the display module is connected with the main control module and used for displaying the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the real-time data of the result evaluation report through a display.
Further, the image pre-processing module comprises:
the graying unit is used for mapping pixel points originally described in three dimensions into pixel points described in one dimension;
a binarization unit for converting the image signal processed by the graying unit into a binary image signal having only black and white;
an image smoothing unit for removing isolated white points on the image foreground (strokes), isolated black points outside it, and concave-convex points at its edges, to smooth the edges;
the sample acquisition module comprises:
the sample image generating unit is used for simulating a gear pitting image sample by using a gear contact fatigue test through the gearbox platform;
the sample image acquisition unit is used for repeatedly acquiring gear pitting image samples through a CCD industrial camera which is vertical to a gear of the gear box;
and the sample screening unit is used for carrying out contrast screening on the collected gear pitting image samples and deleting the sample images with high similarity.
It is another object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program for providing a user input interface to implement the adaptive gear pitting quantitative detection method based on deep learning when executed on an electronic device.
Another object of the present invention is to provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to execute the adaptive gear pitting quantitative detection method based on deep learning.
Combining all the above technical schemes, the invention has the following advantages and positive effects: by quantitatively detecting gear pitting with a deep learning algorithm, it avoids the subjective interference of inspection personnel and the poor quantifiability of traditional manual visual inspection; it can detect pitting on different tooth surfaces of different gears; the proposed deep-learning segmentation and detection of gear pitting images overcomes the low precision and poor performance of traditional pitting detection methods, completes the quantitative evaluation of gear pitting, grades pitting of different forms, can accurately and effectively prevent gear tooth breakage, and meets the working requirements of accurate and intelligent quantitative evaluation and detection of gear pitting.
In the invention, the image preprocessing module derives a balanced array from the row/column gray distribution trend in the gray matrix; the balanced array evens out that trend so the gray values are distributed uniformly, raising the brightness of over-dark areas and lowering that of over-bright areas, which increases the clarity of the target image. The constructed convolutional neural network model compresses the feature matrices of the convolutional layers, reducing the training time and memory consumption of the neural network, as well as the memory use and zero-value calculations during computation.
Drawings
Fig. 1 is a flowchart of a method for quantitatively detecting pitting of an adaptive gear based on deep learning according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of an adaptive gear pitting quantitative detection system based on deep learning according to an embodiment of the present invention;
in the figure: 1. an image data acquisition module; 2. an image preprocessing module; 3. a main control module; 4. a model building module; 5. a sample collection module; 6. a model training module; 7. an image recognition module; 8. a result evaluation module; 9. a data storage module; 10. and a display module.
Fig. 3 is a flowchart of a method for performing a gray processing on an acquired tooth surface image through a graying unit according to an embodiment of the present invention.
Fig. 4 is a flowchart of a method for acquiring a binary image of a gear pitting pit as sample data by a sample acquisition module according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method for training a convolutional neural network model by a model training module using a model training program according to an embodiment of the present invention.
Fig. 6 is a flowchart of a method for intelligently identifying a gear pitting state through a convolutional neural network model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a system and a method for quantitatively detecting the self-adaptive gear pitting based on deep learning, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the adaptive gear pitting quantitative detection method based on deep learning provided by the embodiment of the present invention includes the following steps:
S101, acquiring high-quality image data of the tooth surfaces of the gear pair by using a CCD industrial camera through the image data acquisition module, wherein the CCD industrial camera is arranged perpendicular to the tooth surface of the detected gear.
S102, carrying out gray processing, binarization processing and smoothing processing on the collected tooth surface image by using a preprocessing procedure through an image preprocessing module to obtain a meshing area binary image and a pit-like binary image; and obtaining a pit binary image according to the meshing area binary image and the similar pit binary image.
S103, coordinating and controlling the normal operation of each module of the deep-learning-based adaptive gear pitting quantitative detection system by using a main control computer through the main control module.
S104, constructing, by using a model construction program through the model construction module, a convolutional neural network model for gear pitting detection based on deep learning.
S105, acquiring a large number of gear pitting images as samples for training the convolutional neural network model by using sample acquisition equipment through the sample acquisition module.
S106, training the convolutional neural network model with the large number of collected image samples by using a model training program through the model training module.
S107, inputting the preprocessed tooth surface image into the pre-trained convolutional neural network model by using an image recognition program through the image recognition module to intelligently recognize the gear pitting state.
S108, grading the pitting degree of the gear according to the recognition result of the convolutional neural network model and a preset evaluation standard by using a result evaluation program through the result evaluation module, and generating a result evaluation report.
S109, storing the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample images, the image recognition results and the result evaluation report by using a storage server through the data storage module.
S110, displaying real-time data of the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample images, the image recognition results and the result evaluation report by using a display through the display module.
In the process of coordinating and controlling the normal operation of each module of the deep-learning-based adaptive gear pitting quantitative detection system, the main control module performs data fusion on the data of each module as follows:
establishing data fusion training samples for various data transmitted by the image data acquisition module, the image preprocessing module, the model construction module, the sample acquisition module, the model training module, the image recognition module, the result evaluation module, the data storage module and the main control module;
preprocessing a data fusion training sample, extracting corresponding characteristic values and characteristic mean values, and subtracting the characteristic mean values from the characteristic values;
calculating a covariance matrix according to the eigenvalue and the characteristic mean value, and simultaneously calculating a corresponding eigenvalue;
sorting the eigenvalues, selecting N eigenvalues, extracting the largest eigenvalue, and establishing an eigenvector corresponding to the largest eigenvalue;
establishing a corresponding mapping relation according to the characteristic vector, and explaining the mapping relation; fusing the data according to the explanation;
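The fusion steps above (subtracting the feature mean, computing a covariance matrix, sorting eigenvalues, keeping the N largest, and mapping along the corresponding eigenvectors) follow the pattern of principal component analysis. As a non-limiting illustration only — the function name, parameters and toy data below are assumptions, not part of the disclosure — the procedure can be sketched as:

```python
import numpy as np

def pca_fuse(samples, n_components):
    """Fuse multi-module feature vectors by projecting onto the
    eigenvectors of the covariance matrix with the largest eigenvalues."""
    X = np.asarray(samples, dtype=float)      # shape: (n_samples, n_features)
    mean = X.mean(axis=0)                     # feature mean value
    centered = X - mean                       # subtract the feature mean
    cov = np.cov(centered, rowvar=False)      # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # sort, keep largest N
    basis = eigvecs[:, order]                 # mapping: selected eigenvectors
    return centered @ basis                   # fused (projected) data

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 8))               # 50 samples, 8 raw features
fused = pca_fuse(data, n_components=3)
```

In this reading, the "mapping relation" of the description is the projection matrix of top eigenvectors, and "fusing the data according to the explanation" is the projection itself.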
in the process that the data storage module stores various data by using the storage server, the process of classifying the data comprises the following steps:
the method comprises the steps of establishing a data classification training set by collected tooth surface image data, preprocessed image data, a convolutional neural network model, training sample images, image recognition results and result evaluation reports, and establishing a similar matrix;
the similar matrix is subjected to matrix transformation to establish a Laplace matrix, and corresponding eigenvalues are solved;
sorting the eigenvalues, solving eigenvectors corresponding to the eigenvalues, and constructing an eigenvalue matrix;
and classifying the data by using k-means according to the constructed eigenvalue matrix.
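The classification steps above (similarity matrix, Laplacian matrix, sorted eigenvalues, eigenvector matrix, k-means on its rows) describe spectral clustering. The sketch below is one plausible reading, not the disclosed implementation: the Gaussian similarity kernel, the naive k-means initialisation and all names are illustrative assumptions.

```python
import numpy as np

def spectral_labels(X, n_clusters, sigma=1.0, iters=50):
    """Spectral clustering: similarity matrix -> Laplacian ->
    eigenvector matrix -> k-means on the embedded rows."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))            # similarity matrix
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # unnormalised Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    F = eigvecs[:, np.argsort(eigvals)[:n_clusters]]  # eigenvector matrix
    # plain k-means on the rows of F (deterministic spread-out init)
    idx = np.linspace(0, len(F) - 1, n_clusters).astype(int)
    centers = F[idx]
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = F[labels == k].mean(axis=0)
    return labels

# two well-separated toy groups of points
pts = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5])
labels = spectral_labels(pts, n_clusters=2)
```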
The image recognition module provided by the invention, which inputs the preprocessed tooth surface image into the pre-trained convolutional neural network model by using an image recognition program to intelligently recognize the gear pitting state, extracts the image features as follows:
dividing the processed image into a plurality of small areas;
determining pixels in each small area, sequencing the pixels, and selecting an intermediate pixel value as a correction pixel;
when the surrounding pixel value is larger than the correction pixel value, the position of the pixel point is marked as 1, otherwise, the position is 0, and the LBP value of the pixel point in the center of the window is obtained;
calculating a histogram of each small region, namely the frequency of occurrence of each number; then, normalizing the histogram;
after normalization is completed, the statistical histograms of each small region are connected into a feature vector, that is, the LBP texture feature vector of the whole graph.
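The steps above can be read as a median-referenced variant of the LBP texture descriptor: each 3×3 window is thresholded against its median ("correction") pixel, and per-region normalised histograms are concatenated. The sketch below illustrates that reading; the window size, region grid and bin count are assumptions not fixed by the description.

```python
import numpy as np

def lbp_features(img, grid=(2, 2), bins=256):
    """Median-referenced LBP: per-pixel 8-bit codes, then one
    normalised histogram per sub-region, concatenated."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    # 8 neighbours of the window centre, clockwise from top-left
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            ref = np.median(win)          # "correction pixel" (median value)
            bits = [1 if win[di, dj] > ref else 0 for di, dj in offs]
            codes[i, j] = int("".join(map(str, bits)), 2)
    # per-region normalised histograms, joined into one feature vector
    feats = []
    for band in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(band, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
vec = lbp_features(rng.integers(0, 256, size=(16, 16)))
```

Each region's histogram sums to 1, so the full vector sums to the number of regions.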
As shown in fig. 2, an adaptive gear pitting quantitative detection system based on deep learning according to an embodiment of the present invention includes: the system comprises an image data acquisition module 1, an image preprocessing module 2, a main control module 3, a model construction module 4, a sample acquisition module 5, a model training module 6, an image recognition module 7, a result evaluation module 8, a data storage module 9 and a display module 10.
The image data acquisition module 1 is connected with the main control module 3 and is used for acquiring high-quality image data of the gear tooth surface with a CCD industrial camera, the CCD industrial camera being arranged perpendicular to the tooth surface of the gear to be detected;
the image preprocessing module 2 is connected with the main control module 3 and is used for carrying out gray processing, binarization processing and smoothing processing on the collected tooth surface image through a preprocessing procedure to obtain a meshing area binary image and a pit-like binary image; obtaining a pit binary image according to the meshing area binary image and the similar pit binary image;
the main control module 3 is connected with the image data acquisition module 1, the image preprocessing module 2, the model construction module 4, the sample acquisition module 5, the model training module 6, the image recognition module 7, the result evaluation module 8, the data storage module 9 and the display module 10, and is used for coordinating and controlling the normal work of each module of the self-adaptive gear pitting quantitative detection system based on deep learning through a main control machine;
the model building module 4 is connected with the main control module 3 and used for building a convolution neural network model for carrying out pitting detection on the gear based on deep learning through a model building program;
the sample acquisition module 5 is connected with the main control module 3 and is used for acquiring a large number of gear pitting corrosion images as samples for convolutional neural network model training through sample acquisition equipment;
the model training module 6 is connected with the main control module 3 and used for training the convolutional neural network model by utilizing a large number of collected image samples through a model training program;
the image recognition module 7 is connected with the main control module 3 and used for inputting the preprocessed tooth surface image into a pre-trained convolutional neural network model through an image recognition program to carry out intelligent recognition of the gear pitting state;
the result evaluation module 8 is connected with the main control module 3 and is used for grading the pitting degree of the gear according to the recognition result of the convolutional neural network model and a preset evaluation standard through a result evaluation program to obtain pitting grade data of the gear and generate a result evaluation report;
the data storage module 9 is connected with the main control module 3 and used for storing the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the result evaluation report through a storage server;
and the display module 10 is connected with the main control module 3 and is used for displaying the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the real-time data of the result evaluation report through a display.
The image preprocessing module 2 provided by the embodiment of the invention comprises:
the graying unit 2-1 is used for mapping pixel points originally described in three dimensions into pixel points described in one dimension;
a binarization unit 2-2 for converting the image signal processed by the graying unit into a binary image signal having only black and white;
and the image smoothing unit 2-3 is used for removing isolated white points on the strokes of the image, isolated black points outside the strokes and concave-convex points of the stroke edges so as to smooth the stroke edges.
The sample collection module 5 provided by the embodiment of the invention comprises:
the sample image generating unit 5-1 is used for simulating a gear pitting image sample by using a gear contact fatigue test through a gear box platform;
the sample image acquisition unit 5-2 is used for repeatedly acquiring gear pitting image samples through a CCD industrial camera arranged perpendicular to a gear of the gear box;
and the sample screening unit 5-3 is used for carrying out contrast screening on the collected gear pitting image samples and deleting sample images with high similarity.
The invention is further described with reference to specific examples.
Example 1
Fig. 1 shows the adaptive gear pitting quantitative detection method based on deep learning according to an embodiment of the present invention; as a preferred embodiment, fig. 3 shows the method for performing gray processing on the acquired tooth surface image through the graying unit, which includes:
S201, obtaining the gray value of each pixel in the target tooth surface image to obtain a gray matrix.
S202, obtaining a balanced array according to the row/column gray distribution trend in the gray matrix.
S203, correcting the gray matrix according to the balanced array to obtain the tooth surface image subjected to gray processing.
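Steps S201-S203 leave the exact form of the "balanced array" open. Purely as an illustration of one plausible reading — not the disclosed method — the sketch below scales each row of the grey matrix by the ratio of the global mean grey level to that row's mean, so over-dark rows are brightened and over-bright rows are dimmed:

```python
import numpy as np

def balance_rows(gray):
    """Correct the grey matrix with a per-row balancing array derived
    from the row grey-level distribution trend (illustrative reading)."""
    g = np.asarray(gray, dtype=float)
    row_mean = g.mean(axis=1, keepdims=True)          # row grey-level trend
    balance = g.mean() / np.maximum(row_mean, 1e-6)   # "balanced array"
    return np.clip(g * balance, 0, 255)               # corrected grey matrix

img = np.vstack([np.full((4, 8), 40.0),     # over-dark band
                 np.full((4, 8), 200.0)])   # over-bright band
out = balance_rows(img)
```

After correction, both bands sit at the global mean grey level, consistent with the stated goal of equalising the distribution.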
Example 2
Fig. 1 shows the adaptive gear pitting quantitative detection method based on deep learning according to an embodiment of the present invention; as a preferred embodiment, fig. 4 shows the method for acquiring binary images of gear pitting pits as sample data through the sample acquisition module, which comprises the following steps:
S301, simulating gear pitting image samples through the sample image generating unit by using a gear contact fatigue test on a gear box platform.
S302, repeatedly acquiring gear pitting image samples through the sample image acquisition unit by using a CCD industrial camera arranged perpendicular to a gear of the gear box.
S303, comparing and screening the collected gear pitting image samples through the sample screening unit, and deleting sample images with high similarity.
Example 3
Fig. 1 shows the adaptive gear pitting quantitative detection method based on deep learning according to an embodiment of the present invention; as a preferred embodiment, fig. 5 shows the method for training the convolutional neural network model through the model training module using a model training program, which includes:
S401, collecting a large number of gear pitting images, manually marking them, and identifying the positions and sizes of the pitting, so as to obtain training samples for training the convolutional neural network model.
S402, importing the gear pitting image samples into the convolutional neural network model, and performing optimization training on the model by using an optimization algorithm in combination with the manual marking information.
S403, storing the parameters of each target detection model after the training end condition is met, so as to obtain the convolutional neural network model based on deep learning.
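Steps S401-S403 describe a standard supervised optimisation loop: labelled samples in, gradient-based updates, parameters kept once the stopping condition is met. The sketch below mirrors that shape with a simple logistic model standing in for the convolutional network; the learning rate, stopping tolerance and toy data are assumptions, not disclosed values.

```python
import numpy as np

def train(X, y, lr=1.0, epochs=2000, tol=1e-9):
    """Generic optimisation loop: forward pass, loss, gradient step,
    stop when the training-end condition is met, return the parameters."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    prev = np.inf
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))                 # forward pass
        loss = -np.mean(y * np.log(p + 1e-12)
                        + (1 - y) * np.log(1 - p + 1e-12)) # training loss
        gw = X.T @ (p - y) / len(y)
        gb = np.mean(p - y)
        w -= lr * gw                                       # optimiser step
        b -= lr * gb
        if abs(prev - loss) < tol:                         # end condition
            break
        prev = loss
    return w, b                                            # saved parameters

# toy labelled samples (stand-in for marked pitting images)
X = np.array([[0., 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0., 0, 0, 1])
w, b = train(X, y)
pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```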
Example 4
Fig. 1 shows the adaptive gear pitting quantitative detection method based on deep learning according to an embodiment of the present invention; as a preferred embodiment, fig. 6 shows the method for intelligently identifying the gear pitting state through the convolutional neural network model, which includes:
S501, dividing the preprocessed image to be detected into a plurality of images with a resolution of m × m by adopting a multi-target detection method.
S502, inputting the divided images into the convolutional neural network model based on deep learning, judging whether each input image has a pitting defect, and if so, giving the position and the size of the pitting pit.
S503, summarizing the data information of the pitting defects and outputting the detection result of the pitting degree of the gear.
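Steps S501-S503 can be sketched as a tiling-and-aggregation loop. In this illustration the patch classifier is a pluggable stand-in for the trained convolutional network, and the function names and the pitted-area ratio as a summary statistic are assumptions:

```python
import numpy as np

def detect_pitting(image, m, classify):
    """Tile the image into m x m patches, classify each patch, and
    aggregate pit positions/sizes into a summary detection result."""
    h, w = image.shape
    findings = []
    for i in range(0, h - m + 1, m):
        for j in range(0, w - m + 1, m):
            patch = image[i:i + m, j:j + m]
            area = classify(patch)             # stand-in for the CNN
            if area > 0:
                findings.append({"pos": (i, j), "size": area})
    total = sum(f["size"] for f in findings)
    return findings, total / (h * w)           # pitted-area ratio

# toy stand-in classifier: pit pixels are marked 1 in a binary image
img = np.zeros((8, 8), dtype=int)
img[0:2, 0:2] = 1
found, ratio = detect_pitting(img, m=4, classify=lambda p: int(p.sum()))
```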
A Convolutional Neural Network (CNN) is a type of feed-forward neural network that contains convolutional computation and has a deep structure. It integrates ideas from neural networks and machine learning, giving it both the autonomous learning capability of machine learning and the ability of neural networks to handle nonlinear problems. CNNs are now widely applied to image, video, audio and text data, and are among the representative algorithms of deep learning.
Since the beginning of the twenty-first century, with the proposal of deep learning theory and the improvement of numerical computing equipment, convolutional neural networks have developed rapidly. They are widely applied in fields such as computer vision and natural language processing, and have achieved good classification results in several pattern recognition domains.
The convolutional neural network is similar to a common neural network in structure and comprises an input layer, a hidden layer and an output layer.
First, the input layer. The input layer of a convolutional neural network can process multidimensional data. The input layer of a one-dimensional convolutional neural network receives a one-dimensional or two-dimensional array, where the one-dimensional array is usually a time or spectrum sampling and the two-dimensional array may contain multiple channels; the input layer of a two-dimensional convolutional neural network receives a two-dimensional or three-dimensional array; the input layer of a three-dimensional convolutional neural network receives a four-dimensional array. Since learning is performed by gradient descent, the input features of a convolutional neural network need to be normalized. Specifically, before the learning data are fed into the network, the input data are normalized along the channel or time/frequency dimension; if the input data are pixels, the raw pixel values distributed in [0, 255] can be normalized to the [0, 1] interval.
The hidden layers mainly comprise three common structures: convolutional layers, pooling layers and fully-connected layers. Effective extraction of image features usually requires several convolutional and pooling layers, so they are arranged alternately: a convolutional layer is followed by a pooling layer, the pooling layer by another convolutional layer, and so on. In common constructions, the main difference between a convolutional neural network and other neural networks is that it has convolutional and pooling layers. The convolution kernels in the convolutional layers contain weight coefficients, while the pooling layers do not, so the pooling layers may not be counted as independent layers. In a convolutional neural network, there is usually more than one convolutional layer and pooling layer; taking LeNet-5 as an example, the usual order of the three common structures in the hidden layers is: input - convolutional layer - pooling layer - fully-connected layer - output.
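The alternating convolution/pooling structure described above can be illustrated with a bare NumPy forward pass. The kernel sizes, the LeNet-style shape sequence and the zero-weight fully-connected stub are illustrative only:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in CNN practice)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def maxpool2(x):
    """2x2 max pooling; carries no weight coefficients of its own."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

# LeNet-style alternation: conv -> pool -> conv -> pool -> fully connected
x = np.random.default_rng(0).normal(size=(28, 28))
k1 = np.ones((5, 5)) / 25
k2 = np.ones((5, 5)) / 25
h = maxpool2(np.maximum(conv2d(x, k1), 0))   # 28x28 -> 24x24 -> 12x12
h = maxpool2(np.maximum(conv2d(h, k2), 0))   # 12x12 -> 8x8 -> 4x4
logits = h.reshape(-1) @ np.zeros((16, 2))   # fully-connected stub
```

Each convolution shrinks the spatial size by the kernel extent and each pooling halves it, which is why the layer types are interleaved before the flattened features reach the fully-connected stage.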
In a convolutional neural network, the layer upstream of the output layer is usually a fully-connected layer, so its structure and working principle are the same as those of the output layer in a conventional feed-forward neural network. For image classification problems, the output layer outputs classification labels using a logistic function or a normalized exponential (softmax) function. In object recognition problems, the output layer may be designed to output the center coordinates, size and classification of the object. In image semantic segmentation, the output layer directly outputs the classification result of each pixel.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used in whole or in part, the implementation can take the form of a computer program product that includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)).
The above description is only for the purpose of illustrating the present invention and the appended claims are not to be construed as limiting the scope of the invention, which is intended to cover all modifications, equivalents and improvements that are within the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. The self-adaptive gear pitting quantitative detection method based on deep learning is characterized by comprising the following steps of:
acquiring high-quality tooth surface image data of a gear by using a CCD (charge-coupled device) industrial camera through an image data acquisition module to obtain an image to be detected; the CCD industrial camera is arranged perpendicular to the tooth surface of the detected gear;
mapping pixel points of the collected tooth surface image originally described in three dimensions into pixel points described in one dimension by utilizing a graying unit of a preprocessing program through an image preprocessing module to realize grayscale processing;
step three, converting the image signal processed by the graying unit in the step two into a binary image signal only with black and white through a binarization unit, and carrying out binarization processing on the image;
step four, removing isolated white points on the image strokes, isolated black points outside the image strokes and concave-convex points on the edges of the image strokes by an image smoothing unit so as to smooth the edges of the image strokes;
fifthly, carrying out gray processing, binarization processing and smoothing processing on the collected tooth surface image to obtain a meshing area binary image and a pit-like binary image; obtaining a pit binary image according to the meshing area binary image and the similar pit binary image;
step six, the normal work of each module of the self-adaptive gear pitting quantitative detection system based on deep learning is coordinated and controlled by a main control module through a main control machine;
constructing a convolutional neural network model for carrying out pitting corrosion detection on the gear based on deep learning by utilizing a model construction program through a model construction module; the convolutional neural network comprises at least one convolutional layer, and the convolutional layer comprises a feature matrix which is subjected to segmentation compression processing and a generated sparse matrix which is subjected to sparse matrix vector multiplication;
step eight, a sample acquisition module is used for acquiring a large number of binary images of the gear pitting pits by using sample acquisition equipment to serve as sample data of the convolutional neural network model training;
step nine, unifying the sample data trained in the step eight to form an extracted feature matrix with consistent dimensionality, and inputting the extracted data feature matrix and a known data classification label into the convolutional neural network model constructed in the step seven after enhancement processing;
step ten, training the convolutional neural network model by using a model training program through a model training module according to a large number of collected image samples to obtain a trained convolutional neural network model; the convolutional neural network model comprises an input layer, a convolutional layer, a pooling layer, a full-connection layer and an output layer which are connected in sequence;
step eleven, inputting the preprocessed tooth surface image into a pre-trained convolutional neural network model by an image recognition program through an image recognition module to perform intelligent recognition of the gear pitting state;
step twelve, grading the pitting degree of the gear by using a result evaluation module according to the recognition result of the convolutional neural network model and a preset evaluation standard by using a result evaluation program to obtain pitting grade data of the gear and generate a result evaluation report;
thirteen, storing the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the result evaluation report by using a storage server through a data storage module;
fourteen, utilizing a display to display the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the real-time data of the result evaluation report through a display module;
in the process of performing coordination control on the normal work of each module of the self-adaptive gear pitting quantitative detection system based on deep learning, the data fusion process of each module by the main control module is as follows:
establishing data fusion training samples for various data transmitted by the image data acquisition module, the image preprocessing module, the model construction module, the sample acquisition module, the model training module, the image recognition module, the result evaluation module, the data storage module and the main control module;
preprocessing a data fusion training sample, extracting corresponding characteristic values and characteristic mean values, and subtracting the characteristic mean values from the characteristic values;
calculating a covariance matrix according to the eigenvalue and the characteristic mean value, and simultaneously calculating a corresponding eigenvalue;
sorting the eigenvalues, selecting N eigenvalues, extracting the largest eigenvalue, and establishing an eigenvector corresponding to the largest eigenvalue;
establishing a corresponding mapping relation according to the characteristic vector, and explaining the mapping relation; fusing the data according to the explanation;
in the process that the data storage module stores various types of data by using the storage server, the process of classifying the data comprises the following steps:
the method comprises the steps of establishing a data classification training set by collected tooth surface image data, preprocessed image data, a convolutional neural network model, training sample images, image recognition results and result evaluation reports, and establishing a similar matrix;
the similar matrix is subjected to matrix transformation to establish a Laplace matrix, and corresponding eigenvalues are solved;
sorting the eigenvalues, solving eigenvectors corresponding to the eigenvalues, and constructing an eigenvalue matrix;
carrying out data classification by using k-means according to the constructed eigenvalue matrix;
the image recognition module which inputs the preprocessed tooth surface image into the pre-trained convolutional neural network model by using the image recognition program to perform intelligent recognition of the gear pitting state extracts the image characteristics:
dividing the processed image into a plurality of small areas;
determining pixels in each small area, sequencing the pixels, and selecting an intermediate pixel value as a correction pixel;
when the surrounding pixel value is larger than the correction pixel value, the position of the pixel point is marked as 1, otherwise, the position is 0, and the LBP value of the pixel point in the center of the window is obtained;
calculating a histogram of each small region, namely the frequency of occurrence of each number; then, normalizing the histogram;
after normalization is completed, the statistical histograms of each small region are connected into a feature vector, that is, the LBP texture feature vector of the whole graph.
2. The adaptive gear pitting quantitative detection method based on deep learning of claim 1, wherein in the second step, the method of performing the gray processing on the acquired tooth surface image by the graying unit comprises:
(I) acquiring the gray value of each pixel in the target tooth surface image to obtain a gray matrix;
(II) obtaining a balanced array according to the row/column gray distribution trend in the gray matrix;
and (III) correcting the gray matrix according to the balanced array to obtain the tooth surface image subjected to gray processing.
3. The adaptive gear pitting quantitative detection method based on deep learning of claim 1, wherein in step eight, the method for acquiring a binary image of the gear pitting pit as sample data by a sample acquisition module comprises:
(a) simulating a gear pitting image sample by a sample image generating unit by using a gear box platform and a gear contact fatigue test;
(b) repeatedly acquiring gear pitting image samples through the sample image acquisition unit by using a CCD industrial camera arranged perpendicular to a gear of the gear box;
(c) and comparing and screening the collected gear pitting image samples through a sample screening unit, and deleting sample images with high similarity.
4. The adaptive gear pitting quantitative detection method based on deep learning of claim 1, wherein in the ninth step, the method for enhancing the extracted data features comprises:
when the convolutional neural network is transmitted forwards, on each original convolutional kernel, the original convolutional kernel is modulated through dot multiplication of a manual modulation kernel and the original convolutional kernel to obtain a modulated convolutional kernel, and the modulated convolutional kernel is used for replacing the original convolutional kernel to carry out the forward transmission of the neural network so as to enhance the data characteristics.
5. The adaptive gear pitting quantitative detection method based on deep learning of claim 1, wherein in step ten, the method for training the convolutional neural network model by the model training module using the model training program comprises the following steps:
(1) collecting a large number of gear pitting images and manually annotating them, marking the position and size of each pit, so as to obtain training samples for the convolutional neural network model;
(2) feeding the gear pitting image samples into the convolutional neural network model, and optimizing the model with an optimization algorithm using the manual annotation information;
(3) once the training end condition is met, saving the parameters of each target detection model, thereby obtaining the deep-learning-based convolutional neural network model.
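The optimize-until-converged loop of steps (2)-(3) can be illustrated with a toy stand-in. This is not the patent's CNN: logistic regression trained by gradient descent replaces the network, and the loss-change tolerance stands in for the unspecified "training end condition".

```python
import numpy as np

def train_model(X, y, lr=0.1, max_epochs=500, tol=1e-4):
    """Toy stand-in for the CNN optimisation loop: logistic regression
    by gradient descent. Parameters are returned ("saved") once the end
    condition (loss change < tol) is met or max_epochs is reached."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    prev_loss = np.inf
    for _ in range(max_epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # forward pass
        loss = -np.mean(y * np.log(p + 1e-12)
                        + (1 - y) * np.log(1 - p + 1e-12))
        grad = p - y                                     # dLoss/dlogit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
        if abs(prev_loss - loss) < tol:                  # end condition met
            break
        prev_loss = loss
    return w, b
```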
6. The adaptive gear pitting quantitative detection method based on deep learning of claim 1, wherein in step eleven, the method for intelligent identification of the gear pitting state through the convolutional neural network model comprises:
1) dividing the preprocessed image to be detected into a plurality of sub-images with resolution m × m, using a multi-target detection method;
2) inputting each sub-image into the deep-learning-based convolutional neural network model, judging whether the sub-image contains a pitting defect and, if so, outputting the position and size of the pit;
3) aggregating the data on all pitting defects, and outputting the detection result for the degree of gear pitting.
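The tile-and-aggregate flow of steps 1)-3) can be sketched as follows. Assumptions are flagged: non-overlapping tiles with incomplete edge tiles dropped, and an arbitrary per-tile callable standing in for the trained CNN.

```python
import numpy as np

def split_into_tiles(image, m):
    """Step 1): divide the preprocessed image into non-overlapping
    m x m tiles (edge strips that do not fill a full tile are dropped
    here -- an assumed simplification)."""
    h, w = image.shape[:2]
    tiles = []
    for i in range(0, h - m + 1, m):
        for j in range(0, w - m + 1, m):
            tiles.append(((i, j), image[i:i+m, j:j+m]))
    return tiles

def detect_pitting(image, m, classify):
    """Steps 2)-3): run a per-tile classifier (the trained CNN in the
    patent; here any callable returning pit area in pixels, 0 = no pit)
    and aggregate the overall pitting result."""
    pits = []
    for (i, j), tile in split_into_tiles(image, m):
        area = classify(tile)
        if area > 0:
            pits.append({"position": (i, j), "area": area})
    return {"pits": pits, "total_pit_area": sum(p["area"] for p in pits)}
```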
7. An adaptive gear pitting quantitative detection system based on deep learning, applying the adaptive gear pitting quantitative detection method based on deep learning according to any one of claims 1 to 6, characterized by comprising:
the image data acquisition module, connected with the main control module and used for acquiring high-quality image data of the tooth surfaces of the gear pair through a CCD industrial camera, the CCD industrial camera being mounted perpendicular to the tooth surface of the gear under detection;
the image preprocessing module, connected with the main control module and used for carrying out graying, binarization and smoothing on the collected tooth surface image through a preprocessing program to obtain a meshing-area binary image and a candidate-pit binary image, and for obtaining the pit binary image from the meshing-area binary image and the candidate-pit binary image;
the main control module, connected with the image data acquisition module, the image preprocessing module, the model construction module, the sample acquisition module, the model training module, the image recognition module, the result evaluation module, the data storage module and the display module, and used for coordinating, through the main control computer, the normal operation of each module of the deep-learning-based adaptive gear pitting quantitative detection system;
the model building module, connected with the main control module and used for building, through a model building program, a convolutional neural network model for deep-learning-based gear pitting detection;
the sample acquisition module is connected with the main control module and used for acquiring a large number of gear pitting corrosion images through sample acquisition equipment to serve as samples for convolutional neural network model training;
the model training module is connected with the main control module and used for training the convolutional neural network model by utilizing a large number of collected image samples through a model training program;
the image recognition module is connected with the main control module and used for inputting the preprocessed tooth surface image into a pre-trained convolutional neural network model through an image recognition program to carry out intelligent recognition on the gear pitting state;
the result evaluation module is connected with the main control module and used for grading the pitting degree of the gear according to the recognition result of the convolutional neural network model and a preset evaluation standard through a result evaluation program to obtain pitting grade data of the gear and generate a result evaluation report;
the data storage module is connected with the main control module and used for storing the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the result evaluation report through a storage server;
and the display module is connected with the main control module and used for displaying the acquired tooth surface image data, the preprocessed image data, the convolutional neural network model, the training sample image, the image recognition result and the real-time data of the result evaluation report through a display.
8. The adaptive gear pitting quantitative detection system based on deep learning of claim 7, wherein the image preprocessing module comprises:
the graying unit is used for mapping pixel points originally described in three dimensions into pixel points described in one dimension;
a binarization unit for converting the image signal processed by the graying unit into a binary image signal having only black and white;
an image smoothing unit for removing isolated white dots within strokes, isolated black dots outside strokes, and concave and convex points along stroke edges, so as to smooth the stroke edges;
the sample acquisition module comprises:
the sample image generating unit, used for generating gear pitting image samples through gear contact fatigue tests on the gearbox platform;
the sample image acquisition unit, used for repeatedly acquiring gear pitting image samples through a CCD industrial camera mounted perpendicular to the gear of the gearbox;
and the sample screening unit, used for comparing and screening the collected gear pitting image samples and deleting sample images of high mutual similarity.
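The graying, binarization and smoothing units of the image preprocessing module can be sketched with NumPy. Assumptions are noted inline: BT.601 luminance weights, a fixed global threshold, and a 3x3 majority filter as the smoothing operation (the patent does not specify which algorithms are used).

```python
import numpy as np

def to_gray(rgb):
    """Graying unit: map each pixel described in three dimensions (R,G,B)
    to a single one-dimensional luminance value (assumed BT.601 weights)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def binarize(gray, threshold=128):
    """Binarization unit: convert the gray image into a black-and-white
    image, 1 above the (assumed fixed) threshold and 0 otherwise."""
    return (gray > threshold).astype(np.uint8)

def smooth_binary(bw):
    """Smoothing unit: 3x3 majority filter, which removes isolated white
    dots within strokes and isolated black dots outside them, smoothing
    the stroke edges."""
    padded = np.pad(bw, 1, mode="edge")
    out = np.empty_like(bw)
    for i in range(bw.shape[0]):
        for j in range(bw.shape[1]):
            out[i, j] = 1 if padded[i:i+3, j:j+3].sum() >= 5 else 0
    return out
```

In production code these three steps would normally be delegated to an image library (e.g. OpenCV's thresholding and morphological operations) rather than hand-rolled loops.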
9. A computer program product stored on a computer readable medium, comprising a computer readable program for providing a user input interface to implement the deep learning based adaptive gear pitting quantitative detection method according to any one of claims 1 to 6 when executed on an electronic device.
10. A computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the adaptive gear pitting quantitative detection method based on deep learning according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010671243.3A CN111915572B (en) | 2020-07-13 | 2020-07-13 | Adaptive gear pitting quantitative detection system and method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111915572A true CN111915572A (en) | 2020-11-10 |
CN111915572B CN111915572B (en) | 2023-04-25 |
Family
ID=73226418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010671243.3A Active CN111915572B (en) | 2020-07-13 | 2020-07-13 | Adaptive gear pitting quantitative detection system and method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111915572B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112331322A (en) * | 2020-12-04 | 2021-02-05 | 上海蓬海涞讯数据技术有限公司 | Method, device, processor and storage medium for realizing quantitative evaluation processing aiming at special ability of hospital based on neural network |
CN112991260A (en) * | 2021-02-03 | 2021-06-18 | 南昌航空大学 | Infrared nondestructive testing system with light and ultrasonic composite excitation |
CN113034341A (en) * | 2021-05-25 | 2021-06-25 | 浙江双元科技股份有限公司 | Data acquisition processing circuit for Cameralink high-speed industrial camera |
CN113288425A (en) * | 2021-05-27 | 2021-08-24 | 徐州医科大学附属医院 | Visual navigation system for guide pin in fixation of limb fracture |
CN115049667A (en) * | 2022-08-16 | 2022-09-13 | 启东市群鹤机械设备有限公司 | Gear defect detection method |
CN115578365A (en) * | 2022-10-26 | 2023-01-06 | 西南交通大学 | Tooth pitch detection method and equipment for adjacent racks of toothed rail railway |
CN117387517A (en) * | 2023-10-25 | 2024-01-12 | 常州佳恒新能源科技有限公司 | Digital instrument panel quality detection method and system |
CN117474925A (en) * | 2023-12-28 | 2024-01-30 | 山东润通齿轮集团有限公司 | Gear pitting detection method and system based on machine vision |
CN117495867A (en) * | 2024-01-03 | 2024-02-02 | 东莞市星火齿轮有限公司 | Visual detection method and system for precision of small-module gear |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853491A (en) * | 2010-04-30 | 2010-10-06 | 西安电子科技大学 | SAR (Synthetic Aperture Radar) image segmentation method based on parallel sparse spectral clustering |
CN103810722A (en) * | 2014-02-27 | 2014-05-21 | 云南大学 | Moving target detection method combining improved LBP (Local Binary Pattern) texture and chrominance information |
CN104318241A (en) * | 2014-09-25 | 2015-01-28 | 东莞电子科技大学电子信息工程研究院 | Local density spectral clustering similarity measurement algorithm based on Self-tuning |
CN105426889A (en) * | 2015-11-13 | 2016-03-23 | 浙江大学 | PCA mixed feature fusion based gas-liquid two-phase flow type identification method |
CN106845524A (en) * | 2016-12-28 | 2017-06-13 | 田欣利 | A kind of carburizing and quenching steel grinding textura epidermoidea and burn intelligent identification Method |
CN107633296A (en) * | 2017-10-16 | 2018-01-26 | 中国电子科技集团公司第五十四研究所 | A kind of convolutional neural networks construction method |
CN108932712A (en) * | 2018-06-22 | 2018-12-04 | 东南大学 | A kind of rotor windings quality detecting system and method |
CN109523518A (en) * | 2018-10-24 | 2019-03-26 | 浙江工业大学 | A kind of tire X-ray defect detection method |
CN109754442A (en) * | 2019-01-10 | 2019-05-14 | 重庆大学 | A kind of gear pitting corrosion detection system based on machine vision |
CN109858575A (en) * | 2019-03-19 | 2019-06-07 | 苏州市爱生生物技术有限公司 | Data classification method based on convolutional neural networks |
CN109934811A (en) * | 2019-03-08 | 2019-06-25 | 中国科学院光电技术研究所 | A kind of optical element surface defect inspection method based on deep learning |
CN110567985A (en) * | 2019-10-14 | 2019-12-13 | 重庆大学 | Self-adaptive gear pitting quantitative evaluation and detection device based on deep learning |
CN110660057A (en) * | 2019-11-01 | 2020-01-07 | 重庆大学 | Binocular automatic gear pitting detection device based on deep learning |
CN110738603A (en) * | 2018-07-18 | 2020-01-31 | 中国商用飞机有限责任公司 | image gray scale processing method, device, computer equipment and storage medium |
CN110807823A (en) * | 2019-11-13 | 2020-02-18 | 四川大学 | Image simulation generation method for dot matrix character printing effect |
CN111127533A (en) * | 2019-12-23 | 2020-05-08 | 魏志康 | Neural network-based multi-feature fusion grinding wheel grinding performance prediction method |
CN111260640A (en) * | 2020-01-13 | 2020-06-09 | 重庆大学 | Tree generator network gear pitting image measuring method and device based on cyclean |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111915572B (en) | Adaptive gear pitting quantitative detection system and method based on deep learning | |
CN111524137B (en) | Cell identification counting method and device based on image identification and computer equipment | |
CN111667455B (en) | AI detection method for brushing multiple defects | |
CN105678332B (en) | Converter steelmaking end point judgment method and system based on flame image CNN recognition modeling | |
CN111862064A (en) | Silver wire surface flaw identification method based on deep learning | |
Yogesh et al. | Computer vision based analysis and detection of defects in fruits causes due to nutrients deficiency | |
CN108564085B (en) | Method for automatically reading of pointer type instrument | |
CN114897816B (en) | Mask R-CNN mineral particle identification and particle size detection method based on improved Mask | |
CN113919442B (en) | Tobacco maturity state identification method based on convolutional neural network | |
CN113129281B (en) | Wheat stem section parameter detection method based on deep learning | |
CN113743421B (en) | Method for segmenting and quantitatively analyzing anthocyanin developing area of rice leaf | |
CN115830585A (en) | Port container number identification method based on image enhancement | |
CN117314826A (en) | Performance detection method of display screen | |
CN117830321A (en) | Grain quality detection method based on image recognition | |
CN117372332A (en) | Fabric flaw detection method based on improved YOLOv7 model | |
Sun et al. | A novel method for multi-feature grading of mango using machine vision | |
CN115471494A (en) | Wo citrus quality inspection method, device, equipment and storage medium based on image processing | |
CN117790353B (en) | EL detection system and EL detection method | |
CN116342556A (en) | Plateau tunnel potential safety hazard identification method based on thermal infrared remote sensing | |
CN118469953A (en) | Defect detection method, medium and system for potato low-temperature vacuum frying device | |
CN118351100A (en) | Image definition detection and processing method based on deep learning and gradient analysis | |
CN117036961A (en) | Intelligent monitoring method and system for crop diseases and insect pests | |
CN118446997A (en) | Method for detecting silk-screen defect of electronic product shell | |
Wang et al. | Filter collaborative contribution pruning method based on the importance of different-scale layers for surface defect detection | |
CN118333929A (en) | Method for processing multi-defect classification uncertainty |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||