CN111260640B - CycleGAN-based tree generator network gear pitting image measurement method and device - Google Patents


Info

Publication number
CN111260640B
CN111260640B (application number CN202010065989.XA)
Authority
CN
China
Prior art keywords: image, pitting, gear, tooth surface, generator
Prior art date
Legal status: Active (the status is an assumption, not a legal conclusion)
Application number
CN202010065989.XA
Other languages
Chinese (zh)
Other versions
CN111260640A (en)
Inventor
秦毅
王志文
陈伟伟
朱才朝
李川
Current Assignee: Chongqing University
Original Assignee: Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University
Publication of CN111260640A
Application granted
Publication of CN111260640B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06T3/608 Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20032 Median filtering
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a CycleGAN-based tree generator network method for measuring gear pitting images, and provides a corresponding device, belonging to the field of gear fault detection. The method comprises the following steps: S1: collecting gear pitting image information; S2: preprocessing the original image to eliminate environmental factors; S3: augmenting the gear pitting images with a tree generator network based on the cycle consistency loss of CycleGAN to generate a plurality of gear pitting images; S4: detecting the plurality of gear pitting images with a gear pitting detection algorithm to obtain the gear pitting grade. By means of a bottom-layer parameter sharing mechanism, the invention can train the tree-shaped generator and the discriminator effectively; training the tree-shaped generator drives each of its branches to learn a different data mode, while the reconstructor maps generated samples back to the original domain, so that diverse samples can be generated while the style conversion is preserved.

Description

CycleGAN-based tree generator network gear pitting image measurement method and device
Technical Field
The invention belongs to the field of gear fault detection, and relates to a CycleGAN-based method and device for measuring gear pitting images with a tree generator network.
Background
Gear fault diagnosis plays an important role in gear life prediction. Pitting is a common gear failure mode, and its measurement bears on the normal operation of the entire geared machine, so the importance of gear fault diagnosis is self-evident. However, because data sets of gear pitting images are scarce, the accuracy of gear pitting measurement is often poor when evaluated.
Generative adversarial networks (GANs) are a class of deep generative models that have emerged in recent years and have been applied successfully to many tasks (Goodfellow, 2016), such as image and video generation, image inpainting, semantic segmentation, image-to-image translation, and text-to-image synthesis. In game-theoretic terms, the model is a two-player minimax game between a discriminator and a generator: the generator aims to produce samples similar to those in the training data, while the discriminator tries to distinguish the two kinds of samples (Goodfellow, 2014). Training a GAN remains challenging, however, because the generator may focus on producing samples that lie on only a few modes, which easily leads to the mode collapse problem (Goodfellow, 2016).
Recently, many GAN variants have been proposed to address this problem, and they fall into two broad categories: training a single generator or training multiple generators. For the former, the methods include modifying the objective of the discriminator (Metz et al.), modifying the objective of the generator (Warde-Farley, 2016), or adding extra discriminators that provide more useful gradient signals to the generator (Nguyen et al., 2017; Durugkar et al., 2016). The common theme of these variants is that, at equilibrium, the generator can recover the data distribution, but convergence is difficult to achieve in practice.
Recent attempts to resolve mode collapse by modifying the discriminator include minibatch discrimination (Salimans et al., 2016), unrolled GANs (Metz et al., 2016) and denoised feature matching (DFM) (Warde-Farley and Bengio, 2016). The idea of minibatch discrimination is to allow the discriminator to detect samples that are conspicuously similar to other generated samples; although this approach can produce visually appealing samples, it is computationally expensive and is therefore typically applied only to the last hidden layer of the discriminator. The unrolled GAN improves learning by unrolling the computational graph to include additional optimization steps for the discriminator; this effectively reduces mode collapse, but the unrolling step is costly and does not scale to large datasets. DFM augments the generator's objective with a denoising autoencoder (DAE) that minimizes the reconstruction error of the discriminator's penultimate-layer activations, the idea being that the gradient signal from the DAE can guide the generator toward samples whose activations are close to those of real data; DFM is surprisingly effective at avoiding mode collapse, but a deep DAE adds considerable computational cost to the model.
Another approach is to train additional discriminators. D2GAN (Nguyen et al., 2017) uses two discriminators to minimize the KL and reverse-KL divergences, thereby covering the data modes more evenly. Although this approach can mitigate mode collapse to some extent, its gains do not exceed those of DFM.
Yet another approach is to train multiple generators. MIX+GAN, based on the min-max theorem, trains a mixture of several generators and discriminators in the minimax game, with the total reward of the strategy computed as the weighted average reward over all generators and discriminators; the lack of parameter sharing makes training with this approach computationally expensive. Our idea here is inspired by MAD-GAN (Ghosh et al., 2017), which trains multiple generators against a discriminator acting as a multi-class classifier. To counter mode collapse, that paper augments the generators' objective with a user-defined similarity-based function that encourages different generators to produce different samples, and additionally modifies the discriminator's objective to separate the samples of each generator, further pushing different generators toward different modes.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for measuring gear pitting images with a tree generator network based on CycleGAN.
In order to achieve the purpose, the invention provides the following technical scheme:
In one aspect, the invention provides a CycleGAN-based tree generator network gear pitting image measurement method, comprising the following steps:
S1: collecting gear pitting image information;
S2: preprocessing the original image to eliminate environmental factors;
S3: augmenting the gear pitting images with a tree generator network based on the cycle consistency loss of CycleGAN, generating a plurality of gear pitting images;
S4: detecting the plurality of gear pitting images with a gear pitting detection algorithm to obtain the gear pitting grade.
Further, in step S1, gear pitting image information is acquired by a gear pitting acquisition device, which includes a visual detection module, an image processing module, an illumination module, a gear clamp module, and a workbench for mounting the modules;
The gear clamp module clamps and fixes the gear with a three-jaw chuck; the visual detection module acquires pitting image information of each tooth surface of the gear; the image processing module extracts the effective working tooth surface from the pitting image information and obtains the pitted portions by threshold segmentation; and the illumination module provides a light source while the visual detection module acquires the pitting image information.
Further, in step S2, the image preprocessing includes image enhancement and tooth surface tilt correction;
the image enhancement includes: adopting a median filtering algorithm to count and sort the gray values of all pixel points in the tooth surface image in the field taking the point as the center, and determining the sorted median as the processed gray value of the point;
the tooth surface tilt correction includes: tooth surface inclination correction based on Radon transformation comprises the following specific implementation steps:
1) Carrying out enhancement processing on horizontal lines in the image by utilizing edge detection;
2) Calculating Radon transformation of the image to obtain an inclination angle;
3) Correcting the inclination of the tooth surface image according to the inclination angle;
the Radon transform mathematical expression is:
Figure SMS_1
wherein f (x, y) represents a certain pixel point in the image matrix, theta represents the angle of Radon transformation of the corrected image, and R represents the image after Radon transformation.
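The three correction steps can be sketched as follows; the nearest-neighbour resampling of the rotated grid and the variance score over row sums are implementation assumptions standing in for a full Radon transform:

```python
import numpy as np

def radon_row_sums(img, theta_deg):
    # Discrete line integrals of `img` after rotating the sampling grid
    # by theta degrees about the image centre (nearest-neighbour).
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.rint(np.cos(t) * (xs - cx) - np.sin(t) * (ys - cy) + cx).astype(int)
    yr = np.rint(np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy).astype(int)
    ok = (xr >= 0) & (xr < w) & (yr >= 0) & (yr < h)
    rotated = np.zeros((h, w))
    rotated[ys[ok], xs[ok]] = img[yr[ok], xr[ok]]
    return rotated.sum(axis=1)  # integrate along rows

def estimate_tilt(img, angles):
    # Horizontal structure lines up with a single row exactly at the tilt
    # angle, which maximises the variance of the row sums.
    scores = [np.var(radon_row_sums(img, a)) for a in angles]
    return angles[int(np.argmax(scores))]
```

Once the tilt angle is estimated, the tooth surface image is rotated back by that angle to complete the correction.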
Further, in the tree generator network based on cycle consistency loss (CycleGAN) of step S3, X and Y denote data sets from different image domains. Every branch of the mapping G: X → Y is trained simultaneously; the goal is to learn G: X → Y such that, under the adversarial loss, the distribution of the multiple images from G(x) is indistinguishable from the distribution of real Y images, with the lower layers of the generator sharing parameters. For the inverse mapping F: Y → X, a cycle consistency loss is introduced, and all layers except the first (input) layer share parameters. This yields a new adversarial structure: a multi-branch generator and the discriminators form a minimax game, the generator G: X → Y produces an image distribution approaching the real Y domain, the reconstructor maps G(x) back to the real X-domain image distribution through the inverse mapping, and the discriminator judges whether a sample is real data or was produced by the generator.
Further, the CycleGAN-based tree generator network comprises two families of mappings in two broad directions, G: X → Y_i and F: Y → X_i, so for the generator and discriminator pairs {(G_m, D_Y), (F_m, D_X)} the adversarial loss becomes:

L_trees-GAN(G_m, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G(x)_m))]

For each image x of the X domain, the image style cycle must bring x back to its original image domain, i.e. x → G(x)_m → F(G(x)_m) ≈ x; likewise, for each image y of the Y domain, y → F(y)_m → G(F(y)_m) ≈ y. This behaviour is encouraged by the cycle consistency loss:

L_cycle(G_m, F_m) = E_{x~p_data(x)}[||F(G(x)_m) − x||_1] + E_{y~p_data(y)}[||G(F(y)_m) − y||_1]

The proposed tree generator model is trained by minimax optimization of the full objective:

L_trees-cyclegan = L_trees-GAN(G_m, D_Y, X, Y) + L_trees-GAN(F_m, D_X, Y, X) + λ·L_cycle(G_m, F_m).
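The loss terms above can be sketched numerically. The linear maps G and F below are illustrative stand-ins for one generator/reconstructor branch pair, not the patent's networks, and the cycle weight `lam` is an assumed value:

```python
import numpy as np

def adversarial_term(d_real, d_fake):
    # One direction of the adversarial loss: E[log D_Y(y)] + E[log(1 - D_Y(G(x)_m))],
    # with d_real / d_fake the discriminator scores in (0, 1).
    return np.log(d_real).mean() + np.log1p(-d_fake).mean()

def cycle_loss(x, y, G, F):
    # L1 error of the full cycles x -> G(x) -> F(G(x)) and y -> F(y) -> G(F(y)).
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Toy stand-ins: if F exactly inverts G, the cycle term vanishes.
G = lambda x: 2.0 * x
F = lambda y: 0.5 * y
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0])
lam = 10.0  # weight on the cycle term (assumed value)
total = adversarial_term(np.array([0.9]), np.array([0.1])) + lam * cycle_loss(x, y, G, F)
```

The full objective sums one adversarial term per direction plus the weighted cycle term, exactly as in the equation above.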
Further, step S4 specifically includes the following steps:
S41: segmenting the effective tooth surface of the gear: perform a color space transformation on the image from RGB to YCrCb to generate a binary matrix for clustering, then cluster the image; perform edge detection with the Roberts differential operator to locate the approximate region of the tooth surface; then segment the image, separating the tooth surface portion from the background. The edge segmentation uses the Roberts differential operator, approximating the gradient operator with the vertical and horizontal differences of the image, namely:

g(x, y) = |f(x, y) − f(x−1, y)| + |f(x, y) − f(x, y−1)|

where f(x, y) denotes a pixel of the image matrix, and f(x−1, y) and f(x, y−1) are the pixels in its vertical and horizontal neighbourhoods, respectively;
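A sketch of this difference scheme (note that the classical Roberts operator uses diagonal differences; the form below follows the vertical/horizontal expression given in the text):

```python
import numpy as np

def diff_gradient(f):
    # g(x, y) = |f(x,y) - f(x-1,y)| + |f(x,y) - f(x,y-1)|
    g = np.zeros(f.shape)
    g[1:, 1:] = (np.abs(f[1:, 1:] - f[:-1, 1:]) +
                 np.abs(f[1:, 1:] - f[1:, :-1]))
    return g
```

The response is nonzero only where the intensity changes, so thresholding g localizes the tooth surface contour.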
After the gear tooth surface image is obtained, the working tooth surface is further separated from it, and the segmentation is refined with morphological erosion and dilation: binarize the tooth surface image; then apply morphological processing, first dilating and then eroding to fill small holes and connect the tooth surface regions, and first eroding and then dilating to eliminate small, meaningless objects in the binary image; finally, find the largest connected region of what remains after morphological processing, form its bounding rectangle, and crop the original image with the rectangle's position to obtain the effective working tooth surface portion of the tooth surface image;
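The dilate-then-erode (closing) and erode-then-dilate (opening) operations can be sketched in plain NumPy; the 3 x 3 structuring element is an assumption, as the patent does not specify it:

```python
import numpy as np

def dilate(b):
    # 3x3 binary dilation: OR of the 8-neighbourhood shifts.
    p = np.pad(b, 1, constant_values=False)
    out = np.zeros_like(b)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

def erode(b):
    # 3x3 binary erosion: AND of the 8-neighbourhood shifts.
    p = np.pad(b, 1, constant_values=True)
    out = np.ones_like(b)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
    return out

closing = lambda b: erode(dilate(b))   # fill small holes, connect regions
opening = lambda b: dilate(erode(b))   # remove small, meaningless objects
```

Closing fills a one-pixel hole inside a solid region, while opening deletes an isolated speck, which is exactly the cleanup described above.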
S42: segmenting the pitting image with a U-Net network: the U-Net comprises convolution layers, max pooling layers (downsampling), deconvolution layers (upsampling) and ReLU nonlinear activations. The network upsamples 4 times in total and connects corresponding convolution and deconvolution stages, instead of supervising and backpropagating the loss on the high-level semantic features alone; this ensures that the finally recovered feature map fuses more shallow-layer information. Fusing features from different convolution layers enables multi-scale prediction and deep supervision, and the 4 upsampling steps also make the recovered segmentation details, such as edges, finer. By construction, U-Net combines bottom-layer and upper-layer information. Bottom-layer (deep) information: the low-resolution features after repeated downsampling, which provide contextual semantics of the segmented object within the whole image, i.e. features reflecting the relationship between the object and its environment that help judge the object's class. Upper-layer (shallow) information: high-resolution features passed directly from the encoder to the decoder of the same level through skip connections, which provide finer features, such as gradients, for the segmentation.
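The skip-connection data flow described above can be illustrated with array shapes alone; the maps below are shape-level stand-ins (max pooling for downsampling, nearest-neighbour repetition for deconvolution), not the trained network:

```python
import numpy as np

def maxpool2(x):
    # 2x2 max pooling (downsampling): deep, low-resolution context.
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    # Nearest-neighbour upsampling, standing in for deconvolution.
    return x.repeat(2, axis=0).repeat(2, axis=1)

feat = np.random.rand(8, 8)    # shallow, high-resolution feature map
deep = maxpool2(feat)          # (4, 4): contextual semantics
up = upsample2(deep)           # back to (8, 8)
fused = np.stack([feat, up])   # skip connection: channel concat, (2, 8, 8)
```

Each decoder stage of the U-Net fuses the upsampled deep features with the same-resolution encoder features in exactly this channel-wise fashion.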
The pitted portions of the working tooth surface are thereby detected; the ratio of the pitted area to the whole tooth surface area is calculated, and the pitting grade of the gear is determined. The pitted portions are distinguished from the rest of the working tooth surface and extracted by adaptive threshold segmentation and clustering segmentation, and the pitting ratio is computed from the number of pitted pixels and the number of pixels in the whole working area.
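The final ratio and grading step can be sketched as follows; the grade thresholds are hypothetical, since the patent text does not publish its grading scale:

```python
import numpy as np

def pitting_ratio(pit_mask, work_mask):
    # Pitted pixels as a fraction of the effective working tooth surface.
    work = np.count_nonzero(work_mask)
    pits = np.count_nonzero(pit_mask & work_mask)
    return pits / work if work else 0.0

def pitting_grade(ratio, thresholds=(0.02, 0.05, 0.10)):
    # Hypothetical grade bands: grade = number of thresholds reached.
    return sum(ratio >= t for t in thresholds)
```

Counting pixels inside the working tooth surface mask only, rather than the whole image, keeps the ratio independent of the background area retained after cropping.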
In another aspect, the invention provides a CycleGAN-based tree generator network gear pitting image measuring device, comprising a visual detection module, an image processing module, an illumination module, a gear clamp module, and a workbench on which the modules are mounted;
the gear clamp module is used for clamping a fixed gear and comprises a third slide rail fixedly arranged on the workbench and a gear clamp arranged in the third slide rail in a sliding manner; the gear clamp comprises a rotating motor and a three-jaw chuck, the three-jaw chuck is arranged on an output shaft of the rotating motor, and a gear is clamped by the three-jaw chuck and driven to rotate by the torque of the output shaft of the rotating motor;
the visual detection module is used for acquiring pitting image information of each tooth surface of the gear and comprises a fourth slide rail fixedly arranged on the workbench and a CCD camera slidably arranged on the fourth slide rail; the vision detection module further comprises a fifth slide rail, one end of the fifth slide rail is arranged in the fourth slide rail in a sliding mode, and the CCD camera is arranged on the fifth slide rail through a first rotating block and a first rotating shaft;
the image processing module preprocesses the acquired tooth surface pitting image information, augments the gear pitting images with a tree generator network based on the cycle consistency loss of CycleGAN to generate a plurality of gear pitting images, and detects those images with a gear pitting detection algorithm to obtain the gear pitting grade;
the illumination module provides a light source while the visual detection module acquires the pitting image information and is arranged coplanar with the rotating shaft of the gear; the illumination module comprises a first slide rail fixed on the workbench, a light source clamp slidably arranged on the first slide rail, and a light source held by that clamp; it further comprises a second slide rail, one end of which is slidably arranged on the first slide rail, with the light source clamp slidably arranged on the second slide rail.
Further, the image processing module is used for preprocessing the acquired tooth surface pitting image information and comprises the following contents:
image enhancement: using a median filtering algorithm, the grey values of all pixels in the neighbourhood centred on each point of the tooth surface image are collected and sorted, and the median of the sorted values is taken as the processed grey value of that point;
correcting the inclination of the tooth surface: tooth surface tilt correction based on Radon transform comprising:
1) Carrying out enhancement processing on horizontal lines in the image by utilizing edge detection;
2) Calculating Radon transformation of the image to obtain an inclination angle;
3) Correcting the inclination of the tooth surface image according to the inclination angle;
the Radon transform mathematical expression is:
Figure SMS_3
wherein, f (x, y) represents a certain pixel point in the image matrix, theta represents the angle of Radon transformation of the corrected image, and R represents the value in the transformation matrix.
Further, the image processing module augments the gear pitting images with a tree generator network based on the cycle consistency loss of CycleGAN to generate a plurality of gear pitting images, and specifically:
X and Y denote data sets from different image domains. Every branch of the mapping G: X → Y is trained simultaneously; the goal is to learn G: X → Y such that, under the adversarial loss, the distribution of the multiple images from G(x) is indistinguishable from the distribution of real Y images, with the lower layers of the generator sharing parameters. For the inverse mapping F: Y → X, a cycle consistency loss is introduced, and all layers except the first (input) layer share parameters. This yields a new adversarial structure: a multi-branch generator and the discriminators form a minimax game, the generator G: X → Y produces an image distribution approaching the real Y domain, the reconstructor maps G(x) back to the real X-domain image distribution through the inverse mapping, and the discriminator judges whether a sample is real data or was produced by the generator;
the CycleGAN-based tree generator network comprises two families of mappings in two broad directions, G: X → Y_i and F: Y → X_i, so for the generator and discriminator pairs {(G_m, D_Y), (F_m, D_X)} the adversarial loss becomes:

L_trees-GAN(G_m, D_Y, X, Y) = E_{y~p_data(y)}[log D_Y(y)] + E_{x~p_data(x)}[log(1 − D_Y(G(x)_m))]

For each image x of the X domain, the image style cycle must bring x back to its original image domain, i.e. x → G(x)_m → F(G(x)_m) ≈ x; likewise, for each image y of the Y domain, y → F(y)_m → G(F(y)_m) ≈ y. This behaviour is encouraged by the cycle consistency loss:

L_cycle(G_m, F_m) = E_{x~p_data(x)}[||F(G(x)_m) − x||_1] + E_{y~p_data(y)}[||G(F(y)_m) − y||_1]

The proposed tree generator model is trained by minimax optimization of the full objective:

L_trees-cyclegan = L_trees-GAN(G_m, D_Y, X, Y) + L_trees-GAN(F_m, D_X, Y, X) + λ·L_cycle(G_m, F_m).
Further, the image processing module detects the multiple gear pitting images with a gear pitting detection algorithm to obtain the gear pitting grade, and specifically:
segmenting the effective tooth surface of the gear: perform a color space transformation on the image from RGB to YCrCb to generate a binary matrix for clustering, then cluster the image; perform edge detection with the Roberts differential operator to locate the approximate region of the tooth surface; then segment the image, separating the tooth surface portion from the background. The edge segmentation uses the Roberts differential operator, approximating the gradient operator with the vertical and horizontal differences of the image, namely:

g(x, y) = |f(x, y) − f(x−1, y)| + |f(x, y) − f(x, y−1)|

where f(x, y) denotes a pixel of the image matrix, and f(x−1, y) and f(x, y−1) are the pixels in its vertical and horizontal neighbourhoods, respectively;
after the gear tooth surface image is obtained, the working tooth surface is further separated from it, and the segmentation is refined with morphological erosion and dilation: binarize the tooth surface image; then apply morphological processing, first dilating and then eroding to fill small holes and connect the tooth surface regions, and first eroding and then dilating to eliminate small, meaningless objects in the binary image; finally, find the largest connected region of what remains after morphological processing, form its bounding rectangle, and crop the original image with the rectangle's position to obtain the effective working tooth surface portion of the tooth surface image;
segmenting the pitting image with a U-Net network: the U-Net comprises convolution layers, max pooling layers (downsampling), deconvolution layers (upsampling) and ReLU nonlinear activations; it upsamples 4 times in total and connects corresponding convolution and deconvolution stages. The pitted portions of the working tooth surface are thereby detected, the ratio of the pitted area to the whole tooth surface area is calculated, and the pitting grade of the gear is determined; the pitted portions are distinguished from the rest of the working tooth surface and extracted by adaptive threshold segmentation and clustering segmentation, and the pitting ratio is computed from the number of pitted pixels and the number of pixels in the whole working area.
The invention has the following beneficial effects: by means of a bottom-layer parameter sharing mechanism, the tree-shaped generator and the discriminator can be trained effectively; training the tree-shaped generator drives each of its branches to learn a different data mode, while the reconstructor maps generated samples back to the original domain, so that diverse samples can be generated while the style conversion is preserved.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For a better understanding of the objects, aspects and advantages of the present invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a generator and reconstructor of a tree generator generation model based on cyclic consistency loss;
FIG. 2 is a schematic diagram of the structure of the discriminator of the tree generator model based on cycle consistency loss;
FIG. 3 is a schematic diagram of a U-net network architecture;
FIG. 4 is a schematic diagram of the overall structure of the CycleGAN-based tree generator network gear pitting image measuring device;
FIG. 5 is a schematic structural diagram of an illumination module;
FIG. 6 is a schematic structural diagram of a visual inspection module;
FIG. 7 is a schematic view of a camera fixture;
FIG. 8 is a schematic view of a gear clamp;
FIG. 9 is a schematic view of a gear clamp;
FIG. 10 is a schematic diagram of the module connections and functions of the CycleGAN-based tree generator network gear pitting image measuring device.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided only to illustrate the invention and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by the terms "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not intended to indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and therefore the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limiting the present invention, and the specific meaning of the terms described above will be understood by those skilled in the art according to the specific circumstances.
On one hand, the invention provides a tree generator network gear pitting image measuring method based on cyclegan, which comprises the following steps:
gear pitting images are collected with the gear pitting acquisition equipment in our laboratory; 600 samples are collected in total and divided into two types according to gear category. The pitting images are acquired mainly by a visual detection device, which comprises a light source with its matched clamp, a Charge Coupled Device (CCD) camera with its matched clamp, and a gear clamp. The gear is positioned and mounted on a three-jaw chuck, and these devices cooperate to acquire pitting image information for each tooth surface of the gear. All gear pitting images come from gears that developed fatigue pitting in the experiments. Consistent with actual working conditions, the number of pitted tooth surfaces per gear is generally between 1 and 10. The pitting grades of different tooth surfaces differ greatly: on tooth surfaces with fatigue pitting, the pitting area accounts for 2% to 10% of the effective working tooth surface area, and some tooth surfaces also exhibit other failure modes, such as wear, alongside the pitting.
After the CCD camera shoots the tooth surface image, the collected image is analyzed by image processing methods to realize automatic detection of the gear pitting grade. The raw image transmitted to the computer contains the pitted tooth surface information together with unnecessary background, such as the gearbox and other tooth surfaces. In the subsequent recognition and detection these would add needless computation or degrade detection accuracy, so the original image must be preprocessed to eliminate environmental factors before recognition and detection. The image preprocessing consists mainly of image enhancement and tooth surface inclination correction.
The main purpose of image enhancement is to improve the quality and recognizability of an image, making it more useful for viewing or further analysis. Image enhancement techniques generally highlight or strengthen certain features of an image, such as edge information, contour information and contrast, so as to better display its useful information and improve its use value. When the tooth surface image is collected, factors such as ambient brightness and changing illumination conditions introduce noise into the image. The invention adopts a median filtering algorithm: for any pixel in the tooth surface image, the gray values of all pixels in the neighborhood centered on that point are counted and sorted, and the sorted median is taken as the processed gray value of that point. After median filtering and noise reduction, the contrast of the image is enhanced and adjusted by gray-scale transformation, making the edge information more obvious. Since different shooting angles exist during image acquisition, tooth surface inclination correction must also be carried out on the image. Tooth surface inclination correction based on the Radon transform comprises the following steps: 1) strengthen the horizontal lines in the image using edge detection; 2) compute the Radon transform of the image to obtain the inclination angle; 3) correct the inclination of the tooth surface image according to the inclination angle. The Radon transform mathematical expression is:
R(ρ, θ) = ∬ f(x, y) δ(ρ − x·cosθ − y·sinθ) dx dy
wherein f (x, y) represents a certain pixel in the image matrix, θ represents the angle of Radon transformation of the corrected image, and R represents the value in the transformation matrix.
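The median-filtering step described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation; the 3×3 window size and edge-replication padding are assumptions.

```python
import numpy as np

def median_filter3x3(img):
    """3x3 median filter: for each pixel, sort the gray values of the
    3x3 neighborhood centered on it and keep the middle value
    (border pixels use replicated-edge padding)."""
    padded = np.pad(img, 1, mode="edge")
    # collect the 9 shifted views of each pixel's neighborhood
    stack = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# a flat 100-gray image with one impulse-noise ("salt") pixel
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
out = median_filter3x3(img)   # the impulse is removed entirely
```

Because the noisy pixel is a single outlier among nine sorted values, the median discards it, which is exactly why median filtering suits impulse noise better than mean filtering.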
And analyzing the tooth surface image by using a tree generator network based on cycle consistency loss cyclegan to realize automatic detection of the gear pitting level.
First, the establishment of a generative adversarial network (GAN) based on adversarial training is described. Training a GAN minimizes the JS-divergence between the real data distribution Pdata(x) and the model's data distribution. The model is learned through an adversarial game between the generator network G(z) and the discriminator network D(x), where G learns a mapping G: Z → X. The generator learns to approximate the data distribution Pdata(x), producing from a source noise signal z samples x' = G(z) that are indistinguishable from real samples x, while the discriminator learns to distinguish real data x from generated samples x'; the two are trained by minimax optimization of the adversarial loss.
min_G max_D L_GAN(G, D) = E_{x~Pdata(x)}[log D(x)] + E_{z~Pz(z)}[log(1 − D(G(z)))]
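As a hypothetical numeric illustration of this adversarial value function, it can be evaluated for sampled discriminator outputs; the toy values below are assumptions, not data from the patent.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] estimated from
    sampled discriminator outputs, each in the open interval (0, 1)."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# at the GAN equilibrium the discriminator is maximally confused and
# outputs 0.5 everywhere, giving V = 2 * log(0.5) = -2 * log(2)
d_real = np.full(4, 0.5)
d_fake = np.full(4, 0.5)
v = gan_value(d_real, d_fake)
```

The generator lowers this value by pushing D(G(z)) toward 1, while the discriminator raises it by separating the two sample sets.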
For adaptation between the parallel domains X and Y, the conditional GAN (CGAN) was proposed; it uses a generator to learn the mapping function G: X → Y directly by minimizing the parallel conditional adversarial loss L_P-CGAN:
L_P-CGAN(G, D) = E_{(x,y)~Pdata(x,y)}[log D(x, y)] + E_{x~Pdata(x)}[log(1 − D(x, G(x)))]
where D is a conditional discriminator. To apply the CGAN to adaptation between non-parallel domains X and Y, CycleGAN with a cycle consistency loss was proposed. It has two conditional generators, G_X: X → Y and G_Y: Y → X, each trained adversarially against its own discriminator, D_Y and D_X respectively; that is, there are two non-parallel conditional adversarial losses, L_NP-CGAN(G_X, D_Y) and L_NP-CGAN(G_Y, D_X), defined as follows:
L_NP-CGAN(G_X, D_Y) = E_{y~Pdata(y)}[log D_Y(y)] + E_{x~Pdata(x)}[log(1 − D_Y(G_X(x)))]

L_NP-CGAN(G_Y, D_X) = E_{x~Pdata(x)}[log D_X(x)] + E_{y~Pdata(y)}[log(1 − D_X(G_Y(y)))]
in the non-parallel case, the goal is to find correct pseudo-pairs (x, y) across the X and Y domains without supervision. To ensure that G_X: X → Y and G_Y: Y → X can learn such mapping functions, CycleGAN minimizes a cycle consistency loss with L1 regularization:
L_cycle(G_X, G_Y) = E_{x~Pdata(x)}[||G_Y(G_X(x)) − x||_1] + E_{y~Pdata(y)}[||G_X(G_Y(y)) − y||_1]
Thus CycleGAN combines the two losses above into one objective, L_CycleGAN, learning the unsupervised mapping functions between the X and Y domains through minimax optimization of
L_CycleGAN = L_NP-CGAN(G_X, D_Y) + L_NP-CGAN(G_Y, D_X) + λ·L_cycle(G_X, G_Y)
The invention provides a tree generator generative model based on cycle consistency loss, which effectively solves the problem of sample uniformity in CycleGAN generation results. Here X and Y respectively represent data sets in different image domains; the purpose of the model is to generate samples that differ from one another during style conversion, an idea inspired by GMAN. Unlike GMAN's multiple independent generators, however, each branch of the mapping G: X → Y is trained simultaneously; their common goal is to learn a mapping G: X → Y such that the distributions of the multiple images from G(X) are indistinguishable from the distribution of real Y images under the adversarial loss, with parameters shared at the lower layers of the generator. For the inverse mapping F: Y → X a cycle consistency loss is introduced, and all layers except the first input layer share parameters, which greatly reduces the computational cost of the model. The result is a new adversarial structure: a minimax game is formed among the multi-branch generators and the discriminators, where the generator G (X → Y) produces image distributions approaching the real Y domain, the reconstructor maps G(X) back to the image distribution of the real X domain through the inverse mapping, and the discriminator judges whether a generated sample is real data or produced by the generator. This model is called the generator-tree CycleGAN. The model comprises two families of mapping functions: G: X → Y_i and F: Y → X_i, so the loss for the generators and discriminators {(G_m, D_X), (F_m, D_Y)} becomes:
L_trees-GAN(G_m, D_Y, X, Y) = E_{y~Pdata(y)}[log D_Y(y)] + E_{x~Pdata(x)}[log(1 − D_Y(G(x)_m))]
for each image x in the X domain, the image-style cycle must return x to its original image domain, i.e., x → G(x)_m → F(G(x)_m) ≈ x; this is the forward cycle consistency proposed earlier. Similarly, for each image y in the Y domain, y → F(y)_m → G(F(y)_m) ≈ y. This behavior is encouraged with a cycle consistency loss, as follows:
L_cycle(G_m, F_m) = E_{x~Pdata(x)}[||F(G(x)_m) − x||_1] + E_{y~Pdata(y)}[||G(F(y)_m) − y||_1]
the proposed tree generator model is trained by minimax optimization of the overall objective:
L_trees-cyclegan = L_trees-GAN(G_m, D_Y, X, Y) + L_trees-GAN(F_m, D_X, Y, X) + λ·L_cyc(G_m, F_m)
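The cycle consistency term in the objective above can be illustrated with a toy one-dimensional sketch; the linear "generators" below are hypothetical stand-ins for the image-to-image mappings, chosen so that F is the exact inverse of G and the loss vanishes.

```python
import numpy as np

# toy 1-D "generators": G maps domain X to Y, F maps Y back to X.
G = lambda x: 2.0 * x + 1.0        # hypothetical forward mapping
F = lambda y: (y - 1.0) / 2.0      # its exact inverse

def cycle_loss(xs, ys):
    """L_cycle = E_x ||F(G(x)) - x||_1 + E_y ||G(F(y)) - y||_1."""
    fwd = np.mean(np.abs(F(G(xs)) - xs))   # forward cycle x -> G -> F
    bwd = np.mean(np.abs(G(F(ys)) - ys))   # backward cycle y -> F -> G
    return fwd + bwd

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 5.0])
loss = cycle_loss(xs, ys)   # exact inverses -> loss is 0
```

During training the mappings are neural networks rather than closed-form inverses, and this term penalizes any sample the cycle fails to return to its original domain.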
fig. 1 shows a generator and reconstructor part of a tree generator generation model based on a cycle consistency loss, and fig. 2 shows a discriminator part.
After a new set of gear pitting samples is obtained, the tooth surface must be further segmented to obtain the effective working tooth surface. The color space of the image is transformed from RGB to YCrCb to generate a clustered binary matrix; the image is clustered, edge detection is performed with differential operators such as Roberts, the approximate region of the tooth surface is located, and image segmentation then separates the tooth surface portion from the background. This method uses an image segmentation algorithm based on edge detection. The gray values of pixels on the boundary between different regions usually vary sharply; if the image is Fourier-transformed from the spatial domain to the frequency domain, edges correspond to the high-frequency part, which is the basis of edge detection algorithms. Edge segmentation is performed with the Roberts differential operator, which approximates the gradient operator using the vertical and horizontal differences of the image, i.e.:
g(x, y) = |f(x, y) − f(x−1, y)| + |f(x, y) − f(x, y−1)|
where f (x, y) represents a certain pixel in the image matrix, and f (x-1, y) and f (x, y-1) are two pixels of its vertical and horizontal neighborhoods, respectively.
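A minimal numpy sketch of this vertical-plus-horizontal difference approximation (an illustration of the formula above, not the patent's code; the step-edge test image is an assumption):

```python
import numpy as np

def edge_strength(img):
    """Approximate the gradient magnitude at each interior pixel as
    |f(x,y) - f(x-1,y)| + |f(x,y) - f(x,y-1)|; the first row and
    column, which lack an upper/left neighbor, are left at zero."""
    f = img.astype(float)
    g = np.zeros_like(f)
    g[1:, 1:] = np.abs(f[1:, 1:] - f[:-1, 1:]) + np.abs(f[1:, 1:] - f[1:, :-1])
    return g

# a vertical step edge: left half gray 0, right half gray 200
img = np.zeros((4, 6))
img[:, 3:] = 200
g = edge_strength(img)   # responds only along the step column
```

Flat regions on either side of the step produce zero response, so thresholding g directly yields the edge pixels used to locate the tooth surface region.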
However, the edge detection algorithm cannot guarantee the continuity and closedness of edges: a large number of broken edges appear in high-detail areas, making it difficult to form large regions, yet such areas should not be divided into small fragments. Morphological image processing is superior to differential-operation edge extraction for extracting edge information: it is not as sensitive to noise as differential algorithms, the extracted edges are smooth, and the extracted image skeleton is more continuous with fewer breakpoints. Therefore, the segmentation of the effective working tooth surface of the gear is realized by combining the edge segmentation algorithm with morphological processing of the image.
After the tooth surface partial image of the gear is obtained, the working tooth surface is further separated from it, and the segmentation is refined by morphological erosion and dilation. The tooth surface partial image is first binarized. Morphological processing is then applied: dilation followed by erosion fills small holes and connects the parts of the tooth surface image, while erosion followed by dilation eliminates small, meaningless objects in the binary image. The maximum connected region of the remainder after morphological processing is found to form a rectangular frame, and the original image is segmented using the position information of this rectangular frame, thereby obtaining the effective working tooth surface portion of the tooth surface image. Effective working tooth surface segmentation is the prerequisite for gear pitting calculation; it removes the interference of background information and is performed automatically, with high speed and high precision.
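The close/open/largest-connected-region pipeline described above can be sketched with scipy.ndimage (a sketch assuming scipy is available; the toy binary mask and the default 3×3 cross structuring element are illustrative assumptions, not the patent's parameters):

```python
import numpy as np
from scipy import ndimage

# toy binary tooth-surface mask: one large blob with a hole, one speck
mask = np.zeros((12, 12), dtype=bool)
mask[2:9, 2:10] = True
mask[5, 5] = False            # small hole inside the tooth region
mask[10, 11] = True           # isolated noise pixel

closed = ndimage.binary_dilation(mask)
closed = ndimage.binary_erosion(closed)     # close: fill small holes
opened = ndimage.binary_erosion(closed)
opened = ndimage.binary_dilation(opened)    # open: drop small specks

# locate the maximum connected region and its bounding rectangle
labels, n = ndimage.label(opened)
sizes = ndimage.sum(opened, labels, range(1, n + 1))
largest = labels == (np.argmax(sizes) + 1)
rows, cols = np.nonzero(largest)
box = (rows.min(), rows.max(), cols.min(), cols.max())  # crop rectangle
```

The resulting `box` plays the role of the rectangular frame used to crop the effective working tooth surface out of the original image.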
The Convolutional Neural Network (CNN) is widely applied in image classification and image detection. Its advantage is that its multi-layer structure can automatically learn features at multiple levels: shallower convolutional layers have smaller receptive fields and learn features of local regions, while deeper convolutional layers have larger receptive fields and can learn more abstract features. These abstract features are less sensitive to the size, position and orientation of objects, which helps improve recognition performance.
On the basis of the CNN, Jonathan Long et al. of UC Berkeley proposed the fully convolutional network (FCN) for image segmentation. The FCN converts the fully connected layers of a conventional CNN into convolutional layers one by one, and recovers the class to which each pixel belongs from the abstract features, i.e., it extends classification from the image level to the pixel level.
The U-Net neural network is a CNN-based image segmentation network that inherits the idea of the FCN with continued optimization and improvement; its network structure is U-shaped, hence the name U-Net. The U-Net network is used mainly for medical image segmentation: it was originally proposed for cell wall segmentation and has performed excellently in lung nodule detection, retinal fundus vessel extraction and the like. The basic U-Net network structure is shown in fig. 3 and consists mainly of convolutional layers, max pooling layers (downsampling), deconvolution layers (upsampling) and ReLU nonlinear activation functions.
The U-Net network performs upsampling 4 times in total and adds skip connections between corresponding convolution and deconvolution stages, rather than directly supervising and back-propagating the loss on the high-level semantic features alone. This ensures that the finally recovered feature map fuses more shallow-level information, and fusing features from different convolutional layers enables multi-scale prediction and deep supervision; the 4 upsampling steps also make the recovered edges and other details of the segmentation map more precise. Through this structure, U-Net combines bottom-level and top-level information. Bottom-level (deep) information is the low-resolution information obtained after multiple downsamplings; it provides contextual semantic information of the segmentation target within the whole image, which can be understood as features reflecting the relationship between the target and its environment, features that aid the category judgment of the target. Top-level (shallow) information is high-resolution information passed directly from the encoder to the decoder at the same height via the skip connection; it provides more refined features, such as gradients, for segmentation.
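The skip connection between matching encoder and decoder stages can be illustrated at the shape level with a small numpy sketch (the channel counts and spatial sizes below are illustrative assumptions, not U-Net's actual configuration):

```python
import numpy as np

def maxpool2(x):
    """2x2 max pooling (downsampling) on a (C, H, W) feature map."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling on a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# encoder: a shallow high-resolution map and its pooled low-resolution map
shallow = np.random.rand(8, 64, 64)   # fine detail (edges, gradients)
deep = maxpool2(shallow)              # (8, 32, 32) contextual features

# decoder: upsample the deep map and concatenate the skip connection,
# mirroring the link between matching stages of the U shape
up = upsample2(deep)                            # (8, 64, 64)
fused = np.concatenate([shallow, up], axis=0)   # (16, 64, 64)
```

The concatenation is what lets the decoder see both the low-resolution context channels and the high-resolution detail channels at once, which is the property the following paragraph relies on for pitting segmentation.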
In segmenting the pitting portion from the effective working tooth surface of the gear, the image structure is relatively fixed; because the stress distribution is relatively fixed, the distribution of the pitting segmentation target on the effective tooth surface is relatively regular and the semantics are simple and clear, so low-resolution features can provide the information needed to identify the target object. In addition, owing to illumination, shooting angle and the like, the gray level of some pitting pixels differs greatly from the normal gray level, while other pitting differs only slightly yet has obvious boundaries; the gradients of pitting images are complex, and accurate segmentation therefore requires more high-resolution information. The U-Net network combines low-resolution information, which provides the basis for object category identification, with high-resolution information, which provides the basis for precise segmentation and localization, and is thus suitable for gear pitting image segmentation. In the experiments, 1000 sample pictures were divided into a training set and a test set; the gear pitting portions in the training pictures were labeled and input into the U-Net network for model training, and the trained model was then used to segment the pictures in the test set.
On the other hand, the invention also provides a device for measuring the tree generator network gear pitting image based on cyclegan, as shown in fig. 4-9, wherein the element numbers in the drawings respectively represent: the illumination module 101, visual detection module 102, gear clamp 103, first slide rail 201, first bracket 202, second slide rail 203, first support plate 204, second rotating shaft 205, second rotating block 206, light shield 207, second bracket 301, fifth slide rail 302, second support plate 303, sliding plate 304, second support plate 305, first rotating block 306, first rotating shaft 307, CCD camera 308, third support plate 401, third slide rail 501, rotating motor 502, output shaft 503, three-jaw chuck 504 and gear 505.
The directions in the present embodiment are described using the X-axis, Y-axis, and Z-axis identified in fig. 4.
In this embodiment, referring to fig. 5, the whole system is fixed on the workbench by a first slide rail 201; the first slide rail 201 has two rails and can slide in the X-axis direction. The first bracket 202 of the whole illumination module 101 is mounted above the slide rail, and the second slide rail 203 is mounted on the first bracket 202 so that the illumination module 101 can move in the Z-axis direction. The remaining part is the clamp of the light source, comprising a first support plate 204, a second rotating shaft 205 and a second rotating block 206; two degrees of freedom of the light source, movement along and rotation about the Y axis, are realized by adjusting the position and angle of the second rotating block 206 on the second rotating shaft 205, and the two parts are locked by a nut on the second rotating block 206. The light source is mounted on the second rotating block 206 and its periphery is wrapped by the light shield 207, which concentrates the light of the source and provides good illumination intensity for the gear.
In this example, referring to fig. 6 and 7, the core of the entire visual inspection module is the CCD camera 308. The CCD camera 308 has four degrees of freedom, comprising movement along the X, Y and Z axes and rotation about the Y axis, which enables it to select an appropriate angle to acquire image information of the tooth surface. For the structure of the vision inspection module 102, see fig. 6: the whole vision inspection module 102 is fixed on the workbench by a second bracket 301. The fifth slide rail 302 is mounted on the second bracket 301 to enable the fixture of the CCD camera 308 to move in the Z-axis direction. Next is a combined slide rail: a second support plate 303 mounted on the fifth slide rail 302 carries a rail enabling movement of the camera in the Y-axis direction, the rail part being labeled third support plate 401 in fig. 7. A sliding plate 304 is arranged above the rail of the second support plate 303, and the rail part of another slide rail above the sliding plate 304 is integrated with it. The slide plate 304 carries a second support plate 305, together forming a slide rail that realizes the movement of the CCD camera 308 in the X-axis direction. At the end of the clamp of the CCD camera 308 are a first rotating block 306 and a first rotating shaft 307, which realize the rotation of the CCD camera 308 about the X axis. The CCD camera 308 is mounted on the first rotating block 306, and its shooting angle is adjusted by the first rotating shaft 307.
In this example, the gear clamp 103 is shown in fig. 8; the gear clamp has two degrees of freedom, comprising movement along and rotation about the X axis. The gear clamp 103 is mounted on the workbench through a third slide rail 501, which allows small-range distance adjustment in the X-axis direction. Mounted on the third slide rail 501 is a rotating motor 502, which drives the three-jaw chuck 504 to rotate through an output shaft 503, and the gear 505 is clamped on the three-jaw chuck 504. The system must collect tooth surface pitting information for every tooth surface of the gear, so the gear 505 must be able to rotate after being clamped, thereby automating the image information collection. The three-jaw chuck 504 is a small three-jaw chuck; referring to fig. 9, it fixes the gear mainly by its jaws gripping the shaft hole of the gear outwards.
The overall structure of the visual inspection device in the embodiment of the present invention, the positions of the illumination module 101, the visual inspection module 102 and the gear clamp 103 are distributed as shown in fig. 4. The gear clamp 103 is basically fixed in position, good illumination conditions are obtained by adjusting the position and the angle of the illumination module 101, a good shooting angle is determined by adjusting the position and the angle of the visual detection module 102, and tooth surface pitting image information with rich information such as pixel distribution, brightness and color is guaranteed to be obtained.
Another important part of an embodiment of the invention is the image processing module.
The tooth surface pitting image information obtained by the visual inspection device inevitably contains background information, which is unnecessary for evaluating the gear pitting level. The image processing module separates the required tooth surface portion from the pitting image information. After the pitting image is obtained, image preprocessing is first carried out: the tooth surface inclination is corrected and the characteristic information of the tooth surface portion is enhanced. A tree generator network based on the cycle consistency loss cyclegan then augments the gear pitting images to generate multiple gear pitting images, and these images are detected by a gear pitting detection algorithm to obtain the gear pitting grade.
The image processing module augments the gear pitting image with a tree generator network based on the cycle consistency loss cyclegan to generate multiple gear pitting images, specifically as follows:
X and Y respectively represent data sets in different image domains. Each branch of the mapping G: X → Y is trained simultaneously; the goal is to learn a mapping G: X → Y such that the distributions of the multiple images from G(X) are indistinguishable from the distribution of real Y images under the adversarial loss, with parameters shared at the lower layers of the generator. For the inverse mapping F: Y → X a cycle consistency loss is introduced, and all layers except the first input layer share parameters. This yields a new adversarial structure: a minimax game is formed among the multi-branch generators and the discriminators, where the generator G (X → Y) produces image distributions approaching the real Y domain, the reconstructor maps G(X) back to the image distribution of the real X domain through the inverse mapping, and the discriminator judges whether a generated sample is real data or produced by the generator;
the tree generator network based on cyclegan comprises two families of mapping functions, G: X → Y_i and F: Y → X_i, so the loss for the generators and discriminators {(G_m, D_X), (F_m, D_Y)} becomes:
L_trees-GAN(G_m, D_Y, X, Y) = E_{y~Pdata(y)}[log D_Y(y)] + E_{x~Pdata(x)}[log(1 − D_Y(G(x)_m))]
for each image x in the X domain, the image-style cycle must return x to the original image domain, i.e., x → G(x)_m → F(G(x)_m) ≈ x; similarly, for each image y in the Y domain, y → F(y)_m → G(F(y)_m) ≈ y; this behavior is encouraged with a cycle consistency loss, as follows:
L_cycle(G_m, F_m) = E_{x~Pdata(x)}[||F(G(x)_m) − x||_1] + E_{y~Pdata(y)}[||G(F(y)_m) − y||_1]
the proposed tree generator model is trained by minimax optimization of the overall objective:
L_trees-cyclegan = L_trees-GAN(G_m, D_Y, X, Y) + L_trees-GAN(F_m, D_X, Y, X) + λ·L_cyc(G_m, F_m).
the image processing module detects the multiple gear pitting images through a gear pitting detection algorithm to obtain gear pitting grades, and the method specifically comprises the following steps:
dividing the effective tooth surface of the gear: the color space of the image is transformed from RGB to YCrCb to generate a clustered binary matrix; the image is clustered, edge detection is performed with the Roberts differential operator, the approximate region of the tooth surface is located, and image segmentation then separates the tooth surface portion from the background; edge segmentation is performed with the Roberts differential operator, which approximates the gradient operator using the vertical and horizontal differences of the image, i.e.:
g(x, y) = |f(x, y) − f(x−1, y)| + |f(x, y) − f(x, y−1)|
wherein f (x, y) represents a certain pixel point in the image matrix, and f (x-1, y) and f (x, y-1) are two pixel points of vertical and horizontal neighborhoods of the image matrix respectively;
after the tooth surface partial image of the gear is obtained, the working tooth surface is further separated from it, and the segmentation is refined by morphological erosion and dilation; the tooth surface partial image is binarized; morphological processing is then applied: dilation followed by erosion fills small holes and connects the tooth surface partial images, while erosion followed by dilation eliminates small, meaningless objects in the binary image; the maximum connected region of the remainder after morphological processing is found to form a rectangular frame, and the original image is segmented using the position information of this rectangular frame, thereby obtaining the effective working tooth surface portion of the tooth surface image information;
carrying out pitting image segmentation by utilizing a U-Net network: the U-Net network comprises convolutional layers, max pooling layers (downsampling), deconvolution layers (upsampling) and ReLU nonlinear activation functions; it performs upsampling 4 times in total, with skip connections between corresponding convolution and deconvolution stages, thereby detecting the pitting portion of the working tooth surface, calculating the proportion of pitting in the whole tooth surface area, and determining the pitting grade of the gear; the pitting portion is distinguished from the other portions of the working tooth surface and extracted by adaptive threshold segmentation and clustering segmentation; the pitting ratio is then calculated from the number of pixels in the pitting portion and the number of pixels in the whole working area.
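The final ratio computation reduces to pixel counting over the two binary masks; a minimal sketch with hypothetical masks (the mask shapes and pixel counts are illustrative assumptions):

```python
import numpy as np

def pitting_ratio(pit_mask, surface_mask):
    """Ratio of pitted pixels to effective working tooth-surface pixels."""
    return pit_mask.sum() / surface_mask.sum()

# toy masks: a 100-pixel working surface, 6 pixels segmented as pitting
surface = np.zeros((10, 12), dtype=bool)
surface[0:10, 0:10] = True
pits = np.zeros_like(surface)
pits[2, 2:5] = True
pits[7, 6:9] = True
ratio = pitting_ratio(pits, surface)   # 6 / 100 = 0.06
```

A ratio of 0.06 falls inside the 2%-10% band the specification cites for fatigue-pitted tooth surfaces, and the pitting grade is assigned from this ratio.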
As shown in fig. 10, it is a schematic diagram of the connection and function of the module of the device for measuring the network gear pitting image of the tree-shaped generator based on cyclegan according to the present embodiment.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (3)

1. A tree generator network gear pitting image measuring method based on cyclegan, characterized in that the method comprises the following steps:
s1: collecting gear pitting image information; in the step S1, tooth surface image information is acquired through gear pitting acquisition equipment, wherein the gear pitting acquisition equipment comprises a visual detection module, an image processing module, an illumination module, a gear clamp module and a workbench for mounting the modules;
the gear clamp module clamps a fixed gear through a three-jaw chuck, the visual detection module is used for acquiring pitting image information of each tooth surface of the gear, the image processing module is used for extracting effective working tooth surfaces from the pitting image information and obtaining pitting parts in the pitting image information through a threshold segmentation means, and the illumination module is used for providing a light source in the process of acquiring the pitting image information through the visual detection module;
s2: carrying out image preprocessing on an original image to eliminate environmental factors; in the step S2, the image preprocessing comprises image enhancement and tooth surface inclination correction;
the image enhancement comprises: adopting a median filtering algorithm in which, for each pixel in the tooth surface image, the gray values of all pixels in the neighborhood centered on that point are counted and sorted, and the sorted median is determined as the processed gray value of that point;
the tooth surface inclination correction includes: tooth surface inclination correction based on Radon transformation comprises the following specific implementation steps:
1) Carrying out reinforcement processing on horizontal lines in the image by utilizing edge detection;
2) Calculating Radon transformation of the image to obtain an inclination angle;
3) Correcting the inclination of the tooth surface image according to the inclination angle;
the Radon transform mathematical expression is:
R(ρ, θ) = ∬ f(x, y) δ(ρ − x·cosθ − y·sinθ) dx dy
wherein f (x, y) represents a certain pixel point in the image matrix, theta represents the angle of Radon transformation of the corrected image, and R represents the value in the transformation matrix;
S3: augmenting the gear pitting images with a tree generator network based on the cycle consistency loss cyclegan to generate multiple gear pitting images; in step S3, in the tree generator network based on the cycle consistency loss cyclegan, X and Y respectively represent data sets in different image domains, and each branch of the mapping G: X → Y is trained simultaneously, with the goal of learning a mapping G: X → Y such that the distributions of the multiple images from G(X) are indistinguishable from the distribution of real Y images under the adversarial loss, with parameters shared at the lower layers of the generator; for the inverse mapping F: Y → X a cycle consistency loss is introduced, and all layers except the first input layer share parameters, generating a new adversarial structure: a minimax game is formed among the multi-branch generators and the discriminators, where the generator G (X → Y) produces image distributions approaching the real Y domain, the reconstructor reconstructs the image distribution of the real X domain through the inverse mapping, and the discriminator determines whether the generated sample is real data or produced by the generator;
the CycleGAN-based tree generator network comprises mapping functions in two main directions, G: X→Y_i and F: Y→X_i, so that for the generator-discriminator pairs {(G_m, D_Y), (F_m, D_X)} the adversarial loss becomes:
L_trees-GAN(G_m, D_Y, X, Y) = E_{y~pdata(y)}[log D_Y(y)] + E_{x~pdata(x)}[log(1 − D_Y(G(x)_m))]
for each image x in the X domain, the image translation cycle should be able to bring x back to the original image domain, i.e., x → G(x)_m → F(G(x)_m) ≈ x; likewise, for each image y of the Y domain, y → F(y)_m → G(F(y)_m) ≈ y; this behavior is encouraged by the cycle consistency loss, as follows:
L_cycle(G_m, F_m) = E_{x~pdata(x)}[||F(G(x)_m) − x||_1] + E_{y~pdata(y)}[||G(F(y)_m) − y||_1]
the proposed tree generator model is trained through the minimax objective:
L_trees-cyclegan = L_trees-GAN(G_m, D_Y, X, Y) + L_trees-GAN(F_m, D_X, Y, X) + λ L_cyc(G_m, F_m)
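For intuition, the cycle consistency term can be exercised with toy one-dimensional stand-ins for the branch-m generators (hypothetical scalar maps, not the patent's networks); when F is the exact inverse of G the loss vanishes:

```python
import numpy as np

# Toy stand-ins for the branch-m generators: G doubles, F halves.
def G(x, m):
    return 2.0 * x

def F(y, m):
    return y / 2.0

def cycle_loss(xs, ys, m=0):
    """L_cycle = E_x ||F(G(x)_m) - x||_1 + E_y ||G(F(y)_m) - y||_1."""
    lx = np.mean(np.abs(F(G(xs, m), m) - xs))
    ly = np.mean(np.abs(G(F(ys, m), m) - ys))
    return lx + ly

xs = np.linspace(-1.0, 1.0, 5)
ys = np.linspace(-2.0, 2.0, 5)
print(cycle_loss(xs, ys))  # -> 0.0, since F inverts G exactly
```

During training the real networks are not exact inverses, so this term is positive and pushes G and F toward mutually consistent mappings.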
S4: detecting the plurality of gear pitting images by a gear pitting detection algorithm to obtain the gear pitting grade; step S4 specifically includes the following steps:
S41: segmenting the effective tooth surface of the gear: transforming the color space of the image from RGB to YCrCb and generating a clustered binary matrix to cluster the image; performing edge detection on the image with the Roberts differential operator to locate the region of the tooth surface; then performing image segmentation to separate the tooth-surface portion of the image from the background; the edge segmentation uses the Roberts differential operator, which approximates the gradient operator by the vertical and horizontal differences of the image, namely:
G[f(x, y)] ≈ |f(x, y) − f(x−1, y)| + |f(x, y) − f(x, y−1)|
wherein f(x, y) denotes a pixel of the image matrix, and f(x−1, y) and f(x, y−1) are the pixels in its vertical and horizontal neighborhoods, respectively;
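The difference-based gradient above can be written directly in NumPy (the threshold value is an illustrative placeholder, not a value from the patent):

```python
import numpy as np

def difference_edges(img, thresh=20):
    """Gradient magnitude approximated by vertical and horizontal first
    differences: |f(x,y)-f(x-1,y)| + |f(x,y)-f(x,y-1)| > thresh."""
    f = img.astype(np.int32)               # avoid uint8 wrap-around
    gx = np.abs(f[1:, 1:] - f[:-1, 1:])    # vertical-neighbor difference
    gy = np.abs(f[1:, 1:] - f[1:, :-1])    # horizontal-neighbor difference
    return (gx + gy) > thresh

# A step from 0 to 100 produces an edge response along the boundary.
img = np.zeros((6, 6), dtype=np.uint8)
img[:, 3:] = 100
edges = difference_edges(img)
print(edges[2, 2])  # -> True (the column boundary between 2 and 3)
```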
after the gear tooth-surface image is obtained, the working tooth surface is further separated from it, and the tooth-surface segmentation is refined by morphological erosion and dilation: the tooth-surface image is binarized; dilation followed by erosion (closing) then fills small holes and connects the tooth-surface regions; erosion followed by dilation (opening) eliminates small, meaningless objects in the binary image; the largest connected region remaining after the morphological processing is enclosed in a rectangular frame, and the original image is cropped with the position information of this frame, thereby obtaining the effective working tooth surface in the tooth-surface image;
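The dilate-then-erode (closing) and erode-then-dilate (opening) sequence can be sketched with a 3×3 structuring element in plain NumPy (an illustrative sketch; a real pipeline would use a morphology library):

```python
import numpy as np

def dilate(b):
    """3x3 binary dilation: a pixel is True if any neighbor is True."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + b.shape[0], dj:dj + b.shape[1]]
    return out

def erode(b):
    """3x3 binary erosion: a pixel is True only if all neighbors are True."""
    p = np.pad(b, 1, constant_values=True)
    out = np.ones_like(b)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + b.shape[0], dj:dj + b.shape[1]]
    return out

def close_then_open(b):
    """Closing (dilate, erode) fills small holes; opening (erode, dilate)
    removes small isolated specks, as described in the claim."""
    closed = erode(dilate(b))
    return dilate(erode(closed))

mask = np.ones((9, 9), dtype=bool)
mask[4, 4] = False                    # a one-pixel hole inside the blob
print(close_then_open(mask)[4, 4])    # -> True: the hole is filled
```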
S42: segmenting the pitting image with a U-Net network: the U-Net network comprises convolution layers, max-pooling layers, deconvolution layers and ReLU nonlinear activation functions; it performs up-sampling 4 times in total, with connections between the corresponding convolution and deconvolution stages, so that the pitted part of the working tooth surface is detected, the proportion of the pitting over the whole tooth-surface area is calculated, and the pitting grade of the gear is determined; the pitted part is distinguished from the other parts of the working tooth surface and extracted by means of adaptive threshold segmentation and cluster segmentation; the pitting ratio is then calculated from the number of pixels in the pitted part and the number of pixels in the whole working area.
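The final ratio-to-grade computation reduces to pixel counting over the two binary masks; a sketch (the grade thresholds below are illustrative placeholders, not values from the patent or any gear standard):

```python
import numpy as np

def pitting_ratio(pit_mask, work_mask):
    """Ratio of pitted pixels to working-tooth-surface pixels
    (both inputs are boolean masks of the same image)."""
    return pit_mask.sum() / work_mask.sum()

def pitting_grade(ratio, bands=(0.04, 0.15, 0.30)):
    """Map the pitting ratio to a discrete grade 0..len(bands);
    the band thresholds here are hypothetical."""
    return int(np.searchsorted(bands, ratio, side="right"))

work = np.ones((10, 10), dtype=bool)      # whole working area: 100 px
pit = np.zeros((10, 10), dtype=bool)
pit[:2, :5] = True                        # 10 pitted pixels
r = pitting_ratio(pit, work)
print(r, pitting_grade(r))                # -> 0.1 1
```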
2. A CycleGAN-based tree generator network gear pitting image measuring device, characterized in that: the device comprises a visual detection module, an image processing module, an illumination module, a gear clamp module and a workbench on which the modules are mounted;
the gear clamp module is used for clamping a fixed gear and comprises a third slide rail fixedly arranged on the workbench and a gear clamp arranged in the third slide rail in a sliding manner; the gear clamp comprises a rotating motor and a three-jaw chuck, the three-jaw chuck is arranged on an output shaft of the rotating motor, and a gear is clamped by the three-jaw chuck and driven to rotate by the torque of the output shaft of the rotating motor;
the visual detection module is used for acquiring pitting image information of each tooth surface of the gear and comprises a fourth slide rail fixedly arranged on the workbench and a CCD camera slidably arranged on the fourth slide rail; the vision detection module further comprises a fifth slide rail, one end of the fifth slide rail is arranged in the fourth slide rail in a sliding mode, and the CCD camera is arranged on the fifth slide rail through a first rotating block and a first rotating shaft;
the image processing module is used for preprocessing the acquired tooth surface pitting image information, amplifying the gear pitting image with a tree generator network based on the cycle consistency loss of CycleGAN to generate a plurality of gear pitting images, and detecting the plurality of gear pitting images by a gear pitting detection algorithm to obtain the gear pitting grade;
the illumination module is used for providing a light source in the process that the visual detection module acquires the pitting corrosion image information, the illumination module and a rotating shaft of the gear are arranged in the same plane, the illumination module comprises a first slide rail fixedly arranged on the workbench, a light source fixture arranged in the first slide rail in a sliding manner, and a light source arranged on and clamped by the light source fixture, the illumination module further comprises a second slide rail, one end of the second slide rail is arranged on the first slide rail in a sliding manner, and the light source fixture is arranged on the second slide rail in a sliding manner;
the image processing module amplifies the gear pitting image with a tree generator network based on the cycle consistency loss of CycleGAN to generate a plurality of gear pitting images, specifically as follows:
X and Y respectively denote data sets of different image domains, and every branch of the mapping G: X→Y is trained simultaneously; the goal is to learn a mapping G: X→Y such that, under the adversarial loss, the distribution of the plurality of generated images G(x) is indistinguishable from the distribution of real Y-domain images, with parameters shared across part of the lower layers of the generator; for the inverse mapping F: Y→X, a cycle consistency loss is introduced, and parameters are shared in all layers except the first (input) layer; this yields a new adversarial structure in which the multi-branch generators and the discriminators play a minimax game: the generator G: X→Y produces an image distribution approaching the real Y domain, the reconstructor rebuilds the image distribution of the real X domain through the inverse mapping, and the discriminator judges whether a sample is real data or output of the generator;
the CycleGAN-based tree generator network comprises mapping functions in two main directions, G: X→Y_i and F: Y→X_i, so that for the generator-discriminator pairs {(G_m, D_Y), (F_m, D_X)} the adversarial loss becomes:
L_trees-GAN(G_m, D_Y, X, Y) = E_{y~pdata(y)}[log D_Y(y)] + E_{x~pdata(x)}[log(1 − D_Y(G(x)_m))]
for each image x in the X domain, the image translation cycle should be able to bring x back to the original image domain, i.e., x → G(x)_m → F(G(x)_m) ≈ x; likewise, for each image y of the Y domain, y → F(y)_m → G(F(y)_m) ≈ y; this behavior is encouraged by the cycle consistency loss, as follows:
L_cycle(G_m, F_m) = E_{x~pdata(x)}[||F(G(x)_m) − x||_1] + E_{y~pdata(y)}[||G(F(y)_m) − y||_1]
the proposed tree generator model is trained through the minimax objective:
L_trees-cyclegan = L_trees-GAN(G_m, D_Y, X, Y) + L_trees-GAN(F_m, D_X, Y, X) + λ L_cyc(G_m, F_m)
the image processing module detects the plurality of gear pitting images by a gear pitting detection algorithm to obtain the gear pitting grade, specifically as follows:
segmenting the effective tooth surface of the gear: transforming the color space of the image from RGB to YCrCb and generating a clustered binary matrix to cluster the image; performing edge detection on the image with the Roberts differential operator to locate the region of the tooth surface; then performing image segmentation to separate the tooth-surface portion of the image from the background; the edge segmentation uses the Roberts differential operator, which approximates the gradient operator by the vertical and horizontal differences of the image, namely:
G[f(x, y)] ≈ |f(x, y) − f(x−1, y)| + |f(x, y) − f(x, y−1)|
wherein f(x, y) denotes a pixel of the image matrix, and f(x−1, y) and f(x, y−1) are the pixels in its vertical and horizontal neighborhoods, respectively;
after the gear tooth-surface image is obtained, the working tooth surface is further separated from it, and the tooth-surface segmentation is refined by morphological erosion and dilation: the tooth-surface image is binarized; dilation followed by erosion (closing) then fills small holes and connects the tooth-surface regions; erosion followed by dilation (opening) eliminates small, meaningless objects in the binary image; the largest connected region remaining after the morphological processing is enclosed in a rectangular frame, and the original image is cropped with the position information of this frame, thereby obtaining the effective working tooth surface in the tooth-surface image;
segmenting the pitting image with a U-Net network: the U-Net network comprises convolution layers, max-pooling layers, deconvolution layers and ReLU nonlinear activation functions; it performs up-sampling 4 times in total, with connections between the corresponding convolution and deconvolution stages, so that the pitted part of the working tooth surface is detected, the proportion of the pitting over the whole tooth-surface area is calculated, and the pitting grade of the gear is determined; the pitted part is distinguished from the other parts of the working tooth surface and extracted by means of adaptive threshold segmentation and cluster segmentation; the pitting ratio is then calculated from the number of pixels in the pitted part and the number of pixels in the whole working area.
3. The CycleGAN-based tree generator network gear pitting image measuring device of claim 2, characterized in that: the preprocessing of the acquired tooth surface pitting image information by the image processing module comprises the following contents:
image enhancement: adopting a median filtering algorithm, in which, for each pixel of the tooth surface image, the gray values of all pixels in a neighborhood centered on that point are collected and sorted, and the sorted median is taken as the processed gray value of that point;
tooth surface inclination correction: tooth surface inclination correction based on the Radon transform, comprising:
1) Enhancing the horizontal lines in the image by means of edge detection;
2) Calculating the Radon transform of the image to obtain the inclination angle;
3) Correcting the inclination of the tooth surface image according to the inclination angle;
the Radon transform is mathematically expressed as:
R(ρ, θ) = ∬ f(x, y) δ(ρ − x cos θ − y sin θ) dx dy
wherein f(x, y) denotes a pixel of the image matrix, ρ denotes the projection offset, θ denotes the angle of the Radon transform used to correct the image, and R denotes a value in the transform matrix.
CN202010065989.XA 2020-01-13 2020-01-20 Tree generator network gear pitting image measuring method and device based on CycleGAN Active CN111260640B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020100326459 2020-01-13
CN202010032645 2020-01-13

Publications (2)

Publication Number Publication Date
CN111260640A CN111260640A (en) 2020-06-09
CN111260640B true CN111260640B (en) 2023-03-31

Family

ID=70954326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010065989.XA Active CN111260640B (en) 2020-01-13 2020-01-20 Tree generator network gear pitting image measuring method and device based on CycleGAN

Country Status (1)

Country Link
CN (1) CN111260640B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3916673A1 (en) * 2020-05-26 2021-12-01 Airbus (S.A.S.) Method for determining striation properties of fatigue striations and for determining the presence of fatigue damage
CN111915572B (en) * 2020-07-13 2023-04-25 青岛大学 Adaptive gear pitting quantitative detection system and method based on deep learning
CN111860782B (en) * 2020-07-15 2022-04-22 西安交通大学 Triple multi-scale CycleGAN, fundus fluorography generation method, computer device, and storage medium
CN111931684B (en) * 2020-08-26 2021-04-06 北京建筑大学 Weak and small target detection method based on video satellite data identification features
CN112287927B (en) * 2020-10-14 2023-04-07 中国人民解放军战略支援部队信息工程大学 Method and device for detecting inclination angle of text image
CN112529978B (en) * 2020-12-07 2022-10-14 四川大学 Man-machine interactive abstract picture generation method
CN112906769A (en) * 2021-02-04 2021-06-04 国网河南省电力公司电力科学研究院 Power transmission and transformation equipment image defect sample amplification method based on cycleGAN
CN114943869B (en) * 2022-03-30 2023-06-30 中国民用航空飞行学院 Airport target detection method with enhanced style migration
CN115392325B (en) * 2022-10-26 2023-08-18 中国人民解放军国防科技大学 Multi-feature noise reduction modulation identification method based on CycleGan
CN117474925B (en) * 2023-12-28 2024-03-15 山东润通齿轮集团有限公司 Gear pitting detection method and system based on machine vision

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108507787A (en) * 2018-06-28 2018-09-07 山东大学 Wind power gear speed increase box fault diagnostic test platform based on multi-feature fusion and method
CN110660057A (en) * 2019-11-01 2020-01-07 重庆大学 Binocular automatic gear pitting detection device based on deep learning

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP2007177677A (en) * 2005-12-27 2007-07-12 Ntn Corp Rocker arm and rocker shaft
CN105118044B (en) * 2015-06-16 2017-11-07 华南理工大学 A kind of wheel shape cast article defect automatic testing method
US20180284746A1 (en) * 2016-05-09 2018-10-04 StrongForce IoT Portfolio 2016, LLC Methods and systems for data collection optimization in an industrial internet of things environment
US20190174207A1 (en) * 2016-05-09 2019-06-06 StrongForce IoT Portfolio 2016, LLC Methods and systems for the industrial internet of things
CN109754442B (en) * 2019-01-10 2023-02-21 重庆大学 Gear pitting detection system based on machine vision
CN110210570A (en) * 2019-06-10 2019-09-06 上海延华大数据科技有限公司 The more classification methods of diabetic retinopathy image based on deep learning
CN110567985B (en) * 2019-10-14 2021-10-08 重庆大学 Self-adaptive gear pitting quantitative evaluation and detection device based on deep learning

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN108507787A (en) * 2018-06-28 2018-09-07 山东大学 Wind power gear speed increase box fault diagnostic test platform based on multi-feature fusion and method
CN110660057A (en) * 2019-11-01 2020-01-07 重庆大学 Binocular automatic gear pitting detection device based on deep learning

Also Published As

Publication number Publication date
CN111260640A (en) 2020-06-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant