CN113255847B - Tire wear degree prediction method based on a generative adversarial network - Google Patents

Tire wear degree prediction method based on a generative adversarial network

Info

Publication number
CN113255847B
CN113255847B (application CN202110769828.3A)
Authority
CN
China
Prior art keywords
tire
image
generator
loss function
forged
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110769828.3A
Other languages
Chinese (zh)
Other versions
CN113255847A (en)
Inventor
王涛
安士才
李腾
牟文青
韩伟
孙德宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jerei Digital Technology Co Ltd
Original Assignee
Shandong Jerei Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jerei Digital Technology Co Ltd filed Critical Shandong Jerei Digital Technology Co Ltd
Priority to CN202110769828.3A priority Critical patent/CN113255847B/en
Publication of CN113255847A publication Critical patent/CN113255847A/en
Application granted granted Critical
Publication of CN113255847B publication Critical patent/CN113255847B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a tire wear degree prediction method based on a generative adversarial network, characterized by comprising the following steps: S1: preprocessing a photographed tire side photo; S2: reconstructing the tire side image processed in S1 into a tire front image by using an IST-GAN network model framework; S3: predicting the tire wear degree from the converted tire front image by using a TWP prediction model framework to obtain a corresponding prediction conclusion. The invention requires no repeated, large-scale manual measurement, saving labor cost; the tire wear degree can be predicted from a single side photo of a vehicle tire, enabling remote prediction of tire wear, making it convenient to plan tire replacement in advance, and saving time cost.

Description

Tire wear degree prediction method based on a generative adversarial network
Technical Field
The invention relates to the technical field of tread-pattern wear identification and measurement in the tire industry, and in particular to a tire wear degree prediction method based on a generative adversarial network.
Background
It is well known that the friction between the tire and the road surface is the source of vehicle driving, braking and steering forces. The pattern designed into the tread of an automobile tire effectively improves the friction between the tire and the ground as well as the tire's water storage and drainage capacity, and aids heat dissipation.
When the tread pattern depth falls below a critical value, the friction between the tire and the ground drops markedly and the tire's water storage and drainage capacity is greatly impaired. When a water film is present on the road surface, hydroplaning easily occurs, posing a great traffic safety hazard.
At present, there are two main methods for detecting tread pattern depth. The first, shown in Fig. 1, is to manually measure several main grooves on the same cross-section of the tread with a tire tread depth gauge or vernier caliper and take the average. The second, shown in Fig. 2, horizontally scans the tire surface with a laser sensor; its disadvantage is the high cost of the laser, processor and other devices, which prevents wide adoption. Moreover, neither technique can measure tread pattern depth remotely.
Under normal vehicle use, with the tire still mounted, a user generally cannot shoot a strictly frontal image of the vehicle tire, but can easily take a side photo with a mobile phone; the side photo, however, does not clearly show the tire's linear pattern grooves.
In view of the above, the present invention provides an image conversion method that can reconstruct a tire side image into the corresponding tire front image, thereby enabling remote, fast and simple prediction of the tire wear degree.
Disclosure of Invention
Against this technical background, the main object of the present invention is to provide a tire wear degree prediction method based on a generative adversarial network, which enables remote measurement: the tire wear degree is predicted from a photographed tire side photo, the tire wear condition is thus known in time, and the investment of manpower and material resources is effectively reduced.
In order to solve the problems, the technical scheme adopted by the invention is as follows:
a tire wear degree prediction method based on a generation countermeasure network is characterized by comprising the following steps:
s1: preprocessing the shot pictures of the side surfaces of the tires;
s2: reconstructing the tire side image processed by the S1 into a tire front image by using an IST-GAN network model framework;
s3: and predicting the tire wear degree of the converted front image of the tire by using a TWP prediction model frame to obtain a corresponding prediction conclusion.
Further, the preprocessing of the tire side photo in S1 means adjusting the format and pixel size of the tire side photo and converting the photo's colors to grayscale.
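As a rough illustration of this preprocessing step, a minimal pure-Python sketch follows. The grayscale weights and the centered-crop choice are assumptions (the patent only says the format and pixels are adjusted and the colors converted to gray); function names are illustrative.

```python
def to_grayscale(rgb_pixels):
    """Convert a list of (R, G, B) pixels to gray levels using the
    standard ITU-R BT.601 luminance weights (an assumed choice)."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

def center_crop(image, size):
    """Crop a 2-D image (list of rows) to a centered size x size square,
    mirroring the batch crop to a fixed pixel size described later."""
    h, w = len(image), len(image[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in image[top:top + size]]
```

In practice this would be done with an image library; the sketch only fixes the two operations the text names: pixel-size normalization and gray conversion.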
Further, the training of both the IST-GAN network model framework in S2 and the TWP prediction model framework in S3 depends on building a tire sample data set, which includes:
taking a certain number of tires as samples, acquiring a front photo (perpendicular to the tire tread) and a side photo (at a certain inclination angle to the tire tread) of the same position on each sample tire to obtain a series of tire front and side images, and preprocessing the photos to obtain a data set for training the IST-GAN network model framework;
measuring and recording the linear groove depth of each sample tire, setting three threshold labels as the classification basis according to the linear groove depth, and sorting the samples into three classification data sets of different wear degrees, namely replacement recommended, good and excellent, used for training the TWP prediction model framework.
further, in order to ensure that the tire side image in S2 can be accurately reconstructed into a tire front image, the IST-GAN network model framework designs two cyclic conversion branches based on the two generators G1, G2, and the two generators G1, G2 are trained and optimized by means of the two discriminators D1, D2 in the two cyclic conversion branches;
the two loop transition branches comprise a forward loop consistency transition branch and a reverse loop consistency transition branch;
in the forward cycle uniformity conversion branch, the generator G1 synthesizes a forged tire side image using the tire front image p as an input
Figure 246884DEST_PATH_IMAGE001
The generator G2 reconstructs a corresponding tire face image from the forged tire side image
Figure 226341DEST_PATH_IMAGE002
(ii) a The forward cycle consistency formula is
Figure 304281DEST_PATH_IMAGE003
In the reverse cycle-consistency conversion branch, the generator G2 takes the tire side image s as input and synthesizes a forged tire front image $G2(s)$; the generator G1 then reconstructs a corresponding tire side image $G1(G2(s))$ from the forged tire front image. The reverse cycle consistency formula is

$$s \rightarrow G2(s) \rightarrow G1(G2(s)) \approx s$$
To ensure that every picture can be mapped to its target during the bidirectional cycle-consistency conversion, a cycle consistency loss function is designed:

$$L_{cyc}(G1, G2) = \mathbb{E}_{p \sim I_P}\big[\lVert G2(G1(p)) - p \rVert_1\big] + \mathbb{E}_{s \sim I_s}\big[\lVert G1(G2(s)) - s \rVert_1\big]$$

where generator G1 implements the mapping from the tire front data set $I_P$ to the tire side data set $I_s$; generator G2 implements the mapping from the tire side data set $I_s$ to the tire front data set $I_P$; $\mathbb{E}[\cdot]$ denotes the expectation; $p \sim I_P$ denotes a tire front image sampled at random from $I_P$; $s \sim I_s$ denotes a tire side image sampled at random from $I_s$; and $\lVert \cdot \rVert_1$ denotes the L1 norm of a matrix. The corresponding generators G1 and G2 are learned by minimizing this loss.
During the forward cycle conversion, a forward adversarial loss function is designed between the generator G1 and the discriminator D1, so that D1 can compare the synthesized forged tire side images with real tire side images and drive the selection of better-synthesized (i.e., lowest-loss) forged tire side images. The forward adversarial loss function is:

$$L_{GAN}(G1, D1, I_P, I_s) = \mathbb{E}_{s \sim I_s}\big[\log D1(s)\big] + \mathbb{E}_{p \sim I_P}\big[\log(1 - D1(G1(p)))\big]$$
during the reverse cyclic conversion, a reverse cyclic antagonism loss function is designed between the generator G2 and the discriminator D2, so that the discriminator D2 can compare the synthesized forged tire front image with the input tire front image, and then select a better synthesized (i.e., least loss) forged tire front image, where the reverse cyclic antagonism loss function is:
Figure 777430DEST_PATH_IMAGE014
finally, a better IST-GAN network model is obtained by minimizing the overall target loss, and the target function of the IST-GAN network model framework is as follows:
Figure 4012DEST_PATH_IMAGE015
the importance of the discriminant loss function and the generator loss function is controlled by the parameter lambda, and the larger the value of the parameter lambda is, the higher the weight of the cycle consistency loss function of the generator is, so that the reduction of the cycle consistency loss is more meaningful, namely, the IST-GAN network model focuses more on reducing the loss of the generator in the training process.
And testing the trained IST-GAN network model framework by using a test set, and outputting a reconstructed tire front image.
Further, the IST-GAN network model framework is trained with a learning rate of 2e-4, a batch size of 1 and 300 iteration epochs, iterating on an Intel(R) Core i7-9700 CPU; the whole training process takes about 9 hours.
Further, the TWP prediction model framework in S3 consists of three input branches, three convolutional layers, a fully connected layer and a classification layer; the training process comprises the following steps:
s31, inputting three types of classified data sets which are labeled and represent different wear degrees by using three input branches;
s32, mapping the tire image reconstructed in the S2 to a hidden layer feature space through three layers of convolutional layers, and performing feature extraction;
s33, inputting the output feature tensor into the full connection layer, and mapping the learned distributed feature representation to a sample mark space;
and S34, classifying the tire wear degree of the tire into one of three different wear degrees through the classification layer.
Furthermore, during training the model is corrected by recording the loss value and accuracy of each training round, until training finishes.
Furthermore, the three convolutional layers are built following a convolution-pooling-activation pattern, where the convolution kernel size is 5 × 5, the convolution stride is 1, the number of convolution kernels is 64, the pooling size is 2 × 2, the pooling stride is 2, and the pooling type is max pooling;
the convolutional layers and the fully connected layer use the nonlinear activation function LeakyReLU, and the classification layer uses the softmax activation function.
Further, softmax classification is used as the output layer and cross entropy is selected as the loss function; the loss value is calculated through softmax:

$$L = -\sum_{i=1}^{k} label_i \log(q_i)$$

where k is the number of categories, $label$ represents the label value of the input data, and q represents the predicted value for the input data. Secondly, the effectiveness of the method is evaluated by the output accuracy:

$$accuracy = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big(q_i = y_i\big)$$

where $y$ represents the actual value of the input data, i.e., the true class of the tire front image, and N is the number of test samples.
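A minimal sketch of the softmax cross-entropy loss and the accuracy metric described above, assuming one-hot labels; names are illustrative.

```python
import math

def softmax(logits):
    """Numerically stable softmax over raw class scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(label_onehot, q):
    """Cross-entropy loss -sum_i label_i * log(q_i) over k categories."""
    return -sum(t * math.log(p) for t, p in zip(label_onehot, q) if t > 0)

def accuracy(true_labels, predicted_labels):
    """Fraction of predictions matching the actual labels."""
    hits = sum(1 for t, p in zip(true_labels, predicted_labels) if t == p)
    return hits / len(true_labels)
```

For the three wear classes here, k = 3 and each label is one of three one-hot vectors.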
The tire wear degree prediction method based on a generative adversarial network combines two network models, a GAN and a deep learning network: the IST-GAN network model reconstructs samples, expands the sample set and converts the image style, while combining the advantages of both network models improves the accuracy and robustness of the models. The IST-GAN network model adopts mutually verified bidirectional cycle conversion; compared with a unidirectional cycle, the conversion between tire front data and side data is more realistic, so a better generator is obtained through training. The data set needs no manual annotation, saving considerable labor cost and improving efficiency. For a user, the method can predict the tire's wear condition from a single photo of the tire's side, which greatly benefits users while reducing the after-sales service cost of tire companies.
Drawings
FIG. 1 is a schematic view of manually measuring tire groove depth using a tire tread depth gauge;
FIG. 2 is a schematic view of measuring tire groove depth using a laser;
FIG. 3 is a captured tire front image and tire side image;
FIG. 4 is a flowchart of an image style conversion method framework IST-GAN;
fig. 5 is a flowchart of a tire wear level prediction method framework TWP.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
In daily life, under normal vehicle use and with the tire still mounted, a user generally cannot shoot a strictly frontal image of the vehicle tire, but can usually take a side photo easily with a mobile phone; the side photo, however, does not clearly show the tire's linear pattern grooves. Based on this, the tire wear degree prediction method based on a generative adversarial network of this embodiment involves an image style conversion method by which the corresponding tire front image can be reconstructed from a tire side image, thereby assisting in predicting the tire wear degree.
Furthermore, without specialized measuring tools it is difficult to determine the groove depth of a tire, and hence its degree of wear. Therefore, the method of this embodiment also involves a simple and efficient tire wear degree prediction step that predicts the wear degree from the reconstructed tire front image, helping the vehicle owner learn the tires' wear condition before going to the repair shop.
The image style conversion method of this embodiment mainly builds a framework based on a generative adversarial network, defined here as the IST-GAN network model framework, and outputs a reconstructed tire front image after training the model framework.
The tire wear degree prediction method of this embodiment mainly builds a deep-learning-based framework, the TWP prediction model framework, trains it on three classification data sets of different wear degrees, and predicts on the reconstructed tire front image input, so as to predict which of the three classes the tire belongs to: replacement recommended, good, or excellent.
Since the training of the above two model frameworks depends on a sufficient number of data sets, a tire sample data set must be established before training; its establishment method includes:
taking a certain number of tires as samples, acquiring a front photo (perpendicular to the tire tread) and a side photo (at a certain inclination angle to the tire tread) of the same position on each sample tire, and preprocessing the photos: the formats and pixel sizes of the collected tire front and side images are made consistent (in this embodiment the front and side images are batch-cropped to 256 × 256 pixels), and the colors of the processed images are converted to grayscale to meet the training requirements of the subsequent frameworks. Of the processed tire front and side images, 80% are used as the training set and the remaining 20% as the test set for training the IST-GAN network model framework;
measuring and recording the linear groove depth of each sample tire, setting three threshold labels as the classification basis according to the linear groove depth, and sorting the samples into three classification data sets of different wear degrees, namely replacement recommended, good and excellent, used for training the TWP prediction model framework. In this embodiment, taking a car as an example, a tire with a linear groove depth in the range 1.6-3.5 mm is defined as replacement recommended, one in the range 3.5-6 mm as good, and one in the range 6-8 mm as excellent; for vehicle types other than cars, such as trucks, classification can follow the thresholds set by the relevant national standards.
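The car-tire threshold labels above can be sketched as a simple lookup. The boundary handling at exactly 3.5 mm and 6 mm is an assumption, since the text only gives the ranges 1.6-3.5, 3.5-6 and 6-8 mm.

```python
def classify_wear(groove_depth_mm):
    """Map a measured linear-groove depth (mm) to the three wear-level
    labels used for the car example in this embodiment. Closed lower
    bounds are an assumed boundary convention."""
    if 1.6 <= groove_depth_mm < 3.5:
        return "replacement recommended"
    if 3.5 <= groove_depth_mm < 6.0:
        return "good"
    if 6.0 <= groove_depth_mm <= 8.0:
        return "excellent"
    raise ValueError("depth outside the labelled car-tire ranges")
```

For trucks and other vehicle types, the same lookup would simply use the thresholds from the relevant national standard instead.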
The tire wear degree prediction method based on the generation countermeasure network of the embodiment comprises the following steps:
s1: preprocessing a shot tire side photo, wherein the preprocessing comprises adjusting the format and pixels of the photo, and converting the color of the processed image into gray;
s2: reconstructing the tire side image processed by the S1 into a tire front image by using an IST-GAN network model framework;
s3: and predicting the tire wear degree of the converted front image of the tire by using a TWP prediction model frame to obtain a corresponding prediction conclusion.
In this embodiment, in order to ensure that the tire side image in S2 can accurately reconstruct the tire front image, the IST-GAN network model framework designs two cyclic conversion branches based on the two generators G1 and G2, and the two generators G1 and G2 are trained and optimized by means of the two discriminators D1 and D2 in the two cyclic conversion branches;
the two loop transition branches comprise a forward loop consistency transition branch and a reverse loop consistency transition branch;
in the forward cycle uniformity conversion branch, the generator G1 synthesizes a forged tire side image using the tire front image p as an input
Figure 453393DEST_PATH_IMAGE001
The generator G2 reconstructs a corresponding tire face image from the forged tire side image
Figure 244631DEST_PATH_IMAGE002
(ii) a The forward cycle consistency formula is
Figure 120183DEST_PATH_IMAGE003
In the reverse cycle-consistency conversion branch, the generator G2 takes the tire side image s as input and synthesizes a forged tire front image $G2(s)$; the generator G1 then reconstructs a corresponding tire side image $G1(G2(s))$ from the forged tire front image. The reverse cycle consistency formula is

$$s \rightarrow G2(s) \rightarrow G1(G2(s)) \approx s$$
To ensure that every picture can be mapped to its target during the bidirectional cycle-consistency conversion, a cycle consistency loss function is designed:

$$L_{cyc}(G1, G2) = \mathbb{E}_{p \sim I_P}\big[\lVert G2(G1(p)) - p \rVert_1\big] + \mathbb{E}_{s \sim I_s}\big[\lVert G1(G2(s)) - s \rVert_1\big]$$

where generator G1 implements the mapping from the tire front data set $I_P$ to the tire side data set $I_s$; generator G2 implements the mapping from the tire side data set $I_s$ to the tire front data set $I_P$; $\mathbb{E}[\cdot]$ denotes the expectation; $p \sim I_P$ denotes a tire front image sampled at random from $I_P$; $s \sim I_s$ denotes a tire side image sampled at random from $I_s$; and $\lVert \cdot \rVert_1$ denotes the L1 norm of a matrix. The corresponding generators G1 and G2 are learned by minimizing this loss.
During the forward cycle conversion, a forward adversarial loss function is designed between the generator G1 and the discriminator D1, so that D1 can compare the synthesized forged tire side images with real tire side images and drive the selection of better-synthesized (i.e., lowest-loss) forged tire side images. The forward adversarial loss function is:

$$L_{GAN}(G1, D1, I_P, I_s) = \mathbb{E}_{s \sim I_s}\big[\log D1(s)\big] + \mathbb{E}_{p \sim I_P}\big[\log(1 - D1(G1(p)))\big]$$
during the reverse cyclic conversion, a reverse cyclic antagonism loss function is designed between the generator G2 and the discriminator D2, so that the discriminator D2 can compare the synthesized forged tire front image with the input tire front image, and then select a better synthesized (i.e., least loss) forged tire front image, where the reverse cyclic antagonism loss function is:
Figure 260757DEST_PATH_IMAGE021
finally, a better IST-GAN network model is obtained by minimizing the overall target loss, and the target function of the IST-GAN network model framework is as follows:
Figure 749769DEST_PATH_IMAGE015
the importance of the discriminant loss function and the generator loss function is controlled by the parameter lambda, and the larger the value of the parameter lambda is, the higher the weight of the cycle consistency loss function of the generator is, so that the reduction of the cycle consistency loss is more meaningful, namely, the IST-GAN network model focuses more on reducing the loss of the generator in the training process.
And testing the trained IST-GAN network model framework by using a test set, and outputting a reconstructed tire front image.
The IST-GAN network model framework of this embodiment is trained with a learning rate of 2e-4, a batch size of 1 and 300 iteration epochs, iterating on an Intel(R) Core i7-9700 CPU; the whole training process takes about 9 hours.
In this embodiment, the TWP prediction model framework in S3 consists of three input branches, three convolutional layers, a fully connected layer and a classification layer; the model framework is trained on the three classification data sets of different wear degrees and predicts on the reconstructed tire front image input, so as to predict which of the three classes the tire belongs to: replacement recommended, good, or excellent.
The training process comprises the following steps:
s31, inputting three types of classified data sets with different wear degrees and labels thereof by using three input branches;
s32, mapping the tire image reconstructed in the S2 to a hidden layer feature space through three layers of convolutional layers, and performing feature extraction;
s33, inputting the output feature tensor into the full connection layer, and mapping the learned distributed feature representation to a sample mark space;
s34, classifying the image into one of three different wear degrees through a classification layer, if the image belongs to the tire linear groove depth range of 1.6-3.5mm, the tire is recommended to be replaced, if the image belongs to the tire linear groove depth range of 3.5-6mm, the tire is good, if the image belongs to the tire linear groove depth range of 6-8mm, the tire is good, and therefore the tire wear degree of the tire is obtained.
Furthermore, during training the model is corrected by recording the loss value and accuracy of each training round, until training finishes.
Furthermore, the three convolutional layers are built following a convolution-pooling-activation pattern, where the convolution kernel size is 5 × 5, the convolution stride is 1, the number of convolution kernels is 64, the pooling size is 2 × 2, the pooling stride is 2, and the pooling type is max pooling;
the convolutional layers and the fully connected layer use the nonlinear activation function LeakyReLU, and the classification layer uses the softmax activation function.
Further, softmax classification is used as the output layer and cross entropy is selected as the loss function; the loss value is calculated through softmax:

$$L = -\sum_{i=1}^{k} label_i \log(q_i)$$

where k is the number of categories, $label$ represents the label value of the input data, and q represents the predicted value for the input data. Secondly, the effectiveness of the method is evaluated by the output accuracy:

$$accuracy = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big(q_i = y_i\big)$$

where $y$ represents the actual value of the input data, i.e., the true class of the tire front image, and N is the number of test samples.
With the tire wear degree prediction method based on a generative adversarial network of this embodiment, a user only needs to shoot a tire side photo to obtain the tire's wear degree, making it convenient to plan tire replacement in advance; this greatly saves labor and material costs as well as time and effort.
In summary, although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes and modifications can be made by those skilled in the art without departing from the spirit and scope of the invention.

Claims (5)

1. A tire wear degree prediction method based on a generative adversarial network, characterized by comprising the following steps:
s1: preprocessing the shot pictures of the side surfaces of the tires;
s2: reconstructing the tire side image processed by the S1 into a tire front image by using an IST-GAN network model framework;
s3: predicting the tire wear degree of the converted front image of the tire by using a TWP prediction model frame to obtain a corresponding prediction conclusion;
in the S2, the IST-GAN network model framework designs two loop conversion branches based on the two generators G1, G2, and the two generators G1, G2 are trained and optimized by means of the two discriminators D1, D2 in the two loop conversion branches;
the two loop transition branches comprise a forward loop consistency transition branch and a reverse loop consistency transition branch;
in the forward cycle consistency conversion branch, the generator G1 takes the tire front image p as input and synthesizes a forged tire side image G1(p); the generator G2 then reconstructs a corresponding tire front image G2(G1(p)) from the forged tire side image; the forward cycle consistency formula is

p → G1(p) → G2(G1(p)) ≈ p
in the reverse cycle consistency conversion branch, the generator G2 takes the tire side image s as input and synthesizes a forged tire front image G2(s); the generator G1 then reconstructs a corresponding tire side image G1(G2(s)) from the forged tire front image; the reverse cycle consistency formula is

s → G2(s) → G1(G2(s)) ≈ s
in order to ensure that each picture can be mapped to its target during the bidirectional cycle consistency conversion, a cycle consistency loss function is designed:

L_cyc(G1, G2) = E_{p~I_P}[ ||G2(G1(p)) - p||_1 ] + E_{s~I_s}[ ||G1(G2(s)) - s||_1 ]

wherein the generator G1 implements the mapping from the tire front data set I_P to the tire side data set I_s, and the generator G2 implements the mapping from the tire side data set I_s to the tire front data set I_P; E[·] denotes the expectation of a function; p~I_P denotes a tire front image randomly sampled from the tire front data set I_P; s~I_s denotes a tire side image randomly sampled from the tire side data set I_s; ||·||_1 denotes the L1 norm of a matrix; the corresponding generators G1 and G2 are learned by minimizing this loss;
in the forward cycle conversion process, a forward cycle adversarial loss function is designed between the generator G1 and the discriminator D1, so that the discriminator D1 can compare the synthesized forged tire side image with the input tire side image and thereby select the forged tire side image with the minimum synthesis loss; the forward cycle adversarial loss function is:

L_GAN(G1, D1, I_P, I_s) = E_{s~I_s}[ log D1(s) ] + E_{p~I_P}[ log(1 - D1(G1(p))) ]
in the reverse cycle conversion process, a reverse cycle adversarial loss function is designed between the generator G2 and the discriminator D2, so that the discriminator D2 can compare the synthesized forged tire front image with the input tire front image and thereby select the forged tire front image with the minimum synthesis loss; the reverse cycle adversarial loss function is:

L_GAN(G2, D2, I_s, I_P) = E_{p~I_P}[ log D2(p) ] + E_{s~I_s}[ log(1 - D2(G2(s))) ]
finally, the IST-GAN network model is obtained by minimizing the overall target loss; the objective function of the IST-GAN network model framework is:

L(G1, G2, D1, D2) = L_GAN(G1, D1, I_P, I_s) + L_GAN(G2, D2, I_s, I_P) + λ · L_cyc(G1, G2)

the relative importance of the discriminator loss functions and the generator loss function is controlled by the parameter λ; the larger the value of λ, the higher the weight of the generators' cycle consistency loss function;
the training processes of the IST-GAN network model framework in S2 and of the TWP prediction model framework in S3 both depend on the establishment of a tire sample data set, which includes:
taking a certain number of tires as samples, capturing front and side photographs at the same position of each sample tire to obtain a series of tire front and side images, and preprocessing the photographs to obtain a data set for training the IST-GAN network model framework;
measuring and recording the linear groove depth of each sample tire, setting three threshold labels as the classification basis according to the groove depth, dividing the samples into three classification data sets of different wear degrees, namely replacement recommended, good, and excellent, and using these classification data sets for training the TWP prediction model framework;
the TWP prediction model framework of S3 is composed of three input branches, three convolutional layers, a full connection layer and a classification layer; the training process comprises the following steps:
s31, inputting three types of classified data sets which are labeled and represent different wear degrees by using three input branches;
s32, mapping the tire image reconstructed in the S2 to a hidden layer feature space through three layers of convolutional layers, and performing feature extraction;
s33, inputting the output feature tensor into the full connection layer, and mapping the learned distributed feature expression to a sample mark space;
and S34, classifying the tire wear degree of the tire into one of three different wear degrees through the classification layer.
2. The tire wear degree prediction method based on a generative adversarial network according to claim 1, wherein
in S1, preprocessing the captured tire side picture means adjusting the format and pixels of the tire side image and converting the color of the image to grayscale.
3. The tire wear degree prediction method based on a generative adversarial network according to claim 1, wherein
during training, the model is corrected by recording the loss value and accuracy of each training iteration until training is finished.
4. The tire wear degree prediction method based on a generative adversarial network according to claim 1, wherein
the three convolutional layers are built following a convolution-pooling-activation pattern, wherein the convolution kernel size is 5 x 5, the convolution stride is 1, the number of convolution kernels is 64, the pooling window is 2 x 2 with a stride of 2, and max pooling is used;
the convolutional layers and the fully connected layer both use the nonlinear activation function LeakyReLU, and the classification layer uses a softmax activation function.
5. The tire wear degree prediction method based on a generative adversarial network according to claim 1, wherein
softmax classification is used as the output layer and cross entropy is selected as the loss function, the loss function value being calculated through softmax:

Loss = - Σ_{i=1}^{k} label_i · log(q_i)

wherein k is the number of categories, label denotes the label value of the input data, and q denotes the predicted value of the input data; the effectiveness of the method is then evaluated by the output accuracy:

Accuracy = (1/N) · Σ_{i=1}^{N} 1(q_i = y_i)

wherein y denotes the actual value of the input data, i.e., of the tire front image, and N is the number of samples.
CN202110769828.3A 2021-07-08 2021-07-08 Tire wear degree prediction method based on generation of countermeasure network Active CN113255847B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110769828.3A CN113255847B (en) 2021-07-08 2021-07-08 Tire wear degree prediction method based on generation of countermeasure network


Publications (2)

Publication Number Publication Date
CN113255847A CN113255847A (en) 2021-08-13
CN113255847B true CN113255847B (en) 2021-10-01

Family

ID=77190851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110769828.3A Active CN113255847B (en) 2021-07-08 2021-07-08 Tire wear degree prediction method based on generation of countermeasure network

Country Status (1)

Country Link
CN (1) CN113255847B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663386B (en) * 2022-03-21 2024-08-06 东南大学 Water film removing method for airport pavement disease image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101040637B1 (en) * 2008-12-04 2011-06-13 한국타이어 주식회사 Tire abrasion virtual test method and apparatus thereof
JP7132701B2 (en) * 2017-08-10 2022-09-07 株式会社ブリヂストン Tire image recognition method and tire image recognition device
US10730352B2 (en) * 2018-02-22 2020-08-04 Ford Global Technologies, Llc System and method for tire wear prognostics
CN110059751A (en) * 2019-04-19 2019-07-26 南京链和科技有限公司 A kind of tire code and tire condition recognition methods based on machine learning
CN111976389B (en) * 2020-08-03 2021-09-21 清华大学 Tire wear degree identification method and device
CN112270402A (en) * 2020-10-20 2021-01-26 山东派蒙机电技术有限公司 Training method and system for tire wear identification model

Also Published As

Publication number Publication date
CN113255847A (en) 2021-08-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant