CN117237725A - Image-based tire wear degree rapid detection method - Google Patents
- Publication number
- CN117237725A (application CN202311212612.2A)
- Authority
- CN
- China
- Prior art keywords
- tire
- image
- gradient
- wear
- tread
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an image-based method for rapidly detecting the degree of tire wear. Tire images of different types are obtained from tire factories or automobile manufacturers, and for each tire type one image is selected at each of five different wear degrees. The selected tire images are cropped, and noise is constructed to simulate the dirt and light spots of an actual tire. After this noise-based data enhancement, the mean value and root mean square value of the gray level co-occurrence matrix features, the improved gradient histogram features, and the local binary pattern features are extracted; the three feature vectors are respectively reduced in dimension and spliced together. The important parameters of a random forest classifier are then optimized with the whale optimization algorithm to obtain a trained random forest classifier model, which is used to test the tire image whose wear degree is to be detected. The method estimates the remaining service life of the tire and reminds the vehicle owner of the tire wear degree more quickly and conveniently.
Description
Technical Field
The invention relates to the technical field of tire wear degree detection, in particular to a rapid tire wear degree detection method based on images.
Background
As the main part of an automobile's running system in direct contact with the ground, the tire directly affects driving safety, smoothness, and comfort. To ensure sufficient driving force while the automobile is running, patterns are designed on the tire tread to increase the friction coefficient between tire and ground, increase driving force, and reduce slip. After the automobile has traveled a certain mileage, however, wear makes the tread pattern shallower; if the pattern becomes too shallow, the tire's grip and drainage capacity decrease, creating a serious hidden danger to driving safety. Detecting the degree of tire wear is therefore of great significance.
Currently, the degree of tire wear can be obtained by sensor detection, laser detection, and tire image recognition. Sensor detection is expensive: a sensor must be implanted in the tire, and the implanted sensor itself can affect the measurement. Laser detection uses an industrial camera to collect the light projected by a laser emitter into the tread grooves, and the detection process is complex. Tire image detection needs neither an in-tire sensor nor an external laser emitter; the tread pattern can be identified simply by collecting a tread image, making it a simple and practical detection method. Chinese patent application CN202211037254.1, published on 23 September 2022, discloses a "method for identifying tire wear" in which an RGB camera collects a tire image, the image is preprocessed, and the gray level, area, width, and sharpness similarity of groove regions versus normal groove regions are weighted and summed to obtain a structural wear level; the energy, contrast, inverse difference moment, and entropy of a gray level co-occurrence matrix give a material wear level, and the two are combined into the tire wear level. However, for wear changes caused by texture, that method uses only gray level co-occurrence matrix features: the number of features is small, more images are needed in the training stage, and implementation is difficult. In addition, the method can only classify wear as light, medium, or heavy, so the wear information it provides is limited and its identification accuracy is not high.
Disclosure of Invention
The invention aims to solve the problem that the existing method for identifying the tire wear degree based on the tire image is low in identification accuracy, and provides a method for rapidly detecting the tire wear degree based on the image.
In order to solve the problems, the invention is realized by the following technical scheme:
a tire wear degree rapid detection method based on images comprises the following steps:
step 1, shooting and collecting the front surface of each type of tire through a camera to obtain tread images under different wear degrees, at least obtaining one image of each type of tire, and establishing an original wear image library of the type of tire by utilizing the tread images;
step 2, preprocessing each original tread image in an original abrasion image library of each type of tire, namely uniformly cutting, graying and compressing each original tread image to obtain a plurality of preprocessed tread images, and further obtaining a preprocessed abrasion image library of each type of tire;
step 3, enhancing each preprocessed tread image in the preprocessed wear image library of each type of tire, namely obtaining a plurality of enhanced tread images by adding noise to each preprocessed tread image and expanding the image set, so as to obtain an enhanced wear image library of each type of tire;
Step 4, extracting features of each enhanced tread image in the enhanced wear image library of each type of tire, namely extracting gray level co-occurrence matrix feature vectors, improved gradient histogram feature vectors and local binary pattern feature vectors of each enhanced tread image, performing dimension reduction and splicing fusion on the three features to obtain final feature vectors, and further obtaining a feature wear image library of each type of tire;
step 5, training a random forest classifier model by utilizing the characteristic wear image library of each type of tire, selecting the training set and validation set by a k-fold cross-validation method in the training process, optimizing 2 parameters of the random forest classifier, the number of decision trees and the minimum leaf node size, by adopting the whale optimization algorithm, and selecting as the optimization objective function the opposite number of the validation accuracy, wherein the accuracy is the average accuracy of the model on the training set after k-fold cross-validation, so as to obtain the wear detection model of each type of tire; wherein k is a set value;
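As a rough illustrative sketch only (not the patented implementation), the whale optimization loop of step 5 could look like the following; the 2-D sphere objective is a hypothetical stand-in for the negative k-fold validation accuracy over the two random forest parameters, and all function and parameter names are assumptions:

```python
import numpy as np

def whale_optimize(objective, dim, bounds, n_whales=10, n_iter=50, seed=0):
    """Minimal Whale Optimization Algorithm sketch (Mirjalili & Lewis, 2016).

    `objective` maps a parameter vector to a scalar to be minimised; in the
    method above it would be the negative k-fold validation accuracy of the
    random forest as a function of (number of trees, minimum leaf size).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(n_whales, dim))       # whale positions
    fitness = np.array([objective(x) for x in X])
    best = X[fitness.argmin()].copy()

    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                          # a decreases linearly 2 -> 0
        for i in range(n_whales):
            r = rng.random(dim)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):               # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                   # explore: follow a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                       # spiral bubble-net update
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        fitness = np.array([objective(x) for x in X])
        if fitness.min() < objective(best):
            best = X[fitness.argmin()].copy()
    return best

# Toy objective standing in for "negative validation accuracy":
best = whale_optimize(lambda x: np.sum((x - 3) ** 2), dim=2, bounds=([0, 0], [10, 10]))
```

In the actual method the returned position would be rounded to integer values for the number of decision trees and the minimum leaf node size.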
step 6, shooting automobile license plates and different tire treads through a plurality of cameras respectively, and sending the shot automobile license plate images and different tire tread images into a cloud server;
Step 7, the cloud server identifies license plate numbers of the automobiles according to the license plate images of the automobiles, and the tire types are obtained by searching on the cloud server based on the identified license plate numbers;
step 8, the cloud server firstly preprocesses each tire tread image by adopting the method of step 2 to obtain a plurality of preprocessed tread images, and respectively extracting final feature vectors of the preprocessed tread images by adopting the method of step 4 to obtain a plurality of final feature vectors; then the final feature vectors are sent into a wear detection model of the corresponding type of tire for wear degree identification, and a plurality of wear degree identification results are obtained; and integrating a plurality of abrasion degree identification results by a voting method to obtain a final abrasion degree detection result.
In the above step 1, the wear degree of the tire includes five wear degrees, namely, zero wear, quarter wear, half wear, three-quarters wear, and full wear.
In the step 3, the specific process of enhancing each preprocessed tread image is as follows:
step 3.1, randomly selecting a pixel point (m, n) from the preprocessed tread image, taking the pixel point (m, n) as the center point of a two-dimensional normal distribution, and extending the same number of pixels up, down, left and right from the center point, thereby selecting a square noise adding range S in the preprocessed tread image;
step 3.2, for each pixel point (a, b) of the square noise adding range S, scaling the pixel point by adopting a scaling formula to obtain the scaled pixel point (x, y); wherein the scaling formula is:

x = m + λ(a − m), y = n + λ(b − n)

step 3.3, adding noise to each scaled pixel point (x, y) by using a two-dimensional normal distribution function, wherein the noise value f(x, y) of each scaled pixel point (x, y) is:

f(x, y) = 1 / (2πσ₁σ₂√(1 − ρ²)) · exp{ −[ (x − m)²/σ₁² − 2ρ(x − m)(y − n)/(σ₁σ₂) + (y − n)²/σ₂² ] / (2(1 − ρ²)) }

wherein (m, n) is a randomly selected pixel point in the preprocessed tread image, whose coordinates serve as the two means of the distribution, (a, b) is the current pixel point of the square noise adding range S centred on (m, n), (x, y) is the scaled pixel point corresponding to the current pixel point (a, b), λ is the scaling factor, σ₁ and σ₂ are the two standard deviations, and ρ is the correlation coefficient.
In the step 4, the specific process of extracting the features of each enhanced tread image is as follows:
step 4.1, calculating 4 texture classification features of the gray level co-occurrence matrix of each enhanced tread image, namely the angular second moment, entropy, contrast and inverse difference moment, and forming the gray level co-occurrence matrix feature vector from the mean value and root mean square value of the 4 texture classification features in different gradient directions at different step distances;
step 4.2, calculating an improved gradient histogram feature vector for each enhanced tread image;
Step 4.2.1, uniformly dividing each enhanced tread image into cell units;
step 4.2.2, calculating gradient values of 9 gradient directions of each cell unit, and further obtaining an original gradient vector of each cell unit; the gradient directions corresponding to the gradient values in the original gradient vector are 0 degree, 20 degree, 40 degree, 60 degree, 80 degree, 100 degree, 120 degree, 140 degree and 160 degree in sequence;
step 4.2.3, summing the gradient values of all the cell units corresponding to the gradient directions to obtain gradient sum of 9 gradient directions, and selecting the gradient direction with the maximum gradient sum from the gradient sum;
step 4.2.4, circularly right-shifting all gradient values in the original gradient vector to enable the gradient value corresponding to the gradient direction with the largest gradient sum to be located at the first position, thereby obtaining a reconstructed gradient vector of each cell unit;
step 4.2.5, carrying out normalized sliding operation on the cell units of each enhanced tread image to obtain an improved gradient histogram feature vector;
step 4.3, calculating a local binary pattern of each enhanced tread image to obtain a local binary pattern feature vector;
step 4.4, reducing the dimensions of the gray level co-occurrence matrix feature vector, the improved gradient histogram feature vector and the local binary pattern feature vector through principal component analysis, and then obtaining the final feature vector through splicing and fusion.
Compared with the prior art, training of the tire wear detection model only requires photographing tire treads at different wear degrees. First, very little image data is required: only one image per wear degree. Cropping expands the data set, and the added noise simulates the dirt and light spots of an actual tire, so more dimensional information is obtained from each image and fewer images are needed for model training. Second, after noise-based data enhancement, the mean value and root mean square value of the gray level co-occurrence matrix features, the improved gradient histogram features, and the local binary pattern features are extracted, and the three feature vectors are respectively reduced in dimension and spliced together; selecting more image features captures the image information more comprehensively, while dimension reduction and fusion keep the classifier input small, improving both the real-time performance and the accuracy of wear identification. Finally, model parameters are optimized with k-fold cross-validation, giving the model better accuracy and generalization. In actual detection, only images of the license plate and the tire tread need to be photographed; after cropping, enhancement, and feature extraction they are sent to the wear detection model of the corresponding tire type, and voting integration achieves rapid and accurate identification of the wear degree, with higher identification precision and stronger generalization across different tire types.
The method is used for estimating the residual service life of the tire and reminding the vehicle owner of the tire wear degree information more quickly and conveniently.
Drawings
FIG. 1 is a flow chart of a method for rapidly detecting the wear level of a tire based on an image.
Fig. 2 is an image of a tire at various degrees of wear, (a) brand new, i.e., zero wear, (b) quarter wear, (c) half wear, (d) three-quarters wear, and (e) full wear.
FIG. 3 is a pre-processed tread image of a half ground tire.
Fig. 4 is an enhanced tread image of a half ground tire.
Fig. 5 is a graph of contribution degrees of the first 30 principal components of three types of feature vectors, (a) is a graph of contribution degrees of the first 30 principal components, and (b) is a partial enlarged view of contribution degrees of the first 30 principal components of 10% or less.
Fig. 6 is a test image confusion matrix result.
Detailed Description
The invention will be further described in detail below with reference to specific examples and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the invention more apparent.
The invention provides an image-based method for rapidly detecting the degree of tire wear, suitable for occasions such as vehicle inspection stations, parking lot entrances and exits, and traffic intersections. A camera photographs the tire, the detection method rapidly determines the wear degree, and the detection result is displayed. The result is used to estimate the remaining life of the tire and to remind the vehicle owner to replace the tire in time when wear is severe.
Tire images of different types are obtained from a tire factory or an automobile manufacturer; for each type, front-view tire images are collected at the five different wear degrees, one image per degree. Each selected image is cropped and then expanded by noise-based data enhancement, where the noise function is constructed to mimic the influence of dirt patches and light spots on an actual tire and is applied within a determined noise adding range. The mean value and root mean square value of the gray level co-occurrence matrix (GLCM) features in different gradient directions, the improved gradient histogram (HOG) features, and the local binary pattern features of each image are then extracted. The three feature vectors are compressed by principal component analysis (PCA), then spliced and fused into the feature vector finally used for training. A random forest classifier is trained on the fused feature vectors, and its important parameters are optimized with the whale optimization algorithm to obtain the random forest classifier model.
After the model is trained, a license plate image and the tire tread images to be detected are acquired by cameras. The license plate number is recognized from the plate image by a recognition algorithm, vehicle information is looked up by number to obtain the tire type, and the trained wear detection model for that tire type is selected; this model can then detect the wear degree from the tire images shot by the camera. Each image is preprocessed with the same method used during model training, cut into 16 expanded images, and each of the 16 images undergoes feature extraction, dimension reduction, and feature fusion. The selected tire wear detection model classifies each of the 16 feature vectors to obtain 16 detection results, which are integrated by the voting method; the integrated result is the final wear degree identification result.
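The voting integration of the 16 per-crop detection results can be sketched as below (illustrative; the function name and labels are assumptions):

```python
from collections import Counter

def vote(predictions):
    """Majority vote over the per-crop wear predictions; ties are broken in
    favour of the label that first reached the top count."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-crop results from the 16 cropped images of one tread photo:
labels = ["half"] * 9 + ["quarter"] * 5 + ["three-quarter"] * 2
final = vote(labels)
```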
Referring to fig. 1, a method for rapidly detecting the abrasion degree of a tire based on an image comprises the following specific steps:
step 1, shooting and collecting the front surface of each type of tire through a camera, acquiring tread images under different wear degrees, acquiring at least one image of each tire with different wear degrees, and establishing an original wear image library of the type of tire by utilizing the tread images.
In the present invention, for each type of tire, tread images are obtained at five different degrees of wear. The wear degree is expressed as a value in the interval [0,1], where 0 means the tire pattern is unworn and 1 means it is completely worn. A brand-new (zero wear) tire has a wear degree value in [0,0.2], a quarter-worn tire in [0.2,0.4], a half-worn tire in [0.4,0.6], a three-quarters-worn tire in [0.6,0.8], and a fully worn tire in [0.8,1], as shown in fig. 2. Different types of tires, i.e., different models with different sizes and/or patterns, each have their own tire wear image library.
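Interpreting the intervals above (zero wear in [0,0.2] up to full wear in [0.8,1]), the mapping from a wear fraction to the five classes can be sketched as follows; the helper name and the handling of interval boundaries are assumptions:

```python
def wear_class(w):
    """Map a wear fraction w in [0, 1] (0 = unworn, 1 = fully worn) to one of
    the five wear classes; each class covers a 0.2-wide interval."""
    classes = ["zero wear", "quarter wear", "half wear",
               "three-quarters wear", "full wear"]
    return classes[min(int(w / 0.2), 4)]   # cap at the last class for w = 1
```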
Step 2, preprocessing each original tread image in the original wear image library of each type of tire, namely uniformly cropping, graying and compressing each original tread image to obtain a plurality of preprocessed tread images, and further obtaining the preprocessed wear image library of each type of tire.
Step 2.1, uniformly crop the original tread image, cutting each original tread image into n × n images (n > 3).
In this embodiment, all images are uniformly cut into a 4 × 4 grid so that every cropped image has the same size; the photographed images are 4096 × 3072, and each cropped image is 1024 × 768.
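The 4 × 4 uniform cut can be sketched with plain numpy slicing (illustrative; the function name is an assumption):

```python
import numpy as np

def tile_image(img, n=4):
    """Split an H x W image into an n x n grid of equal crops (H and W are
    assumed divisible by n, as with the 4096 x 3072 images cut into 16 tiles)."""
    h, w = img.shape[0] // n, img.shape[1] // n
    return [img[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(n) for j in range(n)]

# A 3072 x 4096 (rows x cols) placeholder image yields 16 crops of 768 x 1024:
crops = tile_image(np.zeros((3072, 4096), dtype=np.uint8))
```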
Step 2.2, gray each cropped tread image, converting it into a gray tread image.
In this embodiment, the batch of 16 cropped images is converted to gray images. The gray value of each pixel is computed as a weighted average of its red, green and blue components, with the formula:
Gray=0.299R+0.587G+0.114B
where Gray is the pixel Gray value and R, G, B is the red, green and blue component value of each pixel of the image.
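A direct sketch of the weighted-average graying formula above (the function name is an assumption):

```python
import numpy as np

def to_gray(rgb):
    """Gray = 0.299 R + 0.587 G + 0.114 B, applied per pixel to an
    H x W x 3 RGB array; the result is truncated back to uint8."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
```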
Step 2.3, compress each gray tread image.
In this embodiment, all 1024 × 768 gray images are compressed in batches to 256 × 256 using BiCubic interpolation: each pixel value of the compressed image is a weighted average of the pixels in the nearest 4 × 4 neighbourhood, with the weights given by the BiCubic basis function.
The original tread image of a half-worn tire is preprocessed to obtain the preprocessed tread image shown in fig. 3.
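The BiCubic basis function mentioned above is commonly the Keys cubic convolution kernel with a = −0.5; the text does not fix the kernel, so this choice is an assumption. A sketch of the weight function applied to each of the 4 × 4 neighbourhood distances:

```python
def bicubic_weight(x, a=-0.5):
    """Keys cubic convolution kernel used for BiCubic interpolation (assumed
    a = -0.5); returns the weight of a neighbour at distance x from the
    sampling position. Weights at offsets t-1, t, 1-t, 2-t sum to 1."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0
```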
Step 3, enhance each preprocessed tread image in the preprocessed wear image library of each type of tire, namely obtain a plurality of enhanced tread images by adding noise to each preprocessed tread image and expanding the image set, so as to obtain the enhanced wear image library of each type of tire.
In practice, dirt patches exist on parts of the tire and light spots appear in parts of the tire image. The added noise simulates these influences: the noise adding method applies moderate noise at random local positions, exploiting the fact that the two-dimensional normal distribution is bell-shaped, high in the middle and low at the edges, and integrates to 1 over the whole plane.
Step 3.1, randomly select a pixel point (m, n) from the preprocessed tread image, take it as the center point of the two-dimensional normal distribution, and extend the same number of pixels up, down, left and right from the center point, thereby selecting a square noise adding range S in the preprocessed tread image.
Step 3.2, for each pixel point (a, b) of the square noise adding range S, scale the pixel point with a scaling formula to obtain the scaled pixel point (x, y), where the scaling formula is:

x = m + λ(a − m), y = n + λ(b − n)

Step 3.3, add noise to each scaled pixel point (x, y) using the two-dimensional normal distribution function, where the noise value f(x, y) of each scaled pixel point (x, y) is:

f(x, y) = 1 / (2πσ₁σ₂√(1 − ρ²)) · exp{ −[ (x − m)²/σ₁² − 2ρ(x − m)(y − n)/(σ₁σ₂) + (y − n)²/σ₂² ] / (2(1 − ρ²)) }

where (m, n) is a randomly selected pixel point in the preprocessed tread image, whose coordinates serve as the two means of the distribution, (a, b) is the current pixel point of the square noise adding range S centred on (m, n), (x, y) is the scaled pixel point corresponding to the current pixel point (a, b), λ is the scaling factor, σ₁ and σ₂ are the two standard deviations, and ρ is the correlation coefficient.
In this embodiment, a pixel point (m, n) is randomly selected from the 256 × 256 preprocessed tread image; m and n are the pixel coordinates on the image (the coordinate unit is one pixel) and are taken as the two means μ₁ and μ₂ of the two-dimensional normal distribution. According to the image size, the two standard deviations σ₁ and σ₂ are chosen randomly in [0.5, 2], and the correlation coefficient ρ is chosen randomly in [−1, 1]. The noise adding range is centred on the randomly selected pixel point (m, n) and extends 50 pixels up, down, left and right, forming a square range S with a side length of 100 pixels.
Each pixel point (a, b) of the square range S is mapped to the scaled pixel point (x, y): a 1-pixel step in the horizontal or vertical direction between (a, b) and (m, n) corresponds to 1/20 of a unit on the coordinate axes of the two-dimensional normal distribution, i.e., the scaling factor is λ = 1/20, giving:

x = m + (a − m)/20, y = n + (b − n)/20

The noise value f(x, y) of each scaled pixel point (x, y) is then computed with the two-dimensional normal distribution function above.
the noise value of each point in the S range can be obtained through the method, for each gray level image, all gray level values are normalized, noise in the corresponding S range is added, the noisy gray level image can be obtained, and finally, 16 gray level images obtained through preprocessing of each image are expanded to 32 gray level images.
The preprocessed tread image of a half-worn tire is enhanced to obtain the enhanced tread image shown in fig. 4.
Step 4, extract features of each enhanced tread image in the enhanced wear image library of each type of tire, namely extract the gray level co-occurrence matrix feature vector, improved gradient histogram feature vector and local binary pattern feature vector of each enhanced tread image, perform dimension reduction and splicing fusion on the three features to obtain the final feature vector, and further obtain the feature wear image library of each type of tire.
Step 4.1, calculate 4 texture classification features of the gray level co-occurrence matrix of each enhanced tread image, namely the angular second moment, entropy, contrast and inverse difference moment, and form the gray level co-occurrence matrix feature vector from the mean value and root mean square value of the 4 texture classification features in different gradient directions at different step distances.
The gray level co-occurrence matrix is calculated as follows. Take any point (x, y) in the 256 × 256 image and another point (x + a, y + b) offset from it, and let their gray values be (g1, g2). As the point (x, y) moves over the whole image, various values of (g1, g2) occur; if the number of gray levels is t, there are t² possible combinations of (g1, g2). The original gray image has 256 gray levels, which would give too many combinations, so t is taken as 8: the 256 gray levels are converted into 8, i.e., every 32 gray levels are merged into 1. For the whole image, the number of occurrences of each (g1, g2) value is counted and arranged into a square matrix, and each count is normalized by the total number of occurrences into the probability of occurrence P(g1, g2); this square matrix is the gray level co-occurrence matrix. Different values of the offset (a, b) yield joint probability matrices under different conditions, chosen according to the periodic distribution of the texture: when (a, b) = (x, 0) the pixel pair is horizontal, i.e., a 0-degree scan; when (a, b) = (0, x) the pair is vertical, i.e., a 90-degree scan; when (a, b) = (x, x) the pair lies on the right diagonal, i.e., a 45-degree scan; and when (a, b) = (−x, −x) the pair lies on the left diagonal, i.e., a 135-degree scan, where x > 0 is the step distance between the two pixels in the horizontal or vertical direction.
Because of its large dimensionality, the gray level co-occurrence matrix is generally not used directly as a feature for distinguishing textures; instead, statistics constructed from it serve as texture classification features. The invention first selects 4 such features of the gray level co-occurrence matrix: angular second moment, entropy, contrast, and inverse difference moment. The mean value and root mean square value of each texture classification feature in different gradient directions at different step distances then form the gray level co-occurrence matrix feature vector.
The angular second moment of the gray level co-occurrence matrix is a measure of the uniformity of gray level distribution and the thickness of textures of an image, and the formula is as follows:
ASM = Σ_i Σ_j P(i, j)²
the entropy of the gray level co-occurrence matrix measures the randomness of the information content of the image, and the formula is as follows:
ENT = −Σ_i Σ_j P(i, j) log P(i, j)
the contrast of the gray level co-occurrence matrix reflects the definition of the image and the groove depth of the texture, and the formula is as follows:
Con = Σ_i Σ_j (i − j)² P(i, j)
The inverse difference moment of the gray level co-occurrence matrix, also called inverse variance, reflects the clarity and regularity of the texture, and the formula is:

IDM = Σ_i Σ_j P(i, j) / (1 + (i − j)²)

where P(i, j) is the normalized co-occurrence probability of gray levels i and j, i.e., the entry in row i and column j of the gray level co-occurrence matrix.
In this embodiment, for the four texture classification features, the step distance x of each texture classification feature is respectively selected to be 1 to 10, and the mean value and root mean square value features of each texture classification feature in four gradient directions of 0 degree, 45 degrees, 90 degrees and 135 degrees are extracted to form a feature vector with the length of 80.
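The 80-dimensional GLCM feature vector described above can be sketched in plain numpy as follows; the random test image and all names are assumptions, and the four statistics match the formulas given earlier:

```python
import numpy as np

def glcm(img8, dr, dc, levels=8):
    """Normalised gray level co-occurrence matrix for a row/column offset
    (dr, dc) on an image already quantised to `levels` gray levels."""
    h, w = img8.shape
    r0, r1 = max(0, -dr), min(h, h - dr)
    c0, c1 = max(0, -dc), min(w, w - dc)
    src = img8[r0:r1, c0:c1]
    dst = img8[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
    P = np.zeros((levels, levels))
    np.add.at(P, (src.ravel(), dst.ravel()), 1)    # count co-occurrences
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)                           # angular second moment
    ent = -np.sum(P[P > 0] * np.log(P[P > 0]))     # entropy
    con = np.sum((i - j) ** 2 * P)                 # contrast
    idm = np.sum(P / (1 + (i - j) ** 2))           # inverse difference moment
    return np.array([asm, ent, con, idm])

# Quantise a (here random) 256-level image to 8 levels, then build the
# 80-element vector: 10 step distances x 4 features x {mean, RMS} over the
# four scan directions 0, 45, 90 and 135 degrees.
img8 = np.random.default_rng(0).integers(0, 256, (64, 64)) // 32
feats = []
for x in range(1, 11):
    offsets = [(0, x), (-x, x), (-x, 0), (-x, -x)]      # 0 / 45 / 90 / 135 degrees
    per_dir = np.array([glcm_features(glcm(img8, dr, dc)) for dr, dc in offsets])
    feats.extend(per_dir.mean(axis=0))                  # mean over directions
    feats.extend(np.sqrt((per_dir ** 2).mean(axis=0)))  # root mean square
vec = np.array(feats)
```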
Step 4.2, computing an improved gradient histogram feature vector for each enhanced tread image.
In the conventional gradient histogram, 8×8 cell regions are selected, and the gradient vector of each cell contains 9 gradient values whose gradient directions are, in order, 0°, 20°, 40°, 60°, 80°, 100°, 120°, 140° and 160°; that is, the gradient vector of each cell is [0° gradient value, 20° gradient value, …, 160° gradient value]. Taking 2×2 cell units as a sliding block, the gradient histogram feature vector is obtained through the block's normalized sliding operation.
Considering that deviations in the shooting direction during tire image acquisition cause deviations in the gradient directions, and that the gradient direction intervals in the traditional gradient histogram vector always start from 0 degrees — which increases the difference between feature vectors extracted from images of the same wear level — the invention proposes an improved gradient histogram feature. The specific process is as follows:
1) Each reinforced tread image is uniformly segmented into cell units.
2) Calculate gradient values for the 9 gradient directions of each cell unit, thereby obtaining the original gradient vector of each cell unit. The gradient directions corresponding to the gradient values in the original gradient vector are 0°, 20°, 40°, 60°, 80°, 100°, 120°, 140° and 160° in sequence; that is, the original gradient vector of each cell unit is [0° gradient value, 20° gradient value, …, 160° gradient value].
3) And summing the gradient values of all the cell units corresponding to the gradient directions to obtain gradient sum of 9 gradient directions, and selecting the gradient direction with the maximum gradient sum.
4) And (3) circularly right-shifting all gradient values in the original gradient vector to enable the gradient value corresponding to the gradient direction with the largest gradient sum to be positioned at the first position, thereby obtaining the reconstruction gradient vector of each cell unit.
For example, if the gradient direction with the maximum gradient sum is 60 degrees, all gradient values in the original gradient vector are cyclically right-shifted so that the 60-degree gradient value is at the first position, giving the reconstructed gradient vector of each cell unit as [60° gradient value, 80° gradient value, 100° gradient value, 120° gradient value, 140° gradient value, 160° gradient value, 0° gradient value, 20° gradient value, 40° gradient value].

Likewise, if the gradient direction with the maximum gradient sum is 140 degrees, the reconstructed gradient vector of each cell unit is [140° gradient value, 160° gradient value, 0° gradient value, 20° gradient value, 40° gradient value, 60° gradient value, 80° gradient value, 100° gradient value, 120° gradient value].
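Steps 2)–4) amount to finding the dominant direction over all cells and cyclically shifting every cell vector so that its bin comes first; a minimal sketch, assuming the 9-bin cell vectors are stacked in a NumPy array (names are illustrative, not from the patent):

```python
import numpy as np

def reconstruct_cells(cell_vectors):
    """Shift every 9-bin cell gradient vector so that the globally dominant
    direction (largest gradient sum over all cells) occupies the first bin.

    cell_vectors : (n_cells, 9) array; bins correspond to 0, 20, ..., 160 deg.
    """
    totals = cell_vectors.sum(axis=0)        # gradient sum per direction
    k = int(np.argmax(totals))               # dominant direction index
    # cyclic shift putting bin k first (the patent phrases this as a
    # circular right shift of the gradient values)
    return np.roll(cell_vectors, -k, axis=1)
```

Because the shift is decided by a global statistic, all cells of one image are shifted by the same amount, which is what makes the feature insensitive to the shooting-direction offset.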
The directions of the pattern grooves of the same type of tire are basically consistent, so the maximum gradient values in the cell units mostly occur at the junction of the pattern groove and the tire tread.
5) The normalized sliding operation is performed on the cell units of each enhanced tread image to obtain an improved gradient histogram feature vector.
One sliding block yields a spliced feature vector of length 36 (2×2 cells × 9 bins). A 256×256 image is divided into 4×4 cell units; sliding the 2×2 block over them yields 9 blocks, so the improved gradient histogram feature vector extracted from one 256×256 image has length 9 × 36 = 324.
Step 4.3, calculating a local binary pattern of each enhanced tread image to obtain a local binary pattern feature vector.
The local binary pattern is defined over a 3×3 window: the gray values of the 8 neighboring pixels are compared with the gray value of the window's center pixel; if a surrounding pixel value is greater than (or equal to) the center pixel value, that pixel position is marked 1, otherwise 0. Comparing the 8 points in the 3×3 neighborhood therefore produces an 8-bit binary number, which is taken as the LBP value of the window's center pixel. The mathematical expression is:

LBP(x_c, y_c) = ∑_{p=0}^{7} s(i_p − i_c) · 2^p
where (x_c, y_c) are the coordinates of the center pixel, p indexes the p-th pixel of the neighborhood, i_p is the gray value of the neighborhood pixel, i_c is the gray value of the center pixel, and s is the sign function:

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise
The 256×256 image is then divided into 16 cells of size 64×64; the histogram of the LBP values of all pixels in each cell is computed and L2-normalized (each element of the vector is divided by the square root of the sum of squares of the vector), and finally the feature vectors of the 16 cells are spliced to obtain a local binary pattern feature vector of length 160.
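The windowed comparison can be sketched as follows, using the conventional s(x) = 1 for x ≥ 0 thresholding; the neighbor ordering (clockwise from the top-left) is illustrative — the patent does not fix one:

```python
import numpy as np

def lbp_code(window):
    """8-bit LBP code of the center pixel of a 3x3 window."""
    c = window[1, 1]
    # 8 neighbors, clockwise from the top-left corner (illustrative order)
    nbrs = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
            window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    # s(i_p - i_c) = 1 when the neighbor is >= the center, else 0
    return sum((1 if n >= c else 0) << p for p, n in enumerate(nbrs))
```

Per-cell histograms of these codes, L2-normalized and spliced, give the feature vector described above.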
Step 4.4, performing dimension reduction on the gray level co-occurrence matrix feature vector, the improved gradient histogram feature vector and the local binary pattern feature vector through principal component analysis, and then splicing and fusing them to obtain the final feature vector.
Principal Component Analysis (PCA) works by sequentially finding a set of mutually orthogonal coordinate axes in the original space; the choice of axes is closely tied to the data itself. The 1st new axis is the direction of maximum variance in the original data; the 2nd new axis is the direction of maximum variance in the plane orthogonal to the 1st axis; the 3rd axis is the direction of maximum variance in the plane orthogonal to the 1st and 2nd axes; and so on, until n such axes are obtained (n being the feature dimension). For axes obtained this way, most of the variance is contained in the first r (r < n) axes, while the variance along the later axes is almost 0. The remaining axes can therefore be ignored and only the first r axes containing most of the variance retained, which is equivalent to keeping the feature dimensions that carry most of the variance and discarding those with near-zero variance, thus achieving dimension reduction of the data features. After the n indicators are reduced to r principal components, the components are ordered by variance and called principal component 1, principal component 2, …, principal component r. The proportion of each principal component's variance in the total variance of the variable set is that component's contribution.
r is selected so as to retain the feature dimensions whose cumulative principal component contribution exceeds 98%. After PCA, the gray level co-occurrence matrix feature vector is reduced to length 15, the improved gradient histogram feature vector to length 30, and the local binary pattern feature vector to length 15; splicing and fusing the three gives a final feature vector of length 60, i.e. a feature vector of length 60 is extracted from each image. As shown in fig. 5, (a) is the contribution map of the top 30 principal components, and (b) is a partial enlargement of the top-30 contributions below 10%.
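A minimal SVD-based PCA sketch of the ≥98%-contribution truncation and fusion described above (pure NumPy, names illustrative; the reduced lengths 15/30/15 reported in the patent come from the data, not from settings fixed here):

```python
import numpy as np

def pca_reduce(X, var_keep=0.98):
    """Project X onto the fewest principal axes whose cumulative
    contribution (share of total variance) reaches var_keep."""
    Xc = X - X.mean(axis=0)                       # center the features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)    # cumulative contribution
    r = int(np.searchsorted(ratio, var_keep)) + 1
    return Xc @ Vt[:r].T                          # (n_samples, r) scores

def fuse(glcm_f, hog_f, lbp_f):
    """Reduce each feature family, then splice into one fused vector."""
    return np.hstack([pca_reduce(f) for f in (glcm_f, hog_f, lbp_f)])
```

Note that `pca_reduce` re-fits per call; in the patent's pipeline the projection learned on the training library would be stored and reused at detection time.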
Step 5, training a random forest classifier model using the characteristic wear image library of each type of tire. During training, the training set and verification set are selected by a k-fold cross-verification method, and 2 parameters of the random forest classifier — the number of decision trees and the minimum leaf node size — are optimized with a whale optimization algorithm. The objective function to be optimized is the negative of the verification set accuracy, where the accuracy is the model's average accuracy after k-fold cross-verification on the training set. This yields the wear detection model of each type of tire. Here k is a set value.
The random forest classifier is an algorithm that integrates multiple trees following the ensemble idea of ensemble learning. Its basic unit is the decision tree; to classify an input sample, the sample is fed into every tree for classification, and the classification results of the many weak classifiers are combined by voting to form a strong classifier — this is the random forest bagging idea. A decision tree is a model that makes decisions based on a tree structure: the data set is classified through a series of condition judgments until the required result is obtained; the starting point of the tree is the root node, the intermediate decision steps are internal nodes, and the classification results are leaf nodes. Node splitting here is based on the CART decision tree, which can be used for classification as well as regression. The random forest's base classifier is the CART decision tree, which splits on the Gini index: the Gini index Gini(D) reflects the probability that two samples drawn at random from data set D carry inconsistent class labels, so a lower Gini index means a higher purity of the data set D. Among the attribute set A, the attribute with the smallest Gini index is generally selected. When a new input sample arrives, every decision tree in the forest judges it in turn; the class chosen by the most trees is taken as the prediction, and this final result is the classification result of the random forest.
The whale optimization algorithm is a recent swarm intelligence optimization algorithm that mimics the predation behavior of whales in nature; it is used here to optimize the number of decision trees and the minimum leaf node size of the random forest. The objective function is the negative of the verification set accuracy, where the accuracy is the model's average accuracy after k-fold (k > 5) cross-verification on the training set; k is chosen as 8 according to the size of the verification set. For the 8-fold cross-verification, the 32 images of each wear degree in the training set are divided into 8 parts of 4 images each; during training, one part is taken in turn as the verification set while the remaining 7 parts participate in model training, and the average of the 8 verification results judges the optimization effect. The classifier with the best result is taken as the trained random forest classifier; training on different types of tires yields different classifier models, i.e. the wear detection models.
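A minimal sketch of the whale optimization update rules (shrinking encircling, random exploration, and the spiral bubble-net move). In the patent the objective is the negative 8-fold cross-verification accuracy of the random forest over (number of trees, minimum leaf size); here a toy sphere function stands in so the sketch stays self-contained, and all names are illustrative:

```python
import numpy as np

def woa_minimize(f, lb, ub, n_whales=10, iters=50, seed=0):
    """Minimal whale optimization algorithm sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(n_whales, dim))   # whale positions
    best = min(X, key=f).copy()                     # current best prey
    for t in range(iters):
        a = 2 - 2 * t / iters                       # a shrinks from 2 to ~0
        for i in range(n_whales):
            r = rng.random(dim)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):           # encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                               # explore a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                   # spiral bubble-net move
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best
```

To reproduce the patent's use, `f` would round its two coordinates to integers, train a random forest with those hyperparameters, and return the negative mean 8-fold cross-verification accuracy.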
Step 6, shooting automobile license plates and the different tire treads through a plurality of cameras respectively, and sending the shot automobile license plate images and tire tread images to a cloud server.
The cloud server stores a database of the tire type information of automobiles with different license plates, together with the tire wear degree detection models of the different tire types. In the actual detection process, different types of tires must correspond to different tire wear detection models: since in practice the number of tire types is large and a single model detects the wear of different tire types with low precision, after the photographed license plate number is recognized, the vehicle information and tire type information are queried online through the cloud server, and the corresponding tire wear detection model is determined from the queried tire type.
Step 7, the cloud server identifies the license plate number of the automobile from the automobile license plate image, and the tire type is obtained by searching on the cloud server based on the identified license plate number.
Each license plate corresponds to a vehicle type, and each vehicle type is fitted with the same type of tire when leaving the factory; the correspondence between license plates and tire types is stored in advance in the cloud server's database. If a user later changes the type of tire used on the automobile, the new tire type must be uploaded to the server, which updates the correspondence between the license plate and the tire type. From the license plate information, the cloud server can query the tire type and the tire wear detection model corresponding to that type.
Step 8, the cloud server firstly preprocesses each tire tread image by adopting the method of step 2 to obtain a plurality of preprocessed tread images, and respectively extracting final feature vectors of the preprocessed tread images by adopting the method of step 4 to obtain a plurality of final feature vectors; then the final feature vectors are sent into a wear detection model of the corresponding type of tire for wear degree identification, and a plurality of wear degree identification results are obtained; and integrating a plurality of abrasion degree identification results by a voting method to obtain a final abrasion degree detection result.
A tread image of an actual tire is acquired with a camera. Wear identification only requires that the image contain groove information; complete image information is not needed to identify the wear degree. Therefore, after the image is preprocessed and its features extracted — without data enhancement — it is fed into the wear detection model of the corresponding tire type to detect the wear degree, and the actual tire wear degree is identified with the voting method of ensemble learning.
The system calls the wear detection model corresponding to the automobile's tire type from among the wear detection models of all types. One acquired image is expanded by data preprocessing into n×n cropped images; feature extraction on each cropped image gives one feature vector (n×n cropped images correspond to n×n feature vectors); each feature vector is fed into the same wear detection model to obtain one wear degree identification result (n×n feature vectors correspond to n×n wear degree identification results); and the multiple results are integrated by the voting method, namely: the result occurring most often among the n×n wear degree identification results is taken as the finally identified tire wear degree. If several wear degree identification results occur with the same highest frequency, the distinct results are ordered, all identification results are averaged, and the tied result closest to the average is selected as the final wear result.
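The vote with the nearest-to-mean tie-break can be sketched as follows (function name illustrative):

```python
from collections import Counter

def vote_wear(results):
    """Integrate per-crop wear labels (serial numbers 1..5) by majority vote;
    ties among the most frequent labels are broken by the tied label
    closest to the mean of all results."""
    counts = Counter(results)
    top = max(counts.values())
    modes = sorted(k for k, v in counts.items() if v == top)
    mean = sum(results) / len(results)
    return min(modes, key=lambda w: abs(w - mean))  # nearest-to-mean tie-break
```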
The invention is further illustrated by the following specific examples:
The model was tested and verified on images of 6 tires of different wear degrees from a Chenglong H5 truck of Dongfeng Liuzhou Motor; the tire model fitted on this truck is 275/80R22.5. During testing, 50 images were acquired for each tire wear degree, 250 images in total, and each image was cut into 16 images by the image preprocessing. The image preprocessing is the same as in the training process, and the final feature vectors are obtained through the same feature extraction, reduction and fusion steps as in training, i.e. 16 feature vectors are computed from each acquired image. The feature vectors are classified by the trained random forest classifier model, so 16 classification results are obtained from each acquired image; these are integrated by the voting method, and the 16 results are finally merged into one wear result.
Voting integration is an ensemble learning approach that follows the majority-rule principle; the final result over the multiple images expanded from one image is determined by voting, i.e. the wear degree of each image is the wear degree that occurs most often among the wear degrees of all its expanded images. If several wear degrees occur with the same highest frequency, the distinct wear degrees are ordered — brand new, quarter wear, half wear, three-quarters wear and full wear have serial numbers 1, 2, 3, 4 and 5 respectively. The multiple feature vectors extracted from the expanded single image are processed, classified and detected to give multiple results; the serial numbers of these results are averaged, and among the results with the highest occurrence frequency the wear result closest to the average is selected as the final wear degree identification result of the image. The nearest distance dis is calculated as:
dis = min_j | W_j − (1/16) ∑_{i=1}^{16} N_i |,  W_j ∈ W_mode

where dis is the nearest distance, N_i is the wear serial number of the i-th expanded image, W_j is the j-th wear serial number among the 5 wear classes, and W_mode is the set of wear serial numbers occurring most often among the 16 images; the j attaining the minimum gives the final wear degree result serial number for each acquired image. The final 5-class detection accuracy is 89.6%, and the confusion matrix result is shown in fig. 6.
It should be noted that, although the examples described above are illustrative, this is not a limitation of the present invention, and thus the present invention is not limited to the above-described specific embodiments. Other embodiments, which are apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein, are considered to be within the scope of the invention as claimed.
Claims (4)
1. An image-based rapid detection method for tire wear degree, characterized by comprising the following steps:
step 1, shooting and collecting the front surface of each type of tire through a camera to obtain tread images under different wear degrees, at least obtaining one image of each type of tire, and establishing an original wear image library of the type of tire by utilizing the tread images;
Step 2, preprocessing each original tread image in an original abrasion image library of each type of tire, namely uniformly cutting, graying and compressing each original tread image to obtain a plurality of preprocessed tread images, and further obtaining a preprocessed abrasion image library of each type of tire;
step 3, reinforcing each preprocessed tread image in a preprocessed wear image library of each type of tire, namely, obtaining a plurality of reinforced tread images by adding noise to each preprocessed tread image and expanding, so as to obtain a reinforced wear image library of each type of tire;
step 4, extracting features of each enhanced tread image in the enhanced wear image library of each type of tire, namely extracting gray level co-occurrence matrix feature vectors, improved gradient histogram feature vectors and local binary pattern feature vectors of each enhanced tread image, performing dimension reduction and splicing fusion on the three features to obtain final feature vectors, and further obtaining a feature wear image library of each type of tire;
step 5, training a random forest classifier model by utilizing a characteristic abrasion image library of each type of tire, selecting a training set and a verification set by a k-fold cross verification method in the training process, optimizing 2 parameters of the number of decision trees and the minimum leaf node size of the random forest classifier by adopting a whale optimization algorithm, and selecting an optimized objective function as the opposite number of the verification set accuracy, wherein the accuracy is the average accuracy of the model after the k-fold cross verification on the training set, so as to obtain an abrasion detection model of each type of tire; wherein k is a set value;
Step 6, shooting automobile license plates and different tire treads through a plurality of cameras respectively, and sending the shot automobile license plate images and different tire tread images into a cloud server;
step 7, the cloud server identifies license plate numbers of the automobiles according to the license plate images of the automobiles, and the tire types are obtained by searching on the cloud server based on the identified license plate numbers;
step 8, the cloud server firstly preprocesses each tire tread image by adopting the method of step 2 to obtain a plurality of preprocessed tread images, and respectively extracting final feature vectors of the preprocessed tread images by adopting the method of step 4 to obtain a plurality of final feature vectors; then the final feature vectors are sent into a wear detection model of the corresponding type of tire for wear degree identification, and a plurality of wear degree identification results are obtained; and integrating a plurality of abrasion degree identification results by a voting method to obtain a final abrasion degree detection result.
2. The method for rapidly detecting the wear level of the tire based on the image according to claim 1, wherein in the step 1, the wear level of the tire includes five wear levels, namely, zero wear, quarter wear, half wear, three-quarters wear and full wear.
3. The method for rapidly detecting the wear degree of a tire based on images according to claim 1, wherein in the step 3, the specific process of reinforcing each preprocessed tread image is as follows:
step 3.1, randomly selecting a pixel point (m, n) from the preprocessed tread image, taking the pixel point (m, n) as the center point of a two-dimensional normal distribution, and extending the same number of pixels upward, downward, leftward and rightward from the center point, thereby selecting a square noise-adding range S in the preprocessed tread image;
step 3.2, for each pixel point (a, b) of the square noise adding range S, scaling the pixel point by adopting a scaling formula to obtain scaled pixel points (x, y); wherein the scaling formula is:
step 3.3, adding noise to each scaling pixel point (x, y) by using a two-dimensional normal distribution function, wherein the adding noise value f (x, y) of each scaling pixel point (x, y) is as follows:
wherein (m, n) is a randomly selected pixel point in the preprocessed tread image, (a, b) is the current pixel point of the square noise-adding range S obtained by taking the pixel point (m, n) as the center, (x, y) is the scaled pixel point corresponding to the current pixel point (a, b), the scaling factor is as defined in the scaling formula of step 3.2, σ1 and σ2 are the 2 variance parameters, and ρ is the correlation coefficient.
4. The method for rapidly detecting the wear degree of a tire based on images according to claim 1, wherein in step 4, the specific process of extracting the features of each enhanced tread image is as follows:
step 4.1, calculating 4 texture classification features of the gray level co-occurrence matrix of each enhanced tread image, namely an angular second moment, entropy, contrast and contrast score matrix, and forming a gray level co-occurrence matrix feature vector by using the mean value and the mean square error of the 4 texture classification features in different gradient directions under different pitches;
step 4.2, calculating an improved gradient histogram feature vector for each enhanced tread image;
step 4.2.1, uniformly dividing each enhanced tread image into cell units;
step 4.2.2, calculating gradient values of 9 gradient directions of each cell unit, and further obtaining an original gradient vector of each cell unit; the gradient directions corresponding to the gradient values in the original gradient vector are 0 degree, 20 degree, 40 degree, 60 degree, 80 degree, 100 degree, 120 degree, 140 degree and 160 degree in sequence;
step 4.2.3, summing the gradient values of all the cell units corresponding to the gradient directions to obtain gradient sum of 9 gradient directions, and selecting the gradient direction with the maximum gradient sum from the gradient sum;
Step 4.2.4, circularly right-shifting all gradient values in the original gradient vector to enable the gradient value corresponding to the gradient direction with the largest gradient sum to be located at the first position, thereby obtaining a reconstructed gradient vector of each cell unit;
step 4.2.5, carrying out normalized sliding operation on the cell units of each enhanced tread image to obtain an improved gradient histogram feature vector;
step 4.3, calculating a local binary pattern of each enhanced tread image to obtain a local binary pattern feature vector;
and 4.4, performing dimension reduction on the gray level co-occurrence matrix feature vector, the improved gradient histogram feature vector and the local binary pattern feature vector through principal component analysis, and then splicing and fusing them to obtain the final feature vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311212612.2A CN117237725A (en) | 2023-09-19 | 2023-09-19 | Image-based tire wear degree rapid detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311212612.2A CN117237725A (en) | 2023-09-19 | 2023-09-19 | Image-based tire wear degree rapid detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117237725A true CN117237725A (en) | 2023-12-15 |
Family
ID=89094399
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311212612.2A Pending CN117237725A (en) | 2023-09-19 | 2023-09-19 | Image-based tire wear degree rapid detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117237725A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117571341A (en) * | 2024-01-16 | 2024-02-20 | 山东中亚轮胎试验场有限公司 | System and method for detecting omnibearing wear of tire |
CN117571341B (en) * | 2024-01-16 | 2024-05-14 | 山东中亚轮胎试验场有限公司 | System and method for detecting omnibearing wear of tire |
CN117876382A (en) * | 2024-03-13 | 2024-04-12 | 咸阳黄河轮胎橡胶有限公司 | System and method for detecting tread pattern defects of automobile tire |
CN117876382B (en) * | 2024-03-13 | 2024-06-18 | 咸阳黄河轮胎橡胶有限公司 | System and method for detecting tread pattern defects of automobile tire |
CN118132793A (en) * | 2024-04-30 | 2024-06-04 | 湖北军缔悍隆科技发展有限公司 | Tire wear detection method and system |
CN118132793B (en) * | 2024-04-30 | 2024-08-09 | 湖北军缔悍隆科技发展有限公司 | Tire wear detection method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117237725A (en) | Image-based tire wear degree rapid detection method | |
CN107729818B (en) | Multi-feature fusion vehicle re-identification method based on deep learning | |
Eisenbach et al. | How to get pavement distress detection ready for deep learning? A systematic approach | |
CN106127747B (en) | Car surface damage classifying method and device based on deep learning | |
CN108960055B (en) | Lane line detection method based on local line segment mode characteristics | |
CN106295124B (en) | The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts | |
CN109740478A (en) | Vehicle detection and recognition methods, device, computer equipment and readable storage medium storing program for executing | |
CN116758059B (en) | Visual nondestructive testing method for roadbed and pavement | |
CN111191628B (en) | Remote sensing image earthquake damage building identification method based on decision tree and feature optimization | |
CN104182763A (en) | Plant type identification system based on flower characteristics | |
CN103034838A (en) | Special vehicle instrument type identification and calibration method based on image characteristics | |
CN1322471C (en) | Comparing patterns | |
CN111735524A (en) | Tire load obtaining method based on image recognition, vehicle weighing method and system | |
KR101941043B1 (en) | Method for Object Detection Using High-resolusion Aerial Image | |
CN114170418B (en) | Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching | |
CN110188828A (en) | A kind of image sources discrimination method based on virtual sample integrated study | |
CN115937518A (en) | Pavement disease identification method and system based on multi-source image fusion | |
CN109190451B (en) | Remote sensing image vehicle detection method based on LFP characteristics | |
CN116596428A (en) | Rural logistics intelligent distribution system based on unmanned aerial vehicle | |
Girish et al. | Tire Imprint Identification and Classification using VGG19 | |
Sugiharto et al. | Comparison of SVM, Random Forest and KNN Classification By Using HOG on Traffic Sign Detection | |
EP2380110B1 (en) | A method for evaluating quality of image representing a fingerprint pattern | |
Wang et al. | A line-based skid mark segmentation system using image-processing methods | |
US20240046625A1 (en) | De-biasing datasets for machine learning | |
CN115359346B (en) | Small micro-space identification method and device based on street view picture and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||