CN110060240B - Tire contour measurement method based on image pickup - Google Patents

Tire contour measurement method based on image pickup

Info

Publication number
CN110060240B
Authority: CN (China)
Prior art keywords: tire, camera, point, image, points
Legal status: Active
Application number: CN201910282290.6A
Other languages: Chinese (zh)
Other versions: CN110060240A
Inventor: 向卫
Current Assignee: Nanjing Lian He Technology Co., Ltd.
Original Assignee: Nanjing Lian He Technology Co., Ltd.
Application filed by Nanjing Lian He Technology Co., Ltd.
Priority to CN201910282290.6A
Publication of CN110060240A
Application granted; publication of CN110060240B


Classifications

    • G01B 11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06V 10/462 — Salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • Y02T 90/00 — Enabling technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tire contour measurement method based on image pickup, which comprises the following steps. First, a sparse tire-pattern contour image and keypoint feature values are extracted from each single two-dimensional image of the same tire pattern shot at different positions. Second, a keypoint feature matrix of the image-pixel gray values and their change directions is established for the tire tread, and a number of identical keypoint feature values are computed across 2 or more sparse tread images. Then a three-dimensional tread-contour space curve is fitted by comparing predicted values of the camera intrinsic parameters and the three-dimensional camera-pose extrinsic parameters against the actual observed pixel values projected in 2 or more two-dimensional images. Finally, by comparing the actual three-dimensional tread data obtained between any two measurement times, the wear of the tire in use over that period can be calculated. The tread measurement and feedback method provided by the invention ensures safe running of the tire while providing a calculation basis for pay-per-use charging of the tire.

Description

Tire contour measurement method based on image pickup
Technical Field
The invention discloses a tire wear measurement method, and in particular relates to the technical field of image-processing-based measurement.
Background
Tire wear is mainly produced by the friction generated by sliding between the tire and the ground, and is closely related to driving conditions such as starting, turning and braking: the faster a vehicle turns, starts and brakes, the faster its tires wear.
When tire wear is severe, the tread surface becomes very smooth and poses a hidden danger to driving safety. A real-time, convenient and low-cost tread measurement and feedback method is therefore needed to monitor tread wear and tire condition, to ensure safe running, and to provide a calculation basis for pay-per-use charging of tires; at present no such real-time detection method for tread wear and tire condition exists.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the image-based tire contour measurement method can conveniently feed back tread measurements in real time and judge the amount of tire wear.
The invention adopts the following technical scheme for solving the technical problems:
the invention provides a tire contour measuring method based on image pickup, which specifically comprises the following steps:
step 1, shooting the same tire pattern image at different positions by using a camera of a mobile terminal, and extracting a sparse tire pattern contour image and key point characteristic values from each single two-dimensional image;
step 2, establishing a key point characteristic matrix of the gray value of the image pixel and the change direction according to the tire tread, and calculating a plurality of same key point characteristic values in 2 or more sparse images of the tire tread;
step 3, fitting a three-dimensional tread-contour space curve by comparing predicted values of the camera intrinsic parameters and the three-dimensional camera-pose extrinsic parameters with the actual observed pixel values projected in 2 or more two-dimensional images;
and 4, calculating the wear loss of the tire in the time period by comparing the actual three-dimensional space change data of the tire tread obtained in any time period.
Furthermore, in the tire contour measuring method provided by the invention, step 1 is to extract a sparse tire contour image by adopting SIFT features.
Furthermore, in step 2 of the tire contour measurement method provided by the invention, similarity matching is performed on the extracted tire-image feature description vectors using a feature-matching algorithm that is invariant to scale, image scaling and rotation; the specific steps are as follows:
2.1, using the Gaussian convolution kernel as the unique linear kernel for scale transformation, the two-dimensional image of a single tire pattern is convolved with Gaussian kernels at different scales to obtain the scale space:
L(u,v,σ)=G(u,v,σ)*I(u,v)
wherein G (u, v, sigma) is a scale variable Gaussian function, sigma is a scale space factor, and represents the smoothness of the image; i (u, v) is the gray value of the pixels of the two-dimensional image of the tire tread, u is the X-axis value of the pixels, and v is the Y-axis value of the pixels; l (u, v, sigma) is a pattern image which is blurred by a Gaussian function;
2.2, precisely positioning SIFT key points by adopting DOG operators:
the difference between adjacent scales of L(u, v, σ) is expressed by the DOG (difference-of-Gaussians) operator:
D(u,v,σ)=(G(u,v,kσ)-G(u,v,σ))*I(u,v)=L(u,v,kσ)-L(u,v,σ)
where k is a constant scale-multiplication factor;
let M = (u, v, σ)ᵀ; the Taylor expansion of D about the keypoint is:
D(M) = D + (∂D/∂M)ᵀ M + (1/2) Mᵀ (∂²D/∂M²) M
setting ∂D(M)/∂M = 0 and solving gives the extremum offset:
M′ = −(∂²D/∂M²)⁻¹ (∂D/∂M)
if M′ is greater than 0.5 in any direction, the keypoint is closer to another sampling point, and interpolation is used to replace the keypoint location;
2.3, eliminating unstable points:
order theTo measure the contrast of the feature points if D (M')<Beta, eliminating the key point; wherein β is an adjustment threshold;
2.4, determining the main direction of the key points:
taking the direction of the maximum local image gradient at the keypoint as the main direction of the keypoint, the gradient magnitude and direction are:
m(u,v) = √((L(u+1,v) − L(u−1,v))² + (L(u,v+1) − L(u,v−1))²)
θ(u,v) = arctan((L(u,v+1) − L(u,v−1)) / (L(u+1,v) − L(u−1,v)))
2.5, determining key point descriptors:
using 4×4 = 16 seed points to describe each keypoint, each seed point carrying an 8-direction gradient histogram, 4×4×8 = 128 values are generated, forming a 128-dimensional SIFT feature vector.
Further, in the tire contour measurement method provided by the invention, in step 3, three-dimensional contour dimensions of the tire tread are calculated by matching SIFT descriptors of each key point in 2 or more tire tread images, and the method specifically comprises the following steps:
3.1, establishing a camera pose equation of 2 tire images:
let P be the same spatial point in the 2 tire-pattern images; C₁ and C₂ are the camera optical-axis center points in the 2 images; π₁ and π₂ are the corresponding imaging planes; the plane C₁C₂P is the epipolar plane formed by the three points C₁, C₂ and P; the epipolar lines m₁e₁ and m₂e₂ are the intersections of the epipolar plane with the imaging planes; the epipoles e₁ and e₂ are the intersections of the line through the optical centers C₁ and C₂ with the imaging planes; the baseline C₁C₂ is the line joining the camera optical centers of the 2 images;
3.2, establishing a matching relation of the characteristic points through the block searching polar lines, wherein the matching relation is specifically as follows:
3.2.1, take the keypoint p₁ in the first tire-pattern image and build the small patch pixel matrix A ∈ R^(w×w) around p₁; search along the epipolar line for the patch in image 2 matching the SIFT descriptor from image 1, and denote the n candidate patches on the epipolar line e₂m₂ as Bᵢ, i = 1, …, n; here R^(w×w) denotes the set of w×w pixel patches and w is the patch side length;
3.2.2, compute the NCC normalized correlation S of A and B:
S(A,B) = Σᵢⱼ A(i,j)·B(i,j) / √(Σᵢⱼ A(i,j)² · Σᵢⱼ B(i,j)²)
taking α as the threshold of S: when S is greater than α, the point p₁ corresponding to patch A and the point p₂ corresponding to patch B on the epipolar line are a matching pair; matching points not on the contour line are eliminated;
3.3, establishing a camera epipolar constraint equation of different images:
the camera uses a 4-parameter model; let the position of the spatial point P be P = [X, Y, Z]ᵀ.
The normalized coordinates of the spatial point P in the camera frame and its coordinates in the world frame satisfy:
s·[u, v, 1]ᵀ = M·[x_w, y_w, z_w, 1]ᵀ,  M = K[R | t],  K = [[k_x, 0, u₀], [0, k_y, v₀], [0, 0, 1]]
where k_x is the magnification factor in the X-axis direction; k_y is the magnification factor in the Y-axis direction; (u₀, v₀) are the image coordinates of the optical-axis center point; R is the rotation matrix of the extrinsic parameters; t is the displacement vector of the extrinsic parameters; (u, v) are the image-point coordinates of the spatial point P; (x_w, y_w, z_w) are the world coordinates of the point P on the tire tread; M is the projection matrix;
and 3.4, calculating the pose of the two images and internal parameters of the camera through the key points, and calculating the world coordinates of the space points, wherein the method specifically comprises the following steps:
3.4.1, epipolar geometric constraint between the 2 camera poses:
let the camera normalized-plane coordinates of the point P in the 2 tire-pattern images be x₁ and x₂.
According to the pinhole camera imaging model, s₁p₁ = KP, s₂p₂ = K(RP + t)
where K is the camera intrinsic matrix, R is the rotation matrix between the 2 shooting positions, t is the displacement between the 2 positions, and s₁, s₂ are scale parameters;
the epipolar geometric constraint of the 2 camera poses is that C₁, C₂ and P are coplanar, i.e.:
x₂ᵀ (t∧R) x₁ = 0
the camera essential matrix E = t∧R (t∧ denoting the antisymmetric cross-product matrix of t) is a 3×3 matrix;
SVD decomposition gives E = UΣVᵀ
where U and V are orthogonal matrices and Σ is the singular-value matrix;
3.4.2, estimate the essential matrix E:
writing the 9 entries of E as the vector e, each pair of matched points gives
[u₁u₂, u₁v₂, u₁, v₁u₂, v₁v₂, v₁, u₂, v₂, 1] · e = 0
where (u₁, v₁) and (u₂, v₂) are the coordinates of the image point of the spatial point P in the 2 normalized planes;
similarly, for each of the i keypoints pⁱ:
[u₁ⁱu₂ⁱ, u₁ⁱv₂ⁱ, u₁ⁱ, v₁ⁱu₂ⁱ, v₁ⁱv₂ⁱ, v₁ⁱ, u₂ⁱ, v₂ⁱ, 1] · e = 0
solving this system of linear equations yields the essential matrix E of the camera;
3.4.3, recover the camera rotation and displacement pose R, t from the essential matrix:
from E = UΣVᵀ,  t∧ = U R_Z(±π/2) Σ Uᵀ and R = U R_Zᵀ(±π/2) Vᵀ
where U and V are orthogonal matrices, Σ = diag(σ₁, σ₂, σ₃) is the singular-value matrix, and R_Z(±π/2) is a rotation of ±π/2 about the Z axis;
3.4.4, calculate the three-dimensional size of the tread profile by comparing pairs of images:
take 2n pictures by continuous shooting with the camera, compare picture 1 with picture n+1 and, in general, picture i with picture i+n, and calculate the spatial point size Pᵢ (i = 1, …, n);
let P̄ = (1/n) Σᵢ₌₁ⁿ Pᵢ; the final contour mean size is P̄.
Further, in the tire profile measurement method provided by the invention, step 4 calculates the wear of the tire in use by comparing the difference in the three-dimensional tread dimensions between any 2 measurement times, specifically as follows:
4.1, connecting adjacent endpoints of adjacent contour lines of the three-dimensional space to form contour line corner characteristic values;
4.2, searching closed curves and straight line contours in the three-dimensional space of the tire tread, and connecting corner points of adjacent contour lines to form a semi-closed contour;
and 4.3, since the tire tread lies on the tire's surface of revolution, projecting the tread contour lines onto a plane containing the tire's rotation centerline, and calculating the tread depth dimension in that planar two-dimensional image as the dimension basis for tread wear.
Further, in the tire profile measurement method provided by the invention, step 4.3 specifically comprises:
in the tire image, the projection of a contour line onto the plane through the tire rotation axis is approximated as a straight line segment, and the mean wear-difference contour distance d between the 2 measurement times is calculated as:
d = (1/n) Σᵢ₌₁ⁿ (Dᵢ − dᵢ)
where Dᵢ is the tread depth at the first measurement time and dᵢ is the tread depth at the second measurement time.
Compared with the prior art, the technical scheme provided by the invention has the following technical effects:
the invention can rapidly and conveniently measure the wear use condition of the tire in real time, and provides a calculation basis for the use payment of the tire while ensuring the safe running of the tire.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a SIFT feature vector diagram.
Fig. 3 is a schematic diagram of the camera pose equation established for 2 tire-pattern images.
Fig. 4 is a schematic view of the projection of a contour line in a tire image, approximated as a straight line, onto a plane through the tire's rotation axis.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings:
it will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
According to the invention, images of the same tire tread shot at different positions with a mobile-terminal camera are used: a sparse tread-contour image and keypoint feature values are extracted from each single two-dimensional image; a keypoint feature matrix of the change rate and change direction of the image-pixel gray values is established for the tread; a number of identical keypoint feature values across 2 or more sparse tread images are computed; predicted values of the camera intrinsic parameters and the three-dimensional camera-pose extrinsic parameters are compared with the actual observed pixel values projected in 2 or more two-dimensional images; and a three-dimensional tread-contour space curve is fitted. By comparing the actual three-dimensional tread data obtained at any place between any two measurement times, the wear of the tire in use over that period can be calculated.
Referring to fig. 1, the specific steps of the present invention are as follows:
1. Install an APP platform on the mobile phone or mobile terminal (C end for short) of the tire user or measurer. The APP can call the C-end camera and illumination source and perform mobile communication; it shoots tire-pattern images, transfers them to a C end with computing resources or to a cloud back end where the three-dimensional tread model is computed, and feeds the result back to the C-end display port.
2. Extract a sparse tire-contour image from each single two-dimensional image using SIFT features.
SIFT (scale-invariant feature transform) similarity matching is performed on the extracted tire-pattern image feature description vectors, based on a feature-matching algorithm invariant to scale, image scaling and rotation.
2.1 Using the Gaussian convolution kernel as the unique linear kernel for scale transformation: let G(u, v, σ) be the two-dimensional Gaussian function with variance σ; the two-dimensional image of a single tire pattern is convolved with Gaussian kernels at different scales to form the scale space:
L(u,v,σ)=G(u,v,σ)*I(u,v)
wherein I (u, v) is the gray value of the pixels of the two-dimensional image of the tire tread, u is the X-axis value of the pixels, and v is the Y-axis value of the pixels;
G(u, v, σ) is the scale-variable Gaussian function, where σ is a scale-space factor representing the smoothness of the image;
L(u, v, σ) is the pattern image blurred by the Gaussian function.
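As a concrete illustration (not part of the patent text), the scale-space construction L(u,v,σ) = G(u,v,σ) * I(u,v) of step 2.1 can be sketched in plain numpy; the 3σ kernel radius and the σ values in the usage below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled 2-D Gaussian G(u, v, sigma), normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    ax = np.arange(-radius, radius + 1)
    uu, vv = np.meshgrid(ax, ax)
    g = np.exp(-(uu**2 + vv**2) / (2 * sigma**2))
    return g / g.sum()

def convolve2d(image, kernel):
    """Direct same-size convolution with edge padding (no SciPy needed)."""
    r = kernel.shape[0] // 2
    padded = np.pad(image, r, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * kernel)
    return out

def scale_space(image, sigmas):
    """L(u, v, sigma) = G(u, v, sigma) * I(u, v) for each sigma."""
    return [convolve2d(image, gaussian_kernel(s)) for s in sigmas]
```

Each returned layer is the tread image smoothed at one scale; a larger σ gives a blurrier layer.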
2.2 adopting DOG operator to accurately position SIFT key point,
the difference between adjacent scales of L(u, v, σ) is expressed by the DOG (difference-of-Gaussians) operator:
D(u,v,σ)=(G(u,v,kσ)-G(u,v,σ))*I(u,v)=L(u,v,kσ)-L(u,v,σ)
where k is a constant scale-multiplication factor;
let M = (u, v, σ)ᵀ; the Taylor expansion of D about the keypoint is:
D(M) = D + (∂D/∂M)ᵀ M + (1/2) Mᵀ (∂²D/∂M²) M
setting ∂D(M)/∂M = 0 and solving gives the extremum offset:
M′ = −(∂²D/∂M²)⁻¹ (∂D/∂M)
If M′ is greater than 0.5 in any direction, the keypoint is closer to another sampling point, and interpolation is used to replace the keypoint location.
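A minimal single-octave sketch (an illustration, not the patent's implementation) of the DOG construction and the 26-neighbourhood extremum test of step 2.2; the layer count and base σ are assumptions, and the sub-pixel Taylor refinement is omitted:

```python
import numpy as np

def blur(img, sigma):
    """Separable Gaussian blur with edge padding, sufficient for a DoG sketch."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(img, ((r, r), (0, 0)), mode="edge")
    tmp = np.array([[k @ pad[i:i + 2 * r + 1, j] for j in range(img.shape[1])]
                    for i in range(img.shape[0])])
    pad = np.pad(tmp, ((0, 0), (r, r)), mode="edge")
    return np.array([[k @ pad[i, j:j + 2 * r + 1] for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def dog_extrema(img, sigma=1.0, k=2**0.5, n_layers=4):
    """D(u,v,sigma) = L(u,v,k*sigma) - L(u,v,sigma); keep 26-neighbourhood extrema."""
    sigmas = [sigma * k**i for i in range(n_layers)]
    L = [blur(img, s) for s in sigmas]
    D = np.stack([L[i + 1] - L[i] for i in range(n_layers - 1)])
    pts = []
    for s in range(1, D.shape[0] - 1):          # interior DoG layers only
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                cube = D[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
                v = D[s, i, j]
                if v == cube.max() or v == cube.min():
                    pts.append((i, j, sigmas[s]))
    return pts
```

A blob whose size matches one of the middle scales shows up as an extremum at its centre.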
2.3 eliminating unstable points
Let D(M′) = D + (1/2) (∂D/∂M)ᵀ M′ measure the contrast of the feature point; if |D(M′)| < β, the keypoint is eliminated. Here β is an adjustment threshold, typically 0.03.
2.4 determination of the principal direction of the keypoints
Taking the direction of the maximum local image gradient at the keypoint as the main direction of the keypoint, the gradient magnitude and direction are:
m(u,v) = √((L(u+1,v) − L(u−1,v))² + (L(u,v+1) − L(u,v−1))²)
θ(u,v) = arctan((L(u,v+1) − L(u,v−1)) / (L(u+1,v) − L(u−1,v)))
2.5 determination of keypoint descriptors
For each keypoint, using 4×4 = 16 seed points to describe it, each seed point carrying an 8-direction gradient histogram, 4×4×8 = 128 values are generated, forming a 128-dimensional SIFT feature vector, see fig. 2.
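Steps 2.4–2.5 rest on the gradient magnitude and orientation of the blurred image; below is a minimal sketch (an illustration, with the full 4×4×8 descriptor layout omitted) of the magnitude/orientation computation and the magnitude-weighted 8-bin histogram used to pick a main direction:

```python
import numpy as np

def grad_mag_ori(L):
    """Finite-difference gradient magnitude m(u,v) and orientation theta(u,v)."""
    du = np.zeros_like(L); dv = np.zeros_like(L)
    du[1:-1, :] = L[2:, :] - L[:-2, :]      # L(u+1,v) - L(u-1,v)
    dv[:, 1:-1] = L[:, 2:] - L[:, :-2]      # L(u,v+1) - L(u,v-1)
    m = np.sqrt(du**2 + dv**2)
    theta = np.arctan2(dv, du)              # orientation in (-pi, pi]
    return m, theta

def main_direction(L, n_bins=8):
    """Magnitude-weighted orientation histogram; the peak bin gives the main direction."""
    m, theta = grad_mag_ori(L)
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=m.ravel(), minlength=n_bins)
    peak = int(np.argmax(hist))
    return (peak + 0.5) * 2 * np.pi / n_bins - np.pi   # bin-centre angle
```

On a pure intensity ramp the recovered direction is the ramp direction, up to the histogram bin width.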
3. Calculate the three-dimensional outline size of the tire tread through SIFT descriptor matching of each keypoint in 2 or more tread images.
3.1, establishing a camera pose equation of 2 tire images:
In FIG. 3, P is the same spatial point in the 2 taken tire-pattern images; C₁ and C₂ are the camera optical-axis center points in the 2 images; π₁ and π₂ are the corresponding imaging planes; the plane C₁C₂P is the epipolar plane formed by the three points C₁, C₂ and P; the epipolar lines m₁e₁ and m₂e₂ are the intersections of the epipolar plane with the imaging planes; the epipoles e₁ and e₂ are the intersections of the line through the optical centers C₁ and C₂ with the imaging planes; the baseline C₁C₂ is the line joining the camera optical centers of the 2 images.
3.2, establishing a matching relation of the feature points through the block search polar lines:
3.2.1 Take the keypoint p₁ in the first tire-pattern image and build the small patch pixel matrix A ∈ R^(w×w) around p₁; search along the epipolar line for the patch in image 2 matching the SIFT descriptor from image 1, and denote the n candidate patches on the epipolar line e₂m₂ as Bᵢ, i = 1, …, n.
3.2.2 Compute the NCC normalized correlation S of A and B:
S(A,B) = Σᵢⱼ A(i,j)·B(i,j) / √(Σᵢⱼ A(i,j)² · Σᵢⱼ B(i,j)²)
Taking α as the threshold of S (0.8 can be taken): when S is greater than α, the point p₁ corresponding to patch A and the point p₂ corresponding to patch B on the epipolar line are a matching pair; matching points not on the contour line are eliminated.
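A minimal sketch of the NCC patch search of step 3.2.2, under two stated assumptions: the images are rectified so the epipolar line is an image row, and the plain (non-mean-subtracted) normalized product is used for S:

```python
import numpy as np

def ncc(A, B):
    """Normalized correlation S of two equal-size patches (step 3.2.2)."""
    num = float(np.sum(A * B))
    den = float(np.sqrt(np.sum(A**2) * np.sum(B**2)))
    return num / den if den > 0 else 0.0

def match_along_epipolar(img1, img2, p1, row, w=3, alpha=0.8):
    """Slide the (2w+1)x(2w+1) patch around p1 along a rectified epipolar row
    of img2; return the best column and score if the score exceeds alpha."""
    i, j = p1
    A = img1[i - w:i + w + 1, j - w:j + w + 1]
    best_s, best_col = -1.0, None
    for col in range(w, img2.shape[1] - w):
        B = img2[row - w:row + w + 1, col - w:col + w + 1]
        s = ncc(A, B)
        if s > best_s:
            best_s, best_col = s, col
    return (best_col, best_s) if best_s > alpha else (None, best_s)
```

With α = 0.8 as in the text, weak correlations are rejected rather than matched.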
3.3, establishing a camera epipolar constraint equation of different images:
The camera uses a 4-parameter model; let the position of the spatial point P be P = [X, Y, Z]ᵀ.
The normalized coordinates of the spatial point P in the camera frame and its coordinates in the world frame satisfy:
s·[u, v, 1]ᵀ = M·[x_w, y_w, z_w, 1]ᵀ,  M = K[R | t],  K = [[k_x, 0, u₀], [0, k_y, v₀], [0, 0, 1]]
where k_x is the magnification factor in the X-axis direction; k_y is the magnification factor in the Y-axis direction; (u₀, v₀) are the image coordinates of the optical-axis center point; R is the rotation matrix of the extrinsic parameters; t is the displacement vector of the extrinsic parameters; (u, v) are the image-point coordinates of the spatial point P; (x_w, y_w, z_w) are the world coordinates of the point P on the tire tread; M is the projection matrix.
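The 4-parameter projection model of step 3.3 can be written out directly; the focal and centre values in the usage below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def intrinsics(kx, ky, u0, v0):
    """4-parameter intrinsic matrix K of step 3.3."""
    return np.array([[kx, 0.0, u0],
                     [0.0, ky, v0],
                     [0.0, 0.0, 1.0]])

def project(K, R, t, Pw):
    """Project a world point: s*[u, v, 1]^T = K [R | t] [xw, yw, zw, 1]^T."""
    Pc = R @ Pw + t          # camera-frame coordinates
    uvw = K @ Pc             # homogeneous image coordinates
    return uvw[:2] / uvw[2]  # pixel coordinates (u, v)
```

A point on the optical axis projects to the optical-axis centre point (u₀, v₀), as the model requires.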
3.4 calculating the pose of the two images and the internal parameters of the camera through the key points, and calculating the world coordinates of the space points
3.4.1 Epipolar geometric constraint between the 2 camera poses:
let the camera normalized-plane coordinates of the point P in the 2 tire-pattern images be x₁ and x₂.
According to the pinhole camera imaging model, s₁p₁ = KP, s₂p₂ = K(RP + t)
where K is the camera intrinsic matrix, R is the rotation matrix between the 2 shooting positions, t is the displacement between the 2 positions, and s₁, s₂ are scale parameters.
The epipolar geometric constraint of the 2 camera poses is that C₁, C₂ and P are coplanar, i.e.:
x₂ᵀ (t∧R) x₁ = 0
The camera essential matrix E = t∧R (t∧ denoting the antisymmetric cross-product matrix of t) is a 3×3 matrix.
SVD decomposition gives E = UΣVᵀ
where U and V are orthogonal matrices and Σ is the singular-value matrix.
3.4.2 Estimate the essential matrix E:
writing the 9 entries of E as the vector e, each pair of matched points gives
[u₁u₂, u₁v₂, u₁, v₁u₂, v₁v₂, v₁, u₂, v₂, 1] · e = 0
where (u₁, v₁) and (u₂, v₂) are the coordinates of the image point of the spatial point P in the 2 normalized planes.
Similarly, for each of the i keypoints pⁱ:
[u₁ⁱu₂ⁱ, u₁ⁱv₂ⁱ, u₁ⁱ, v₁ⁱu₂ⁱ, v₁ⁱv₂ⁱ, v₁ⁱ, u₂ⁱ, v₂ⁱ, 1] · e = 0
Solving this system of linear equations yields the essential matrix E of the camera.
3.4.3 Recover the camera rotation and displacement pose R, t from the essential matrix:
from E = UΣVᵀ,  t∧ = U R_Z(±π/2) Σ Uᵀ and R = U R_Zᵀ(±π/2) Vᵀ
where U and V are orthogonal matrices, Σ = diag(σ₁, σ₂, σ₃) is the singular-value matrix, and R_Z(±π/2) is a rotation of ±π/2 about the Z axis.
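Steps 3.4.2–3.4.3 can be sketched as a linear (8-point-style) estimate of E followed by its SVD decomposition. The row ordering follows the text above, so the estimate satisfies x₁ᵀ E x₂ = 0 for normalized coordinates; the synthetic data and the omission of the depth test that selects among the four (R, t) candidates are stated assumptions of this illustration:

```python
import numpy as np

def estimate_E(x1, x2):
    """Linear estimate of the essential matrix from normalized image
    coordinates x1[i] = (u1, v1), x2[i] = (u2, v2). Each row is
    [u1u2, u1v2, u1, v1u2, v1v2, v1, u2, v2, 1], as in the text, so the
    null vector e (smallest singular value) reshapes to E with x1^T E x2 = 0."""
    A = np.array([[u1*u2, u1*v2, u1, v1*u2, v1*v2, v1, u2, v2, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def decompose_E(E):
    """SVD E = U Sigma V^T; rotation candidates U Rz(+-90 deg) V^T and the
    translation direction U[:, 2] (up to sign and scale), as in step 3.4.3."""
    U, S, Vt = np.linalg.svd(E)
    Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    Rs = []
    for W in (Rz, Rz.T):
        R = U @ W @ Vt
        if np.linalg.det(R) < 0:   # fix the SVD sign ambiguity
            R = -R
        Rs.append(R)
    return Rs, U[:, 2]
```

With exact correspondences, the recovered E has the two equal non-zero singular values and one zero singular value characteristic of an essential matrix.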
3.4.4 Calculate the three-dimensional size of the tread profile by comparing pairs of images:
take 2n pictures continuously, compare picture 1 with picture n+1 and, in general, picture i (i = 1, …, n) with picture i+n, and calculate the spatial point size Pᵢ (i = 1, …, n).
Let P̄ = (1/n) Σᵢ₌₁ⁿ Pᵢ; the final contour mean size is P̄.
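Once K, R and t are known, the world coordinates of each matched point in step 3.4.4 are commonly recovered by linear triangulation; the following is a sketch under that assumption (the DLT method, which the patent does not name), with illustrative camera values:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover the world point X from two 3x4
    projection matrices P1, P2 and the pixel observations uv1, uv2."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # homogeneous null vector
    return X[:3] / X[3]
```

With exact projections the point is recovered to machine precision; with noisy matches the SVD gives the least-squares solution.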
4. Comparing the three-dimensional space size differences of the tire tread between any 2 time periods, and calculating the use abrasion loss of the tire:
4.1 connecting the adjacent endpoints of the adjacent contour lines of the three-dimensional space to form the characteristic value of the corner point of the contour line
4.2 searching closed curve and straight line contour in the three-dimensional space of the tire tread, connecting the corner points of adjacent contour lines to form a semi-closed contour
And 4.3, since the tire tread lies on the tire's surface of revolution, project the tread contour lines onto a plane containing the tire's rotation centerline, and calculate the tread depth dimension in that planar two-dimensional image as the dimension basis for tread wear.
As in the tire image shown in fig. 4, the projection of a contour line onto the plane through the tire rotation axis is approximated as a straight line segment.
The mean wear-difference contour distance d between the 2 measurement times is calculated as:
d = (1/n) Σᵢ₌₁ⁿ (Dᵢ − dᵢ)
where Dᵢ is the tread depth at the first measurement time and dᵢ is the tread depth at the second measurement time.
By comparing the actual three-dimensional tread data obtained at any place between any two measurement times, the amount of wear of the tire in use over that period can be calculated.
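The wear computation of step 4.3 reduces to the mean depth difference d = (1/n) Σ(Dᵢ − dᵢ); a trivial sketch (the depth unit, e.g. millimetres, is an assumption for illustration):

```python
import numpy as np

def mean_wear(depth_t1, depth_t2):
    """d = (1/n) * sum_i (D_i - d_i): mean tread-depth loss between the first
    and second measurement times, over n sampled contour points."""
    D = np.asarray(depth_t1, dtype=float)
    d = np.asarray(depth_t2, dtype=float)
    return float(np.mean(D - d))
```

For example, depths of 8.0/8.2/7.9 mm falling to 6.5/6.8/6.6 mm give a mean wear of 1.4 mm.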
Parts of the invention not described in detail are the same as, or can be implemented using, existing technology.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (5)

1. The method for measuring the tire profile based on image pickup is characterized by comprising the following steps:
step 1, shooting the same tire pattern image at different positions by using a camera of a mobile terminal, and extracting a sparse tire pattern contour image and key point characteristic values from each single two-dimensional image;
step 2, establishing a key point characteristic matrix of the gray value change rate and change direction of the pixels of the image according to the tire patterns, and calculating a plurality of same key point characteristic values in 2 or more sparse tire patterns; comprising the following steps: verifying a scale transformation unique linear kernel by adopting a Gaussian convolution kernel, precisely positioning SIFT key points by adopting a DOG operator, eliminating unstable points, determining a main direction of the key points and determining a key point descriptor;
step 3, fitting a three-dimensional tire tread contour space curve by comparing and calculating a change predicted value of a camera internal parameter and a three-dimensional space camera pose external parameter with an actual observed value projected on 2 or more two-dimensional space pixel values; and (3) calculating the three-dimensional outline size of the tire tread through SIFT descriptor matching of each key point in 2 or more tire tread images, wherein the three-dimensional outline size is specifically as follows:
3.1, establishing a camera pose equation of 2 tire images;
3.2, establishing a matching relation of the feature points through the block searching polar lines;
3.3, establishing a camera epipolar constraint equation of different images;
3.4, calculating the pose of the 2 images and the internal parameters of the camera through the key points, and calculating the world coordinates of the spatial points;
step 4, calculating the wear loss of the tire by comparing the three-dimensional space size differences of the tire tread between any 2 time periods, wherein the wear loss is specifically as follows:
4.1, connecting adjacent endpoints of adjacent contour lines of the three-dimensional space to form contour line corner characteristic values;
4.2, searching closed curves and straight line contours in the three-dimensional space of the tire tread, and connecting corner points of adjacent contour lines to form a semi-closed contour;
and 4.3, since the tire tread lies on the tire's surface of revolution, projecting the tread contour lines onto a plane containing the tire's rotation centerline, and calculating the tread depth dimension in that planar two-dimensional image as the dimension basis for tread wear.
2. The method according to claim 1, wherein step 1 is to extract a sparse image of the tread profile using SIFT features.
3. The method according to claim 2, characterized in that the specific steps of step 2 are as follows:
2.1, using the Gaussian convolution kernel as the unique linear kernel for scale transformation, the two-dimensional image of a single tire pattern is convolved with Gaussian kernels at different scales to obtain the scale space:
L(u,v,σ)=G(u,v,σ)*I(u,v)
wherein G (u, v, sigma) is a scale variable Gaussian function, sigma is a scale space factor, and represents the smoothness of the image; i (u, v) is the gray value of the pixels of the two-dimensional image of the tire tread, u is the X-axis value of the pixels, and v is the Y-axis value of the pixels; l (u, v, sigma) is a pattern image which is blurred by a Gaussian function;
2.2, precisely positioning SIFT key points by adopting DOG operators:
the difference between adjacent scales of L(u, v, σ) is expressed by the DOG (difference-of-Gaussians) operator:
D(u,v,σ)=(G(u,v,kσ)-G(u,v,σ))*I(u,v)=L(u,v,kσ)-L(u,v,σ)
where k is a constant scale-multiplication factor;
let M = (u, v, σ)ᵀ; the Taylor expansion of D about the keypoint is:
D(M) = D + (∂D/∂M)ᵀ M + (1/2) Mᵀ (∂²D/∂M²) M
setting ∂D(M)/∂M = 0 and solving gives the extremum offset:
M′ = −(∂²D/∂M²)⁻¹ (∂D/∂M)
If M′ is greater than 0.5 in any direction, the keypoint is closer to another sampling point, and interpolation is used to replace the keypoint location;
2.3, eliminating unstable points:
letting D(M̂) = D + (1/2)(∂D/∂M)ᵀM̂ measure the contrast of the feature point, if |D(M̂)| < β the keypoint is eliminated; wherein β is an adjustable threshold;
2.4, determining the main direction of the keypoints:
the direction of the maximum local image gradient at the keypoint is taken as the main direction of the keypoint, with gradient magnitude and direction:
m(u,v) = sqrt((L(u+1,v) − L(u−1,v))² + (L(u,v+1) − L(u,v−1))²)
θ(u,v) = arctan((L(u,v+1) − L(u,v−1)) / (L(u+1,v) − L(u−1,v)))
2.5, determining keypoint descriptors:
each keypoint is described using 4×4 = 16 seed points, each with an 8-direction gradient histogram, generating 4×4×8 = 128 data and forming a 128-dimensional SIFT feature vector.
4. A method according to claim 3, wherein in step 3 the three-dimensional contour dimensions of the tire are calculated by matching the SIFT descriptors of each keypoint across 2 or more tire images, specifically as follows:
3.1, establishing the camera pose relation of 2 tire images:
let P be the same spatial point in the 2 patterns; C1 and C2 are the camera optical centers in the 2 tire images; π1 and π2 are the camera imaging planes of the 2 images; the plane C1C2P is the epipolar plane formed by the three points C1, C2 and P; the epipolar lines m1e1 and m2e2 are the intersection lines of the epipolar plane with the imaging planes; the epipoles e1 and e2 are the intersection points of the baseline C1C2 with the imaging planes; the baseline C1C2 is the line connecting the camera optical centers of the 2 images;
3.2, establishing the matching relation of the feature points by block search along the epipolar line, specifically as follows:
3.2.1, taking a keypoint p1 in the first tire pattern and building the w×w pixel-patch matrix A ∈ R^(w×w) around p1; searching along the epipolar line for the patch matching the SIFT descriptors of images 2 and 1, and denoting the n patches on the epipolar line e2m2 as Bi, i = 1, …, n; wherein R^(w×w) is the set of w×w pixel patches and w is the patch side length in pixels;
3.2.2, calculating the normalized cross-correlation (NCC) score S of A and each Bi:
taking α as the threshold of S, when S is larger than α the point p1 corresponding to patch A matches the point p2 on the epipolar line corresponding to patch B; matching points not lying on a contour line are eliminated;
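A minimal sketch of the patch matching in steps 3.2.1–3.2.2: NCC is invariant to affine intensity changes, so a patch and its brightness/contrast-shifted copy score S = 1. The threshold value α = 0.85 is an assumed example, not a value fixed by the claim:

```python
import numpy as np

def ncc(A, B):
    """Normalized cross-correlation S(A, B) of two equal-size patches."""
    a = A - A.mean()
    b = B - B.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(A, candidates, alpha=0.85):
    """Pick the epipolar-line patch B_i with the highest NCC score,
    accepted only if the score exceeds the threshold alpha."""
    scores = [ncc(A, B) for B in candidates]
    i = int(np.argmax(scores))
    return (i, scores[i]) if scores[i] > alpha else (None, scores[i])
```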
3.3, establishing the camera epipolar constraint equation for different images:
the camera adopts a 4-parameter model; let the position of the spatial point P be P = [X, Y, Z]ᵀ;
the normalized coordinate position of the spatial point P in camera coordinates, with depth zc, relates to its position in the world coordinate system as:
zc[u, v, 1]ᵀ = [[kx, 0, u0], [0, ky, v0], [0, 0, 1]] [R t] [xw, yw, zw, 1]ᵀ = M′[xw, yw, zw, 1]ᵀ
wherein kx is the magnification factor in the X-axis direction; ky is the magnification factor in the Y-axis direction; (u0, v0) are the image coordinates of the optical-axis center point; R is the rotation matrix of the extrinsic parameters; t is the displacement vector of the extrinsic parameters; (u, v) are the image-point coordinates of the spatial point P; (xw, yw, zw) are the world coordinates of the point P on the tire tread; M′ is the basic (projection) matrix;
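The 4-parameter projection model (kx, ky, u0, v0) can be sketched as follows; the intrinsics and pose values in the usage are arbitrary examples, not values from the patent:

```python
import numpy as np

def project(P_w, K, R, t):
    """Pinhole projection: z_c [u, v, 1]^T = K (R P_w + t).

    K holds the 4 intrinsic parameters as [[kx, 0, u0], [0, ky, v0], [0, 0, 1]];
    R, t are the extrinsic rotation and displacement."""
    P_c = R @ P_w + t          # world -> camera coordinates
    uvw = K @ P_c              # apply intrinsics
    return uvw[:2] / uvw[2]    # divide by depth z_c to get (u, v)
```

A point on the optical axis projects to the principal point (u0, v0); off-axis points shift by kx·X/Z and ky·Y/Z.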
and 3.4, calculating the pose of the two images and the intrinsic parameters of the camera from the keypoints, and calculating the world coordinates of the spatial points, specifically as follows:
3.4.1, epipolar geometric constraint on the 2 camera poses:
let the camera normalized-plane coordinates of the point P in the 2 tire-pattern images be x1 and x2;
according to the pinhole camera imaging model, s1p1 = KP and s2p2 = K(RP + t),
wherein K is the camera intrinsic matrix, R is the rotation matrix between the 2 camera positions when shooting, t is the displacement between the 2 positions, and s1, s2 are scale parameters;
the epipolar geometric constraint of the 2 camera poses is that C1, C2 and P are coplanar,
i.e. x2ᵀ(t^R)x1 = 0;
the camera essential matrix E = t^R is a 3×3 matrix, wherein t^ is the skew-symmetric matrix of t;
SVD decomposition gives E = UΣVᵀ,
wherein U and V are orthogonal matrices and Σ is the singular-value matrix;
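A numerical check of the coplanarity constraint: with E = t^R built from the skew-symmetric matrix of t, every pair of normalized image points of the same spatial point satisfies x2ᵀEx1 = 0, and E has two equal singular values and one zero singular value. A sketch under assumed poses:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix t^ such that (t^) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = t^ R of the two camera poses."""
    return skew(t) @ R
```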
3.4.2, estimating the essential matrix E:
flattening E into the vector e, the epipolar constraint for one pair of matched image points becomes:
[u1u2, u1v2, u1, v1u2, v1v2, v1, u2, v2, 1] · e = 0
wherein (u1, v1) and (u2, v2) are the coordinates of the image points of the spatial point P in the 2 normalized planes;
similarly, for the i-th keypoint p^i:
[u1^i u2^i, u1^i v2^i, u1^i, v1^i u2^i, v1^i v2^i, v1^i, u2^i, v2^i, 1] · e = 0
and the essential matrix E of the camera is calculated by solving this system of linear equations;
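The linear system above is the classical eight-point estimate of E: stack one row per correspondence and take the right singular vector of the smallest singular value as e. In this sketch each row is the Kronecker product kron(x2, x1), which matches the constraint x2ᵀEx1 = 0 with e = E flattened row-major; the claim lists the equivalent ordering with the roles of the two images exchanged. Synthetic correspondences stand in for real matches:

```python
import numpy as np

def eight_point(x1s, x2s):
    """Estimate the essential matrix from >= 8 normalized-plane correspondences.

    Builds the homogeneous system A e = 0 (one row per point pair) and takes
    the null-space direction via SVD; the result is defined up to scale/sign."""
    A = np.array([np.kron(x2, x1) for x1, x2 in zip(x1s, x2s)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)   # smallest-singular-value vector -> 3x3 E
```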
3.4.3, recovering the camera rotation and displacement pose R, t from the essential matrix:
decomposing E = UΣVᵀ by SVD, wherein U and V are orthogonal matrices and Σ = diag(σ1, σ2, σ3) is the singular-value matrix, and recovering R and t from the factors U, Σ and V;
3.4.4, calculating the three-dimensional dimensions of the tire tread profile by pairwise comparison of a plurality of images:
taking 2n pictures by continuous shooting of the camera; comparing picture 1 with picture n+1, and generally picture i with picture i+n, and calculating the spatial point dimensions Pi, i = 1, …, n;
letting P̄ = (1/n)ΣPi, the final contour mean dimension is P̄.
5. The method according to claim 1, wherein step 4.3 is specifically:
in the tire image, the projection of the profile line onto the plane of the tire rotation axis is approximated as a straight-line segment, and the mean wear-difference contour distance d between 2 time periods is calculated as:
d = (1/n)Σ(Di − di)
wherein Di is the tread pattern depth when the tire is first used and di is the tread pattern depth in the second time period.
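The wear measure in claim 5 reduces to a mean of per-point depth differences between the two time periods; a minimal pure-Python sketch with illustrative names and sample depths:

```python
def mean_wear(D_first, d_second):
    """Mean wear d = (1/n) * sum(D_i - d_i) over n contour sample points,
    where D_i is the first-period tread depth and d_i the second-period depth."""
    if len(D_first) != len(d_second):
        raise ValueError("depth lists must have the same number of sample points")
    return sum(Di - di for Di, di in zip(D_first, d_second)) / len(D_first)
```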
CN201910282290.6A 2019-04-09 2019-04-09 Tire contour measurement method based on image pickup Active CN110060240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282290.6A CN110060240B (en) 2019-04-09 2019-04-09 Tire contour measurement method based on image pickup


Publications (2)

Publication Number Publication Date
CN110060240A CN110060240A (en) 2019-07-26
CN110060240B true CN110060240B (en) 2023-08-01

Family

ID=67317545


Country Status (1)

Country Link
CN (1) CN110060240B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062912B (en) * 2019-11-20 2023-05-26 杭州睿眼科技有限公司 Feature extraction, detection and positioning method for key targets of tire section
CN110942460B (en) * 2019-12-12 2023-01-31 湖南省鹰眼在线电子科技有限公司 Tire pattern depth measuring method, system and storage medium
CN111862048B (en) * 2020-07-22 2021-01-29 浙大城市学院 Automatic fish posture and length analysis method based on key point detection and deep convolution neural network
CN112150527B (en) * 2020-08-31 2024-05-17 深圳市慧鲤科技有限公司 Measurement method and device, electronic equipment and storage medium
CN114485527A (en) * 2020-10-26 2022-05-13 王运斌 Tire inner contour measuring method
CN112907568A (en) * 2021-03-22 2021-06-04 上海眼控科技股份有限公司 Tire wear condition determination method and apparatus, computer device, and storage medium
CN114046746B (en) * 2021-12-08 2024-08-13 北京汇丰隆智能科技有限公司 Vehicle tire abrasion 3D scanning on-line optical detection device and detection method
CN115147616B (en) * 2022-07-27 2024-08-20 安徽清洛数字科技有限公司 Road surface ponding depth detection method based on vehicle tire key points
CN117291893A (en) * 2023-09-28 2023-12-26 广州市西克传感器有限公司 Tire tread wear degree detection method based on 3D image
CN118096759B (en) * 2024-04-26 2024-07-12 深圳市二郎神视觉科技有限公司 Method and device for detecting tire tread pattern and electronic equipment

Citations (9)

Publication number Priority date Publication date Assignee Title
CN102435188A (en) * 2011-09-15 2012-05-02 南京航空航天大学 Monocular vision/inertia autonomous navigation method for indoor environment
CN103456022A (en) * 2013-09-24 2013-12-18 中国科学院自动化研究所 High-resolution remote sensing image feature matching method
CN103759716A (en) * 2014-01-14 2014-04-30 清华大学 Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm
CN104778721A (en) * 2015-05-08 2015-07-15 哈尔滨工业大学 Distance measuring method of significant target in binocular image
CN106097436A (en) * 2016-06-12 2016-11-09 广西大学 A kind of three-dimensional rebuilding method of large scene object
CN106996750A (en) * 2017-03-15 2017-08-01 山东交通学院 A kind of pattern depth measurement apparatus and pattern depth computational methods
CN107657130A (en) * 2017-10-18 2018-02-02 安徽佳通乘用子午线轮胎有限公司 A kind of reverse modeling method towards tyre tread parameter of structure design
CN108765298A (en) * 2018-06-15 2018-11-06 中国科学院遥感与数字地球研究所 Unmanned plane image split-joint method based on three-dimensional reconstruction and system
CN109523595A (en) * 2018-11-21 2019-03-26 南京链和科技有限公司 A kind of architectural engineering straight line corner angle spacing vision measuring method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
DE102008022879A1 (en) * 2007-05-10 2008-11-20 Atmel Germany Gmbh Wheel electronics and tire control system for measuring a measured variable


Non-Patent Citations (2)

Title
"Research on Lane Line Detection Based on Binocular Cameras"; Liu Yandong; China Master's Theses Full-text Database, Information Science and Technology; 2019-02-15; full text *
"Computer Vision Measurement Technology for Structural Part Dimensions"; Hou Yueqian; China Master's Theses Full-text Database, Information Science and Technology; 2012-04-15; full text *


Similar Documents

Publication Publication Date Title
CN110060240B (en) Tire contour measurement method based on image pickup
He et al. Sparse template-based 6-D pose estimation of metal parts using a monocular camera
Yu et al. Learning dense facial correspondences in unconstrained images
Leroy et al. Shape reconstruction using volume sweeping and learned photoconsistency
CN108381549B (en) Binocular vision guide robot rapid grabbing method and device and storage medium
EP2234064B1 (en) Method for estimating 3D pose of specular objects
CN110634161A (en) Method and device for quickly and accurately estimating pose of workpiece based on point cloud data
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN111897349A (en) Underwater robot autonomous obstacle avoidance method based on binocular vision
CN107274483A (en) A kind of object dimensional model building method
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN110866969A (en) Engine blade reconstruction method based on neural network and point cloud registration
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN110956661A (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN109272577B (en) Kinect-based visual SLAM method
CN114022542A (en) Three-dimensional reconstruction-based 3D database manufacturing method
CN104331907A (en) Method for measuring carrier speed based on ORB (Object Request Broker) character detection
Han et al. Target positioning method in binocular vision manipulator control based on improved canny operator
Chen et al. A comparative analysis between active structured light and multi-view stereo vision technique for 3D reconstruction of face model surface
Ventura et al. Structure and motion in urban environments using upright panoramas
Verma et al. Vision based object follower automated guided vehicle using compressive tracking and stereo-vision
CN110647925A (en) Rigid object identification method and device based on improved LINE-MOD template matching
CN106056599B (en) A kind of object recognition algorithm and device based on Object Depth data
Kallasi et al. Object detection and pose estimation algorithms for underwater manipulation
WO2018057082A1 (en) Curvature-based face detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant