CN115147616A - Method for detecting depth of surface accumulated water based on key points of vehicle tire - Google Patents

Method for detecting depth of surface accumulated water based on key points of vehicle tire

Info

Publication number
CN115147616A
CN115147616A · Application CN202210896536.0A
Authority
CN
China
Prior art keywords
point
tire
ellipse
positive
depth
Prior art date
Legal status
Pending
Application number
CN202210896536.0A
Other languages
Chinese (zh)
Inventor
汪应山
Current Assignee
Anhui Qingluo Digital Technology Co ltd
Original Assignee
Anhui Qingluo Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Qingluo Digital Technology Co ltd
Priority to CN202210896536.0A
Publication of CN115147616A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/24: Aligning, centring, orientation detection or correction of the image
    • G06V 10/32: Normalisation of the pattern dimensions
    • G06V 10/764: Recognition or understanding using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Recognition or understanding using neural networks
    • G06V 2201/07: Target detection

Abstract

The invention discloses a method for detecting the depth of road-surface accumulated water based on key points of a vehicle tire. The method fully considers how vehicle tires differ in appearance in monitoring images, handles the three cases of a perfect circle, a positive (axis-aligned) ellipse and a tilted (partial) ellipse separately, and thereby improves the accuracy of water-depth detection.

Description

Method for detecting depth of surface accumulated water based on key points of vehicle tire
Technical Field
The invention relates to the technical field of depth detection of surface water, in particular to a depth detection method of surface water based on key points of a vehicle tire.
Background
In recent years, urbanization in China has advanced rapidly, and flood-season waterlogging in some cities poses a serious threat to people's property and personal safety. Cities and counties have therefore established urban waterlogging early-warning systems that make full use of surveillance cameras, observing through video whether water has accumulated at monitored locations.
The invention patent application CN114299457A, published by the national intellectual property office on April 8, 2022, discloses a water-accumulation depth monitoring method in which a camera device collects a color image and a depth image of the region to be detected, and the water depth is judged from the image depth difference between a normal moment and a flooded moment.
The invention patent application CN109741391A, published by the national intellectual property office on May 10, 2019, also discloses a method for detecting the depth of water accumulated on a road surface. Because road surfaces carry many straight-line features, such as lane lines, curbs and adjacent cars, judging by whether the wheel's edge information contains a straight edge carries a high risk of misjudgment.
The invention patent CN109632037B, granted by the national intellectual property office on June 5, 2020, discloses an urban waterlogging depth detection method based on intelligent image recognition. Its drawback is that training the Faster R-CNN network requires vehicle wading pictures that comprehensively cover tire sizes and wading depths: if the submergence levels are divided finely, the number of required wading pictures multiplies and real wading pictures at each level become hard to obtain; if the levels are divided coarsely, the accuracy of the computed water depth suffers directly. Moreover, although a tire is essentially a perfect circle, the images captured by a monitoring camera are rarely perfect circles and are mostly elliptical.
Disclosure of Invention
To address the technical problems of existing road-surface water-depth detection methods, the invention provides a method for detecting the depth of accumulated water based on key points of a vehicle tire.
A method for detecting the depth of road-surface accumulated water based on key points of a vehicle tire comprises the following steps:
step 1, extracting frame images at intervals from a real-time video acquired by a monitoring camera;
step 2, preprocessing an image;
step 3, carrying out tire detection on the preprocessed image by using a tire identification network, and marking the detected tires with bounding boxes;
step 4, intercepting a tire picture, extracting a part of tire contour above the water accumulation surface, and fitting the tire contour into a complete tire outer contour;
step 5, judging whether the outer contour of the tire is a perfect circle, if so, skipping to step 9, otherwise, sequentially executing step 6;
step 6, judging whether the outer contour of the tire is a positive ellipse, if so, skipping to step 8, otherwise, sequentially executing step 7;
step 7, the outer contour of the tire is a partial ellipse, and the tire is corrected to be a positive ellipse;
step 8, correcting the positive ellipse into a positive circle;
step 9, determining the position of the circle center according to the perfect circle data;
and step 10, calculating the depth of the accumulated water by combining the position of the circle center.
Further, the specific operation of step 9 is: selecting any point A on the perfect circle, and connecting two intersection points B and C of the ponding surface and the perfect circle with the any point A to form two chord line segments AB and AC of the perfect circle; and respectively making perpendicular bisectors of the chord line segments AB and AC, wherein a point O of intersection of the two perpendicular bisectors is the center of the circle of the perfect circle.
Further, the specific operation of step 10 is: and calculating the included angle between the line segment OA and the horizontal line or the vertical line, and then combining the included angle with the diameter of the tire to calculate the depth of the accumulated water.
Further, the tire diameter is determined based on a tire size classification network, and the tire size classification network is obtained by training a large number of vehicle samples with labeled tire sizes; the tire size classification network inputs a picture containing a single vehicle and outputs a vehicle tire size type.
Further, the step 7 of correcting the partial ellipse into the positive ellipse specifically includes the following steps:
step 7.1, determining intersection points F and G of the tire and the water accumulation surface, wherein the point F is closer to the shooting lens;
step 7.2, taking a point H adjacent to point F on the tilted-ellipse tire contour, a straight line through point H parallel to the water surface intersecting the tilted-ellipse contour at a point I;
7.3, estimating the coordinates of the transformed points F′, G′, H′ and I′ of points F, G, H and I based on the perspective principle and on the transformed water surface being level;
7.4, obtaining a perspective transformation matrix from the four point coordinates of the source image and the corresponding coordinates of the target image;
and 7.5, obtaining a corrected positive ellipse according to the perspective transformation matrix.
Further, the step 8 of correcting the positive ellipse into the positive circle specifically includes the following steps:
step 8.1, determining the central point O' of the positive ellipse, which comprises the following steps:
step 8.1.1, randomly taking three points J, K and L on the positive ellipse and drawing the tangent to the ellipse at each of them, the tangent at point J intersecting the tangent at point K at a point M, and the tangent at point J intersecting the tangent at point L at a point N,
step 8.1.2, connecting point M with midpoint P of segment JK, and connecting point N with midpoint Q of segment JL,
step 8.1.3, the intersection point of the extension lines of the line segment MP and the line segment NQ is the central point O' of the positive ellipse,
step 8.2, drawing a vertical ray upward from point O′ to intersect the ellipse at a point R, the segment O′R being the semi-minor axis of the ellipse;
step 8.3, with point O′ fixed, transversely compressing the positive ellipse so that the length 2 × O′R approximates the nearest standard tire diameter;
step 8.4, judging whether the transversely compressed positive ellipse approaches a perfect circle; if so, the correction is complete, jump to step 8.6, otherwise execute step 8.5;
step 8.5, further transversely compressing so that 2 × O′R approaches the next larger standard tire diameter, and then jumping back to step 8.4;
step 8.6, ending.
Further, if no tire is detected in the current frame image in step 3, detection continues with the next frame image; if no tire is detected in N consecutive frames, the water accumulation is judged to be severe and an early warning is issued.
Further, the water surface appears as a straight line in the image, and this straight line is obtained using the Hough line detection method.
The method fully considers the difference of the vehicle tires in the monitoring image, carries out condition-based processing, and improves the accuracy of the accumulated water depth detection; by combining the deep learning network, the plane geometry theory and the perspective basic theory, compared with the method which only depends on the deep learning network, the method greatly saves hardware resources because the coordinate conversion based on the plane geometry theory and the perspective basic theory can be realized through simple codes.
Drawings
FIG. 1 is a flow chart of a method for detecting depth of surface water accumulation based on key points of a vehicle tire;
FIG. 2 is a schematic diagram of a sample image preprocessing operation;
FIG. 3 is a schematic view of fitting the outer contour of a complete tire;
FIG. 4 is a schematic diagram of correction of a partial ellipse to a positive ellipse;
FIG. 5 is a schematic view of the center of a positive ellipse;
fig. 6 is a schematic diagram of water depth calculation.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description. The embodiments of the present invention have been presented for purposes of illustration and description, and are not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Example 1
A method for detecting depth of surface gathered water based on key points of a vehicle tire is shown in figure 1 and comprises the following steps:
1. extracting a frame image
The camera films the road surface in real time to form a video stream; since accumulated water essentially never surges instantaneously, only one frame is extracted from the real-time video stream at a fixed interval, for example one frame every 5 or 8 minutes.
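The interval sampling above reduces a continuous stream to a handful of frames. A minimal sketch of the index arithmetic, assuming a hypothetical fixed frame rate (the patent does not specify one):

```python
# Sketch: which frame indices to pull from a stream, given a fixed frame
# rate and a sampling interval. Both numbers are illustrative assumptions;
# the patent only requires "one frame every few minutes".
def sample_frame_indices(fps: float, interval_minutes: float, duration_minutes: float):
    """Return the frame indices to extract from a video window."""
    step = int(fps * interval_minutes * 60)   # frames between two samples
    total = int(fps * duration_minutes * 60)  # total frames in the window
    return list(range(0, total, step))
```

At an assumed 25 fps with a 5-minute interval, a 20-minute window yields four sampled frames.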
2. Image pre-processing
The monitoring images captured at different moments may differ greatly, for example in their pixel distributions; such differences hinder convergence of the neural network in the training phase and accurate identification in the detection phase. Therefore both the sample images and the images to be detected need to undergo a preprocessing operation.
A flowchart of the sample image preprocessing operation in this embodiment is shown in fig. 2, and the specific contents are as follows:
① Size normalization
All monitoring images are resized to 224 × 224; oversized images increase the video memory required for neural-network training and hinder parameter training.
② Value normalization
After size normalization, each image also undergoes value normalization. For a sample image A_i (1 ≤ i ≤ n, where n is the total number of monitoring images), denote the maximum and minimum pixel values in A_i by max_i and min_i respectively. Every pixel of A_i is value-normalized according to formula (1), where A_i(x, y) is the pixel value at position (x, y) in A_i and A_i′(x, y) is the value-normalized pixel value at that point:

A_i′(x, y) = (A_i(x, y) − min_i) / (max_i − min_i)    (1)
③ Pixel-distribution normalization
To speed up convergence of the neural network during training, the images are distribution-normalized. The mean μ(x, y) and variance σ²(x, y) of each pixel position (x, y) over the n images are computed by formulas (2) and (3), and each pixel is then normalized by formula (4), where A_i″(x, y) is the pixel value after distribution normalization:

μ(x, y) = (1/n) · Σ_{i=1}^{n} A_i′(x, y)    (2)

σ²(x, y) = (1/n) · Σ_{i=1}^{n} (A_i′(x, y) − μ(x, y))²    (3)

A_i″(x, y) = (A_i′(x, y) − μ(x, y)) / σ(x, y)    (4)
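The normalization formulas (1)-(4) can be sketched directly; the code below assumes grayscale images held as NumPy arrays, which the description does not mandate:

```python
import numpy as np

# Sketch of equations (1)-(4): per-image min-max scaling followed by
# per-pixel standardisation across the n sample images. Variable names
# mirror the description; the tiny 2x2 "images" in tests are illustrative.
def minmax_normalise(img):                      # equation (1)
    lo, hi = img.min(), img.max()               # assumes img is not constant
    return (img - lo) / (hi - lo)

def distribution_normalise(images):             # equations (2)-(4)
    stack = np.stack([minmax_normalise(a) for a in images])
    mu = stack.mean(axis=0)                     # mean mu(x, y), eq. (2)
    sigma = stack.std(axis=0)                   # sqrt of variance in eq. (3)
    return (stack - mu) / sigma                 # standardised A''_i, eq. (4)
```

In practice a small epsilon would guard against σ(x, y) = 0; the sketch omits it for clarity.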
Note that:
A. before the neural network can be used for tire detection it must first be trained; only then can it achieve a good detection effect;
B. for training, road-surface monitoring images of passing vehicles are collected extensively as samples to train the neural-network parameters;
C. steps ② and ③ are performed only when training the neural network; when preprocessing an image to be detected, only step ① is performed, and in addition the value of every pixel (range 0-255) is divided by 255 so that it falls in the range 0-1.
3. Tire target detection
This embodiment adopts the classic YOLO-V3 neural network for tire target detection. The preprocessed frame image is input into the trained YOLO-V3 network, which outputs several rectangular box regions together with the coordinate points of each box; the regions framed by the rectangles are the vehicle tires.
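The YOLO-V3 network itself is standard, so the sketch below covers only the generic post-processing that turns raw scored candidate boxes into final tire boxes (confidence filtering plus non-maximum suppression); the score and IoU thresholds are illustrative assumptions, not values from the patent:

```python
# Boxes are (x1, y1, x2, y2) tuples; scores are detector confidences.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """Keep confident boxes that do not heavily overlap an already-kept box."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if scores[i] < score_thr:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep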
4. Complete fitting of the outer contour of the tire
The tire picture is cropped, the part of the tire contour above the water surface is extracted, and the complete outer contour of the tire is fitted, as shown in FIG. 3. The actual shape of a tire is very close to a perfect circle, but because of the shooting angle it often appears as an ellipse; ellipses are further divided into positive (axis-aligned) ellipses and tilted (partial) ellipses, and FIG. 3 shows the fitting of a partial ellipse. Circle fitting and ellipse fitting are well-studied prior art with existing solutions, so the details are not repeated here.
The water surface appears as a straight line in the image and is obtained using the Hough line detection method; this technique is mature, and its principle is briefly described below.
In Cartesian space a straight line is represented as y = kx + b, where k and b are the slope and intercept respectively. All points on one line share the same slope and intercept, but because a line may be perpendicular to the positive x-axis, where the slope cannot be represented, the line is converted into polar-coordinate form r = x·cos θ + y·sin θ, where (x, y) is the coordinate position of a point, r is the distance from the origin to the line through that point, and θ is the angle between r and the positive x-axis; thus all points on the same line share the same r and θ in the polar coordinate system.
For an image G_i, each pixel is traversed and mapped into the polar-coordinate system, yielding a set of (r, θ) pairs. After all coordinate points are mapped, many (r, θ) pairs are obtained, some repeated; the two (r, θ) pairs with the highest repetition counts yield two straight-line equations in Cartesian coordinates. From these line equations, several groups of line segments (both transverse and longitudinal) are obtained in the image, and the horizontal straight line connected to the tire region is screened out from them as the boundary of the accumulated water.
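The voting scheme just described can be sketched as a minimal Hough accumulator over a list of edge pixels; the 1-degree and 1-pixel resolution is an assumed discretisation:

```python
import math

# Minimal Hough line sketch: each edge point votes for every (r, theta)
# cell it could lie on; the most-voted cell is the dominant line.
def hough_line(points):
    votes = {}
    for x, y in points:
        for deg in range(180):
            t = math.radians(deg)
            r = round(x * math.cos(t) + y * math.sin(t))
            votes[(r, deg)] = votes.get((r, deg), 0) + 1
    return max(votes, key=votes.get)  # (r, theta in degrees)
```

For a horizontal water line at height y = 5, every point votes for r = 5 at θ = 90°, so that cell wins.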
5. Judging whether the outer contour of the tire is a perfect circle, if so, skipping to the step 9, otherwise, sequentially executing the step 6
To judge whether the outer contour of the tire is a perfect circle, the picture is rotated by 45° and it is checked whether the maximum extents of the outer contour in the horizontal and vertical directions change; if not, the contour is judged to be a perfect circle, otherwise an ellipse.
6. Judging whether the outer contour of the tire is a positive ellipse or not, if so, skipping to the step 8, otherwise, sequentially executing the step 7
Whether the outer contour of the tire is a positive ellipse is judged by checking whether the ordinates of the two horizontal-direction extreme points coincide; if so, the contour is judged to be a positive ellipse, otherwise a tilted (partial) ellipse.
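The two shape tests of steps 5 and 6 can be sketched on a sampled outer contour; the tolerance value is an assumption:

```python
import math

# A circle's horizontal/vertical extent is unchanged by a 45-degree
# rotation (step 5); an axis-aligned "positive" ellipse has both
# horizontal extrema at the same height (step 6).
def extent(points):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return max(xs) - min(xs), max(ys) - min(ys)

def rotate(points, deg):
    t = math.radians(deg)
    return [(x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) + y * math.cos(t)) for x, y in points]

def is_perfect_circle(points, tol=1e-2):
    w0, h0 = extent(points)
    w1, h1 = extent(rotate(points, 45))
    return abs(w1 - w0) < tol and abs(h1 - h0) < tol

def is_positive_ellipse(points, tol=1e-2):
    leftmost = min(points)     # point with smallest x
    rightmost = max(points)    # point with largest x
    return abs(leftmost[1] - rightmost[1]) < tol
```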
7. The outer contour of the tire is partial ellipse and is corrected to be a positive ellipse
Under normal conditions, if the shape of the tire presented in the image is neither a perfect circle nor a positive ellipse, it is necessarily a tilted (partial) ellipse, and it is therefore corrected into a positive ellipse; other very special cases are not considered here.
The correction of the partial ellipse into the positive ellipse specifically includes the following steps (see fig. 4):
the method includes determining points F and G of intersection of a tire and a waterlogging surface, wherein the point F is closer to a shooting lens.
And secondly, taking a point H adjacent to the point F on the outline of the partial elliptic tire, and comparing a straight line passing through the point H and being parallel to the water accumulation surface with the outline of the partial elliptic tire to the point I.
Thirdly, estimating coordinates of the transformed points F ', G', H 'and I' of the points F, G, H and I based on the perspective principle and the transformed water accumulation surface level.
Because a point closer to the camera is less affected by perspective, the coordinates of F′ and H′ are taken to coincide with those of F and H; it remains to determine the coordinates of G′ and I′.
Since the water surface is horizontal after correction, the ordinates of G′ and I′ coincide with those of F′ and H′ respectively. Further, by the perspective characteristics, the abscissa of G′ coincides with that of G.
Let d1 be the difference between the abscissas of F and H, and d2 the difference between the abscissas of I and G; before correction, d1 is clearly unequal to d2. After correction, by symmetry, the difference between the abscissas of F′ and H′ equals the difference between the abscissas of I′ and G′, denoted d3, which determines the abscissa of I′.
④ Obtain a perspective transformation matrix from the four point coordinates of the source image and the corresponding coordinates in the target image. This is existing plane-geometry transformation theory and is not described here.
⑤ Obtain the corrected positive ellipse by applying the perspective transformation matrix.
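The perspective-matrix step (7.4) can be sketched as a linear system: a 3 × 3 homography H maps each source point (x, y) to its target (u, v) up to scale, and fixing H[2][2] = 1 leaves 8 unknowns, which the four point pairs determine exactly:

```python
import numpy as np

# Solve for the 8 free entries of the homography from 4 point pairs.
def perspective_matrix(src, dst):
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    # Apply H in homogeneous coordinates, then de-homogenise.
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Mapping the unit square onto a square of side 2 should send the centre (0.5, 0.5) to (1, 1).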
8. Correct the positive ellipse into a positive circle (refer to FIG. 5)
① Determine the centre point O′ of the positive ellipse:
(1) randomly take three points J, K and L on the positive ellipse and draw the tangent to the ellipse at each; the tangent at J intersects the tangent at K at a point M, and the tangent at J intersects the tangent at L at a point N;
(2) connect point M with the midpoint P of segment JK, and connect point N with the midpoint Q of segment JL;
(3) the intersection of the extensions of segments MP and NQ is the centre point O′ of the positive ellipse.
② Draw a vertical ray upward from O′ to intersect the ellipse at a point R; the segment O′R is the semi-minor axis of the ellipse.
③ With O′ fixed, compress the positive ellipse transversely so that the length 2 × O′R approximates the nearest standard tire diameter.
④ Judge whether the transversely compressed positive ellipse approaches a perfect circle; if so, the correction is complete, jump to step ⑥; otherwise execute step ⑤.
⑤ Compress further transversely so that 2 × O′R approaches the next larger standard tire diameter, then jump back to step ④.
⑥ End.
The correction in steps 7 and 8 is based entirely on perspective characteristics and geometric theory. Although it involves some approximate estimation, the constraint of tire sizes (tires are not all the same size, but they come in a few fixed sizes) means the influence on the final detection result is small, almost negligible.
9. Determining the position of the circle center according to the circle data
The circle-centre position is determined by plane geometry: select any point A on the perfect circle, and connect the two intersection points B and C of the water surface with the perfect circle to point A, forming two chord segments AB and AC of the circle; construct the perpendicular bisectors of AB and AC respectively, and the point O where the two perpendicular bisectors intersect is the centre of the circle, as shown in FIG. 6.
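Step 9 is pure plane geometry and can be sketched directly; points are (x, y) tuples, and A, B, C may be any three distinct points on the circle:

```python
# The perpendicular bisectors of two chords AB and AC of a circle
# meet at its centre O.
def chord_bisector(p, q):
    """Return (midpoint, direction) of the perpendicular bisector of chord p-q."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    dx, dy = q[0] - p[0], q[1] - p[1]
    return (mx, my), (-dy, dx)          # chord direction rotated 90 degrees

def circle_centre(a, b, c):
    (m1, d1) = chord_bisector(a, b)
    (m2, d2) = chord_bisector(a, c)
    # Solve m1 + t*d1 = m2 + s*d2 for t (Cramer's rule on the 2x2 system).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    t = ((m2[0] - m1[0]) * (-d2[1]) - (m2[1] - m1[1]) * (-d2[0])) / det
    return (m1[0] + t * d1[0], m1[1] + t * d1[1])
```

Three points of the circle centred at (3, 4) with radius 5 should recover that centre.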
10. Calculating the depth of the accumulated water by combining the position of the circle center
Calculating an included angle theta between a line segment OA and a horizontal line or a vertical line
Because the camera's shooting angle is fixed, this embodiment stores in advance a set of reference pictures for angles from 0° to 90°, spaced 1° or 0.5° apart; the stored picture with the greatest similarity to the observed configuration is found by comparison, and the angle of the included angle θ is obtained by this matching method.
Referring to fig. 6, by trigonometry OD = OE · cos θ, so DE = OD + OE = OE · (1 + cos θ). Therefore, once the angle θ and the tire diameter are known, the water depth DE can be calculated.
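The depth formula is a one-liner; the radius values in the usage below are illustrative:

```python
import math

# Depth DE = OD + OE = R * (1 + cos(theta)), following the description,
# with R the tire radius (half the classified tire diameter).
def water_depth(radius, theta_deg):
    return radius * (1 + math.cos(math.radians(theta_deg)))
```

At θ = 90° the water line passes through the centre, so the depth equals the radius; smaller θ gives a deeper reading.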
Tire sizes on the market are not uniform, but they take a handful of fixed values, such as 15 inch, 16 inch and 17 inch, and tire size is closely tied to the vehicle model. The tire diameter is therefore determined by a tire-size classification network, trained on a large number of vehicle samples labeled with tire size; the network takes a picture containing a single vehicle as input and outputs the vehicle's tire-size class.
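Once the classification network has predicted a size label, converting the label to a physical diameter is a fixed lookup. The sketch below assumes labels in the common metric notation such as "205/55R16" (section width / aspect ratio R rim inches), which the patent does not specify:

```python
# Overall diameter of a metric-notation tire in millimetres:
# rim diameter (inches -> mm) plus twice the sidewall height,
# where sidewall height = section width * aspect ratio / 100.
def tire_diameter_mm(label: str) -> float:
    width_part, rest = label.split("/")
    aspect_part, rim_part = rest.split("R")
    width = float(width_part)    # section width in mm
    aspect = float(aspect_part)  # aspect ratio in percent
    rim = float(rim_part)        # rim diameter in inches
    return rim * 25.4 + 2 * width * aspect / 100
```

For "205/55R16" this gives 16 × 25.4 + 2 × 205 × 0.55 = 631.9 mm.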
It should be added that if no tire is detected in the current frame image in step 3, detection continues with the next frame image; if no tire is detected in N consecutive frames, the water accumulation is judged to be severe and an early warning is issued.
It is to be understood that the described embodiments are merely some embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

Claims (9)

1. A method for detecting the depth of road-surface accumulated water based on key points of a vehicle tire, characterized by comprising the following steps:
step 1, extracting frame images at intervals from a real-time video acquired by a monitoring camera;
step 2, preprocessing an image;
step 3, carrying out tire detection on the preprocessed image by using a tire identification network, and marking the detected tires with bounding boxes;
step 4, intercepting a tire picture, extracting a part of tire contour above the water accumulation surface, and fitting the tire contour into a complete tire outer contour;
step 5, judging whether the outer contour of the tire is a perfect circle, if so, skipping to step 9, otherwise, sequentially executing step 6;
step 6, judging whether the outer contour of the tire is a positive ellipse, if so, skipping to step 8, otherwise, sequentially executing step 7;
step 7, the outer contour of the tire is a partial ellipse, and the tire is corrected to be a positive ellipse;
step 8, correcting the positive ellipse into a positive circle;
step 9, determining the circle center position according to the circle data;
and step 10, calculating the depth of the accumulated water by combining the position of the circle center.
2. The method for detecting the depth of the surface water according to claim 1, wherein the specific operation of step 9 is: selecting any point A on the perfect circle, and connecting the two intersection points B and C of the ponding surface with the perfect circle to the point A to form two chord segments AB and AC of the perfect circle; and respectively constructing the perpendicular bisectors of the chord segments AB and AC, the intersection point O of the two perpendicular bisectors being the centre of the circle.
3. The method for detecting the depth of the surface water according to claim 2, wherein the step 10 specifically comprises the following operations: and calculating the included angle between the line segment OA and the horizontal line or the vertical line, and then calculating the water accumulation depth by combining the included angle and the tire diameter.
4. The method of claim 3, wherein the tire diameter is determined based on a tire size classification network trained from a plurality of tire size labeled vehicle samples; the tire size classification network inputs a picture containing a single vehicle and outputs a vehicle tire size type.
5. The method for detecting the depth of the surface water according to claim 1, wherein the step 7 of correcting the partial ellipse into the positive ellipse specifically includes the steps of:
step 7.1, determining the intersection points F and G of the tire with the water accumulation surface, the point F being closer to the camera lens;
step 7.2, taking a point H adjacent to the point F on the outline of the partial elliptic tire, and drawing a straight line through the point H parallel to the water accumulation surface, the line intersecting the outline of the partial elliptic tire at a point I;
step 7.3, estimating the coordinates of the transformed points F', G', H' and I' corresponding to the points F, G, H and I, based on the perspective principle and the constraint that the transformed water surface is horizontal;
step 7.4, solving for the perspective transformation matrix from the four point coordinates in the source image and the corresponding coordinates in the target image;
and step 7.5, obtaining the corrected positive ellipse by applying the perspective transformation matrix.
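Step 7.4 is the classical four-point homography estimation (in practice one would call OpenCV's `cv2.getPerspectiveTransform`); a dependency-free sketch with hypothetical pixel coordinates:

```python
def solve_homography(src, dst):
    """Perspective matrix H (3x3, h33 = 1) mapping each src point to the
    corresponding dst point; each pair contributes two rows of an
    8x8 linear system in the unknowns h11..h32."""
    n = 8
    M = []
    for (x, y), (u, v) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -u * x, -u * y, u])
        M.append([0, 0, 0, x, y, 1, -v * x, -v * y, v])
    # Gaussian elimination with partial pivoting on the augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def warp_point(H, p):
    """Apply H to a point, including the projective divide."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical pixel coordinates: the skewed quadrilateral F, G, H, I is
# mapped onto the upright rectangle F', G', H', I' (waterline horizontal).
src = [(120, 400), (380, 420), (100, 180), (360, 170)]
dst = [(100, 400), (400, 400), (100, 150), (400, 150)]
Hm = solve_homography(src, dst)
print(warp_point(Hm, (380, 420)))  # maps onto (400.0, 400.0), up to rounding
```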
6. The method for detecting depth of standing water according to claim 1, wherein the correction of the positive ellipse into the perfect circle in step 8 specifically includes the steps of:
step 8.1, determining the central point O' of the positive ellipse;
step 8.2, extending vertically upward from the point O' to intersect the ellipse at a point R, the segment O'R being the short radius (semi-minor axis) of the ellipse;
step 8.3, keeping the point O' fixed, transversely compressing the positive ellipse so that the length 2×O'R approximates the closest standard tire diameter;
step 8.4, judging whether the transversely compressed ellipse approximates a perfect circle; if so, the correction is finished and skipping to step 8.6; otherwise, proceeding to step 8.5;
step 8.5, further transversely compressing so that the length 2×O'R approaches the diameter of the next larger standard tire size, and then returning to step 8.4;
and 8.6, ending.
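A sketch of the transverse compression of step 8.3: with O' fixed, scaling horizontal offsets by b/a (a the horizontal semi-axis, b = O'R the vertical one) turns the upright ellipse into a circle of radius O'R, which can then be snapped to a catalogue of standard tire diameters. The catalogue values, coordinates, and function names are illustrative assumptions:

```python
# Hypothetical catalogue of standard tire outer diameters (in practice the
# tire-size classification network of claim 4 would supply the size class);
# units are arbitrary but must match the measured 2*O'R.
STANDARD_DIAMETERS = [560, 600, 640, 680, 720]

def nearest_standard_diameter(d):
    """Snap a measured diameter to the closest catalogue entry."""
    return min(STANDARD_DIAMETERS, key=lambda s: abs(s - d))

def compress_point(p, center, a, b):
    """Transverse compression about the fixed centre O': scale horizontal
    offsets by b/a so the upright ellipse with semi-axes a (horizontal)
    and b = O'R (vertical) collapses onto a perfect circle of radius b."""
    cx, _ = center
    x, y = p
    return (cx + (x - cx) * (b / a), y)

# Hypothetical pixel values: the rightmost ellipse point lands on the
# circle of radius O'R = 120 centred at O' = (400, 300).
center, a, b = (400.0, 300.0), 160.0, 120.0
print(compress_point((560.0, 300.0), center, a, b))  # (520.0, 300.0)
print(nearest_standard_diameter(615))  # 600
```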
7. The method for detecting depth of ponding water according to claim 6, wherein the step 8.1 of determining the center point O' of the positive ellipse specifically includes the steps of:
step 8.1.1, randomly taking three points J, K and L on the positive ellipse and drawing the tangent lines to the ellipse at the points J, K and L; the tangent line through the point J intersects the tangent line through the point K at a point M, and the tangent line through the point J intersects the tangent line through the point L at a point N;
step 8.1.2, connecting point M with midpoint P of segment JK, and connecting point N with midpoint Q of segment JL;
and 8.1.3, the intersection point of the extension lines of the line segment MP and the line segment NQ is the central point O' of the positive ellipse.
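The tangent/midpoint construction of claim 7 relies on the pole-polar property of central conics: the line joining the intersection of two tangents with the midpoint of the corresponding chord passes through the center. A numerical check on a hypothetical upright ellipse (all coordinates assumed):

```python
import math

def line_intersection(l1, l2):
    """Intersect two lines given as (A, B, C) with A*x + B*y = C."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def through(p, q):
    """Line (A, B, C) through two points p and q."""
    A, B = q[1] - p[1], p[0] - q[0]
    return (A, B, A * p[0] + B * p[1])

def tangent_at(t, center, a, b):
    """Point of an upright ellipse at parameter t, plus the tangent there:
    (x0-cx)(x-cx)/a^2 + (y0-cy)(y-cy)/b^2 = 1, rearranged to A*x + B*y = C."""
    cx, cy = center
    x0, y0 = cx + a * math.cos(t), cy + b * math.sin(t)
    A, B = (x0 - cx) / a**2, (y0 - cy) / b**2
    return (x0, y0), (A, B, 1.0 + A * cx + B * cy)

center, a, b = (7.0, 3.0), 5.0, 2.0                  # hypothetical ellipse
(J, tJ), (K, tK), (L, tL) = (tangent_at(t, center, a, b) for t in (0.4, 1.7, 4.2))
M = line_intersection(tJ, tK)                        # tangents at J, K meet at M
N = line_intersection(tJ, tL)                        # tangents at J, L meet at N
P = ((J[0] + K[0]) / 2, (J[1] + K[1]) / 2)           # midpoint of chord JK
Q = ((J[0] + L[0]) / 2, (J[1] + L[1]) / 2)           # midpoint of chord JL
O = line_intersection(through(M, P), through(N, Q))  # lines MP, NQ cross at O'
print(O)  # recovers the centre, approximately (7.0, 3.0)
```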
8. The method for detecting the depth of the water accumulation in the road surface according to claim 1, wherein if no tire is detected in the current frame image in the step 3, detection continues with the next frame image; and if no tire is detected in N consecutive frames, it is judged that the water accumulation is severe and an early warning is issued.
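The N-consecutive-frames rule of claim 8 can be sketched as a simple miss counter (the threshold value and return strings are illustrative):

```python
def ponding_monitor(frames_with_tire, n_threshold=30):
    """Per claim 8: if no tire is detected in N consecutive frames, flag
    severe ponding. `frames_with_tire` holds one boolean per frame."""
    misses = 0
    for tire_seen in frames_with_tire:
        misses = 0 if tire_seen else misses + 1   # reset on every detection
        if misses >= n_threshold:
            return "severe ponding - early warning"
    return "ok"

print(ponding_monitor([True] * 5 + [False] * 30))  # triggers the warning
print(ponding_monitor([True, False] * 20))         # misses keep resetting: ok
```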
9. The method for detecting the depth of the water accumulation according to claim 1, wherein the water accumulation surface appears as a straight line in the image, and the straight line is obtained by a Hough line detection method.
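In production one would use OpenCV's `cv2.HoughLines`; the voting scheme it implements can be sketched in a few lines (grid size and threshold are illustrative):

```python
import math

def hough_lines(points, threshold):
    """Minimal Hough transform: every edge point votes for each line
    rho = x*cos(theta) + y*sin(theta) passing through it; accumulator
    bins with at least `threshold` votes are reported as detected lines."""
    acc = {}
    for x, y in points:
        for deg in range(180):               # theta sampled at 1-degree steps
            th = math.radians(deg)
            rho = round(x * math.cos(th) + y * math.sin(th))
            acc[(rho, deg)] = acc.get((rho, deg), 0) + 1
    return [k for k, votes in acc.items() if votes >= threshold]

# A horizontal waterline at y = 40: each point satisfies
# x*cos(90 deg) + y*sin(90 deg) = 40, so the peak bin is (rho, theta) = (40, 90).
edge_points = [(x, 40) for x in range(60)]
print(hough_lines(edge_points, threshold=50))  # [(40, 90)]
```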
CN202210896536.0A 2022-07-27 2022-07-27 Method for detecting depth of surface accumulated water based on key points of vehicle tire Pending CN115147616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210896536.0A CN115147616A (en) 2022-07-27 2022-07-27 Method for detecting depth of surface accumulated water based on key points of vehicle tire


Publications (1)

Publication Number Publication Date
CN115147616A true CN115147616A (en) 2022-10-04

Family

ID=83414723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210896536.0A Pending CN115147616A (en) 2022-07-27 2022-07-27 Method for detecting depth of surface accumulated water based on key points of vehicle tire

Country Status (1)

Country Link
CN (1) CN115147616A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161572A1 (en) * 2015-12-03 2017-06-08 GM Global Technology Operations LLC Vision-based wet road surface condition detection using tire tracks
CN110060240A (en) * 2019-04-09 2019-07-26 南京链和科技有限公司 A kind of tyre contour outline measurement method based on camera shooting
CN111241950A (en) * 2020-01-03 2020-06-05 河海大学 Urban ponding depth monitoring method based on deep learning
CN114792416A (en) * 2021-01-08 2022-07-26 华为技术有限公司 Target detection method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang, S.Y., "Measurement of Tire Tread Depth with Image Triangulation", 2016 International Symposium on Computer, Consumer and Control (IS3C), 30 March 2016 *

Similar Documents

Publication Publication Date Title
CN107679520B (en) Lane line visual detection method suitable for complex conditions
CN107045629B (en) Multi-lane line detection method
CN103258432B (en) Traffic accident automatic identification processing method and system based on videos
CN108491498B (en) Bayonet image target searching method based on multi-feature detection
CN108052904B (en) Method and device for acquiring lane line
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
CN106778551B (en) Method for identifying highway section and urban road lane line
CN107563330B (en) Horizontal inclined license plate correction method in surveillance video
CN110197185B (en) Method and system for monitoring space under bridge based on scale invariant feature transform algorithm
CN104933398A (en) vehicle identification system and method
CN111027446A (en) Coastline automatic extraction method of high-resolution image
CN111178193A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN116824516B (en) Road construction safety monitoring and management system
CN113469201A (en) Image acquisition equipment offset detection method, image matching method, system and equipment
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN111652033A (en) Lane line detection method based on OpenCV
CN107977608B (en) Method for extracting road area of highway video image
CN113158954B (en) Automatic detection method for zebra crossing region based on AI technology in traffic offsite
CN115147616A (en) Method for detecting depth of surface accumulated water based on key points of vehicle tire
CN106886609B (en) Block type rural residential area remote sensing quick labeling method
CN109886120B (en) Zebra crossing detection method and system
Zhao et al. A robust lane detection algorithm based on differential excitation
CN108090479B (en) Lane detection method for improving Gabor conversion and updating vanishing point
CN113516121A (en) Multi-feature fusion non-motor vehicle license plate region positioning method
CN112710632A (en) Method and system for detecting high and low refractive indexes of glass beads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination