CN115984772A - Road ponding detection method and terminal based on video monitoring - Google Patents
- Publication number
- CN115984772A (application number CN202211696566.3A)
- Authority
- CN
- China
- Prior art keywords
- area
- ponding
- accumulated water
- road
- judging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A50/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE in human health protection, e.g. against extreme weather
Landscapes
- Traffic Control Systems (AREA)
Abstract
The invention discloses a road ponding detection method and terminal based on video monitoring, wherein a road monitoring picture is obtained and detected to obtain rectangular frame information of pedestrians and vehicles; according to the position up to which a pedestrian or vehicle is submerged by the accumulated water within its rectangular frame, the water accumulation grade at the central position of the bottom of the rectangular frame is classified to obtain water accumulation depth reference information at the position of the rectangular frame; water surface area identification is carried out on the road monitoring picture to obtain ponding profile information; the ponding profile information and the ponding depth reference information are transformed by an inverse perspective transformation relation matrix to obtain their counterparts on the inverse perspective transformation image; and according to these, the profile, the maximum reference depth and the real area of each block of ponding are counted as the ponding information.
Description
Technical Field
The invention relates to the technical field of road traffic, in particular to a method and a terminal for detecting road ponding based on video monitoring.
Background
Urban roads easily accumulate surface water during short-term heavy rainfall or after pipeline breakage, which affects traffic. Identifying and reporting ponding in real time helps remind the relevant maintenance personnel to handle it promptly, and an accurate assessment of the ponding degree gives managers a decision basis for the influence of rainfall and waterlogging on road traffic, while also reducing the number of false alarms.
At present, road monitoring cameras are densely deployed, and most road surfaces can be covered through zooming and camera position adjustment, so identifying accumulated water from monitoring pictures is the most practical approach.
In the prior art, image recognition technology is used to identify waterlogging. For example, patent application CN202111139709 uses infrared images to identify waterlogged areas, but this requires special camera equipment and cannot make good use of the large number of existing road monitoring cameras.
Patent applications CN201811403004 and CN202111668158 require manually calibrating a reference object for each scene to judge the degree of submersion of the surface water, which is not feasible for large-scale deployment.
Patent applications CN201811574435 and CN202210192200 therefore train models to identify reference objects that are common on streets, but real flooding samples are scarce, and there is no guarantee that a reference object appears when identification is needed. A convolutional neural network is in essence a statistical learning method, whose accuracy and reliability are directly affected by the number of samples. Patent applications CN201811514535 and CN201910101996 use tires as the reference, for which real waterlogging samples are plentiful, but the camera must view the vehicle from the side, and the deeper the tire is submerged, the fewer tire features remain.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: providing a road ponding detection method and terminal based on video monitoring that can better detect the ponding condition.
In order to solve the technical problems, the invention adopts the technical scheme that:
a road ponding detection method based on video monitoring comprises the following steps:
s1, acquiring a road monitoring picture, and detecting the road monitoring picture to acquire rectangular frame information of pedestrians and vehicles;
s2, expanding the rectangular frame information of the pedestrians and the vehicles to a set size, classifying the water accumulation grade of the central position of the bottom of the rectangular frame according to the position of the pedestrian submerged by the accumulated water in the rectangular frame information of the pedestrians to obtain water accumulation depth reference information of the central position of the bottom of the rectangular frame, and classifying the water accumulation grade of the central position of the rectangular frame according to the position of the vehicle submerged by the accumulated water in the rectangular frame information of the vehicles to obtain water accumulation depth reference information of the central position of the rectangular frame;
s3, identifying the water surface area of the road monitoring picture to obtain ponding profile information;
s4, transforming the ponding profile information and the ponding depth reference information by adopting an inverse perspective transformation relation matrix to obtain ponding profile information and ponding depth reference information on an inverse perspective transformation image;
and S5, counting the outline, the maximum reference depth and the real area of each water block as water accumulation information according to the water accumulation outline information and the water accumulation depth reference information on the inverse perspective transformation image.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a road ponding detection terminal based on video monitoring comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the method.
The invention has the beneficial effects that: the road ponding detection method and terminal based on video monitoring use pedestrians and vehicles as the reference objects for depth judgment, without resorting to indirect approaches such as sample generation or key-point judgment. The method is modelled on ordinary colour images, does not rely on infrared equipment and uses no special reference object, so it has good universality and is suitable for large-area deployment, and occlusion naturally need not be considered. Compared with the prior art, it avoids the problem that the more deeply a reference object is submerged, the fewer of its features remain visible, and can better detect the ponding condition.
Drawings
FIG. 1 is a schematic flow chart of stage A according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of stage B according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for detecting road ponding based on video monitoring according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a relationship between a road monitoring screen and a world coordinate system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a road ponding detection terminal based on video monitoring according to an embodiment of the present invention.
Description of reference numerals:
1. a road ponding detection terminal based on video monitoring; 2. a processor; 3. a memory.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1 to 4, a method for detecting road ponding based on video monitoring includes the steps of:
s1, acquiring a road monitoring picture, and detecting the road monitoring picture to acquire rectangular frame information of pedestrians and vehicles;
s2, expanding the rectangular frame information of the pedestrians and the vehicles to a set size, classifying the water accumulation grade of the central position of the bottom of the rectangular frame according to the position of the pedestrian submerged by the accumulated water in the rectangular frame information of the pedestrians to obtain water accumulation depth reference information of the central position of the bottom of the rectangular frame, and classifying the water accumulation grade of the central position of the rectangular frame according to the position of the vehicle submerged by the accumulated water in the rectangular frame information of the vehicles to obtain water accumulation depth reference information of the central position of the rectangular frame;
s3, identifying the water surface area of the road monitoring picture to obtain ponding profile information;
s4, transforming the ponding profile information and the ponding depth reference information by adopting an inverse perspective transformation relation matrix to obtain ponding profile information and ponding depth reference information on an inverse perspective transformation image;
and S5, counting the outline, the maximum reference depth and the real area of each water block as water accumulation information according to the water accumulation outline information and the water accumulation depth reference information on the inverse perspective transformation image.
As can be seen from the above description, the beneficial effects of the present invention are: the method and terminal use pedestrians and vehicles as the reference objects for depth judgment, without indirect approaches such as sample generation or key-point judgment; the model works on ordinary colour images, does not rely on infrared equipment and uses no special reference object, so it has good universality and is suitable for large-area deployment; occlusion naturally need not be considered, and compared with the prior art, the problem that a more deeply submerged reference object offers fewer visible features is avoided.
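The five steps S1-S5 can be sketched as a single processing pipeline. The sketch below is illustrative only: the function names and the stand-in detector, depth classifier and water segmenter are assumptions, not part of the patent, which uses a yolov5-style detector and a dedicated segmentation network in these roles.

```python
import numpy as np

def detect_boxes(frame):
    """S1 stand-in: a pedestrian/vehicle detector (e.g. a YOLO-style model)
    returning (x_c, y_c, w, h, class_label) tuples; hard-coded here."""
    return [(120.0, 200.0, 40.0, 80.0, "pedestrian")]

def classify_depth(frame, box):
    """S2 stand-in: grade how deeply the reference object is submerged
    (0 = no water); a fixed grade stands in for the real classifier."""
    return 1

def segment_water(frame):
    """S3 stand-in: water-surface segmentation returning a binary mask."""
    return np.zeros(frame.shape[:2], dtype=np.uint8)

def detect_ponding(frame, ipm_matrix):
    """Glue for S1-S5: detect reference objects, grade the depth at each
    box, and segment the water surface; in S4/S5 the mask and reference
    points would be warped with ipm_matrix and each connected water blob
    measured. Here the result is summarised as a dict."""
    refs = [(box, classify_depth(frame, box)) for box in detect_boxes(frame)]
    mask = segment_water(frame)
    return {"references": refs, "water_pixels": int(mask.sum())}

frame = np.zeros((480, 640, 3), dtype=np.uint8)
info = detect_ponding(frame, np.eye(3))
```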
Further, the method also comprises the following steps:
s6, obtaining a sidewalk area with accumulated water and a roadway area according to the sidewalk area, the roadway area and the accumulated water information, and judging the accumulated water early warning level according to the area and the accumulated water depth of the sidewalk area with the accumulated water and the roadway area.
According to the above description, the early warning grade is judged by adopting the water accumulation area and the water accumulation depth, so that the influence of water accumulation can be judged more accurately.
Further, the roadway area comprises an intersection area and a one-way road area, and judging the water accumulation early warning level according to the area and depth of the accumulated water in the sidewalk area and the roadway area specifically comprises:
for a sidewalk area, judging whether the area of the sidewalk region with accumulated water is smaller than or equal to a first set area; if so, judging the early warning level to be 'ponding without influence'; otherwise, judging whether the grade of the accumulated water depth is greater than or equal to a first set depth grade; if so, judging the level to be 'risk exists'; otherwise, judging whether the area is smaller than a second set area; if so, judging the level to be 'affects passage', and otherwise 'blocks passage';
for the intersection area, judging whether the area of the roadway region with accumulated water is smaller than a second set area; if so, judging the level to be 'ponding without influence'; otherwise, judging whether the grade of the accumulated water depth is greater than or equal to the first set depth grade; if so, judging the level to be 'risk exists'; otherwise, judging whether the area is smaller than a third set area; if so, judging the level to be 'affects passage', and otherwise 'blocks passage';
for the one-way road area, calculating the centroid coordinate of the roadway region with accumulated water and the length of that region in the direction perpendicular to the one-way lane; judging whether that length is smaller than a first set length; if so, judging the level to be 'ponding without influence'; otherwise, judging whether the grade of the accumulated water depth is greater than or equal to the first set depth grade; if so, judging the level to be 'risk exists'; otherwise, judging whether the length is smaller than a second set length; if so, judging the level to be 'affects passage', and otherwise 'blocks passage'.
From the above description, an intersection is similar to a pedestrian area: pedestrians have no fixed travel direction, and the intersection area must serve vehicles from multiple directions, so traffic is affected as soon as the area is affected by accumulated water. For a lane in a single direction, only the length of the accumulated water in the direction perpendicular to the lane needs to be judged.
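The three decision chains above share one shape: a size test, a depth test, then a second size test. A minimal sketch, assuming placeholder threshold values and paraphrased level names (the patent leaves the 'set areas', 'set lengths' and depth grades unspecified):

```python
def warning_level(region, area=None, depth_grade=0, crosswise_len=None,
                  area_small=2.0, area_large=10.0, depth_risky=3,
                  len_short=1.0, len_long=3.0):
    """Mirror of the sidewalk / intersection / one-way decision chains.
    All thresholds are illustrative placeholders, not patent values."""
    if region in ("sidewalk", "intersection"):
        measure, lo, hi = area, area_small, area_large
    elif region == "one_way":
        # one-way lanes are judged by the ponding length perpendicular
        # to the lane direction instead of by ponded area
        measure, lo, hi = crosswise_len, len_short, len_long
    else:
        raise ValueError("unknown region type: %s" % region)
    if measure <= lo:
        return "no influence"
    if depth_grade >= depth_risky:
        return "risk exists"
    if measure < hi:
        return "affects passage"
    return "blocks passage"
```

Note that the patent's sidewalk rule uses 'smaller than or equal to' for the first test while the intersection rule uses strictly 'smaller than'; the sketch collapses both into one comparison for brevity.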
Further, the method also comprises the following preliminary steps:
s01, acquiring a road monitoring picture video stream, and performing pedestrian target detection and vehicle target detection on each frame of image of the road monitoring picture video stream to obtain a pedestrian target and a vehicle target of each frame of image;
s02, carrying out target tracking on the pedestrian target and the vehicle target to obtain a pedestrian track point sequence and a vehicle track point sequence;
s03, constructing an inverse perspective transformation relation matrix from a road monitoring picture to a world coordinate system;
and S04, acquiring a sidewalk area and a roadway area according to the pedestrian track point sequence and the vehicle track point sequence, wherein the roadway area comprises a one-way roadway area and an intersection area.
Further, the step S03 includes:
s031, acquiring a frame of image of a road monitoring picture video stream, and acquiring the monitoring installation height, downward pitch angle and equivalent focal length of a camera;
s032, calculating a longitudinal half field angle and a transverse half field angle of the camera according to the equivalent focal length of the camera;
s033, calculating the relation between the road monitoring picture image and a world coordinate system according to the width of the road monitoring picture image, the height of the road monitoring picture image, the longitudinal half field angle and the transverse half field angle of the camera, the monitoring installation height of the camera and the downward pitch angle;
s034, selecting four points on the road monitoring picture image, and establishing an inverse perspective transformation relation matrix according to the selected four points.
According to the above description, vehicles and pedestrians on the road section covered by the video monitoring are tracked during daily water-free periods, the image areas are counted to form a road traffic heat information map, and the different types of traffic areas are found.
Further, selecting four points on the road monitoring picture image specifically comprises: transforming the road monitoring picture image through its relationship with the world coordinate system to obtain an inverse perspective transformation image;
selecting two points at the two ends of the bottom row of the image and calculating the real width of the bottom row; finding the row whose real width is 2-4 times the real width of the bottom row and selecting two points at its two ends; and, if no row is that wide, selecting the two points in the first row of the image.
From the above description, the row whose real length is 2-4 times that of the last row is selected as the boundary. The significance is that when the near-large, far-small perspective effect is obvious, the real area represented by a distant point is large, the image segmentation there is not accurate and the positioning error is also large, so those rows are directly discarded.
Further, the relationship between the road monitoring picture image and the world coordinate system is calculated according to the following formula:

Y_P = h · cot(θ - α + (2α / height_image) · y)

X_P = (h / sin(θ - α + (2α / height_image) · y)) · tan(-β + (2β / width_image) · x)

wherein x and y are the coordinates on the road monitoring image, X_P and Y_P are the coordinates in the world coordinate system, h is the monitoring installation height of the camera, θ is the downward pitch angle, α is the longitudinal half field angle of the camera at the equivalent focal length f, and β is the transverse half field angle of the camera at the equivalent focal length f.
As can be seen from the above description, a formula for calculating the relationship between the road monitoring screen image and the world coordinate system is given.
Further, the step S04 comprises the steps of:
s041, transforming the pedestrian track point sequence and the vehicle track point sequence according to the inverse perspective transformation relation matrix to obtain a pedestrian track point sequence and a vehicle track point sequence on an inverse perspective transformation image;
s042, creating a heat map of the pedestrian activity area according to the pedestrian track point sequence on the inverse perspective transformation image;
s043, according to the vehicle track point sequence on the inverse perspective transformation image, a vehicle heat map containing the driving direction is created;
and S044, binarizing the heat map of the pedestrian activity area and the heat map of the vehicle, and filtering an area with the heat lower than a set value and an overlapping area to obtain a sidewalk area and a roadway area.
As can be seen from the above description, the acquisition of the sidewalk area and the roadway area is achieved.
Further, classifying the water accumulation grade at the central position of the bottom of the rectangular frame according to the position up to which the pedestrian is submerged by the accumulated water, to obtain the water accumulation depth reference information at the central position of the bottom of the rectangular frame, specifically comprises:
dividing the position up to which the pedestrian is submerged by the accumulated water into six water accumulation grades: no accumulated water, submerged to the ankle, submerged to the middle of the calf, submerged to the knee, submerged to the middle of the thigh, and submerged to the waist;
and classifying the water accumulation grade at the central position of the rectangular frame according to the position up to which the vehicle is submerged by the accumulated water, to obtain the water accumulation depth reference information at the central position of the rectangular frame, specifically comprises:
dividing the position up to which the vehicle is submerged by the accumulated water into six water accumulation grades, including: no accumulated water, submerged to the lower middle of the tire, submerged to the upper middle of the tire, submerged to the top of the tire, and submerged to the engine cover.
From the above description, the water accumulation grade classification is realized.
Referring to fig. 5, a road ponding detection terminal based on video monitoring includes a memory, a processor and a computer program stored in the memory and capable of running on the processor, and the processor implements the method when executing the computer program.
The method is used for detecting the accumulated water on the road so as to judge the influence of the accumulated water on the traffic.
Referring to fig. 1 to 4, a first embodiment of the present invention is:
a road accumulated water detection method based on video monitoring comprises a stage A and a stage B, wherein the stage A is a traffic flow monitoring stage, vehicles and pedestrians are tracked on a road section where the video monitoring is located in daily accumulated water-free time, image areas are counted, a road traffic heat information graph is formed, and different types of traffic areas are found.
Specifically, the method comprises the following steps:
s01, acquiring a road monitoring picture video stream, and performing pedestrian target detection and vehicle target detection on each frame of image of the road monitoring picture video stream to obtain a pedestrian target and a vehicle target of each frame of image.
The video stream is accessed to obtain the road monitoring picture, and pedestrian and vehicle targets are detected in each frame. The model used is a yolov5 neural network structure, and the output is the rectangular frames of the identified pedestrians and vehicles. The roi image within each rectangular frame is cut out, giving the centre point coordinate and width and height information (x_c, y_c, w, h) of the rectangular frame, the class label c, and the in-frame image img_roi.
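Converting a centre-format detection (x_c, y_c, w, h) into the in-frame image img_roi is a small bookkeeping step; a sketch (the function name crop_roi is an assumption, not from the patent):

```python
import numpy as np

def crop_roi(frame, box):
    """Cut the rectangular-frame image (img_roi) out of a frame, given a
    detection in centre format (x_c, y_c, w, h) as produced by a
    YOLO-style detector; clamping keeps partially visible boxes valid."""
    x_c, y_c, w, h = box
    x1 = max(int(x_c - w / 2), 0)
    y1 = max(int(y_c - h / 2), 0)
    x2 = min(int(x_c + w / 2), frame.shape[1])
    y2 = min(int(y_c + h / 2), frame.shape[0])
    return frame[y1:y2, x1:x2]

frame = np.zeros((100, 100, 3), dtype=np.uint8)
roi = crop_roi(frame, (50, 50, 20, 40))  # 20 wide, 40 tall, centred
```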
And S02, carrying out target tracking on the pedestrian target and the vehicle target to obtain a pedestrian track point sequence and a vehicle track point sequence.
The pedestrian and vehicle targets are tracked using the deepsort method: a feature extractor with a convolutional neural network structure reduces the dimension of each img_roi image to a 128-dimensional simplified feature used to compare the similarity of two roi images. Rectangular frames are associated between frames according to position, speed and similarity, obtaining pedestrian and vehicle track point sequences {(x_i, y_i)}, where each sequence represents the trajectory of one vehicle or one pedestrian, (x_i, y_i) is the point within the rectangular frame, and i is the serial number of the track point.
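Frame-to-frame association can be illustrated with a drastically simplified stand-in for deepsort: greedy nearest-centre matching with no Kalman filter and no 128-dimensional appearance feature (both of which the real method uses). The function name and distance threshold are assumptions:

```python
import numpy as np

def associate(prev_centres, new_centres, max_dist=50.0):
    """Greedily pair each previous track centre with the nearest unused
    detection centre within max_dist pixels; returns (prev_idx, new_idx)
    pairs, the skeleton of building a track point sequence."""
    pairs, used = [], set()
    for i, p in enumerate(prev_centres):
        best, best_d = None, max_dist
        for j, q in enumerate(new_centres):
            if j in used:
                continue
            d = float(np.hypot(p[0] - q[0], p[1] - q[1]))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```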
And S03, constructing an inverse perspective transformation relation matrix from the road monitoring picture to a world coordinate system.
Referring to fig. 4, the inverse perspective transformation relationship of the image is constructed, that is, pixel points on the image are mapped from the monitoring picture to the plane of the road in the world coordinate system, for the subsequent calculation of actual areas and actual distances. Specifically, without loss of generality, the plane of the road surface is assumed to be horizontal; in the world coordinate system, the monitoring camera is at position P with coordinates (0, h), where h is the monitoring installation height, the downward pitch angle is θ, and the equivalent focal length of the camera is f.
S031, acquiring a frame of image of the road monitoring picture video stream, and acquiring the monitoring installation height, downward pitch angle and equivalent focal length of the camera;
obtaining a frame of image with width from video stream image Height of image . And manually setting the height h of monitoring installation of the monitoring camera, a downward pitch angle theta and the equivalent focal length f of the camera.
And S032, calculating a longitudinal half field angle and a transverse half field angle of the camera according to the equivalent focal length of the camera.
The longitudinal half field angle α and the transverse half field angle β of the camera at the equivalent focal length f are calculated according to the following formula (taking the 35 mm-equivalent frame of 36 mm x 24 mm as the reference for f):

α = arctan(12 / f); β = arctan(18 / f).
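Assuming the '35 mm equivalent' convention (a 36 mm x 24 mm reference frame), each half field angle is the arctangent of half the frame extent over the focal length; this is a sketch of that assumption, not a formula quoted from the patent:

```python
import math

def half_fov(f_equiv_mm, sensor_w=36.0, sensor_h=24.0):
    """Longitudinal (alpha) and transverse (beta) half field angles in
    radians for a 35 mm-equivalent focal length f_equiv_mm:
    half angle = arctan(half frame extent / focal length)."""
    alpha = math.atan(sensor_h / 2.0 / f_equiv_mm)
    beta = math.atan(sensor_w / 2.0 / f_equiv_mm)
    return alpha, beta

alpha, beta = half_fov(24.0)  # a 24 mm-equivalent wide-angle lens
```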
And S033, calculating the relationship between the road monitoring picture image and the world coordinate system according to the width of the road monitoring picture image, the height of the road monitoring picture image, the longitudinal half field angle and the transverse half field angle of the camera, the height of the monitoring installation of the camera and the downward pitch angle.
Through the inverse perspective transformation, a point (x, y) on the image can be mapped to the point (X_P, Y_P) on the road surface plane of the world coordinate system. The relationship between them is as follows:

Y_P = h · cot(θ - α + (2α / height_image) · y)

X_P = (h / sin(θ - α + (2α / height_image) · y)) · tan(-β + (2β / width_image) · x)
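A sketch of this mapping under the standard flat-ground inverse perspective model (camera height h_cam, downward pitch theta, half field angles alpha and beta, image origin at the top left): the row angle sweeps from theta - alpha at the top row to theta + alpha at the bottom row. The function name and argument order are assumptions:

```python
import math

def image_to_ground(x, y, w_img, h_img, h_cam, theta, alpha, beta):
    """Map image pixel (x, y) onto the road plane (X_P, Y_P): the ray
    through row y meets the horizontal at theta - alpha + 2*alpha*y/h_img,
    giving the longitudinal distance by cotangent and the lateral offset
    by the column angle."""
    row_angle = theta - alpha + (2.0 * alpha) * y / h_img
    Y_P = h_cam / math.tan(row_angle)                 # longitudinal distance
    col_angle = -beta + (2.0 * beta) * x / w_img
    X_P = (h_cam / math.sin(row_angle)) * math.tan(col_angle)  # lateral offset
    return X_P, Y_P
```

As a sanity check on the geometry, the image centre of a 640 x 480 frame under a 45-degree pitch has a zero column angle, and its longitudinal distance equals the camera height.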
S034, selecting four points on the road monitoring picture image, and establishing an inverse perspective transformation relation matrix according to the selected four points.
Specifically, each row of the image is traversed; the head and tail coordinates of the row are transformed according to the above formula, and the real width between the two transformed points is calculated. The real width of the bottom row is width1_real, with head and tail points p1 and p2.

The row whose real width is 3 x width1_real is then found; if all rows are narrower than 3 x width1_real, the first row is selected. The height difference Δh between that row and the last row is calculated from the transformed coordinates, and the real width width2_real of that row and its head and tail point coordinates p3, p4 are recorded.
The coordinates p1', p2', p3', p4' on the inverse perspective transformation image of the four points p1, p2, p3, p4 of the original image are calculated by:

p1' = (width_image / width2_real · Δh, (width_image - width_image / width2_real · width1_real) / 2);

p2' = (width_image / width2_real · Δh, (width_image + width_image / width2_real · width1_real) / 2);

p3' = (0, 0);

p4' = (0, width_image);

An inverse perspective transformation relation matrix M is established from the coordinates of p1, p2, p3, p4 and of p1', p2', p3', p4'. Knowing that one pixel on the inverse perspective transformation image represents the actual length dL = width2_real / width_image, the area represented by one pixel is dS = (width2_real / width_image)^2, so the real-world area can be conveniently calculated by counting the number of pixel points on the inverse perspective transformation image.
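Building M from the four point pairs is the classic four-point homography estimate (in OpenCV this is cv2.getPerspectiveTransform); a dependency-free sketch via the standard eight-equation linear system, with illustrative function names:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 perspective transform mapping four src points to
    four dst points: eight linear equations in the eight unknowns of M
    (M[2][2] is fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    m = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(m, 1.0).reshape(3, 3)

def warp_point(M, x, y):
    """Apply M to a single point, with the homogeneous divide."""
    u, v, w = M @ np.array([x, y, 1.0])
    return u / w, v / w

# a unit square scaled by 2: warping its centre should give (1, 1)
M = homography([(0, 0), (1, 0), (1, 1), (0, 1)],
               [(0, 0), (2, 0), (2, 2), (0, 2)])
```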
The row whose real length is 3 times that of the last row is selected as the boundary. The significance is that when the near-large, far-small perspective effect is obvious, the real area represented by a distant point is large, the image segmentation there is not accurate and the positioning error is also large, so that part is directly discarded. Points beyond the quadrilateral p1', p2', p3', p4' all correspond to distant points on the original image.
S04, acquiring a sidewalk area and a roadway area according to the pedestrian track point sequence and the vehicle track point sequence, wherein the roadway area comprises a one-way roadway area and an intersection area.
In this step, the trajectories of the tracked vehicles and pedestrians are used to find the region of pedestrian activity (hereinafter the sidewalk) and the roadway, and to distinguish, within the roadway, the road surface belonging to a single direction from the road surface belonging to an intersection. In the later ponding stage, the sidewalk and the intersection area are processed with the same logic: the actual area of the accumulated water is the main basis for judging the influence on traffic, and the reference depth assists in judging the travel risk. For a one-way road area, the length of the accumulated water in the direction perpendicular to the lane is judged. Specifically, the method comprises the following steps:
and S041, transforming the pedestrian track point sequence and the vehicle track point sequence according to the inverse perspective transformation relation matrix to obtain a pedestrian track point sequence and a vehicle track point sequence on the inverse perspective transformation image.
The transformation matrix M is applied to the pedestrian track point sequences and vehicle track point sequences obtained in step S02, giving the coordinates {(x'_i, y'_i)} of the tracks on the inverse perspective transformation image.
And S042, creating a pedestrian activity area heat map according to the pedestrian track point sequence on the inverse perspective transformation image.
The pedestrian trajectories are processed first. A blank matrix (integer type, all values 0) with the same resolution as the inverse perspective transformation image is created; for each track point (x'_i, y'_i), 1 is added to the N x N pixel region surrounding it. Performing this operation for all pedestrian track points yields the heat map of the pedestrian activity area. N is taken as 3/dL, representing a 3 x 3 metre region centred at the location.
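The accumulation step can be sketched directly with numpy; heat_map and its arguments are illustrative names, and the neighbourhood is clipped at the image border:

```python
import numpy as np

def heat_map(shape, points, n=3):
    """Accumulate an activity heat map: add 1 to the n x n pixel
    neighbourhood around every track point (the text takes n = 3/dL,
    i.e. a 3 x 3 metre region)."""
    heat = np.zeros(shape, dtype=np.int32)
    half = n // 2
    for x, y in points:
        y0, y1 = max(y - half, 0), min(y + half + 1, shape[0])
        x0, x1 = max(x - half, 0), min(x + half + 1, shape[1])
        heat[y0:y1, x0:x1] += 1
    return heat

heat = heat_map((10, 10), [(5, 5), (5, 5)])  # the same point visited twice
```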
And S043, creating a vehicle heat map containing the driving direction according to the vehicle track point sequence on the inverse perspective transformation image.
The vehicle trajectories are processed next. As with the pedestrian trajectories, a heat map of the vehicle activity area is established; but besides counting coincident pixels, the direction must also be recorded, calculated from the two points before and after each track point:
dx = (x'_{i+1} - x'_{i-1}) / sqrt((x'_{i+1} - x'_{i-1})^2 + (y'_{i+1} - y'_{i-1})^2);

dy = (y'_{i+1} - y'_{i-1}) / sqrt((x'_{i+1} - x'_{i-1})^2 + (y'_{i+1} - y'_{i-1})^2);
Two new blank matrices are created to accumulate dx and dy respectively; averaging them at the end gives the average direction at each position, which is converted into an angle value to obtain an angle value matrix.
Meanwhile, histogram statistics over all trajectory directions are performed globally. A histogram whose abscissa spans 360 degrees is created; the angle of each (dx, dy) vector is computed and the corresponding histogram bin is incremented by one. Finally, the angles of the several main lane directions are found by locating the peaks of the histogram.
If the angle of a pixel in the angle value matrix differs from every main lane direction by more than 5 degrees, the pixel is considered to lie in an intersection area; otherwise it belongs to a one-way lane area. The reason is that an intersection area usually carries multi-directional traffic, or vehicles there are in the middle of turning, so the statistically averaged direction vector deviates from the main lane direction.
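The direction statistics above can be sketched as follows (function names, the two-peak default, and the bin layout are illustrative choices, not taken from the patent):

```python
import numpy as np

def track_directions(xs, ys):
    """Unit direction (dx, dy) at each interior track point,
    computed from the points immediately before and after it."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    ddx = xs[2:] - xs[:-2]
    ddy = ys[2:] - ys[:-2]
    norm = np.sqrt(ddx**2 + ddy**2)
    norm[norm == 0] = 1.0          # avoid division by zero on stalled points
    return ddx / norm, ddy / norm

def main_lane_angles(dx, dy, n_peaks=2):
    """Global 360-bin histogram of track directions; the highest
    non-empty bins are taken as main lane directions (degrees)."""
    angles = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
    hist, _ = np.histogram(angles, bins=360, range=(0.0, 360.0))
    peaks = np.argsort(hist)[::-1][:n_peaks]
    return sorted(int(p) for p in peaks if hist[p] > 0)

def is_intersection(pixel_angle, lane_angles, tol=5.0):
    """A pixel whose averaged angle differs from every main lane
    direction by more than `tol` degrees is treated as intersection."""
    diffs = [min(abs(pixel_angle - a), 360.0 - abs(pixel_angle - a))
             for a in lane_angles]
    return min(diffs) > tol
```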
And S044, binarizing the heat map of the pedestrian activity area and the heat map of the vehicle, and filtering an area with the heat lower than a set value and an overlapping area to obtain a sidewalk area and a roadway area.
The pedestrian heat map and the vehicle heat map are binarized, and regions with low heat are filtered out. The sidewalk area is then checked, and the part of the pedestrian heat map that overlaps the vehicle passing area is set to zero. Finally, the mask maps of the traffic areas are obtained: the one-way lane area mask mask_straight, the intersection area mask mask_intersection, and the sidewalk area mask mask_pedestrian.
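A minimal sketch of this binarization and overlap filtering (the function name, the example heat threshold, and passing the intersection map in as a boolean array are assumptions):

```python
import numpy as np

def traffic_masks(ped_heat, veh_heat, intersection, min_heat=5):
    """Binarize the heat maps and derive the three traffic-area masks.
    `intersection` is a boolean map of intersection pixels (from the
    angle-deviation test); `min_heat` is an assumed example threshold."""
    ped = ped_heat >= min_heat
    veh = veh_heat >= min_heat
    ped = ped & ~veh                      # sidewalk overlapping the roadway is zeroed
    mask_pedestrian = ped
    mask_intersection = veh & intersection
    mask_straight = veh & ~intersection   # one-way lane = roadway minus intersection
    return mask_straight, mask_intersection, mask_pedestrian
```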
After stage A is finished, stage B is executed periodically: the ponding area and ponding depth are identified, and a ponding degree judgment is given by combining the perspective relation and the traffic information from stage A. Stage A is akin to initialization; it can stop automatically after running for one day, and its results remain valid for a long time. Stage B corresponds to the operational phase, in which the algorithm, rather than human eyes, monitors the road surface ponding condition. In both stage A and stage B the camera must stay at the same position; the monitoring preset-position interface can be called to complete a reset before each execution of stage B. The convolutional neural network task types used in the method include classification (EfficientNet), target detection (YOLOv5), image segmentation (DeepLabv3), and target tracking (DeepSORT); the selected model structures, chosen for good accuracy and efficiency, are popular models in the image processing field, and their principles are not described in detail here.
Stage B is one complete monitoring pass over the road surface by the algorithm and can be executed periodically as needed. A frame is acquired from the video stream; the ponding depth and the ponding area on the frame are identified separately; the identification results are then transformed onto the top view of the road surface in world coordinates, where the area and corresponding depth of each ponding region are calculated. A comprehensive judgment combined with the traffic heat analysis result gives the influence of the accumulated water on pedestrian travel and vehicle traffic. The method specifically includes the following steps:
S1, acquiring a road monitoring picture, and detecting the road monitoring picture to acquire rectangular frame information of pedestrians and vehicles.
The pedestrian and vehicle targets on the picture are detected with the same model as in step S01, giving the rectangular frame information and category (x_c, y_c, w, h, c) of each pedestrian and vehicle.
Here x_c and y_c are the center coordinates of the rectangular frame, w is the frame width, h is the frame height, and c is the category information.
S2, expanding the rectangular frame information of the pedestrians and the vehicles to a set size, classifying the water accumulation grade of the central position of the bottom of the rectangular frame according to the position of the pedestrian submerged by the water accumulation in the rectangular frame information of the pedestrians to obtain water accumulation depth reference information of the central position of the bottom of the rectangular frame, and classifying the water accumulation grade of the central position of the rectangular frame according to the position of the vehicle submerged by the water accumulation in the rectangular frame information of the vehicles to obtain water accumulation depth reference information of the central position of the rectangular frame.
Specifically, the ROI (region of interest) image within a new, enlarged rectangular box is cropped: the rectangular frame area is expanded to (x_c, y_c, 1.2·max(w, h), 1.2·max(w, h)). This is done so that the box contains the surrounding ponding area and so that every rectangular box becomes a uniform square.
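A sketch of this box expansion, with an added clip to the image borders (the function name and the clipping behavior are assumptions):

```python
def expand_box(xc, yc, w, h, img_w, img_h, scale=1.2):
    """Expand a detection box to a square of side scale * max(w, h)
    centered at (xc, yc), clipped to the image; returns (x0, y0, x1, y1)
    suitable for cropping the ROI."""
    side = scale * max(w, h)
    x0 = max(0, int(round(xc - side / 2)))
    y0 = max(0, int(round(yc - side / 2)))
    x1 = min(img_w, int(round(xc + side / 2)))
    y1 = min(img_h, int(round(yc + side / 2)))
    return x0, y0, x1, y1
```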
The ROI image is classified into water accumulation grades by a neural network model with the EfficientNet structure. Six grades are used here: pedestrians are classified by the submerged position as no water, over the ankle, over mid-calf, over the knee, over mid-thigh, and over the waist (and above); vehicles are classified as no water, tire lower-middle, tire middle, tire upper-middle, tire top, and hood height.
A training sample set must be built for the classification model. Pedestrians and vehicles appearing in ponding pictures are common, so the total number of pedestrian samples over all flooding grades exceeds 7000 and the number of vehicle samples exceeds 4000, which guarantees accuracy. In addition, the classification model outputs the flooding grade (0-5), and the pedestrian and vehicle flooding samples are combined by grade and trained.
It should be noted that, in contrast to patent application documents CN201811514535 and CN201910101996, which perform direct target detection on tires, the tires here serve only as reference heights during labeling: the whole lower part of the vehicle and the water surface jointly provide the features, so the method is not affected by the visibility of the tires. The water surface portion also provides features, the number of available samples is sufficient for this comparatively simple classification problem, and occlusion naturally need not be considered (an occluded sample would be labeled directly as grade 0, i.e. not flooded).
After the ponding grades of pedestrians and vehicles are identified, the position information and ponding grade (x_r, y_r, c_r), called the water depth reference information, are obtained as part of the subsequent input. The position information differs slightly between pedestrians and vehicles: for a vehicle it is the center point of the rectangular frame, and for a pedestrian it is the center of the bottom edge of the rectangular frame. That is, for a pedestrian:
x_r = x_c, y_r = y_c + 0.5*h;
for a vehicle:
x_r = x_c, y_r = y_c;
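Both cases can be sketched in one small helper (the function name is illustrative):

```python
def depth_reference_point(xc, yc, w, h, is_pedestrian):
    """Position at which the classified ponding grade is anchored:
    bottom-center of the box for a pedestrian, box center for a vehicle."""
    if is_pedestrian:
        return xc, yc + 0.5 * h
    return xc, yc
```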
and S3, identifying the water surface area of the road monitoring picture to obtain ponding profile information.
Water surface area identification is performed on the image; a DeepLabv3 network is used in this step, outputting a mask map of the ponding area. The water surface has quite distinctive features: with more than 2000 samples the ponding and non-ponding areas can be divided accurately, and road surface that is wet but not ponded is treated as a non-ponding area when the samples are made. The mask map is binarized, with value 0 for non-ponding areas and value 1 for ponding areas. Contour extraction is performed on the binarized image, and the resulting ponding contour information {(x_contour, y_contour)} serves as part of the input to the subsequent steps. Note that there may be multiple ponding regions in the frame, corresponding to multiple contours.
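The multiple-region point can be sketched with a simple flood-fill standing in for the contour extraction the text implies (e.g. cv2.findContours); the function name and the 4-connectivity choice are assumptions:

```python
import numpy as np
from collections import deque

def ponding_regions(mask):
    """Label 4-connected regions of a binarized ponding mask (0/1).
    Returns a list of pixel-coordinate lists, one per ponding region."""
    mask = np.asarray(mask)
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] == 1 and not seen[sy, sx]:
                q, pixels = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:                     # breadth-first flood fill
                    y, x = q.popleft()
                    pixels.append((x, y))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 1 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                regions.append(pixels)
    return regions
```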
And S4, transforming the ponding profile information and the ponding depth reference information by adopting an inverse perspective transformation relation matrix to obtain the ponding profile information and the ponding depth reference information on the inverse perspective transformation image.
All coordinate points in the ponding depth reference information and the ponding contour information found under the monitoring viewing angle are transformed with the perspective transformation relation matrix M obtained in stage A, giving the top-view coordinates (x'_r, y'_r, c_r) and {(x'_contour, y'_contour)}.
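The point transformation is an ordinary homography application; a numpy sketch of what cv2.perspectiveTransform computes (names are illustrative):

```python
import numpy as np

def apply_homography(M, pts):
    """Apply a 3x3 (inverse) perspective transformation matrix M
    to an array of (x, y) points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = homog @ M.T
    return mapped[:, :2] / mapped[:, 2:3]              # divide by the w component
```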
And S5, counting the outline, the maximum reference depth and the real area of each water block as water accumulation information according to the water accumulation outline information and the water accumulation depth reference information on the inverse perspective transformation image.
For each piece of water depth reference information (x'_r, y'_r, c_r), the contours {(x'_contour, y'_contour)} are traversed and it is judged whether (x'_r, y'_r) lies within a contour; if so, the ponding depth c_r is recorded for the contour it belongs to.
Each contour is processed to find the maximum depth level of the record as the reference level for that contour. If there is no recorded reference level, it is marked as "none". It is necessary to distinguish between "none" and 0, where "none" means that there is no water depth information that can be referenced, and 0 represents that there is water identified but has no effect on pedestrians and vehicles.
Each contour is filled and its interior pixels are counted; multiplying the count by the dS obtained in step c4 gives the real ponding area S of the contour. Thus the contour, maximum reference depth, and real area ({(x'_contour, y'_contour)}, c_r, S) of each ponding block are obtained as part of the input to the subsequent steps.
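A sketch of the per-contour statistics, using a ray-casting point-in-polygon test in place of cv2.pointPolygonTest (names are assumptions, and for simplicity the interior pixel count is passed in rather than recomputed by filling):

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: is (px, py) inside the closed contour
    `polygon` (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):                      # edge crosses the scan line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def contour_stats(contour, depth_refs, pixel_count, dS):
    """Maximum reference depth grade inside the contour ('none' if no
    reference falls inside) and real area = interior pixel count * dS."""
    grades = [c for (x, y, c) in depth_refs if point_in_polygon(x, y, contour)]
    ref = max(grades) if grades else "none"
    return ref, pixel_count * dS
```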
S6, obtaining a sidewalk area with accumulated water and a roadway area according to the sidewalk area, the roadway area and the accumulated water information, and judging the accumulated water early warning level according to the area and the accumulated water depth of the sidewalk area with the accumulated water and the roadway area.
Each obtained piece of ponding area information is compared one by one with the obtained mask maps of the traffic areas, and a ponding degree evaluation is given by the area of the intersection region and the reference depth. The traffic areas comprise the sidewalk area and the roadway area. Specifically:
the contour { (x ″) contour ,y` contour ) Fill, find the mask in the intersection area with the mask graph water . The intersection area can be regarded as a part of the accumulated water influencing traffic, and the real area S of the traffic area with the accumulated water can be obtained by counting the number of pixels of the intersection area effective Using c r As a depth reference level.
When the judged mask is the sidewalk area: if S_effective < 5 square meters, the result is "water accumulation without influence"; if the reference depth grade ≥ 3, the result is "risk exists"; if the reference depth is "none" or the grade < 3 and 5 ≤ S_effective < 10 square meters, the result is "influencing passage"; if the reference depth is "none" or the grade < 3 and S_effective ≥ 10 square meters, the result is "blocking passage". The judgments are made in this order; as soon as one matches, the result is fixed and the step exits.
When the judged mask is the intersection area: if S_effective < 10 square meters, the result is "water accumulation without influence"; if the reference depth grade ≥ 3, the result is "risk exists"; if the reference depth is "none" or the grade < 3 and 10 ≤ S_effective < 20 square meters, the result is "influencing passage"; if the reference depth is "none" or the grade < 3 and S_effective ≥ 20 square meters, the result is "blocking passage". The judgments are made in this order; as soon as one matches, the result is fixed and the step exits.
When the judged mask is the one-way lane area, the centroid coordinates of mask_water are calculated first, and the unit vector perpendicular to the lane direction is taken. The contour {(x'_mask, y'_mask)} of mask_water is then extracted, its points are traversed, and the maximum value l_max and minimum value l_min of the dot product between the vector from the centroid to each point and the unit vector are found; the length of mask_water in the direction perpendicular to the lane can then be calculated as L_block = (|l_max| + |l_min|) * dL.
If L_block < 2 m, the result is "water accumulation without influence"; if the reference depth grade ≥ 3, the result is "risk exists"; if the reference depth is "none" or the grade < 3 and 2 m ≤ L_block < 5 m, the result is "influencing passage"; if the reference depth is "none" or the grade < 3 and L_block ≥ 5 m, the result is "blocking passage". The judgments are made in this order; as soon as one matches, the result is fixed and the step exits.
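The perpendicular-extent computation can be sketched as follows (the function name and the lane-angle parameterization are assumptions; the contour is taken as a point list rather than a mask):

```python
import numpy as np

def water_length_across_lane(contour, lane_angle_deg, dL):
    """Extent of a ponding region measured perpendicular to the lane:
    project contour points (relative to the centroid) onto the unit
    vector perpendicular to the lane direction, then
    L_block = (|l_max| + |l_min|) * dL as in the text."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    theta = np.radians(lane_angle_deg)
    perp = np.array([-np.sin(theta), np.cos(theta)])   # unit vector across the lane
    proj = (pts - centroid) @ perp
    return (abs(proj.max()) + abs(proj.min())) * dL
```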
The intersection is treated like the sidewalk area in this respect: pedestrians have no fixed direction of travel, and an intersection area must likewise carry vehicles in multiple directions, so traffic is affected as soon as enough of the area is covered by ponding. For a one-way lane, only the extent of the ponding in the direction perpendicular to the lane needs to be judged.
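The three threshold ladders above can be collected into one sketch (the function name is illustrative; the thresholds are the ones stated in the text, with S_effective in square meters for the sidewalk and intersection cases and L_block in meters for the lane case):

```python
def ponding_warning(kind, ref_grade, measure):
    """Early-warning level for a ponding region overlapping one traffic
    area. `kind` is 'sidewalk', 'intersection' or 'lane'; `measure` is
    S_effective in m^2 for the first two and L_block in m for 'lane';
    `ref_grade` is the depth grade 0-5 or 'none'. Checks run in the
    order given in the text; the first match wins."""
    low, high = {"sidewalk": (5, 10),
                 "intersection": (10, 20),
                 "lane": (2, 5)}[kind]
    if measure < low:
        return "water accumulation without influence"
    if ref_grade != "none" and ref_grade >= 3:
        return "risk exists"
    if measure < high:
        return "influencing passage"
    return "blocking passage"
```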
Referring to fig. 5, a second embodiment of the present invention is:
A road ponding detection terminal 1 based on video monitoring comprises a memory 3, a processor 2, and a computer program stored on the memory 3 and executable on the processor 2, wherein the processor 2 implements the steps of the first embodiment when executing the computer program.
In summary, in the road ponding detection method and terminal based on video monitoring provided by the invention, pedestrians and vehicles are used as the reference objects for depth judgment, instead of indirect approaches such as sample generation or key-point judgment. The method models ordinary color images, does not depend on infrared equipment, and uses no special reference object, so it has good universality and is suitable for large-scale deployment. Compared with the prior art, occlusion naturally need not be considered, and the problem that deeper flooding leaves fewer visible references is avoided.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.
Claims (10)
1. A road ponding detection method based on video monitoring is characterized by comprising the following steps:
S1, acquiring a road monitoring picture, and detecting the road monitoring picture to acquire rectangular frame information of pedestrians and vehicles;
S2, expanding the rectangular frame information of the pedestrians and the vehicles to a set size, classifying the water accumulation grade of the central position of the bottom of the rectangular frame according to the position of the pedestrian submerged by the accumulated water in the rectangular frame information of the pedestrians to obtain water accumulation depth reference information of the central position of the bottom of the rectangular frame, and classifying the water accumulation grade of the central position of the rectangular frame according to the position of the vehicle submerged by the accumulated water in the rectangular frame information of the vehicles to obtain water accumulation depth reference information of the central position of the rectangular frame;
S3, identifying the water surface area of the road monitoring picture to obtain ponding profile information;
S4, transforming the ponding profile information and the ponding depth reference information by adopting an inverse perspective transformation relation matrix to obtain ponding profile information and ponding depth reference information on an inverse perspective transformation image;
and S5, according to the ponding outline information and the ponding depth reference information on the inverse perspective transformation image, counting the outline, the maximum reference depth and the real area of each block of ponding as the ponding information.
2. The method for detecting the road ponding based on the video monitoring as claimed in claim 1, further comprising the steps of:
s6, obtaining a sidewalk area with accumulated water and a roadway area according to the sidewalk area, the roadway area and the accumulated water information, and judging the accumulated water early warning level according to the area and the accumulated water depth of the sidewalk area with the accumulated water and the roadway area.
3. The method for detecting the road ponding based on the video monitoring as claimed in claim 2, wherein the roadway area includes an intersection area and a one-way road area, and the judgment of the ponding early warning level according to the area and the ponding depth of the sidewalk area and the roadway area with the ponding is specifically as follows:
when the water accumulation early warning method is applied to a sidewalk area, judging whether the area of the sidewalk area with water accumulation is smaller than or equal to a first set area, if so, judging that the early warning level of the water accumulation is 'water accumulation without influence', otherwise, judging whether the level of the water accumulation depth is larger than or equal to a first set depth level, if so, judging that the early warning level of the water accumulation is 'risk exists', otherwise, judging whether the area of the sidewalk area with the water accumulation is smaller than a second set area, if so, judging that the early warning level of the water accumulation is 'influence passage', otherwise, judging that the early warning level of the water accumulation is 'block passage';
for the intersection area, judging whether the area of the roadway area with accumulated water is smaller than a second set area, if so, judging that the accumulated water early warning level is 'water accumulation without influence', otherwise, judging whether the level of the depth of the accumulated water is larger than or equal to a first set depth level, if so, judging that the accumulated water early warning level is 'risk exists', otherwise, judging whether the area of the roadway area with the accumulated water is smaller than a third set area, if so, judging that the accumulated water early warning level is 'influence passage', otherwise, judging that the accumulated water early warning level is 'block passage';
for the one-way lane area, calculating the centroid coordinate of the lane area with the accumulated water, calculating the length of the lane area with the accumulated water in the direction perpendicular to the one-way lane, judging whether this length is smaller than a first set length, if so, judging that the accumulated water early warning level is 'water accumulation without influence', otherwise, judging whether the level of the accumulated water depth is larger than or equal to the first set depth level, if so, judging that the accumulated water early warning level is 'risk exists', otherwise, judging whether the length of the lane area with the accumulated water in the direction perpendicular to the one-way lane is smaller than a second set length, if so, judging that the accumulated water early warning level is 'influence passage', otherwise, judging that the accumulated water early warning level is 'block passage'.
4. The method for detecting the road ponding based on the video monitoring as claimed in claim 1, further comprising the steps executed when there is no ponding:
S01, acquiring a road monitoring picture video stream, and performing pedestrian target detection and vehicle target detection on each frame of image of the road monitoring picture video stream to obtain a pedestrian target and a vehicle target of each frame of image;
S02, carrying out target tracking on the pedestrian target and the vehicle target to obtain a pedestrian track point sequence and a vehicle track point sequence;
S03, constructing an inverse perspective transformation relation matrix from the road monitoring picture to a world coordinate system;
and S04, acquiring a sidewalk area and a roadway area according to the pedestrian track point sequence and the vehicle track point sequence, wherein the roadway area comprises a one-way roadway area and an intersection area.
5. The method for detecting the road ponding based on the video monitoring as claimed in claim 4, wherein the step S03 includes:
S031, acquiring a frame of image of a road monitoring picture video stream, and acquiring the monitoring installation height, downward pitch angle and equivalent focal length of a camera;
S032, calculating a longitudinal half field angle and a transverse half field angle of the camera according to the equivalent focal length of the camera;
S033, calculating the relation between the road monitoring picture image and a world coordinate system according to the width of the road monitoring picture image, the height of the road monitoring picture image, the longitudinal half field angle and the transverse half field angle of the camera, the monitoring installation height of the camera and the downward pitch angle;
S034, selecting four points on the road monitoring picture image, and establishing an inverse perspective transformation relation matrix according to the selected four points.
6. The method for detecting the road ponding based on the video monitoring as claimed in claim 5, wherein four points on the road monitoring picture image are selected, and specifically, the road monitoring picture image is transformed through a relation between the road monitoring picture image and a world coordinate system to obtain an inverse perspective transformation image;
selecting two points at the two sides of the bottom of the inverse perspective transformation image, and calculating the width of the bottom of the inverse perspective transformation image; if there is a line of the inverse perspective transformation image whose width is 2-4 times the bottom width, selecting two points at the two ends of that line, and otherwise selecting two points in the first line of the inverse perspective transformation image.
7. The method for detecting the road ponding based on the video monitoring of claim 5, wherein the relationship between the road monitoring picture image and the world coordinate system is calculated according to the following formula:
wherein x and y are the coordinates of the road monitoring picture image, X_P and Y_P are the coordinates of the world coordinate system, h is the monitoring installation height of the camera, θ is the downward pitch angle, α is the longitudinal half field angle of the camera at the equivalent focal length f, and β is the transverse half field angle of the camera at the equivalent focal length f.
8. The method for detecting the road ponding based on the video monitoring as claimed in claim 4, wherein the step S04 comprises the steps of:
S041, transforming the pedestrian track point sequence and the vehicle track point sequence according to the inverse perspective transformation relation matrix to obtain a pedestrian track point sequence and a vehicle track point sequence on an inverse perspective transformation image;
S042, creating a heat map of the pedestrian activity area according to the pedestrian track point sequence on the inverse perspective transformation image;
S043, creating a vehicle heat map containing the driving direction according to the vehicle track point sequence on the inverse perspective transformation image;
and S044, binarizing the heat map of the pedestrian activity area and the heat map of the vehicle, and filtering an area with the heat lower than a set value and an overlapping area to obtain a sidewalk area and a roadway area.
9. The method for detecting road ponding based on video monitoring as claimed in claim 1, wherein the step of classifying the ponding grade at the central position of the bottom of the rectangular frame according to the position of the pedestrian submerged by the ponding in the rectangular frame information of the pedestrian to obtain the ponding depth reference information at the central position of the bottom of the rectangular frame is specifically as follows:
dividing the positions of the pedestrians submerged by accumulated water in the rectangular frame information of the pedestrians into six accumulated water grades: no accumulated water, over the ankle, over mid-calf, over the knee, over mid-thigh, and over the waist;
the step of classifying the water accumulation grade of the center position of the rectangular frame according to the position of the vehicle submerged by the accumulated water in the rectangular frame information of the vehicle to obtain the water accumulation depth reference information of the center position of the rectangular frame is specifically:
dividing the positions of the vehicles submerged by accumulated water in the rectangular frame information of the vehicles into six accumulated water grades: no accumulated water, tire lower-middle, tire middle, tire upper-middle, tire top, and hood height.
10. A video surveillance-based roadway water detection terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-9 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211696566.3A CN115984772A (en) | 2022-12-28 | 2022-12-28 | Road ponding detection method and terminal based on video monitoring |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115984772A true CN115984772A (en) | 2023-04-18 |
Family
ID=85966248
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211696566.3A Pending CN115984772A (en) | 2022-12-28 | 2022-12-28 | Road ponding detection method and terminal based on video monitoring |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115984772A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN116778696A * | 2023-08-14 | 2023-09-19 | 易启科技(吉林省)有限公司 | Visual-based intelligent urban waterlogging early warning method and system
CN116778696B * | 2023-08-14 | 2023-11-14 | 易启科技(吉林省)有限公司 | Visual-based intelligent urban waterlogging early warning method and system
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||