Disclosure of Invention
The application aims to provide a vehicle driving behavior detection method which can reduce network traffic consumption, lower application cost, provide more accurate analysis results, and optimize the planning, management, and operation scheduling of vehicles.
The technical scheme is as follows:
a vehicle driving behavior detection method, comprising the steps of:
step 1: collecting a road image;
step 2: reading the road image, detecting vehicles in the road image and generating detection data;
step 3: uploading the detection data to a background server for storage, so that it can be queried and analyzed.
By adopting the technical scheme: equipment for executing the detection operations is arranged at each local node, and only the final detection results are returned to the cloud platform, which saves network bandwidth significantly. Meanwhile, the cloud platform only needs to store the returned data in a classified manner and requires no strong computing power, which saves cost. Users can query the data in the background and can also analyze it, providing a more scientific and effective basis for planning management and operation scheduling.
Preferably, in the vehicle driving behavior detection method, the step 2 includes:
step 21: respectively acquiring the lane lines in each frame of road image;
step 22: acquiring a vehicle detection frame for each vehicle from each frame of road image based on a YOLOv3 convolutional neural network;
step 23: matching the vehicle detection frame with the tracking frame based on a Hungarian algorithm; tracking the motion state of the vehicle based on a Kalman filtering algorithm and acquiring the moving track of the vehicle;
step 24: calculating the relative position relation between the moving track of the vehicle and each lane line in real time, determining the lane in which the vehicle is located, and judging whether the vehicle changes lanes;
step 25: calculating the moving speed of the vehicle;
step 26: detecting a lamp of each vehicle;
step 27: and identifying license plate information of each vehicle.
By adopting the technical scheme: the detection of vehicle information, vehicle moving track, vehicle speed, road lane lines, and vehicle lane changes is performed on the end processor, so that most detection requirements are met on local equipment.
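The matching of detection frames to tracking frames in step 23 can be sketched as follows; a minimal Python illustration, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples. The brute-force search over assignments stands in for the Hungarian algorithm (in practice one would run `scipy.optimize.linear_sum_assignment` on an IoU cost matrix), the Kalman prediction of the tracking frames is omitted, and all function names are illustrative rather than taken from the application.

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if inter else 0.0

def match_boxes(tracks, detections, iou_min=0.3):
    """Assign each tracking frame to a detection frame, maximising total IoU.

    Brute force over permutations stands in for the Hungarian algorithm;
    assumes len(detections) >= len(tracks). Pairs below `iou_min` are
    dropped (treated as unmatched).
    """
    best, best_score = None, -1.0
    for perm in permutations(range(len(detections)), len(tracks)):
        score = sum(iou(tracks[t], detections[d]) for t, d in enumerate(perm))
        if score > best_score:
            best, best_score = perm, score
    return [(t, d) for t, d in enumerate(best)
            if iou(tracks[t], detections[d]) >= iou_min]
```

In Kalman-based tracking the `tracks` boxes would be the filter's predicted positions for the current frame, and each matched detection then updates the corresponding filter state.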
More preferably, in the above vehicle driving behavior detection method, the step 21 includes:
step 211: marking interest areas in the road images and extracting interest area edges in the interest areas;
step 212: extracting left and right edges of the potential lane in the road image based on the gray values;
step 213: filtering invalid left edges and right edges based on preset lane width and lane line gray threshold values;
step 214: and combining the continuous lane lines based on DFS search.
By adopting the technical scheme: compared with the traditional Hough line detection, the scheme can be used for further detecting the curved lane line, and the accuracy of detecting the lane line is expanded. Specifically, the method comprises the following steps: hough line detection is a linear detection algorithm, and linear equations are assumed and equation parameters are fitted. The method does not have any assumption, does not adopt a method of fitting an equation, and combines continuous lines through discrete points.
More preferably, in the above vehicle driving behavior detection method, the step 26 includes:
step 261: selecting a vehicle detection frame corresponding to a vehicle to be detected;
step 262: intercepting the lower half area of the vehicle detection frame obtained in the step 261 as an interest area;
step 263: and classifying each vehicle lamp in the interest area as a bright lamp or a dark lamp based on the MobileNet-SSD model.
By adopting the technical scheme: because the vehicle lamps are located at the two corners of the vehicle head, reducing the interest area so that lamps are detected only in the lower half of the detection frame lowers the computing requirement and improves detection accuracy.
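The interest-area reduction of steps 261-262 is plain box arithmetic; a minimal sketch, assuming boxes are (x1, y1, x2, y2) pixel tuples with the y axis growing downward. The MobileNet-SSD classification itself is not shown, the left/right labelling follows the embodiment's step 263, and the function names are illustrative.

```python
def lamp_roi(box):
    """Lower half of a vehicle detection frame, where the head lamps sit.

    box: (x1, y1, x2, y2), y growing downward, so the lower half keeps
    the original bottom edge and moves the top edge to the vertical middle.
    """
    x1, y1, x2, y2 = box
    return (x1, (y1 + y2) // 2, x2, y2)

def lamp_side(lamp_box, vehicle_box):
    """Label a detected lamp 'left' or 'right' from the position of its
    frame inside the vehicle frame (as in the embodiment's step 263)."""
    lamp_cx = (lamp_box[0] + lamp_box[2]) / 2
    vehicle_cx = (vehicle_box[0] + vehicle_box[2]) / 2
    return "left" if lamp_cx < vehicle_cx else "right"
```

The cropped region returned by `lamp_roi` would be the input image passed to the lamp classifier, roughly halving the pixels the model has to process per vehicle.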
More preferably, in the above vehicle driving behavior detection method, the step 27 includes:
step 271: selecting a vehicle detection frame corresponding to a vehicle to be detected as an interest area;
step 272: detecting the position and the type of the license plate through a convolutional neural network;
step 273: performing affine transformation on the license plate through a spatial transformer network to form a frontal license plate image;
step 274: identifying the frontal license plate image with LPRNet and outputting an 18×68 result vector;
step 275: and decoding and filtering the result vector.
By adopting the technical scheme: after the license plate position is obtained, since the scene is not fixed, the angle of the license plate relative to the camera device is uncertain; affine-transforming the plate to a frontal view through the STN therefore improves detection accuracy. Since LPRNet outputs a fixed-length array of fixed-character probabilities, the result needs to be decoded. Here the length is set to 18 and the number of characters to 68, comprising provincial abbreviations, digits, letters, and a space symbol. During decoding, invalid results are filtered directly using license plate number rules (for example, the first character of a plate is a provincial abbreviation), and only the probabilities of valid plates are accumulated. Plate recognition and misjudgment filtering are thus combined, improving the probability of outputting the correct license plate.
More preferably, in the vehicle driving behavior detection method, step 275 includes:
step 2751: calculating the probability of the first character, storing five characters with the maximum probability as character strings and respectively recording the probabilities;
step 2752: calculating the character probability of the next bit, and taking five characters with the maximum probability as newly-added characters to be respectively added to each character string to form five new character strings;
step 2753: multiplying the probability of the original character string and the probability of the newly added character as the probability of the new character string;
step 2754: steps 2752-2753 are performed in a loop until each bit in the result vector is traversed;
step 2755: combining the same character strings and filtering invalid character strings;
step 2756: and taking the character string with the highest probability as the license plate recognition result to be output.
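The steps above describe a width-5 beam search over the 18×68 probability array. A minimal sketch, assuming `probs` is a list of rows of per-character probabilities and `alphabet` maps column indices to characters; the CTC-style collapsing of repeated characters is omitted for brevity, and `is_valid` is a hypothetical hook standing in for the license-plate rule filtering of step 2755.

```python
def decode_plate(probs, alphabet, beam=5, is_valid=lambda s: True):
    """Beam-search decoding of an LPRNet-style probability array.

    probs: one row per output position (18 in the application), each row
    holding the probability of every character (68 in the application).
    """
    def top_chars(row):
        # indices of the `beam` most probable characters in this row
        return sorted(range(len(row)), key=lambda c: -row[c])[:beam]

    # step 2751: seed the beams with the most probable first characters
    beams = {alphabet[c]: probs[0][c] for c in top_chars(probs[0])}
    for row in probs[1:]:
        # steps 2752-2753: extend each string with each candidate character,
        # multiplying the string's probability by the character's probability
        new = {}
        for s, p in beams.items():
            for c in top_chars(row):
                cand = s + alphabet[c]
                # merge duplicate strings (only arises once CTC collapsing
                # is added; harmless here)
                new[cand] = max(new.get(cand, 0.0), p * row[c])
        # keep only the `beam` most probable strings; step 2754 is this loop
        beams = dict(sorted(new.items(), key=lambda kv: -kv[1])[:beam])
    # steps 2755-2756: drop invalid strings, return the most probable one
    valid = {s: p for s, p in beams.items() if is_valid(s)}
    return max(valid, key=valid.get) if valid else None
```

With a real plate alphabet, `is_valid` would encode rules such as "the first character is a provincial abbreviation", so misrecognitions are filtered before the final argmax rather than after.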
More preferably, in the above vehicle driving behavior detection method, the step 25 includes:
step 251: marking two preset lines in the road image;
step 252: obtaining the time difference of the vehicle passing through the two preset lines;
step 253: and obtaining the moving speed of the vehicle based on the distance between the two preset lines and the time difference of the vehicle passing through the two preset lines.
In order to realize the vehicle driving behavior detection method, the application also discloses a vehicle driving behavior detection system, which adopts the following technical scheme:
a vehicle driving behavior detection system, characterized by comprising:
the camera device is used for acquiring a road image;
the end processor is used for reading the road image acquired by the camera device and detecting the vehicle behavior in the road image;
the 4G router is used for realizing signal transmission between the end processor and the background server;
and the background server is used for storing the detection data uploaded by the end processor.
Preferably, in the vehicle driving behavior detection system:
the end processor includes: the system comprises a vehicle detection module, a tracking detection module, a vehicle speed detection module, a lane change detection module, a vehicle lamp detection module and a license plate detection module;
the lane detection module is used for respectively acquiring lane lines in each frame of highway image;
the vehicle detection module is used for acquiring a vehicle detection frame for each vehicle from each frame of road image based on a YOLOv3 convolutional neural network;
the tracking detection module is used for matching the vehicle detection frame with the tracking frame based on Hungarian algorithm; tracking the motion state of the vehicle based on a Kalman filtering algorithm and acquiring the moving track of the vehicle;
the lane change detection module is used for calculating the position relation between the moving track of the vehicle and a lane line in real time and judging whether the vehicle changes lanes or not based on the calculation result;
the vehicle speed detection module is used for calculating the moving speed of the vehicle;
the car lamp detection module is used for detecting the car lamps of all the vehicles;
the license plate detection module is used for identifying license plate information of each vehicle.
More preferably, in the vehicle driving behavior detection system: the camera device is a rotatable camera.
Compared with the prior art, the technical scheme of the application distributes computing power across multiple end devices. This saves a large amount of network bandwidth and makes it convenient to add different detection functions in the future. Meanwhile, it meets the requirement that multiple end devices access the device management platform and send detection results, detection records, device states, and other information to the platform periodically or on request, reducing the platform integration workload.
Detailed Description
In order to more clearly illustrate the technical solutions of the present application, the following will be further described with reference to various embodiments.
As shown in fig. 1-2:
embodiment 1, a vehicle driving behavior detection system, comprising: the system comprises a camera device 1, an end processor 2, a 4G router 3 and a background server 4.
The camera device 1 is used for collecting road images; the end processor 2 is used for reading the road image acquired by the camera device 1 and detecting the vehicle behavior in the road image; the 4G router 3 is used for realizing signal transmission between the end processor 2 and the background server 4; and the background server 4 is used for storing the detection data uploaded by the end processor 2.
Wherein the end processor 2 comprises: a lane detection module, a vehicle detection module, a tracking detection module, a vehicle speed detection module, a lane change detection module, a vehicle lamp detection module, and a license plate detection module. The lane detection module is used for respectively acquiring the lane lines in each frame of road image; the vehicle detection module is used for acquiring a vehicle detection frame for each vehicle from each frame of road image based on a YOLOv3 convolutional neural network; the tracking detection module is used for matching the vehicle detection frame with the tracking frame based on the Hungarian algorithm, tracking the motion state of the vehicle based on a Kalman filtering algorithm, and acquiring the moving track of the vehicle; the lane change detection module is used for calculating the position relation between the moving track of the vehicle and the lane lines in real time and judging whether the vehicle changes lanes based on the calculation result; the vehicle speed detection module is used for calculating the moving speed of the vehicle; the vehicle lamp detection module is used for detecting the lamps of each vehicle; the license plate detection module is used for identifying the license plate information of each vehicle. The camera device 1 is a rotatable camera.
In practice, the working process is as follows:
step 1: the camera device 1 collects road images;
step 2: the end processor 2 is connected with the camera device 1 through the 4G router 3, reads the road image output by the camera device 1, detects vehicles in the road image and generates detection data; the step 2 specifically comprises:
step 21: respectively obtaining the lane lines in each frame of road image, wherein the step 21 specifically comprises the following steps:
step 211: marking interest areas in the road images and extracting interest area edges in the interest areas;
step 212: extracting left and right edges of the potential lane in the road image based on the gray values;
step 213: filtering invalid left edges and right edges based on preset lane width and lane line gray threshold values;
step 214: and combining the continuous lane lines based on DFS search.
Step 22: acquiring a vehicle detection frame of the vehicle from each frame of road image based on a YOLOV3 convolutional neural network;
step 23: matching the vehicle detection frame with the tracking frame based on a Hungarian algorithm; tracking the motion state of the vehicle based on a Kalman filtering algorithm and acquiring the moving track of the vehicle;
step 24: calculating the relative position relation between the moving track of the vehicle and each lane line in real time, determining the lane in which the vehicle is located, and judging whether the vehicle changes lanes; specifically, when the moving track of the vehicle extends from the current lane to an adjacent lane, it is determined that the vehicle is changing lanes at that time.
Step 25: the moving speed of the vehicle is calculated. Specifically, step 25 includes:
step 251: marking two preset lines in the road image, setting an area between the two lines as an area where the vehicle speed is expected to be measured, and measuring the distance between the two lines to be 14 meters;
step 252: obtaining the time difference of the vehicle passing through the two preset lines; calculating the time difference to be 2.5 seconds;
step 253: obtaining the average speed of the vehicle in the speed measuring area, namely 14 m / 2.5 s = 5.6 m/s ≈ 20.16 km/h, based on the distance between the two preset lines and the time difference of the vehicle passing through them;
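The arithmetic of steps 251-253 is a direct distance-over-time calculation; a minimal sketch using the embodiment's own figures (the function name is illustrative).

```python
def speed_kmh(distance_m, time_s):
    """Average speed over the measuring zone: v = d / Δt, converted
    from m/s to km/h (1 m/s = 3.6 km/h)."""
    return distance_m / time_s * 3.6

# the embodiment's figures: lines 14 m apart, crossed 2.5 s apart
print(speed_kmh(14, 2.5))  # ≈ 20.16 km/h
```

Because this yields only the average speed between the two lines, the lines should be placed close enough together that the average is representative of the instantaneous speed.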
step 26: detecting the lamps of each vehicle, and the step 26 specifically includes:
step 261: selecting a vehicle detection frame corresponding to a vehicle to be detected;
step 262: intercepting the lower half area of the vehicle detection frame obtained in the step 261 as an interest area;
step 263: and inputting the intercepted image into the MobileNet-SSD model network, judging whether each detected lamp is the left lamp or the right lamp according to the position of the lamp frame within the vehicle frame, and judging whether the lamp is bright according to the output lamp class.
Step 27: and identifying license plate information of each vehicle. Specifically, step 27 includes:
step 271: selecting a vehicle detection frame corresponding to a vehicle to be detected as an interest area;
step 272: detecting the position and the type of the license plate through a convolutional neural network;
step 273: performing affine transformation on the license plate through a spatial transformer network to form a frontal license plate image;
step 274: identifying the frontal license plate image with LPRNet and outputting an 18×68 result vector;
step 275: and decoding and filtering the result vector.
Step 2751: calculating the probability of the first character, storing five characters with the maximum probability as character strings and respectively recording the probabilities;
step 2752: calculating the character probability of the next bit, and taking five characters with the maximum probability as newly-added characters to be respectively added to each character string to form five new character strings;
step 2753: multiplying the probability of the original character string and the probability of the newly added character as the probability of the new character string;
step 2754: steps 2752-2753 are performed in a loop until each bit in the result vector is traversed;
step 2755: combining the same character strings and filtering invalid character strings;
step 2756: and taking the character string with the highest probability as the license plate recognition result to be output.
And step 3: and uploading the detection data to a background server 4 for storage so as to be queried and analyzed.
The above description is only for the specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.