CN116152758A - Intelligent real-time accident detection and vehicle tracking method - Google Patents

Intelligent real-time accident detection and vehicle tracking method

Info

Publication number
CN116152758A
CN116152758A · Application CN202310449242.8A
Authority
CN
China
Prior art keywords
vehicle
image
vehicles
license plate
tracking method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310449242.8A
Other languages
Chinese (zh)
Inventor
刘寒松
王永
王国强
刘瑞
谭连盛
董玉超
李贤超
焦安健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd filed Critical Sonli Holdings Group Co Ltd
Priority to CN202310449242.8A priority Critical patent/CN116152758A/en
Publication of CN116152758A publication Critical patent/CN116152758A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of traffic and relates to an intelligent real-time accident detection and vehicle tracking method. The method optimizes the information in parking lot surveillance video and detects vehicles, ensuring that a vehicle can be rapidly identified when it appears in the surveillance video; it then tracks each identified vehicle, following every detected object through subsequent video frames; finally, whether an accident has occurred is judged by calculating the angle, trajectory and acceleration changes of the vehicles, which improves the judgment accuracy. License plate recognition is also added, combining Mask R-CNN and CRNN, which improves the accuracy of license plate recognition, reduces cases of misrecognized license plates, and makes it convenient to track the accident vehicle.

Description

Intelligent real-time accident detection and vehicle tracking method
Technical Field
The invention belongs to the technical field of traffic, and relates to an intelligent real-time accident detection and vehicle tracking method, in particular to an intelligent real-time accident detection and vehicle tracking method based on Mask R-CNN and centroid tracking.
Background
Accidents in a parking lot affect the parking experience of other car owners and reflect negatively on the parking lot; discovering and handling accidents in time helps to improve the safety and management efficiency of the parking lot and provides better service for car owners. Therefore, developing parking lot accident detection technology has important practical significance: it can help reduce the occurrence and impact of accidents and improve the management efficiency and service quality of the parking lot.
At present, abnormal-behavior detection for vehicles is well developed. Such systems generally comprise preprocessing, behavior modeling and anomaly detection modules: various dynamic or static information is first extracted from the video and feature representations of it are learned; the behavior modeling module learns behaviors and forms rules by processing the extracted feature representations; and the anomaly detection module compares the detected behavior of a target with the previously learned rules to obtain an anomaly index for the current behavior, which serves as the index for judging whether the vehicle is abnormal. However, existing methods for detecting abnormal vehicles target vehicles on the road: they do not specially handle the dark illumination of the parking lot environment and are therefore unsuitable for parking lot applications. Moreover, a road-monitoring checkpoint camera usually reaches 8 million pixels, while the resolution of underground parking lot monitoring equipment is far lower, so the extracted video data is noisy, which adversely affects vehicle detection, tracking and accident detection. Therefore, there is a need for an intelligent real-time accident detection and vehicle tracking method suitable for a parking lot, which can quickly lock onto the scratched vehicle through surveillance video analysis after a vehicle scratch accident occurs, save pictures and short videos of the moment the scratch occurs, and provide them to operators; by consulting the related pictures and short videos, the causes of the accident can be analyzed and responsibility assigned to the vehicle owner, reducing disputes and providing better service for car owners.
Disclosure of Invention
In order to achieve the above purpose, the invention provides an intelligent real-time accident detection and vehicle tracking method. First, the information in the parking lot surveillance video is optimized, and the vehicles that appear are detected, so that a vehicle can be rapidly identified when it appears in the surveillance video; then the identified vehicles are tracked, other non-key objects in the video are removed once a vehicle is detected, and each detected object is tracked in subsequent video frames; finally, the bounding box, trajectory, speed and acceleration of each vehicle are calculated, and when the calculation indicates a collision, the accident vehicle is locked.
In order to achieve the above object, the specific process of the present invention comprises the steps of:
S1, collecting monitoring video information of a parking lot, and performing video information optimization on the monitoring video information to obtain an image with optimized visual information;
S2, carrying out vehicle detection on the visual-information-optimized image: the image is input into a Mask R-CNN model to obtain the bounding boxes of the vehicle and the license plate, and then an end-to-end character recognition network CRNN is used to recognize the license plate number in the license plate bounding box;
S3, determining the centroid of each object as the intersection of the lines through the midpoints of the sides of the detected vehicle bounding box, and realizing vehicle tracking by determining the overlap of vehicle bounding boxes, the vehicle trajectories and their intersection angle, and the vehicle speed and acceleration changes;
S4, when the bounding boxes of two vehicles overlap, inputting the acceleration change, trajectory change and angle change into a trained two-class classifier to obtain the probability of an accident; if the probability is greater than 0.5, an accident is considered to have occurred, and the license plate number recognized in advance and the video from the appearance of the vehicle onward are saved for subsequent responsibility determination; if the probability is less than 0.5, no accident is considered to have occurred.
As a further technical solution of the present invention, the process of optimizing video information in step S1 is as follows:
s11, adopting a mean value filtering method, taking the neighbors around the image pixels in the video information into consideration, and replacing the original pixels with the average value of the surrounding pixels to obtain a denoised image;
and S12, carrying out histogram equalization on the denoised image, enhancing the contrast of the picture information, and obtaining the image with optimized visual information.
As a further technical scheme of the invention, the denoised image g(x, y) obtained in step S11 is:

g(x, y) = (1/k) Σ_{(m,n)∈S_xy} f(m, n),

where f(x, y) represents the initial image, S_xy represents the predetermined neighborhood containing (x, y), and the gray value of each pixel of g(x, y) is determined by the gray-level average of the k pixels in S_xy.
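The mean-filtering step described in S11 can be illustrated with a short routine. This is a pure-Python sketch, not the patent's implementation; in particular, averaging only the in-bounds neighbors at image borders is an assumption.

```python
def mean_filter(img, ksize=3):
    """Denoise a grayscale image (list of rows of ints) with a k x k mean filter.

    Each output pixel is the gray-level average of the neighborhood containing
    (x, y); border pixels average only the neighbors that exist (assumption).
    """
    rows, cols = len(img), len(img[0])
    r = ksize // 2
    out = [[0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            total, count = 0, 0
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < rows and 0 <= ny < cols:
                        total += img[nx][ny]
                        count += 1
            out[x][y] = total // count  # integer gray level
    return out
```

A single bright noise pixel is spread over its neighborhood, e.g. `mean_filter([[0,0,0],[0,9,0],[0,0,0]])` flattens the spike at the center.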
As a further technical scheme of the present invention, the process of step S12 is as follows: the probability of occurrence of the k-th gray level r_k in the denoised image is

P(r_k) = n_k / n, k = 0, 1, ..., L-1,

where n is the total number of pixels, n_k is the number of pixels with gray level r_k in the image, and L is the number of gray levels of the image; the gray-level transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} P(r_j) = Σ_{j=0}^{k} n_j / n;

modifying the gray levels of the denoised image g(x, y) by r_k and s_k yields the visual-information-optimized image g'(x, y).
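The histogram-equalization step of S12 can be sketched as follows. Scaling the cumulative distribution back to the [0, L-1] gray range and rounding half up are conventional choices, not details specified by the patent.

```python
def equalize(img, L=256):
    """Histogram-equalize a grayscale image given as a list of rows of ints
    in [0, L-1]. Implements s_k = T(r_k) = sum_{j<=k} n_j / n, scaled back
    to the [0, L-1] gray range."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * L
    for p in flat:
        hist[p] += 1
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc / n)  # cumulative probability T(r_k)
    # Map each pixel through the transformation, rounding half up.
    return [[int(cdf[p] * (L - 1) + 0.5) for p in row] for row in img]
```

On a two-level image such as `[[0, 0], [255, 255]]`, the dark half is pulled toward mid-gray, stretching the contrast.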
As a further technical scheme of the invention, the specific process by which the Mask R-CNN model in step S2 detects the bounding boxes of the vehicle and the license plate is as follows: features of the visual-information-optimized image are extracted through the backbone network of the Mask R-CNN model to obtain a feature map; a part of the feature map is input into a region proposal network to generate a number of regions of interest, and the regions of interest together with the feature map are input into a region of interest alignment module (RoI Align module). The RoI Align module cancels the quantization operation and obtains image values at pixel points whose coordinates are floating-point numbers using bilinear interpolation, so that the whole feature aggregation process is converted into a continuous operation. The feature map of each region obtained through the RoI Align module is input into a fully connected network and then enters two fully connected branches: one predicts the object class, the object classes representing vehicle and license plate respectively, and the other predicts the object bounding box to obtain the box coordinates, the object bounding boxes being the vehicle and license plate bounding boxes, thereby realizing target detection. While the object class and bounding box are predicted, the other part of the feature map extracted through the backbone network passes in turn through the RoI Align module and two convolutional networks to generate a pixel-level mask; the mask performs binary classification on each pixel of the feature map generated by the RoI Align module to distinguish whether the pixel is a target pixel, thereby realizing accurate segmentation of the vehicle and the license plate.
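The bilinear interpolation at the heart of RoI Align, reading an image value at floating-point coordinates instead of quantizing them, can be illustrated minimally. Clamping to the last row/column at image borders is an assumption of this sketch.

```python
def bilinear(img, x, y):
    """Sample an image (list of rows) at floating-point coordinates (x, y),
    as RoI Align does when it cancels quantization: blend the four
    surrounding pixel values by their fractional distances."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img) - 1)      # clamp at borders (assumption)
    y1 = min(y0 + 1, len(img[0]) - 1)
    fx, fy = x - x0, y - y0
    top = img[x0][y0] * (1 - fy) + img[x0][y1] * fy
    bot = img[x1][y0] * (1 - fy) + img[x1][y1] * fy
    return top * (1 - fx) + bot * fx
```

For `img = [[0, 10], [20, 30]]`, sampling at the exact center (0.5, 0.5) blends all four pixels equally and returns 15.0.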
As a further technical scheme of the invention, the backbone network of the Mask R-CNN model adopts the Faster R-CNN backbone network.
As a further technical scheme of the invention, the end-to-end character recognition network CRNN in step S2 includes three parts: a convolutional layer, a recurrent layer and a transcription layer. The convolutional layer adopts a CNN network for extracting the basic features of the license plate image; the recurrent layer adopts a bidirectional LSTM network for further extracting the features contained in the text sequence of the license plate image's basic features, preparing them for the transcription layer; the transcription layer receives the information of the recurrent layer and converts the features contained in the character sequence into characters, obtaining the license plate recognition result.
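CRNN transcription layers are conventionally realized with CTC decoding; the patent does not name CTC explicitly, so this is an illustrative assumption. Greedy CTC decoding collapses the per-frame best labels into an output sequence by merging repeats and dropping blanks:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best-label sequence into output label ids,
    as a CTC-style transcription layer does: merge consecutive repeats,
    then drop the blank symbol."""
    out = []
    prev = None
    for lbl in frame_labels:
        if lbl != prev and lbl != blank:
            out.append(lbl)
        prev = lbl
    return out
```

Note that a blank between two identical labels (here label 3) keeps them as two separate output characters, which is how repeated plate characters survive decoding.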
As a further technical solution of the present invention, the process of determining in step S3 that vehicle bounding boxes overlap is: the bounding boxes B_1 and B_2 of two vehicles, with given centroid coordinates (x_1, y_1) and (x_2, y_2) and with widths and heights (w_1, h_1) and (w_2, h_2) of the vehicle bounding boxes respectively, will overlap when they satisfy

|x_1 - x_2| ≤ (w_1 + w_2)/2 and |y_1 - y_2| ≤ (h_1 + h_2)/2.
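The centroid-distance overlap test can be sketched directly; this is an illustrative check on centroid-parameterized boxes, with non-strict inequalities assumed.

```python
def boxes_overlap(c1, s1, c2, s2):
    """Overlap test for two centroid-parameterized boxes.

    c = (x, y) centroid, s = (w, h) width/height. The boxes overlap iff
    the centroid distance on each axis is at most half the summed extents.
    """
    (x1, y1), (w1, h1) = c1, s1
    (x2, y2), (w2, h2) = c2, s2
    return abs(x1 - x2) <= (w1 + w2) / 2 and abs(y1 - y2) <= (h1 + h2) / 2
```

Two 4x4 boxes whose centers are 3 pixels apart overlap; at 10 pixels apart they do not.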
As a further technical scheme of the invention, step S3 determines the vehicle trajectory by acquiring the difference between the centroids of each tracked vehicle every 10 consecutive video frames and calculating a 2D vector d representing the direction of the vehicle's motion, where the magnitude of the direction vector d is ‖d‖ = sqrt(d_x² + d_y²); dividing the vector by this scalar normalizes it to a unit direction vector. For vehicles with overlapping bounding boxes, the two direction vectors d_1 and d_2 are used to calculate the angle of the trajectories between them,

θ = arccos(d_1 · d_2),

for detecting whether they collide.
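The trajectory-angle computation, normalizing the two motion vectors and taking the arccosine of their dot product, can be sketched as follows; the clamp on the dot product guards against floating-point values just outside [-1, 1].

```python
import math

def trajectory_angle(d1, d2):
    """Angle (radians) between two motion direction vectors:
    theta = arccos(d1_hat . d2_hat), with both vectors normalized first."""
    def unit(v):
        m = math.hypot(v[0], v[1])
        return (v[0] / m, v[1] / m)
    a, b = unit(d1), unit(d2)
    dot = max(-1.0, min(1.0, a[0] * b[0] + a[1] * b[1]))  # numeric safety
    return math.acos(dot)
```

Perpendicular motions, e.g. `trajectory_angle((1, 0), (0, 2))`, give pi/2.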
As a further aspect of the present invention, step S3 determines the vehicle speed v by estimating it from the video frame rate FPS and the centroid displacement S between every 10 frames:

v = S × FPS / 10;

the effect of distance from the monitoring camera is then eliminated by normalizing with a scale factor c = H/h, where h is the height of the vehicle bounding box in the image and H is the height of the video frame:

v' = v × c;

from this the acceleration of the tracked vehicle is calculated as

a = (v'_t − v'_{t−1}) / Δt.
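A sketch of the speed and acceleration estimates. The direction of the scale factor (video height over bounding-box height, so that a nearer vehicle with a taller box is scaled down less) is an assumption, since the patent's definition of c is ambiguous.

```python
def normalized_speed(displacement_px, fps, box_h, video_h, frame_gap=10):
    """Speed from centroid displacement over `frame_gap` frames, rescaled by
    video_h / box_h to reduce the effect of distance from the camera
    (scale-factor direction is an assumption of this sketch)."""
    v = displacement_px * fps / frame_gap   # pixels per second
    return v * (video_h / box_h) if box_h else 0.0

def acceleration(v_prev, v_curr, dt):
    """Finite-difference acceleration between consecutive speed estimates."""
    return (v_curr - v_prev) / dt
```

For example, a 20-pixel displacement over 10 frames at 30 FPS with a 50-pixel box in a 100-pixel-tall video gives a normalized speed of 120 pixels/s.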
Compared with the prior art, the invention has the following advantages:
(1) The method fully considers the actual application scene of the parking lot: the video information is optimized, and the video noise introduced by low illumination is removed through processing, so that subsequent models process the video more accurately;
(2) Whether an accident occurs is judged by tracking the vehicle from the three aspects of acceleration, angle and trajectory, improving the judgment accuracy; meanwhile, license plate recognition is added, combining Mask R-CNN and CRNN, which improves the accuracy of license plate recognition, reduces cases of misrecognized license plates, and makes it convenient to track the accident vehicle.
Drawings
Fig. 1 is a block diagram of the workflow of the present invention.
FIG. 2 is a block diagram of the Mask R-CNN model according to the present invention.
Fig. 3 is a block diagram of the end-to-end character recognition network CRNN according to the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings.
Examples:
as shown in fig. 1, the specific process for implementing intelligent real-time accident detection and vehicle tracking in this embodiment is as follows:
S1, collecting parking lot monitoring video information, and performing video information optimization on the collected parking lot monitoring video information to obtain an image with optimized visual information. This step performs video denoising and video enhancement aimed at the particular environmental characteristics of the parking lot: because of the low illumination of the parking lot environment and the limitations of the monitoring equipment, the collected video information usually suffers from low light and noise, so the video information is optimized first. A mean filtering method is adopted to denoise the image: the neighbors around each pixel are considered and the original pixel is replaced by the average value of the surrounding pixels. Taking a 3x3 neighborhood as an example, assume the current pixel to be processed is f(m, n); then, filtering and denoising a digital image f(x, y) of size MxN yields an image g(x, y) in which the gray value of each pixel is determined by the gray-level average of the k pixels in the predetermined neighborhood S_xy containing (x, y):

g(x, y) = (1/k) Σ_{(m,n)∈S_xy} f(m, n),

thereby obtaining the denoised image g(x, y);
to reduce the effect of low illumination and highlight vehicle information in the video, histogram equalization is then performed on g(x, y) to enhance the contrast of the picture information. The basic idea of the histogram equalization algorithm is to transform an image with a known gray-level probability distribution into a new image with a uniform probability distribution. Since the gray map is discrete, the probability of occurrence of the k-th gray level r_k in the denoised image is:

P(r_k) = n_k / n, k = 0, 1, ..., L-1,

where n is the total number of pixels, n_k is the number of pixels with gray level r_k in the image, and L is the number of gray levels of the image; the gray-level transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} P(r_j) = Σ_{j=0}^{k} n_j / n;

modifying the gray levels of the denoised image g(x, y) by r_k and s_k yields the visual-information-optimized image g'(x, y);
S2, vehicle detection is carried out according to the image optimized by visual information, the image is input into a Mask R-CNN model shown in FIG. 2 to obtain a license plate boundary frame, and then the end-to-end character recognition network CRNN shown in FIG. 3 is utilized to recognize license plate numbers in the license plate boundary frame; the Mask R-CNN model detects the boundary boxes of the vehicle and the license plate, and the specific process is as follows: extracting features of the image subjected to video information optimization through a main network (fast R-CNN network) of a Mask R-CNN model to obtain a feature map, inputting a part of the feature map into a region extraction network to generate a plurality of regions of interest, inputting the regions of interest together with the feature map into a region of interest alignment module (Rol alignment module), canceling quantization operation by the region of interest alignment module (Rol alignment module), and obtaining image values on pixels with coordinates of floating points by using a bilinear interpolation method, thereby converting the whole feature aggregation process into a continuous operation; obtaining a feature map of each region through a region of interest alignment module (Rol alignment module), inputting the feature map of each region into a fully-connected network, and then entering two fully-connected network branches, wherein one of the feature maps is used for predicting object types, the object types respectively represent vehicles and license plates, the other feature map is used for predicting object boundary frames to obtain object frame coordinates, and the object boundary frames are the vehicle and license plate boundary frames, so that target detection is realized; while predicting the object type and the object boundary frame, the other part of feature images extracted through the main network sequentially pass through a region of interest alignment module (Rol 
alignment module) and two convolution networks to generate a mask of a pixel level, and the mask carries out two classification on each pixel on the feature images generated by the region of interest alignment module to distinguish whether the pixels are target pixels or not, so that accurate segmentation of vehicles and license plates is realized; the end-to-end character recognition network CRNN comprises a convolution layer, a circulation layer and a transcription layer, wherein the convolution layer adopts a CNN network and is used for extracting the basic characteristics of license plate images; the circulating layer adopts a layer bidirectional LSTM network and is used for continuously extracting the characteristics contained in the license plate image basic characteristic text sequence and preparing for the transcription layer; the transcription layer receives the information of the circulation layer, and converts the characteristics contained in the character sequence into characters to obtain a license plate recognition result;
S3, after the vehicles are accurately detected through the Mask R-CNN and CRNN networks, the vehicle tracking step begins: the bounding box of each vehicle located by Mask R-CNN is kept, and the objects are tracked in subsequent video frames. The centroid of each object is determined as the intersection of the lines through the midpoints of the sides of the detected vehicle bounding box, and the vehicle centroids are monitored by determining the overlap of vehicle bounding boxes, the vehicle trajectories and their intersection angle, and the vehicle speed and acceleration changes, thereby realizing vehicle tracking. The process of determining that vehicle bounding boxes overlap is as follows: the bounding boxes B_1 and B_2 of two vehicles, with given centroid coordinates (x_1, y_1) and (x_2, y_2) and with widths and heights (w_1, h_1) and (w_2, h_2) of the vehicle bounding boxes respectively, are considered to overlap when they satisfy

|x_1 - x_2| ≤ (w_1 + w_2)/2 and |y_1 - y_2| ≤ (h_1 + h_2)/2,

which checks whether the centers of the two bounding boxes are close enough that the two vehicles will intersect, thereby determining whether the two vehicles overlap. The trajectory of each vehicle is determined by taking the difference between the centroids of the tracked vehicle every 10 consecutive video frames, giving a 2D vector d that represents the direction of the vehicle's motion; the magnitude of d is ‖d‖ = sqrt(d_x² + d_y²), and dividing the vector by this scalar normalizes it to a unit direction vector. If the original magnitude for a tracked object is too small, the object is discarded, to avoid erroneously tracking static objects. After the direction vectors are obtained, for each pair of vehicles with overlapping bounding boxes, the two direction vectors d_1 and d_2 are used to calculate the angle of the trajectories between them,

θ = arccos(d_1 · d_2),

which is used to detect whether they collide. The vehicle speed v is estimated from the video frame rate FPS and the centroid displacement S between every 10 frames:

v = S × FPS / 10;

the effect of distance from the monitoring camera is then eliminated by normalizing with a scale factor c = H/h, where h is the height of the vehicle bounding box in the image and H is the height of the video frame:

v' = v × c;

from the normalized speeds, the acceleration of the tracked vehicle is calculated as

a = (v'_t − v'_{t−1}) / Δt;
S4, when boundary frames of the two vehicles overlap, the acceleration change is observed
Figure SMS_60
Track change->
Figure SMS_62
And angle change->
Figure SMS_64
Performing accident detection by training a two-classifier f to make +.>
Figure SMS_59
、/>
Figure SMS_63
And->
Figure SMS_65
As input of f, ++>
Figure SMS_66
Indicating the probability of an accident if +.>
Figure SMS_58
If the number is more than 0.5, the accident is considered to happen, and the number of the license plate which is recognized in advance and the video which starts from the occurrence of the vehicle are saved for subsequent responsibility fixing processing; if->
Figure SMS_61
If the number is less than 0.5, no accident is considered to occur.
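As an illustration of the decision step, a toy stand-in for the trained two-class classifier f over the three change features can be sketched with a logistic model. The logistic form, the weights and the bias are placeholders, not the patent's trained model; a real system would learn them from labeled accident and non-accident clips.

```python
import math

def accident_probability(d_accel, d_traj, d_angle, w=(0.8, 0.6, 0.9), b=-2.0):
    """Toy logistic two-class classifier f(da, dd, dtheta) -> probability.
    Weights w and bias b are illustrative placeholders."""
    z = w[0] * d_accel + w[1] * d_traj + w[2] * d_angle + b
    return 1.0 / (1.0 + math.exp(-z))

def is_accident(p, threshold=0.5):
    """Apply the patent's 0.5 decision threshold to the probability."""
    return p > threshold
```

Small feature changes fall below the 0.5 threshold, while large simultaneous changes in acceleration, trajectory and angle push the probability above it.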
Network structures, algorithms, and computing processes not described in detail herein are all general techniques in the art.
It should be noted that the purpose of the disclosed embodiments is to aid further understanding of the present invention, but those skilled in the art will appreciate that: various alternatives and modifications are possible without departing from the spirit and scope of the invention and the appended claims, and therefore the invention should not be limited to the embodiments disclosed, but rather the scope of the invention is defined by the appended claims.

Claims (10)

1. An intelligent real-time accident detection and vehicle tracking method, characterized by comprising the following steps:
S1, collecting monitoring video information of a parking lot, and performing video information optimization on the monitoring video information to obtain an image with optimized visual information;
S2, carrying out vehicle detection on the visual-information-optimized image: the image is input into a Mask R-CNN model to obtain the bounding boxes of the vehicle and the license plate, and then an end-to-end character recognition network CRNN is used to recognize the license plate number in the license plate bounding box;
S3, determining the centroid of each object as the intersection of the lines through the midpoints of the sides of the detected vehicle bounding box, and realizing vehicle tracking by determining the overlap of vehicle bounding boxes, the vehicle trajectories and their intersection angle, and the vehicle speed and acceleration changes;
S4, when the bounding boxes of two vehicles overlap, inputting the acceleration change, trajectory change and angle change into a trained two-class classifier to obtain the probability of an accident; if the probability is greater than 0.5, an accident is considered to have occurred, and the license plate number recognized in advance and the video from the appearance of the vehicle onward are saved for subsequent responsibility determination; if the probability is less than 0.5, no accident is considered to have occurred.
2. The intelligent real-time accident detection and vehicle tracking method according to claim 1, wherein the process of optimizing the video information in step S1 is as follows:
s11, adopting a mean value filtering method, taking the neighbors around the image pixels in the video information into consideration, and replacing the original pixels with the average value of the surrounding pixels to obtain a denoised image;
and S12, carrying out histogram equalization on the denoised image, enhancing the contrast of the picture information, and obtaining the image with optimized visual information.
3. The intelligent real-time accident detection and vehicle tracking method according to claim 2, wherein the denoised image g(x, y) obtained in step S11 is:

g(x, y) = (1/k) Σ_{(m,n)∈S_xy} f(m, n),

where f(x, y) represents the initial image, S_xy represents the predetermined neighborhood containing (x, y), and the gray value of each pixel of g(x, y) is determined by the gray-level average of the k pixels in S_xy.
4. The intelligent real-time accident detection and vehicle tracking method according to claim 3, wherein the process of step S12 is: the probability of occurrence of the k-th gray level r_k in the denoised image is

P(r_k) = n_k / n, k = 0, 1, ..., L-1,

where n is the total number of pixels, n_k is the number of pixels with gray level r_k in the image, and L is the number of gray levels of the image; the gray-level transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} P(r_j) = Σ_{j=0}^{k} n_j / n;

modifying the gray levels of the denoised image g(x, y) by r_k and s_k yields the visual-information-optimized image g'(x, y).
5. The intelligent real-time accident detection and vehicle tracking method according to claim 1, wherein the specific process by which the Mask R-CNN model in step S2 detects the bounding boxes of the vehicle and the license plate is as follows: features of the image optimized from the video information are extracted through the backbone network of the Mask R-CNN model to obtain a feature map; one part of the feature map is input into a region proposal network to generate a plurality of regions of interest, and the regions of interest together with the feature map are input into a region-of-interest alignment module, which cancels the quantization operation and obtains image values at pixels with floating-point coordinates by bilinear interpolation, so that the whole feature aggregation process becomes a continuous operation; the feature map of each region is obtained through the region-of-interest alignment module and input into a fully connected network, which then feeds two fully connected branches: one branch predicts the object classes, which represent vehicles and license plates respectively, and the other predicts the object bounding boxes to obtain the box coordinates of the vehicle and the license plate, thereby realizing target detection; while the object class and the object bounding box are being predicted, the other part of the feature map extracted by the backbone network passes sequentially through the region-of-interest alignment module and two convolutional networks to generate a pixel-level mask, which performs binary classification on each pixel of the feature map produced by the region-of-interest alignment module to distinguish whether the pixel is a target pixel, thereby realizing accurate segmentation of the vehicle and the license plate.
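As an illustrative sketch (not part of the claims), the bilinear interpolation step that region-of-interest alignment uses in place of coordinate quantization can be written as follows; the function name and the toy 2×2 feature map are invented for illustration:

```python
import numpy as np

def bilinear_sample(feature_map, y, x):
    """Sample a feature map at a floating-point coordinate (y, x)
    using bilinear interpolation, as RoI Align does instead of
    rounding coordinates to the nearest integer pixel."""
    h, w = feature_map.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    # Weighted average of the four surrounding integer-grid pixels.
    return (feature_map[y0, x0] * (1 - dy) * (1 - dx)
            + feature_map[y0, x1] * (1 - dy) * dx
            + feature_map[y1, x0] * dy * (1 - dx)
            + feature_map[y1, x1] * dy * dx)
```

Because the interpolated value varies continuously with (y, x), the feature aggregation becomes the continuous operation the claim describes.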
6. The intelligent real-time accident detection and vehicle tracking method according to claim 5, wherein the backbone network of the Mask R-CNN model uses the Fast R-CNN backbone network.
7. The intelligent real-time accident detection and vehicle tracking method according to claim 5, wherein the end-to-end character recognition network CRNN in step S2 comprises three parts: a convolutional layer, a recurrent layer, and a transcription layer; the convolutional layer adopts a CNN to extract the basic features of the license plate image; the recurrent layer adopts a bidirectional LSTM network to continuously extract the features contained in the text sequence of the basic license plate image features, preparing input for the transcription layer; the transcription layer receives the output of the recurrent layer and converts the features contained in the character sequence into characters, yielding the license plate recognition result.
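As an illustrative sketch (not part of the claims), the transcription layer of a CRNN is commonly realized as CTC-style greedy decoding: take the best class at each time step, collapse consecutive repeats, and drop the blank symbol. The function name, charset, and score values below are invented for illustration:

```python
def ctc_greedy_decode(logits, charset, blank=0):
    """Greedy CTC-style transcription of per-timestep class scores.
    `logits` is a T x C list; class 0 is the blank, classes 1..C-1
    map to `charset[0..C-2]`."""
    best = [max(range(len(step)), key=step.__getitem__) for step in logits]
    out, prev = [], None
    for idx in best:
        # Collapse consecutive repeats and skip the blank symbol.
        if idx != prev and idx != blank:
            out.append(charset[idx - 1])
        prev = idx
    return "".join(out)
```

For example, timestep predictions `A A blank B 8` decode to the string "AB8", matching how the recurrent layer's per-step features become a character sequence.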
8. The intelligent real-time accident detection and vehicle tracking method according to claim 1, wherein the process of determining that the vehicle bounding boxes overlap in step S3 is: if the bounding boxes $b_1$ and $b_2$ of two vehicles satisfy

$|x_1 - x_2| < \frac{w_1 + w_2}{2}$ and $|y_1 - y_2| < \frac{h_1 + h_2}{2}$,

where $b_1$ and $b_2$ are the bounding boxes of the two vehicles, $(x, y)$ are the centroid coordinates of a given vehicle, and $w$ and $h$ are respectively the width and height of the vehicle bounding box, then the bounding boxes of the two vehicles overlap.
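As an illustrative sketch (not part of the claims), the overlap test on centroid/size bounding boxes can be written as follows; the function name and box values are invented for illustration:

```python
def boxes_overlap(b1, b2):
    """Axis-aligned overlap test. Each box is (x, y, w, h),
    where (x, y) is the centroid and w, h are width and height."""
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    # Boxes overlap when the centroid gap is smaller than the
    # sum of the half-extents along both axes.
    return abs(x1 - x2) < (w1 + w2) / 2 and abs(y1 - y2) < (h1 + h2) / 2
```

Two 4×4 boxes whose centroids are 3 pixels apart overlap; move them 5 pixels apart with 2×2 boxes and they do not.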
9. The intelligent real-time accident detection and vehicle tracking method according to claim 6, wherein step S3 determines the vehicle trajectory by acquiring the difference between the centroids of each tracked vehicle every 10 consecutive video frames and calculating a 2D vector representing the direction of the vehicle's motion; the magnitude of the direction vector $\vec{v} = (v_x, v_y)$ is $|\vec{v}| = \sqrt{v_x^2 + v_y^2}$, and the vector is divided by this scalar to perform the normalization operation and obtain a unit direction vector; for vehicles whose bounding boxes overlap, the two direction vectors $\vec{v}_1$ and $\vec{v}_2$ are used to calculate the angle of the trajectories between them, $\theta = \arccos(\vec{v}_1 \cdot \vec{v}_2)$, which is used to detect whether they collide with each other.
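As an illustrative sketch (not part of the claims), the normalized direction vectors and the trajectory angle between two tracked vehicles can be computed as follows; the function names are invented for illustration:

```python
import math

def direction(c_prev, c_curr):
    """Unit direction vector between two centroid positions
    taken 10 frames apart."""
    vx, vy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    mag = math.hypot(vx, vy)  # vector magnitude sqrt(vx^2 + vy^2)
    return (vx / mag, vy / mag)

def trajectory_angle(d1, d2):
    """Angle (radians) between two unit direction vectors via arccos
    of their dot product, clamped to avoid domain errors."""
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    return math.acos(max(-1.0, min(1.0, dot)))
```

Two vehicles moving along perpendicular paths yield an angle of pi/2, a strong indicator when their bounding boxes also overlap.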
10. The intelligent real-time accident detection and vehicle tracking method according to claim 7, wherein step S3 determines the vehicle speed $v$ by estimating it from the number of video frames per second, FPS: $v = \frac{S \cdot \text{FPS}}{10}$, where $S$ is the distance traveled between every 10 frames; the effect of distance from the monitoring camera is then eliminated by normalizing this distance: $S = c \cdot \frac{H}{h}$, where $c$ is the distance traveled in the image, $h$ is the height of the vehicle bounding box, and $H$ is the height of the video frame; from this, the acceleration of the tracked vehicle is calculated as $a = \frac{(v_t - v_{t-1}) \cdot \text{FPS}}{10}$.
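As an illustrative sketch (not part of the claims), the speed and acceleration estimates described above can be computed as follows; the function names and the specific normalization $S = c \cdot H / h$ are assumptions reconstructed from the claim's definitions of $c$, $h$, $H$, and FPS:

```python
def estimated_speed(c, h, H, fps, frame_gap=10):
    """Speed from pixel displacement `c` over `frame_gap` frames.
    The displacement is scaled by H / h (video height over bounding-box
    height) to reduce the effect of distance from the camera.
    Units are image-relative, not metric."""
    s = c * (H / h)            # normalized displacement S
    return s * fps / frame_gap # distance per second

def estimated_acceleration(v_prev, v_curr, fps, frame_gap=10):
    """Acceleration as the change in speed divided by the elapsed
    time between two consecutive speed estimates."""
    return (v_curr - v_prev) * fps / frame_gap
```

For a vehicle whose bounding box is half the frame height, a 10-pixel displacement over 10 frames at 30 FPS gives a normalized speed of 60 units per second.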
CN202310449242.8A 2023-04-25 2023-04-25 Intelligent real-time accident detection and vehicle tracking method Pending CN116152758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310449242.8A CN116152758A (en) 2023-04-25 2023-04-25 Intelligent real-time accident detection and vehicle tracking method

Publications (1)

Publication Number Publication Date
CN116152758A true CN116152758A (en) 2023-05-23

Family

ID=86354792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310449242.8A Pending CN116152758A (en) 2023-04-25 2023-04-25 Intelligent real-time accident detection and vehicle tracking method

Country Status (1)

Country Link
CN (1) CN116152758A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503097A (en) * 2019-08-27 2019-11-26 腾讯科技(深圳)有限公司 Training method, device and the storage medium of image processing model
CN111178197A (en) * 2019-12-19 2020-05-19 华南农业大学 Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN111862119A (en) * 2020-07-21 2020-10-30 武汉科技大学 Semantic information extraction method based on Mask-RCNN
CN112200131A (en) * 2020-10-28 2021-01-08 鹏城实验室 Vision-based vehicle collision detection method, intelligent terminal and storage medium
CN114399882A (en) * 2022-01-20 2022-04-26 红骐科技(杭州)有限公司 Fire source detection, identification and early warning method for fire-fighting robot
CN115797929A (en) * 2022-09-21 2023-03-14 华南农业大学 Small farmland image segmentation method and device based on double-attention machine system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Shijie et al.: "License plate recognition in large scenes based on convolutional neural networks", Computer Engineering and Design, vol. 41, no. 9, pages 2594 - 128 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292338A (en) * 2023-11-27 2023-12-26 山东远东保险公估有限公司 Vehicle accident identification and analysis method based on video stream analysis
CN117292338B (en) * 2023-11-27 2024-02-13 山东远东保险公估有限公司 Vehicle accident identification and analysis method based on video stream analysis

Similar Documents

Publication Publication Date Title
CN112528878B (en) Method and device for detecting lane line, terminal equipment and readable storage medium
CN111582083B (en) Lane line detection method based on vanishing point estimation and semantic segmentation
CN106683119B (en) Moving vehicle detection method based on aerial video image
CN106845478A (en) The secondary licence plate recognition method and device of a kind of character confidence level
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
CN101739551A (en) Method and system for identifying moving objects
CN107480646B (en) Binocular vision-based vehicle-mounted video abnormal motion detection method
CN111008600A (en) Lane line detection method
Saran et al. Traffic video surveillance: Vehicle detection and classification
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN118096815B (en) Road abnormal event detection system based on machine vision
CN114387591A (en) License plate recognition method, system, equipment and storage medium
CN107122732B (en) High-robustness rapid license plate positioning method in monitoring scene
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN110705553B (en) Scratch detection method suitable for vehicle distant view image
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN118155149B (en) Intelligent monitoring system for smart city roads
KR102489884B1 (en) Image processing apparatus for improving license plate recognition rate and image processing method using the same
CN114937248A (en) Vehicle tracking method and device for cross-camera, electronic equipment and storage medium
CN106778675B (en) A kind of recognition methods of target in video image object and device
CN117423040A (en) Visual garbage identification method for unmanned garbage sweeper based on improved YOLOv8
CN111242051A (en) Vehicle identification optimization method and device and storage medium
CN116071713A (en) Zebra crossing determination method, device, electronic equipment and medium
CN116206297A (en) Video stream real-time license plate recognition system and method based on cascade neural network
Burlacu et al. Stereo vision based environment analysis and perception for autonomous driving applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230523