CN116153092B - Tunnel traffic safety monitoring method and system


Info

Publication number
CN116153092B
CN116153092B
Authority
CN
China
Prior art keywords
tunnel
vehicle
license plate
entrance
exit
Prior art date
Legal status
Active
Application number
CN202310167730.XA
Other languages
Chinese (zh)
Other versions
CN116153092A
Inventor
纪明新
顾辉
李鑫
Current Assignee
Beijing Zhongke Shentong Technology Co ltd
Original Assignee
Beijing Zhongke Shentong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhongke Shentong Technology Co ltd filed Critical Beijing Zhongke Shentong Technology Co ltd
Publication of CN116153092A
Application granted
Publication of CN116153092B

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for intelligent tunnel traffic safety monitoring, wherein the method comprises the following steps: step S1: respectively acquiring video data in the corresponding monitoring areas by using a plurality of video acquisition devices installed at the entrance, the interior and the exit of the tunnel; step S2: performing target detection on the video frame pictures; step S3: detecting and identifying the license plates of vehicles at the entrance and the exit of the tunnel; step S4: performing cross-camera multi-target tracking on the obtained vehicle license plate information; step S5: performing vehicle and license plate re-identification on the obtained tunnel entrance, interior and exit vehicle images; step S6: detecting brightness abnormality in the video frame pictures of all cameras in the tunnel; step S7: detecting smoke and fire in the video frame pictures of the cameras in the tunnel; step S8: performing multi-target tracking within each single camera on the vehicles detected in the tunnel to obtain continuous tracks of all vehicles; step S9: performing traffic flow detection, illegal lane change detection, traffic jam detection and illegal parking detection on the continuous tracks of all vehicles obtained in each camera.

Description

Tunnel traffic safety monitoring method and system
Technical Field
The invention belongs to the field of tunnel monitoring, and particularly relates to a method and a system for intelligent tunnel traffic safety monitoring based on cross-camera collaborative multi-technology fusion.
Background
China is a mountainous country, and tunnels improve road networks and save land, so they have gradually become an important part of highway construction. Tunnels are one of the important components of road traffic, and their safety is directly related to the safety of people's lives and property. With the rapid economic development of China, expressway and expressway-tunnel construction projects have increased, and the problem of safe tunnel operation has become more and more prominent. Besides the civil construction quality of tunnels, tunnel monitoring, control and management have become an important subject for the safe and normal operation of expressway tunnels.
The interior of a tunnel is narrow and relatively closed, the driving environment is quite complex, and a large number of facilities are installed inside, so operation management of the tunnel is particularly important. In expressway tunnels in particular, vehicle speeds are high, traffic volume is large, illumination is poor, noise is loud and air quality is poor, so traffic accidents occur easily. Meanwhile, accidents in a tunnel are difficult to handle and traffic interruptions last a long time; when road-condition information is not communicated in time, vehicles entering the tunnel at high speed can cause congestion or secondary accidents. Timely and efficient intelligent monitoring and alarming of tunnels is therefore very necessary.
At present, many monitoring cameras are installed at the entrances, exits and interiors of high-speed tunnels throughout the country. However, because the number of monitoring cameras is huge, checking and monitoring tunnel conditions manually in real time would require enormous labor cost and is quite unrealistic. As a result, monitoring cameras installed at great expense end up serving only as a basis for retrospective review of events, and provide no effective advance early warning or timely alarm while an event is occurring.
When the video pictures shot by tunnel monitoring cameras are analyzed, the pictures shot by the cameras at the tunnel entrance (before entering the tunnel) and the tunnel exit (outside the tunnel exit) are clear, so the license plate number can be read by the human eye; inside the tunnel, however, because the space is long and narrow, the light is dim, and the shooting and network transmission quality are poor, the video quality of the in-tunnel cameras is poor, license plates cannot be recognized, and only a blurred outline of the vehicle can be captured. This presents a significant challenge for intelligent tunnel monitoring.
Disclosure of Invention
The invention aims to provide a method and a system for intelligent tunnel traffic safety monitoring, realized on the basis of cross-camera collaborative multi-technology fusion. The technical scheme of the invention is as follows: an intelligent tunnel traffic safety monitoring method comprises the following steps:
step S1: the method comprises the steps of respectively acquiring video data in corresponding monitoring areas by using a plurality of video acquisition devices installed at an entrance, an interior and an exit of a tunnel, acquiring video streams in real time and decoding to obtain video frame pictures; the video acquisition equipment is a video camera;
step S2: performing target detection on video frame pictures acquired at the entrance, the interior and the exit of the tunnel to obtain a first target type and a first target coordinate;
step S3: detecting and identifying license plates of vehicles detected by cameras at the entrance and the exit of the tunnel to obtain license plate information of the vehicles at the entrance and the exit;
step S4: performing cross-camera multi-target tracking on the vehicle license plate information obtained at the entrance and the exit of the tunnel to obtain the average speed and the passing time of all vehicles;
step S5: performing vehicle license plate re-identification on the tunnel entrance, interior and exit vehicle images obtained in step S2, based on difference measures of the vehicle shooting positions and shooting quality at the tunnel entrance and exit and inside the tunnel;
step S6: detecting brightness abnormality of video frame pictures of all cameras in the tunnel, and detecting whether the lamplight environment in the tunnel is abnormal or not;
step S7: performing fire and smoke detection on the video frame pictures of the plurality of cameras in the tunnel to detect whether abnormal fire or smoke events occur in the tunnel, obtaining a second target type, a second target coordinate and a second target score;
step S8: carrying out multi-target tracking in a single camera on vehicles detected by each camera in the tunnel to obtain continuous tracks of all vehicles;
step S9: and (3) carrying out traffic flow detection, illegal lane change detection, traffic jam detection and illegal parking detection on all the continuous tracks of the vehicles obtained in each camera.
Another embodiment of the present invention provides an intelligent tunnel traffic safety monitoring system, including:
and the video acquisition module is used for: the method comprises the steps of respectively acquiring video data in corresponding monitoring areas by using a plurality of video acquisition devices installed at an entrance, an interior and an exit of a tunnel, acquiring video streams in real time and decoding to obtain video frame pictures; the video acquisition equipment is a video camera;
the target detection module: performing target detection on video frame pictures acquired at the entrance, the interior and the exit of the tunnel to obtain a first target type and a first target coordinate;
license plate information recognition module: detecting and identifying license plates of vehicles detected by cameras at the entrance and the exit of the tunnel to obtain license plate information of the vehicles at the entrance and the exit;
the vehicle speed detection module: the vehicle license plate information obtained at the entrance and the exit of the tunnel is subjected to multi-target tracking of the cross-camera to obtain the average speed and the passing time information of all vehicles;
license plate re-identification module: vehicle license plate re-identification is carried out on vehicle images of the entrance, the interior and the exit of the tunnel based on the difference measurement values of the shooting positions and the shooting quality of the vehicles in the entrance and the exit of the tunnel and the tunnel;
a brightness abnormality detection module: detecting brightness abnormality of video frame pictures of all cameras in the tunnel, and detecting whether the lamplight environment in the tunnel is abnormal or not;
a smoke and fire detection module: detecting whether the video frame pictures of a plurality of cameras in the tunnel catch fire and smoke, and detecting whether abnormal events of catching fire and smoke occur in the tunnel or not to obtain a second target type, a second target coordinate and a second target score;
the track acquisition module is used for: carrying out multi-target tracking in a single camera on vehicles detected by each camera in the tunnel to obtain continuous tracks of all vehicles;
the illegal detection module: and (3) carrying out traffic flow detection, illegal lane change detection, traffic jam detection and illegal parking detection on all the continuous tracks of the vehicles obtained in each camera.
Drawings
Fig. 1 is an overall flow chart of the method of the invention;
FIG. 2 is a flow chart of target detection;
FIG. 3 is a vehicle re-identification flow chart;
FIG. 4 is a flow chart of ignition smoke detection;
FIG. 5 is a flow chart of traffic detection;
fig. 6 is a flow chart of illegal lane change detection.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention; all other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
The embodiment of the invention discloses a method and a system for monitoring intelligent tunnel traffic safety, which can realize intelligent tunnel traffic safety monitoring based on cross-camera collaborative multi-technology fusion, and refer to fig. 1, and comprises the following steps:
step S1: the method comprises the steps of respectively acquiring video data in corresponding monitoring areas by using a plurality of video acquisition devices installed at an entrance, an interior and an exit of a tunnel, acquiring video streams in real time and decoding to obtain video frame pictures; the video acquisition equipment is a video camera;
in this embodiment, a first video acquisition device and a third video acquisition device are respectively arranged at the entrance and the exit of the tunnel, and a plurality of second acquisition devices are arranged inside the tunnel, distributed at intervals along the tunnel direction; the plurality of second acquisition devices are divided into several groups:
a first group of in-tunnel cameras is arranged near the tunnel entrance, and a second group of in-tunnel cameras is arranged near the tunnel exit; "near" here refers to a position 30-50 meters from the entrance or exit;
further, between the first group of in-tunnel cameras and the second group of in-tunnel cameras, additional camera groups are arranged as third camera groups at a set spacing, for example every 40 meters; for example, if the distance between the first and second groups of in-tunnel cameras is 120 meters, there are 2 third camera groups, located at 40 meters and 80 meters respectively;
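The spacing rule above can be expressed in a few lines. The following Python sketch is illustrative only; the 40-meter spacing is just the example value from the description, not a fixed requirement of the patent.

```python
# Minimal sketch (not from the patent text): positions of the intermediate (third)
# camera groups between the first and second in-tunnel camera groups.
def intermediate_group_positions(span_m: float, spacing_m: float = 40.0) -> list[float]:
    """Positions, in meters from the first in-tunnel group, of the intermediate groups."""
    positions = []
    d = spacing_m
    while d < span_m:            # strictly inside the span between the two outer groups
        positions.append(d)
        d += spacing_m
    return positions

# Example from the description: a 120 m span yields third groups at 40 m and 80 m.
print(intermediate_group_positions(120.0))   # [40.0, 80.0]
```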
step S2: performing target detection on video frame pictures acquired at the entrance, the interior and the exit of the tunnel to obtain a first target type and a first target coordinate;
the first target type is, for example, a vehicle, a pedestrian, a pet, a stone, or the like; the first target coordinate is the pixel coordinate of the centroid or center of gravity of the target in the image, including the x and y directions;
in this embodiment, the target detection is implemented by using a target detection model, which specifically includes training the target detection model, and performing target detection by using the trained target detection model. The training of the target detection model specifically comprises the following steps:
s21, collecting video frame images shot by cameras at the entrance, the tunnel and the exit;
step S21, screening operation is carried out on the video frame images collected in the tunnel to form a screened image data set;
because the tunnel scene differs greatly from ordinary public surveillance video, and the video pictures at the tunnel entrance and exit differ greatly from those inside the tunnel, the invention trains the target detection model specifically for the tunnel scene. Frame images collected from the tunnel monitoring video are selected to form an image data set; "selection" means choosing frame images that are clearly exposed, uniformly bright and contain a complete vehicle;
step S22, data enhancement is carried out on the target image in the image data set, and the target image is marked by adopting marking software;
step S23, dividing the image data set into a training set, a verification set and a test set;
step S24, a YOLOv5 deep convolutional neural network model is established; training the deep convolutional neural network model by using a training set and an Adam optimizer, performing super-parameter debugging by using the performance of a verification set, and storing trained weight parameters; performing test set testing, and performing boundary frame overlap ratio calculation and target category judgment to obtain a detection result;
the target detection by using the trained target detection model comprises the following steps:
step S25, the image frames of the real-time monitoring video are transmitted into a target detection and identification engine, and the target detection and identification engine outputs the detection frames, categories and confidence degrees of all objects contained in the image from the corresponding cameras.
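As an illustration of the inference stage in step S25, the following Python sketch assumes the publicly documented ultralytics/yolov5 torch.hub interface and pretrained weights as a stand-in for the patent's trained model; the patent's own engine, weights and threshold values are not specified, so everything below is illustrative.

```python
# Hedged sketch of step S25: run a YOLOv5-style detector on one decoded video frame
# and report (class, confidence, first target coordinate) for each detection.
import torch
import cv2

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained weights as a stand-in
model.conf = 0.4                                          # confidence threshold (assumed value)

frame = cv2.imread('tunnel_frame.jpg')                    # placeholder path for one video frame
results = model(frame[..., ::-1])                         # BGR -> RGB
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2                 # box center as the first target coordinate
    print(model.names[int(cls)], round(conf, 2), (int(cx), int(cy)))
```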
Step S3: detecting and identifying license plates of vehicles detected by cameras at the entrance and the exit of the tunnel to obtain license plate information of the vehicles at the entrance and the exit;
step S31, a Haar cascade classifier is established to perform license plate coarse positioning, and a picture with the license plate position at the central position is obtained;
s31, cutting upper and lower boundaries of a license plate region in the obtained picture, and removing an interference region during correction;
step S32, counting the angles of the fields in each direction to find two directions with the most dense textures in the image, and carrying out license plate correction;
step S33, establishing a CNN regression model to predict four vertex positions of the license plate, and cutting out a license plate region to obtain the accurate position of the license plate;
step S34, establishing a HyperLPR deep convolutional neural network end-to-end license plate recognition model;
and step S35, obtaining the target detection result in the step S2, screening the identification object according to the confidence, transmitting the vehicle image into a license plate detection and license plate identification engine, and directly obtaining the license plate number.
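The following sketch illustrates the overall shape of steps S31-S35. The cascade file and the recognize_plate() function are placeholders for the patent's trained Haar cascade and HyperLPR-style end-to-end model, neither of which is provided in the text.

```python
# Rough sketch of the plate pipeline: Haar-cascade coarse localization (step S31)
# followed by an end-to-end recognizer (steps S34-S35). Placeholders are marked.
import cv2

plate_cascade = cv2.CascadeClassifier('plate_cascade.xml')   # hypothetical trained cascade file

def recognize_plate(plate_bgr):
    """Placeholder for the HyperLPR-style end-to-end recognition model."""
    raise NotImplementedError

def detect_and_read_plate(vehicle_bgr):
    gray = cv2.cvtColor(vehicle_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in plate_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
        plate = vehicle_bgr[y:y + h, x:x + w]                 # coarse plate region
        return recognize_plate(plate)                          # plate number string
    return None
```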
Step S4: the vehicle license plate information obtained at the entrance and the exit of the tunnel is subjected to multi-target tracking of the cross-camera to obtain the average speed and the passing time information of all vehicles; the method specifically comprises the following steps:
step S41, obtaining the license plate recognition result at the tunnel entrance from step S3, and recording the license plate number Lg and the time t1 at which it is captured;
step S42, obtaining a license plate recognition result of the tunnel exit in step S3, and recording the time t2 when the same license plate number Lg is shot;
step S43, obtaining the passing time of a certain vehicle in the tunnel according to Δt = t2 - t1;
step S44, obtaining the average speed of the vehicle through the tunnel according to the tunnel length S and the passing time Δt: v = S/Δt;
step S44, uploading the passing time and the average speed to a monitoring platform by means of a communication module, judging whether the vehicle corresponding to the license plate number is overspeed or not according to a preset passing time threshold value and an overspeed low-speed driving threshold value, triggering an alarm if overspeed, facilitating subsequent audit and realizing tracking identification across cameras.
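A minimal sketch of the passing-time and average-speed calculation in steps S41-S44 follows; the speed thresholds and the example numbers are assumed placeholders, since the patent only refers to preset thresholds.

```python
# Sketch of steps S41-S44: passing time, average speed and the overspeed / low-speed check.
def check_vehicle(entry_time_s: float, exit_time_s: float, tunnel_length_m: float,
                  max_speed_mps: float, min_speed_mps: float):
    dt = exit_time_s - entry_time_s          # passing time, delta_t = t2 - t1
    v = tunnel_length_m / dt                 # average speed, v = S / delta_t
    if v > max_speed_mps:
        return dt, v, 'overspeed alarm'
    if v < min_speed_mps:
        return dt, v, 'low-speed alarm'
    return dt, v, 'normal'

# e.g. a 1200 m tunnel traversed in 40 s gives 30 m/s (108 km/h).
print(check_vehicle(0.0, 40.0, 1200.0, max_speed_mps=22.2, min_speed_mps=8.3))
```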
Step S5: performing vehicle license plate re-identification on the vehicle images of the tunnel entrance, the interior and the exit obtained in the step 2 based on the difference measurement values of the shooting positions and the shooting quality of the vehicles in the tunnel entrance and the exit;
because the tunnel entrance and the tunnel have huge differences in shooting angles and shooting quality, although the recognition model is trained by adopting the data set, the recognition of the vehicle license plate is still inaccurate due to insufficient light in the tunnel, so that the accuracy of a series of follow-up monitoring results is affected, and in order to accurately track the vehicles of the tunnel entrance and the tunnel inside-span cameras, the vehicle license plate re-recognition is carried out on the tunnel entrance, inside-tunnel and outside-tunnel vehicle images obtained in the step S2 based on the difference measurement values of the shooting positions and the shooting quality of the vehicles in the tunnel entrance and the tunnel exit.
The vehicle re-identification deep convolutional neural network model is firstly established and trained, and the vehicle re-identification deep convolutional neural network model is specifically as follows:
step S51, a vehicle re-identification deep convolutional neural network model (CNN) is established, wherein the vehicle re-identification deep convolutional neural network model (CNN) is divided into three sub-models which respectively correspond to an inlet re-identification detection sub-model, an internal re-identification detection sub-model and an outlet re-identification detection sub-model;
according to one embodiment, the vehicle re-recognition deep convolutional neural network model (CNN) is divided into three sub-models, namely an entrance re-recognition detection sub-model, an interior re-recognition detection sub-model and an exit re-recognition detection sub-model, so that the deep convolutional neural network model corresponding to the cameras at different positions is built, for example, in the embodiment, the entrance re-recognition detection sub-model and the exit re-recognition detection sub-model are respectively applied to the cameras at the entrance of the tunnel and the cameras at the exit of the tunnel, so that the deep convolutional neural network model (CNN) has more pertinence and more accurate recognition;
further, the internal re-identification detection sub-model includes: an internal near entrance re-identification detection sub-model, an internal intermediate re-identification detection sub-model, and an internal near exit re-identification detection sub-model;
for example, aiming at a first group of cameras in a tunnel, a second group of cameras in the tunnel and a third camera group, wherein the first group of cameras in the tunnel are close to a tunnel entrance, the second group of cameras in the tunnel are close to a tunnel exit, the third group of cameras in the tunnel are positioned in the middle section, a corresponding inner near entrance re-recognition detection sub-model, an inner middle section re-recognition detection sub-model and an inner near exit re-recognition detection sub-model are established, and training and verification are carried out by using corresponding data during subsequent training to obtain a corresponding final model;
step S52, collecting vehicle images of the same vehicle under the conditions of illumination states, appearance forms and shielding under different camera angles to obtain a cross-scene vehicle image dataset;
wherein, for the images of the tunnel entrance, images are collected, for example, from 9 o'clock in the daytime to 5 o'clock at night, labeled with time information and weather conditions, and images from other time periods are collected as well;
because the image quality is influenced by time and illumination, when the image marked with time information is trained, a neural network model can be trained according to the data image sets of different time sections, so that when the image is identified, model parameters of a corresponding time section are selected for identification according to the current time, and the accuracy is improved;
in addition, vehicle images are respectively acquired aiming at rainy days, sunny days and cloudy days, so that image sets under different weather conditions are obtained, corresponding models after training of the image sets are obtained, and the corresponding models can be applied under different current weather conditions;
the method comprises the steps of respectively collecting video images of a first group of cameras in a tunnel, a second group of cameras in the tunnel and a third group of cameras aiming at different moments of sunny days, cloudy days, rainy days and the like;
step S53, classifying the cross-scene vehicle image dataset based on the position difference of the camera at the entrance, the inside and the exit to obtain an entrance image dataset, an inside image dataset and an exit image dataset; based on the inlet image dataset, the internal image dataset and the outlet image dataset respectively train the inlet, internal and outlet re-identification detection sub-models;
in this embodiment, images captured by cameras at different positions are correspondingly used for training, verifying and testing corresponding neural network models;
for example, because of the influence of tunnel orientation, the illumination intensity and illumination direction near the tunnel exit and entrance differ, and the illumination characteristics differ at different times of day. For a north-south tunnel in a northern region, the light near the southern tunnel portal is better in the morning, while after 5 p.m., because the sun's relative motion shifts toward the northwest, the light near the northern tunnel portal is better, so the quality of the images shot by cameras at different positions and at different times differs. Therefore, the images collected by the first group of in-tunnel cameras are used to train the inner near-entrance re-identification detection sub-model, the images collected by the second group of in-tunnel cameras are used to train the inner near-exit re-identification detection sub-model, and the video images of the third camera groups are used to train the inner middle-segment re-identification detection sub-model.
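The selection of a position-, time- and weather-specific sub-model can be illustrated as a simple lookup. The key layout and fallback order below are assumptions for illustration; the patent does not specify how the trained sub-models are stored or selected.

```python
# Illustrative dispatch only: pick the re-identification sub-model trained for this camera
# position, time segment and weather. Keys such as 'inner_near_entrance' are assumed names;
# the fallback entries ('any') are assumed to exist in the model registry.
def select_reid_model(models: dict, position: str, hour: int, weather: str):
    segment = 'day' if 7 <= hour < 19 else 'night'            # assumed time segmentation
    key = (position, segment, weather)                        # e.g. ('inner_near_entrance', 'day', 'sunny')
    return (models.get(key)
            or models.get((position, segment, 'any'))
            or models[(position, 'any', 'any')])
```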
And S54, training adopts a multi-loss function staged combined training strategy, and stores trained weight parameters to obtain a trained entrance, interior and exit re-identification detection model.
Further, the step S5 further includes,
step S55, vehicle license plate re-recognition is carried out by utilizing the trained vehicle re-recognition deep convolutional neural network model: the method specifically comprises the following steps:
step S551, extracting vehicle image characteristics and license plate image characteristics from the vehicle images in the videos acquired at the entrance in real time, and identifying by adopting an entrance re-identification detection sub-model;
step S552, extracting vehicle image characteristics and license plate image characteristics from the vehicle images in the video acquired at the exit in real time, and identifying by adopting an exit re-identification detection sub-model;
step S553, extracting vehicle image characteristics and license plate image characteristics from the vehicle images in the video acquired in real time at the inside of the tunnel, and identifying by adopting an internal re-identification detection sub-model;
meanwhile, connecting the vehicle image features and license plate image features together to form a combined feature vector;
in the present embodiment, the extracted vehicle image features are, for example: x1, X2, X3 … … Xn, license plate image features are for example Y1, Y2, Y3 … … Yn, and the connected feature vectors are: { X1, X2, X3 … … Xn, Y1, Y2, Y3 … … Yn };
in the daytime, when the external light is good and the light inside the tunnel is weak, the shooting quality of the cameras inside the tunnel is generally worse than that of the cameras outside the tunnel;
in the daytime, when the external light is poor and the light inside the tunnel is relatively uniform, the shooting quality of the cameras inside the tunnel is better than that of the cameras outside the tunnel;
at night, when it is dark and the external light is poor while the light inside the tunnel is relatively uniform, the shooting quality of the cameras inside the tunnel is better than that of the cameras outside the tunnel;
because the vehicle target is larger than the license plate target, recognition of license plate features is less accurate than recognition of vehicle features under poor light. Therefore, weighting factors α and β are applied to the concatenated feature vector, giving different weights to the vehicle features and the license plate features to offset the influence of weather and light and make the result more accurate:
{αX1, αX2, αX3 …… αXn, βY1, βY2, βY3 …… βYn};
wherein β = 1 - α.
and 554, inputting the combined feature vector into a vehicle feature query library for retrieval and identification, calculating Euclidean distance with the combined feature vector of the image in the vehicle feature query library for similarity measurement, and locking the retrieval and identification result meeting the requirement based on the similarity measurement result.
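A NumPy sketch of the weighted joint feature vector and the Euclidean-distance retrieval of steps S553-S554 follows; the value of α and the feature dimensions are assumptions made only for illustration.

```python
# Sketch: weighted concatenation of vehicle and plate features and nearest-neighbour
# retrieval by Euclidean distance against a query library.
import numpy as np

def joint_feature(vehicle_feat: np.ndarray, plate_feat: np.ndarray, alpha: float) -> np.ndarray:
    beta = 1.0 - alpha                        # beta = 1 - alpha, as in the description
    return np.concatenate([alpha * vehicle_feat, beta * plate_feat])

def retrieve(query: np.ndarray, gallery: np.ndarray, top_k: int = 5):
    """gallery: one joint feature vector per row; returns indices and distances of the nearest entries."""
    dists = np.linalg.norm(gallery - query, axis=1)           # Euclidean distance
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

# In poor light (inside the tunnel) a larger alpha down-weights the less reliable plate features.
q = joint_feature(np.random.rand(256), np.random.rand(64), alpha=0.7)
g = np.random.rand(1000, 320)
print(retrieve(q, g))
```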
The overall recognition flow chart is shown in fig. 3. The vehicle track image streams at the tunnel entrance, inside the tunnel and at the tunnel exit are obtained sequentially in time order and uploaded to the tunnel safety monitoring platform, providing support for the subsequent processes.
Step 6: and detecting brightness abnormality of video frame pictures of all cameras in the tunnel, and detecting whether the lamplight environment in the tunnel is abnormal or not.
Step S6: detecting brightness abnormality of video frame pictures of all cameras in the tunnel, and detecting whether the lamplight environment in the tunnel is abnormal or not;
specifically, the step S6 includes,
step S61, transmitting the real-time monitoring video image frames in the tunnel into a brightness abnormality detection engine; converting the color image into a gray scale image;
step S62, calculating the average gray value G of the whole image or of the ROI area, where the gray value is the brightness value of the image; thresholds A and B are defined such that when G ∈ [0, A] the image is considered too dark, and when G ∈ [B, 255] the image is considered too bright; the obtained average gray value G is uploaded to the monitoring platform, and an abnormal-brightness alarm is issued according to the set dark and bright thresholds.
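A minimal OpenCV sketch of steps S61-S62 follows; the threshold values A and B are placeholders, since the patent leaves them to configuration.

```python
# Sketch of steps S61-S62: convert to grayscale, average the whole frame or an ROI,
# and classify the lighting condition against the dark / bright thresholds.
import cv2

def brightness_status(frame_bgr, a: int = 40, b: int = 200, roi=None):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if roi is not None:                       # roi = (x, y, w, h)
        x, y, w, h = roi
        gray = gray[y:y + h, x:x + w]
    g = float(gray.mean())                    # average gray value G
    if g <= a:
        return g, 'too dark'
    if g >= b:
        return g, 'too bright'
    return g, 'normal'
```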
Step S7: performing fire and smoke detection on the video frame pictures of the plurality of cameras in the tunnel to detect whether abnormal fire or smoke events occur in the tunnel, and obtaining a second target type (smoke or fire), a second target coordinate and a second target score;
specifically, the step S7 further includes,
step S71, processing target images collected from various smoke and fire videos and sorting them into corresponding smoke and fire data sets;
step S72, screening and filtering the target images in the smoke and fire data set, adding non-smoke-and-fire negative samples, and marking smoke and fire with annotation software;
step S73, establishing a fire and smoke deep convolutional neural network model;
step S74, dividing the smoke and fire data set into a training set, a verification set and a test set;
step S75, training the deep convolutional neural network by using the training set and an SGD optimizer, performing hyper-parameter tuning based on the performance on the verification set, and saving the trained weight parameters;
step S76, testing with the test set, and performing bounding-box overlap ratio calculation and smoke/fire category judgment to obtain the detection result;
step S77, transmitting image frames of the real-time monitoring video into a fire and smoke detection engine, which judges whether smoke or fire features exist in the image; if so, the frame is uploaded to the monitoring platform for a fire and smoke alarm. The overall detection flow chart is shown in fig. 4.
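Both the target-detection model (step S24) and the smoke and fire model (step S76) are evaluated with a bounding-box overlap ratio (IoU). The patent does not give an implementation; a standard one is sketched below.

```python
# Standard intersection-over-union between two axis-aligned boxes (x1, y1, x2, y2).
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 0.142857...
```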
Step S8: carrying out multi-target tracking in a single camera on vehicles detected by each camera in the tunnel to obtain continuous tracks of all vehicles; specifically, the step S8 further includes,
step S81, obtaining the target detection result of step S2, and screening the identification object according to the confidence level;
step S82, for each detected object, calibrating the area by using a rectangular frame, and extracting the characteristics of the center coordinates, the size and the like of each area;
step S83, then, establishing a linked list for each moving object, and storing the extracted features;
step S84, track prediction is carried out by using a Kalman filtering time update equation;
step S85, performing feature weighted matching on the predicted track and the detection in the current frame by using a Hungary algorithm;
step S86, updating the track by using a Kalman filtering measurement updating equation if the matching is successful, and considering that the track is lost if the matching is failed;
in step S87, finally, the association step assigns a digital ID to each object.
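A simplified sketch of the association logic in steps S84-S87 follows. It uses a constant-velocity prediction of the track centers as a stand-in for the Kalman time-update equation and the Hungarian algorithm for matching; a full Kalman filter with the measurement-update step would replace the naive prediction used here.

```python
# Sketch of steps S84-S87: predict track centers, match them to current detections
# with the Hungarian algorithm, and report matched / lost / new targets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, max_dist=80.0):
    """tracks: list of dicts {'id', 'pos', 'vel'} with NumPy (x, y) arrays;
    detections: (N, 2) array of detected box centers in the current frame."""
    if not tracks or len(detections) == 0:
        return [], list(range(len(tracks))), list(range(len(detections)))
    pred = np.array([t['pos'] + t['vel'] for t in tracks])    # constant-velocity prediction step
    cost = np.linalg.norm(pred[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)                   # Hungarian algorithm (step S85)
    matches, matched_t, matched_d = [], set(), set()
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_dist:                             # distance gate: else treat as unmatched
            matches.append((tracks[r]['id'], c))
            matched_t.add(r)
            matched_d.add(c)
    lost = [i for i in range(len(tracks)) if i not in matched_t]       # track considered lost (step S86)
    new = [j for j in range(len(detections)) if j not in matched_d]    # gets a new digital ID (step S87)
    return matches, lost, new
```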
Step S9: and (3) carrying out traffic flow detection, illegal lane change detection, traffic jam detection and illegal parking detection on all the continuous tracks of the vehicles obtained in each camera. The method comprises the following steps:
the vehicle flow detection is realized by counting vehicles, and specifically comprises the following steps: firstly, acquiring a unique target identifier and a corresponding center point obtained by multi-target tracking in a single camera in the step S8; setting a virtual counting line and setting a counter to 0; when the target just appears, judging that the target center point is at one side of the virtual counting line; the target can continuously obtain a new position point in the moving process, and the target is repeatedly judged to be at one side of the straight line; when the front position point and the rear position point are on different sides of the straight line, the object is illustrated to cross the straight line, and the counter is increased by one at the moment; the counted objects set a flag field and no longer participate in the counting at a later time. The overall detection flow chart is shown in FIG. 5, where "+.! = "means not equal.
Illegal lane change detection specifically comprises: detecting lane lines, detecting the lane lines by adopting Hough transformation, and determining the position coordinates of the lane lines; the method comprises the steps of obtaining a unique target identifier and a corresponding center point through multi-target tracking in a single camera in the step S8; when the target just appears, judging that the center point of the target is at one side of the lane line; the target can continuously obtain a new position point in the moving process, and the target is repeatedly judged to be on one side of the lane line; when the front position point and the rear position point are on different sides of the straight line, the target is indicated to cross the lane line, namely, lane change behavior occurs, and the lane change behavior is uploaded to a monitoring platform to perform illegal lane change alarm; the algorithm comprises the following specific steps:
step S91: setting a lane line Forbidden to be changed as Forbidden;
step S92: calculating the distance d between a vehicle tracking track center point Pos (x, y) and a corresponding lane line;
d=Pos(x,y)-Forbidden(x,y)
step S93: judging whether the sign of the distance d between the vehicle track center point and the lane line has changed, so as to judge whether a lane change has occurred, namely
when the sign of the distance between the vehicle and the lane line suddenly changes, Ch_Event is set to 1 to indicate that a lane change has occurred; otherwise Ch_Event is set to 0 to indicate that no lane change has occurred, where old position is the position value at the previous moment and current position is the position value at the current moment. The overall detection flow chart is shown in fig. 6.
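A minimal sketch of steps S91-S93 follows. The patent's d = Pos(x, y) - Forbidden(x, y) is expressed here as a signed point-to-line distance, and Ch_Event is set to 1 when its sign flips between the old and the current position.

```python
# Sketch of the lane-change test: the sign of the signed distance to the forbidden lane line flips.
def signed_distance(p, a, b):
    """Signed distance of point p from the lane line through a and b (the sign encodes the side)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return ((p[0] - a[0]) * dy - (p[1] - a[1]) * dx) / (dx * dx + dy * dy) ** 0.5

def lane_change_event(old_pos, current_pos, lane_a, lane_b) -> int:
    d_old = signed_distance(old_pos, lane_a, lane_b)
    d_new = signed_distance(current_pos, lane_a, lane_b)
    return 1 if d_old * d_new < 0 else 0      # Ch_Event: 1 = lane change occurred, 0 = no lane change
```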
Traffic jam detection specifically includes: first, obtaining the unique target identifiers and the corresponding sets of motion-track coordinate points from the single-camera multi-target tracking in step S8; obtaining the pixel-level speed of each target from its track point coordinates; obtaining the traffic flow parameter from the traffic flow detection; if the traffic flow parameter is greater than a certain value and the speed of each vehicle over a number of consecutive frames is not greater than a certain value, a traffic jam event has occurred and a traffic jam alarm is issued.
Illegal parking detection specifically includes: first, obtaining the unique target identifiers and the corresponding sets of motion-track coordinate points from the single-camera multi-target tracking in step S8; obtaining the pixel-level speed of each target from its track point coordinates; if the speed over a number of consecutive frames is not greater than a certain value and traffic congestion has been ruled out, an illegal vehicle parking event has occurred and an illegal parking alarm is issued.
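The congestion and illegal-parking logic above can be sketched as follows; all threshold values are placeholders, since the patent only speaks of "a certain value".

```python
# Sketch of congestion / illegal-parking detection from per-target tracks and the flow count.
def pixel_speeds(track, window=10):
    """Per-frame pixel-level speeds over the last `window` steps of a track [(x, y), ...]."""
    pts = track[-(window + 1):]
    return [((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5 for a, b in zip(pts, pts[1:])]

def detect_events(tracks, flow_count, flow_threshold=20, stop_speed=1.0, window=10):
    """tracks: dict of target id -> list of center points over consecutive frames."""
    stopped = {tid for tid, tr in tracks.items()
               if len(tr) > window and max(pixel_speeds(tr, window)) <= stop_speed}
    events = []
    if tracks and flow_count > flow_threshold and stopped == set(tracks):
        events.append('traffic jam alarm')
    elif stopped:                              # congestion ruled out: stopped vehicles are violators
        events += [('illegal parking alarm', tid) for tid in sorted(stopped)]
    return events
```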
According to the invention, without adding any hardware cost at the front end of the tunnel, the existing tunnel monitoring cameras are used to the maximum extent, and through cross-camera cooperation and multi-technology fusion the vehicle type, license plate number, license plate color and speed of vehicles passing through the tunnel can be identified in time; the full-process picture information of a vehicle entering the tunnel, traveling inside it and leaving it is identified; the traffic flow inside the tunnel is counted, and traffic conditions such as congestion in the tunnel are identified. Further, a monitoring system is provided that can promptly detect a series of safety problems in the tunnel: illegal behaviors such as parking in the tunnel, overspeed driving and illegal lane changes; abnormal conditions such as unknown obstacles or pedestrians walking in the tunnel; and abnormalities such as insufficient light, fire and smoke in the tunnel, which are identified intelligently. In addition, once a potential safety hazard occurs, the system can automatically and immediately raise an alarm so that staff can intervene manually in time and accidents are avoided. With this system, timely advance early warning and timely alarming of tunnel safety incidents can be achieved, greatly improving the emergency response speed of tunnel managers to a series of tunnel safety events.
According to another aspect of the present invention, there is also provided an intelligent tunnel traffic safety monitoring system, including:
and the video acquisition module is used for: the method comprises the steps of respectively acquiring video data in corresponding monitoring areas by using a plurality of video acquisition devices installed at an entrance, an interior and an exit of a tunnel, acquiring video streams in real time and decoding to obtain video frame pictures; the video acquisition equipment is a video camera;
the target detection module: performing target detection on video frame pictures acquired at the entrance, the interior and the exit of the tunnel to obtain a first target type and a first target coordinate;
license plate information recognition module: detecting and identifying license plates of vehicles detected by cameras at the entrance and the exit of the tunnel to obtain license plate information of the vehicles at the entrance and the exit;
the vehicle speed detection module: the vehicle license plate information obtained at the entrance and the exit of the tunnel is subjected to multi-target tracking of the cross-camera to obtain the average speed and the passing time information of all vehicles;
license plate re-identification module: vehicle license plate re-identification is carried out on vehicle images of the entrance, the interior and the exit of the tunnel based on the difference measurement values of the shooting positions and the shooting quality of the vehicles in the entrance and the exit of the tunnel and the tunnel;
a brightness abnormality detection module: detecting brightness abnormality of video frame pictures of all cameras in the tunnel, and detecting whether the lamplight environment in the tunnel is abnormal or not;
a smoke and fire detection module: detecting whether the video frame pictures of a plurality of cameras in the tunnel catch fire and smoke, and detecting whether abnormal events of catching fire and smoke occur in the tunnel or not to obtain a second target type, a second target coordinate and a second target score;
the track acquisition module is used for: carrying out multi-target tracking in a single camera on vehicles detected by each camera in the tunnel to obtain continuous tracks of all vehicles;
the illegal detection module: and (3) carrying out traffic flow detection, illegal lane change detection, traffic jam detection and illegal parking detection on all the continuous tracks of the vehicles obtained in each camera.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the present invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, various changes are possible as long as they remain within the spirit and scope of the invention as defined and determined by the appended claims, and all such changes fall within the scope of protection of the invention.

Claims (9)

1. The intelligent tunnel traffic safety monitoring method is characterized by comprising the following steps of:
step S1: the method comprises the steps of respectively acquiring video data in corresponding monitoring areas by using a plurality of video acquisition devices installed at an entrance, an interior and an exit of a tunnel, acquiring video streams in real time and decoding to obtain video frame pictures; the video acquisition equipment is a video camera;
step S2: performing target detection on video frame pictures acquired at the entrance, the interior and the exit of the tunnel to obtain a first target type and a first target coordinate;
step S3: detecting and identifying license plates of vehicles detected by cameras at the entrance and the exit of the tunnel to obtain license plate information of the vehicles at the entrance and the exit;
step S4: the vehicle license plate information obtained at the entrance and the exit of the tunnel is subjected to multi-target tracking of the cross-camera to obtain the average speed and the passing time information of all vehicles;
step S5: performing vehicle license plate re-identification on the vehicle images of the tunnel entrance, the interior and the exit obtained in the step 2 based on the difference measurement values of the shooting positions and the shooting quality of the vehicles in the tunnel entrance and the exit; the step S5 further includes,
step S55, vehicle license plate re-recognition is carried out by utilizing the trained vehicle re-recognition deep convolutional neural network model: the method specifically comprises the following steps:
s551, extracting vehicle image characteristics and license plate image characteristics from vehicle images in videos acquired at the entrance in real time, and identifying by adopting an entrance re-identification detection sub-model;
s552, extracting vehicle image characteristics and license plate image characteristics from vehicle images in videos acquired at the exit in real time, and identifying by adopting an exit re-identification detection sub-model;
s553, extracting vehicle image features and license plate image features from the vehicle images in the videos acquired in real time at the inside of the tunnel, and identifying by adopting an internal re-identification detection sub-model;
meanwhile, connecting the vehicle image features and license plate image features together to form a combined feature vector;
step 554, inputting the combined feature vector into a vehicle feature query library for retrieval and identification, calculating the Euclidean distance with the combined feature vectors of the images in the vehicle feature query library for similarity measurement, and locking the retrieval and identification result meeting the requirement based on the similarity measurement result;
step S6: detecting brightness abnormality of video frame pictures of all cameras in the tunnel, and detecting whether the lamplight environment in the tunnel is abnormal or not;
step S7: detecting whether the video frame pictures of a plurality of cameras in the tunnel catch fire and smoke, and detecting whether abnormal events of catching fire and smoke occur in the tunnel or not to obtain a second target type, a second target coordinate and a second target score;
step S8: carrying out multi-target tracking in a single camera on vehicles detected by each camera in the tunnel to obtain continuous tracks of all vehicles;
step S9: and (3) carrying out traffic flow detection, illegal lane change detection, traffic jam detection and illegal parking detection on all the continuous tracks of the vehicles obtained in each camera.
2. The method for intelligent tunnel traffic safety monitoring according to claim 1, wherein the step S2 is characterized in that the target detection is performed on video frame pictures collected at the entrance, the interior and the exit of the tunnel to obtain a first target type and a first target coordinate, and specifically comprises the training of a target detection model, and the target detection is performed by using the trained target detection model, and the training of the target detection model specifically comprises:
s21, collecting video frame images shot by cameras at the entrance, the tunnel and the exit;
step S21, screening operation is carried out on the video frame images collected in the tunnel to form a screened image data set;
step S22, data enhancement is carried out on the target image in the image data set, and the target image is marked by adopting marking software;
step S23, dividing the image data set into a training set, a verification set and a test set;
step S24, a YOLOv5 deep convolutional neural network model is established; training the deep convolutional neural network model by using a training set and an Adam optimizer, performing super-parameter debugging by using the performance of a verification set, and storing trained weight parameters; performing test set testing, and performing boundary frame overlap ratio calculation and target category judgment to obtain a detection result;
the target detection by using the trained target detection model comprises the following steps:
step S25, the image frames of the real-time monitoring video are transmitted into a target detection and identification engine, and the target detection and identification engine outputs the detection frames, the categories and the confidence degrees of all objects contained in the image in the corresponding cameras.
3. The intelligent tunnel traffic safety monitoring method according to claim 1, wherein the step S3 of detecting and identifying license plates of vehicles detected by cameras at the entrance and exit of the tunnel to obtain license plate information of the entrance and exit vehicles comprises:
step S31, a Haar cascade classifier is established to perform license plate coarse positioning, and a picture with the license plate position at the central position is obtained;
s31, cutting upper and lower boundaries of a license plate region in the obtained picture, and removing an interference region during correction;
step S32, counting the angles of the fields in each direction to find two directions with the most dense textures in the image, and carrying out license plate correction;
step S33, establishing a CNN regression model to predict four vertex positions of the license plate, and cutting out a license plate region to obtain the accurate position of the license plate;
step S34, establishing a Hyperlpr deep convolutional neural network end-to-end license plate recognition model;
and step S35, obtaining the target detection result in the step S2, screening the identification object according to the confidence, transmitting the vehicle image into a license plate detection and license plate identification engine, and directly obtaining the license plate number.
4. The intelligent tunnel traffic safety monitoring method according to claim 1, wherein the step S4 is to track the vehicle license plate information obtained at the entrance and exit of the tunnel with multiple targets crossing cameras to obtain the average speed and the transit time information of all vehicles; the method specifically comprises the following steps:
step S41, obtaining a license plate recognition result of the tunnel entrance in step S3, and recording the time t1 when the license plate Lg is shot and the license plate number Lg;
step S42, obtaining a license plate recognition result of the tunnel exit in step S3, and recording the time t2 when the same license plate number Lg is shot;
step S43, obtaining the passing time of a certain vehicle in the tunnel according to Δt = t2 - t1;
step S44, obtaining the average speed of the vehicle through the tunnel according to the tunnel length S and the passing time Δt: v = S/Δt;
step S44, uploading the passing time and the average speed to a monitoring platform by means of a communication module, judging whether the vehicle corresponding to the license plate number is overspeed or not according to a preset passing time threshold value and an overspeed low-speed driving threshold value, triggering an alarm if overspeed, facilitating subsequent audit and realizing tracking identification across cameras.
5. The intelligent tunnel traffic safety monitoring method according to claim 1, wherein the step S5 is characterized in that the vehicle license plate re-recognition is performed on the tunnel entrance, interior and exit vehicle images obtained in the step S2 based on the difference metric values of the tunnel entrance and exit and the vehicle shooting position and shooting quality in the tunnel, and specifically comprises the steps of firstly establishing a vehicle re-recognition depth convolutional neural network model and training, specifically comprising the following steps:
step S51, a vehicle re-identification deep convolutional neural network model (CNN) is established, wherein the vehicle re-identification deep convolutional neural network model (CNN) is divided into three sub-models which respectively correspond to an inlet, an interior and an outlet re-identification detection sub-model;
step S52, collecting vehicle images of the same vehicle under the conditions of illumination states, appearance forms and shielding under different camera angles to obtain a cross-scene vehicle image dataset;
step S53, classifying the cross-scene vehicle image dataset based on the position difference of the camera at the entrance, the inside and the exit to obtain an entrance image dataset, an inside image dataset and an exit image dataset; based on the inlet image dataset, the internal image dataset and the outlet image dataset respectively train the inlet, internal and outlet re-identification detection sub-models;
and S54, training adopts a multi-loss function staged combined training strategy, and stores trained weight parameters to obtain a trained entrance, interior and exit re-identification detection model.
6. The method for intelligent tunnel traffic safety monitoring according to claim 1, wherein the step S6 further comprises,
step S61, transmitting the real-time monitoring video image frames in the tunnel into a brightness abnormality detection engine; converting the color image into a gray scale image;
step S62, an average gray value G of the whole image or the ROI area is calculated, wherein the gray value is the brightness value of the image; thresholds A and B are defined such that when G ∈ [0, A] the image is considered dark, and when G ∈ [B, 255] the image is considered bright; the obtained average gray value G is uploaded to the monitoring platform, and an abnormal-brightness alarm is issued according to the set dark and bright thresholds.
7. The method for intelligent tunnel traffic safety monitoring according to claim 1, wherein the step S7 further comprises,
step S71, processing target images collected from various smoke and fire videos and sorting them into corresponding smoke and fire data sets;
step S72, screening and filtering target images in the smoke and fire data set, adding non-smoke and fire negative samples, and marking smoke and fire by adopting marking software;
step S73, establishing a fire and smoke deep convolutional neural network model;
step S74, dividing the smoke and fire data set into a training set, a verification set and a test set;
step S75, training the deep convolutional neural network by using a training set and an SGD optimizer, performing super-parameter debugging by using the performance of a verification set, and storing the trained weight parameters;
step S76, performing testing by using a testing set, and performing boundary frame overlap ratio calculation and smoke and fire category judgment to obtain a detection result;
and step S77, transmitting an image frame of the real-time monitoring video into a fire and smoke detection engine, which judges whether smoke or fire features exist in the image; if so, the frame is uploaded to the monitoring platform for a fire and smoke alarm.
8. The method for intelligent tunnel traffic safety monitoring according to claim 1, wherein the step S8 further comprises,
step S81, obtaining the target detection result of step S2, and screening the identification object according to the confidence level;
step S82, for each detected object, calibrating the area by using a rectangular frame, and extracting the characteristics of the center coordinates, the size and the like of each area;
step S83, then, establishing a linked list for each moving object, and storing the extracted features;
step S84, track prediction is carried out by using a Kalman filtering time update equation;
step S85, performing feature weighted matching on the predicted track and the detection in the current frame by using a Hungary algorithm;
step S86, updating the track by using a Kalman filtering measurement updating equation if the matching is successful, and considering that the track is lost if the matching is failed;
in step S87, finally, the association step assigns a digital ID to each object.
9. An intelligent tunnel traffic safety monitoring system, comprising:
a video acquisition module: configured to respectively acquire video data in the corresponding monitoring areas by using a plurality of video acquisition devices installed at the entrance, interior and exit of the tunnel, acquiring the video streams in real time and decoding them to obtain video frame pictures; the video acquisition devices are video cameras;
a target detection module: configured to perform target detection on the video frame pictures acquired at the entrance, interior and exit of the tunnel to obtain a first target type and first target coordinates;
a license plate information recognition module: configured to detect and recognize the license plates of vehicles detected by the cameras at the entrance and exit of the tunnel to obtain the license plate information of vehicles at the entrance and exit;
a vehicle speed detection module: configured to perform cross-camera multi-target tracking on the vehicle license plate information obtained at the entrance and exit of the tunnel to obtain the average speed and transit time of all vehicles;
a license plate re-identification module: configured to perform vehicle license plate re-identification on vehicle images of the entrance, interior and exit of the tunnel based on difference metrics of the shooting positions and shooting quality of vehicles at the tunnel entrance, exit and interior; the recognition steps of the license plate re-identification module comprise,
step S55, performing vehicle license plate re-recognition by using the trained vehicle re-identification deep convolutional neural network model, specifically comprising the following steps:
S551, extracting vehicle image features and license plate image features from vehicle images in the video acquired in real time at the entrance, and recognizing them with the entrance re-identification detection sub-model;
S552, extracting vehicle image features and license plate image features from vehicle images in the video acquired in real time at the exit, and recognizing them with the exit re-identification detection sub-model;
S553, extracting vehicle image features and license plate image features from vehicle images in the video acquired in real time inside the tunnel, and recognizing them with the interior re-identification detection sub-model;
meanwhile, concatenating the vehicle image features and the license plate image features to form a combined feature vector;
and S554, inputting the combined feature vector into a vehicle feature query library for retrieval and identification, computing the Euclidean distance to the combined feature vectors of the images in the vehicle feature query library as the similarity measure, and locking onto the retrieval and identification results that meet the requirement based on the similarity measurement results (a retrieval sketch is given after the module list below);
a brightness abnormality detection module: configured to perform brightness abnormality detection on the video frame pictures of all cameras in the tunnel and detect whether the lighting environment in the tunnel is abnormal;
a smoke and fire detection module: configured to perform smoke and fire detection on the video frame pictures of the cameras in the tunnel and detect whether an abnormal fire or smoke event occurs in the tunnel, obtaining a second target type, second target coordinates and a second target score;
a track acquisition module: configured to perform single-camera multi-target tracking on the vehicles detected by each camera in the tunnel to obtain the continuous tracks of all vehicles;
and a violation detection module: configured to perform traffic flow detection, illegal lane-change detection, traffic congestion detection and illegal parking detection on the continuous tracks of all vehicles obtained in each camera.
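As an illustration of steps S553 and S554 of the license plate re-identification module, the sketch below concatenates vehicle and plate feature vectors and ranks a query against a gallery by Euclidean distance; the feature extractors are assumed to exist elsewhere, and the distance cut-off is an assumed example of the "meeting the requirement" condition rather than a value from the claim.

    import numpy as np

    def combined_feature(vehicle_feat, plate_feat):
        """Step S553: concatenate vehicle and license-plate feature vectors
        (both assumed to be 1-D numpy arrays) into one combined feature vector."""
        return np.concatenate([vehicle_feat, plate_feat])

    def query_gallery(query_feat, gallery_feats, gallery_ids, max_distance=1.0):
        """Step S554 sketch: rank gallery entries by Euclidean distance to the
        query's combined feature vector and keep those within max_distance.

        gallery_feats is an (N, D) array of combined feature vectors from the
        vehicle feature query library, gallery_ids a list of N vehicle identities.
        """
        dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
        order = np.argsort(dists)
        return [(gallery_ids[i], float(dists[i])) for i in order if dists[i] <= max_distance]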
CN202310167730.XA 2022-12-29 2023-02-27 Tunnel traffic safety monitoring method and system Active CN116153092B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211702635 2022-12-29
CN2022117026357 2022-12-29

Publications (2)

Publication Number Publication Date
CN116153092A CN116153092A (en) 2023-05-23
CN116153092B true CN116153092B (en) 2024-03-22

Family

ID=86338819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310167730.XA Active CN116153092B (en) 2022-12-29 2023-02-27 Tunnel traffic safety monitoring method and system

Country Status (1)

Country Link
CN (1) CN116153092B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852428A (en) * 2006-05-25 2006-10-25 浙江工业大学 Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision
CN104751634A (en) * 2015-04-22 2015-07-01 贵州大学 Comprehensive application method of expressway tunnel driving image acquisition information
KR101859402B1 (en) * 2017-12-14 2018-05-18 주식회사 딥스 The object tracking and lane changing vehicles detection method between linked cameras in tunnel
CN108376473A (en) * 2018-04-28 2018-08-07 招商局重庆交通科研设计院有限公司 Roads and tunnels traffic Warning System based on vehicle operational monitoring
CN113055649A (en) * 2021-03-17 2021-06-29 杭州公路工程监理咨询有限公司 Tunnel intelligent video monitoring method and device, intelligent terminal and storage medium
CN114822044A (en) * 2022-06-29 2022-07-29 山东金宇信息科技集团有限公司 Driving safety early warning method and device based on tunnel
CN115100865A (en) * 2022-06-24 2022-09-23 上海市政工程设计研究总院(集团)有限公司 Management and control system for traffic safety of tunnel portal area

Also Published As

Publication number Publication date
CN116153092A (en) 2023-05-23

Similar Documents

Publication Publication Date Title
Aboah A vision-based system for traffic anomaly detection using deep learning and decision trees
CN105809679B (en) Mountain railway side slope rockfall detection method based on visual analysis
CN110717433A (en) Deep learning-based traffic violation analysis method and device
CN103069434B (en) For the method and system of multi-mode video case index
CN102903239B (en) Method and system for detecting illegal left-and-right steering of vehicle at traffic intersection
CN110660222B (en) Intelligent environment-friendly electronic snapshot system for black-smoke road vehicle
CN106571039A (en) Automatic snapshot system for highway traffic offence
CN103116987A (en) Traffic flow statistic and violation detection method based on surveillance video processing
Marcomini et al. A comparison between background modelling methods for vehicle segmentation in highway traffic videos
CN112381778A (en) Transformer substation safety control platform based on deep learning
CN105227907A (en) Based on the nothing supervision anomalous event real-time detection method of video
CN111523397A (en) Intelligent lamp pole visual identification device, method and system and electronic equipment
KR102500975B1 (en) Apparatus for learning deep learning model and method thereof
CN113450573A (en) Traffic monitoring method and traffic monitoring system based on unmanned aerial vehicle image recognition
CN109766743A (en) A kind of intelligent bionic policing system
CN108520528A (en) Based on the mobile vehicle tracking for improving differential threshold and displacement field match model
CN116846059A (en) Edge detection system for power grid inspection and monitoring
Li et al. Application research of artificial intelligent technology in substation inspection tour
CN115223106A (en) Sprinkler detection method fusing differential video sequence and convolutional neural network
CN109934161A (en) Vehicle identification and detection method and system based on convolutional neural network
CN116153092B (en) Tunnel traffic safety monitoring method and system
CN110909607B (en) Passenger flow sensing device system in intelligent subway operation
CN116597394A (en) Railway foreign matter intrusion detection system and method based on deep learning
CN112906511B (en) Wild animal intelligent monitoring method combining individual image and footprint image
CN116052035A (en) Power plant personnel perimeter intrusion detection method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant