CN115909223A - Method and system for matching WIM system information with monitoring video data - Google Patents



Publication number
CN115909223A
Authority
CN
China
Prior art keywords
vehicle
information
license plate
WIM system
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211258178.7A
Other languages
Chinese (zh)
Inventor
袁杭
周毅
王晓慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB
Priority claimed from CN202211258178.7A
Publication of CN115909223A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method and a system for matching WIM system information with monitoring video data, comprising the following steps: S1, tracking a vehicle and collecting its data to obtain a video data set of the vehicle; S2, obtaining the timestamp and lane information when the vehicle reaches the WIM system; S3, identifying the vehicle type and license plate number in the video data set to obtain a predicted vehicle type and a predicted license plate number; and S4, fusing the timestamp and lane information from S2 and the vehicle type and license plate predictions from S3 with the corresponding information of all vehicles acquired by the WIM system in real time, thereby obtaining the data row corresponding to the vehicle in the WIM system. By considering data from several dimensions, the method reduces the adverse effect of WIM system data delay and obtains a comparatively accurate matching result. Fusing WIM system data provides additional useful information for vision-based analysis of the traffic load on a bridge. The method makes it more feasible to implement and apply previous theoretical research in real scenes.

Description

Method and system for matching WIM system information with monitoring video data
Technical Field
The invention belongs to the field of traffic load monitoring, and particularly relates to a method and a system for matching WIM system information with monitoring video data.
Background
Traffic load is one of the most important external loads on bridges and roads, and its monitoring is a key problem in the field of bridge and road health monitoring that has been widely researched in recent years. Traffic load identification aims to accurately and stably acquire the parameter information of vehicles running on a bridge or road, which is of great significance for health monitoring and safety early warning. The parameter information of a vehicle includes its position, speed, type, length, number of axles, axle weights, total weight, license plate number, and so on; it is important evidence reflecting the stress state and traffic density of a bridge or road, and is also an important component of an intelligent transportation system.
In recent years, computer vision technology has been adopted as a new way to acquire vehicle information on bridges and roads. For bridges or roads without a WIM (weigh-in-motion) system, Khuc and Catbas used computer-vision-based measurement to monitor structural health, placing cameras in two directions, parallel and perpendicular to the traffic flow, to identify vehicle type, position, and axle count. The non-contact mode based on monitoring video is low-cost, does no damage to the structure, and is easy to install and maintain; a WIM system used alone, on the other hand, usually produces data with delayed generation times and noise. Moreover, in a real scene any single system is inevitably affected by external interference factors, which makes it unstable and causes deviations in the recorded vehicle information.
At present, methods that acquire the bridge or road traffic load by fusing WIM system data with vehicle spatial information have achieved remarkable results in certain scenes, but in practical applications certain limitations remain. When fusing WIM system data with target information detected from video, a time-based matching method is generally adopted. Because WIM system data lag in a real scene, such a matching process is no longer effective, and matching simply by time no longer meets actual requirements.
To address these problems, the invention provides a method and a system for matching WIM system information with monitoring video data.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention provides a method and a system for matching WIM system information with monitoring video data, which are used for overcoming the defects existing at present.
A method for matching WIM system information with monitoring video data comprises the following steps:
S1, tracking the vehicle and collecting its data to obtain a video data set of the vehicle;
s2, acquiring a timestamp and lane information when the vehicle arrives at the WIM system;
s3, identifying vehicle types and license plate numbers in the video data set to obtain predicted vehicle types and predicted license plate numbers;
and S4, fusing the time stamp and the lane information in the S2 and the prediction results of the vehicle type and the license plate number in the S3 with corresponding information of all vehicles acquired by the WIM system in real time, and obtaining a data row corresponding to the vehicle in the WIM system.
As a further implementation of the above aspect, the method further includes, before S1, setting a virtual detection area, where the virtual detection area is located in the detection area of the WIM system.
As a further implementation, S1 includes: after the vehicle enters the virtual detection area, detecting the vehicle in the video frames with a target detector to generate a target bounding box of the vehicle; while the vehicle passes from entering to leaving the virtual detection area, buffering the images enclosed by the target bounding box into a vehicle body picture queue and a license plate picture queue at intervals of a certain number of pixels; and storing the vehicle body picture queue and the license plate picture queue as the video data set.
As a further implementation, the number of pixels is obtained by:

Δp = L / N

where Δp is the number of pixels, L represents the length of the virtual detection area in the direction perpendicular to the WIM system, and N is the size of the vehicle body picture queue or the license plate picture queue.
As a further implementation, S2 includes obtaining the number of the lane in which the vehicle is located from the center coordinate of the target bounding box of the vehicle in the video frame, where the lane number n_i is obtained by the following formula:

n_i = n, if x_{l_n} ≤ x_i < x_{l_{n+1}}, n = 1, 2, …, M

where (x_i, y_i) is the center coordinate of the target bounding box of vehicle i, M is the total number of lanes, and x_{l_n} is the position of lane line l_n.
As a further implementation, S3 includes performing feature extraction and fusion on the vehicle body picture queue with a convolutional neural network to obtain the vehicle type prediction result of the vehicle.
As a further implementation, S3 further includes recognizing all license plate images in the license plate picture queue with a lightweight convolutional neural network to obtain the license plate prediction result of the vehicle.
As a further implementation, S4 specifically includes: determining the time interval of the WIM system data to be matched as [t_i, t_i + t_d], where t_i is the timestamp and t_d is the estimated maximum delay time; selecting, from the WIM system data within this time interval, the rows whose lane number equals the lane number acquired in S2 and whose vehicle type equals the vehicle type prediction result; and taking the data row whose recorded license plate number has the highest similarity to the predicted license plate number p_i as the WIM system data corresponding to the tracked target vehicle.
The invention also provides a system for matching WIM system information with monitoring video data, which comprises:
a collection unit, used for tracking the vehicle and collecting its data to obtain a video data set of the vehicle;
an obtaining unit, used for obtaining the timestamp and lane information when the vehicle arrives at the WIM system;
a recognition unit, used for recognizing the vehicle type and license plate number in the video data set to obtain a predicted vehicle type and a predicted license plate number;
a fusion unit, used for fusing the timestamp, the lane information, the predicted vehicle type, and the predicted license plate number with the corresponding information of all vehicles collected by the WIM system in real time, to obtain the data row corresponding to the vehicle in the WIM system.
As a further implementation, the system further includes a setting unit, configured to set a virtual detection area, where the virtual detection area covers the detection area of the WIM system arranged on the bridge.
Advantages of the invention
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a method for matching WIM system information with monitoring video data, which comprises the following steps: s1, tracking and collecting data of a vehicle to obtain a video data set of the vehicle, S2, obtaining a timestamp and lane information when the vehicle reaches a WIM system, and S3, identifying the vehicle type and license plate number in the video data set to obtain a predicted vehicle type and a predicted license plate number; and S4, fusing the timestamp and the lane information in the S2 and the prediction result of the vehicle type and the license plate number in the S3 with corresponding information of all vehicles acquired by the WIM system in real time, and acquiring a data row corresponding to the vehicle in the WIM system. When the WIM system information is matched with the corresponding vehicle in the video monitoring, the method fully considers the data information of a plurality of directions of the vehicle, thereby reducing the adverse effect caused by the data delay of the WIM system, obtaining a relatively accurate matching result and providing accurate information for the traffic load on the bridge or the road. In the process of traffic load analysis based on computer vision, more favorable information can be provided by fusing WIM system data, and the method provided by the invention makes the implementation and application of previous theoretical research in an actual scene more feasible.
Drawings
Fig. 1 is a schematic view of a situation where the surveillance camera is centered over the lane.
Fig. 2 is a schematic diagram of a vehicle type classification network based on multiple frames.
FIG. 3 is a flow chart of a method in an embodiment of the invention.
Detailed Description
In order to better understand the technical solution of the present invention, and to make its technical problems, solutions, and advantages more apparent, a detailed description is given below with reference to the accompanying drawings and specific embodiments. The invention includes but is not limited to the following detailed description; similar techniques and methods should be considered to fall within the scope of the present invention.
It should be understood that the described embodiments of the invention are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 3, a method for matching WIM system information with surveillance video data according to the present invention includes the following steps:
s1, tracking and collecting data of all vehicles entering a virtual detection area to obtain a picture sequence of each vehicle intercepted by a monitoring video, and storing the picture sequence as a video data set;
s2, simultaneously acquiring a timestamp, lane information and the like when the vehicle arrives at the WIM system;
s3, identifying vehicle types and license plate numbers in the video data set to obtain predicted vehicle types and predicted license plate numbers;
and S4, performing fusion matching by adopting the timestamp and the lane information in the S2 and the information in the WIM system and the predicted vehicle type and the predicted license plate number in the S3, and obtaining a data row corresponding to the predicted vehicle type and the predicted license plate number in the WIM system. The data line obtained by the matching method is used for subsequent analysis of traffic load information on the bridge. The method is used in a real scene, combines the dynamic weighing system with the monitoring video data, and is used for subsequently constructing the traffic load information of the bridge or the road after matching. And tracking all vehicles in the video monitoring continuously (matching is only carried out once, and the vehicles are provided with data of the WIM system), so that the vehicle load information on the bridge or the road at any time can be obtained.
Specifically, the operation process of the invention is as follows: the WIM system and the monitoring system are arranged on the bridge or road at the same time; the monitoring system is implemented with monitoring cameras, which are installed at the two sides of the road and above the road within the detection area where the WIM system is located.
Step 1: perform target tracking and target data collection on all vehicles entering the virtual detection area. Vehicle detection in each frame of the video is the basis of the subsequent operations. Before vehicle detection and tracking, a reasonable virtual detection area is first set. The virtual detection area corresponds to the detection area of the WIM system and is used to obtain video information of every vehicle that enters it; its area is equal to or larger than that of the detection area where the WIM system is located, i.e., the virtual detection area covers the whole detection area of the WIM system, so that every vehicle passing over the WIM system is simultaneously observed by the monitoring system. Detection and tracking of vehicles on a bridge or road is a multi-object tracking (MOT) problem; the detection-based tracking (DBT) algorithm DeepSORT is selected, with the single-stage object detection algorithm YOLO as the target detector, and the YOLO model is trained to recognize two classes of objects: vehicles and license plates. When a vehicle i enters the virtual detection area, the target detector detects it in the video frames of the monitoring camera, generates its target bounding box, and the vehicle is continuously tracked in subsequent frames. For vehicle i, a corresponding vehicle body picture queue CQ_i and license plate picture queue PQ_i are constructed; the pictures are cropped from the original frames along the bounding boxes output by the detector. Both queues are initially empty; vehicle body pictures are appended to CQ_i and license plate pictures to PQ_i. The two queues have the same length N, which is set according to the actual situation: the larger N is, the more pictures the queues hold and the more accurate the classification result. From the moment vehicle i enters the virtual detection area until it leaves, every time the tracked target moves a fixed number of pixels Δp, the vehicle body image region enclosed by the target bounding box of vehicle i is buffered into CQ_i, and the detected corresponding license plate region is buffered into PQ_i. In this way, pictures of the vehicle at different positions and different moments are obtained at regular intervals, so that multiple frames of the same object provide redundant information, improving the accuracy of vehicle type classification and license plate recognition. The number of pixels Δp is calculated as follows:

Δp = L / N

where L represents the length of the virtual detection area in the direction perpendicular to the WIM system, in pixels, and is a known value once the virtual detection area is determined; N is the length of the vehicle body picture queue (or the license plate picture queue) and is a known quantity. When occlusion or missed detection occurs, the image is collected one frame earlier or later instead.
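The sampling scheme above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and class names are assumptions, and the "crop" objects stand in for real image regions.

```python
def pixel_interval(detection_area_len_px: float, queue_len: int) -> float:
    """Delta-p = L / N: evenly spaces `queue_len` snapshots over the
    virtual detection area's length L (in pixels)."""
    if queue_len <= 0:
        raise ValueError("queue length must be positive")
    return detection_area_len_px / queue_len

class SnapshotSampler:
    """Caches a crop (body or plate region) each time the tracked
    bounding-box center advances another Delta-p pixels."""
    def __init__(self, delta_p: float, queue_len: int):
        self.delta_p = delta_p
        self.queue_len = queue_len
        self.last_pos = None
        self.queue = []  # plays the role of CQ_i or PQ_i

    def update(self, center_pos: float, crop) -> None:
        if len(self.queue) >= self.queue_len:
            return  # queue already holds N snapshots
        if self.last_pos is None or abs(center_pos - self.last_pos) >= self.delta_p:
            self.queue.append(crop)
            self.last_pos = center_pos
```

With a 400-pixel detection area and N = 8, `pixel_interval(400, 8)` gives Δp = 50, and feeding the sampler a track that advances 10 pixels per frame fills the queue with 8 evenly spaced crops.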
Step 2: when the video picture of the monitoring camera shows that the vehicle has arrived at the WIM system, record the timestamp and the lane information of the vehicle at that moment. Step 1 starts when the vehicle enters the virtual detection area and keeps capturing images every Δp pixels until the vehicle exits; thus, while Step 2 proceeds, Step 1 is still being executed. For example, when the monitoring camera detects that a vehicle i has arrived at the WIM system, the current timestamp t_i is recorded, and the number n_i of the lane in which vehicle i is located is deduced from the center coordinate of the target bounding box of vehicle i in the video frame:

n_i = n, if x_{l_n} ≤ x_i < x_{l_{n+1}}, n = 1, 2, …, M

where (x_i, y_i) is the center coordinate of the target bounding box of vehicle i, M is the total number of lanes, and x_{l_n} is the position of lane line l_n; for example, when n takes the value k, l_k denotes the lane line between the lanes numbered k and k+1. For a 2-lane road with the monitoring camera centered over the lane, as shown in fig. 1, the lane number n_i of vehicle i is determined by:

n_i = 1 if x_i < x_c, otherwise n_i = 2

where x_c is the horizontal coordinate of the image center.
After this step, the timestamp and lane information at the moment the vehicle reaches the WIM system have been obtained from the monitoring camera, in preparation for the subsequent matching. The WIM system is assumed to operate normally by default: when a vehicle passes, it records the corresponding timestamp, lane information, license plate, and vehicle type number. The virtual detection area is set so that the monitoring camera captures the data acquired while the vehicle travels through the area, such as its license plate, vehicle type, time of arrival at the WIM system, and lane information. After a vehicle leaves the virtual detection area (but not the area covered by the monitoring camera), these data are used to match the relevant information recorded by the WIM system to the monitored vehicle, and as the vehicle continues to be tracked in subsequent monitoring it carries the corresponding load information.
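The lane-assignment rule in Step 2 can be sketched as follows; the function name and the lane-line positions are illustrative assumptions, not values from the patent.

```python
def lane_number(center_x: float, lane_lines: list) -> int:
    """Return lane n such that lane_lines[n-1] <= center_x < lane_lines[n].
    `lane_lines` holds the x-positions of the M+1 lane boundary lines,
    sorted left to right, for an M-lane road."""
    for n in range(1, len(lane_lines)):
        if lane_lines[n - 1] <= center_x < lane_lines[n]:
            return n
    raise ValueError("bounding-box center lies outside all lanes")
```

For a hypothetical 2-lane, 1280-pixel-wide frame with the camera centered (boundaries at x = 0, 640, 1280), a center at x = 300 maps to lane 1 and x = 900 to lane 2, matching the 2-lane special case above.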
Step 3: identify and predict the vehicle type and license plate number from the vehicle body picture queue CQ_i and the license plate picture queue PQ_i. When the vehicle is confirmed to have left the virtual detection area, the target image sequences in CQ_i and PQ_i are used, exploiting the redundant information provided by multiple frames to classify the vehicle type and recognize the license plate number comparatively more accurately.
For the vehicle type classification task: since the vehicle keeps moving along the road, the body angles captured at different moments may differ, so multiple frames of the same vehicle captured consecutively inside the virtual detection area provide more information. In addition, in night scenes the body regions illuminated by the light source differ as the vehicle position changes, so a classification method combining multi-frame vehicle target features is more favorable for vehicle type prediction and is less affected by adverse factors such as motion, occlusion, and illumination. The invention uses a convolutional neural network (CNN) to extract features from all target images in the vehicle body picture queue; as shown in fig. 2, the feature extraction network CNN_1 is pre-trained in advance and its parameters are shared across frames. The extracted features are then fused, and the fused features are passed into the next CNN, CNN_2; a fully connected (FC) layer then produces the final feature descriptor of the vehicle; finally, a softmax layer outputs the vehicle type classification probabilities, and the class with the maximum probability is the vehicle type prediction result of vehicle i, denoted c_i. To facilitate the subsequent matching, the vehicle type taxonomy of the classification network must be kept consistent with that of the WIM system.
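The CNN_1 / fusion / CNN_2 pipeline is a trained network, but the multi-frame late-fusion idea can be illustrated with a minimal stand-in: average the per-frame class probabilities (a hypothetical substitute for the learned fusion stage) and take the argmax class. The function name and class labels are illustrative assumptions.

```python
def fuse_and_classify(frame_probs, class_names):
    """frame_probs: list of per-frame class-probability vectors, e.g.
    softmax outputs of a shared-weight per-frame network. Averaging
    across frames stands in for the fusion + CNN_2 + FC stage; the
    argmax class is the multi-frame vehicle type prediction c_i."""
    n_frames = len(frame_probs)
    fused = [sum(col) / n_frames for col in zip(*frame_probs)]
    return class_names[max(range(len(fused)), key=fused.__getitem__)]
```

With three frames voting [0.6, 0.4], [0.3, 0.7], [0.2, 0.8] over classes ["car", "truck"], the fused vector favors "truck" even though one frame (perhaps occluded or poorly lit) disagreed, which is exactly the robustness multi-frame fusion is meant to provide.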
For the license plate number recognition task, all target images in the license plate picture queue PQ_i are likewise used. The invention adopts LPRNet, composed of a lightweight convolutional neural network, to recognize the license plate numbers of all license plate images in PQ_i. Similarly, because the vehicle body is in continuous motion, characters at some positions of the cropped license plate region may be blurred, and the license plate number recognized from a single frame may not be accurate enough; the invention therefore combines the recognition results of multiple frames to improve accuracy and reliability. After the predicted license plate strings of all image frames are obtained, the character with the highest frequency of occurrence in the result set is selected position by position to form the final prediction. If different characters occur with the same highest frequency at some position, an arbitrary special symbol (an illegal license plate character, such as %) is substituted, indicating that the character at that position cannot be reliably identified. The final license plate prediction result of vehicle i is denoted p_i.
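The position-by-position majority vote, with the special-symbol rule for ties, can be sketched as follows (the function name is an illustrative assumption; per-frame readings are assumed to have equal length):

```python
from collections import Counter

def vote_plate(predictions, tie_char="%"):
    """Per-position majority vote over equal-length per-frame plate
    readings. A tie (two characters sharing the top frequency) yields
    `tie_char`, an illegal plate character marking the position as
    undecidable."""
    result = []
    for chars in zip(*predictions):
        counts = Counter(chars).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            result.append(tie_char)
        else:
            result.append(counts[0][0])
    return "".join(result)
```

Given three frame readings "ABC123", "ABC128", "ADC123", the vote recovers "ABC123" despite two single-frame errors; a position with an exact tie comes out as "%".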
Step 4: fuse the WIM system data with the tracked target vehicle using the vehicle information extracted in Step 2 and Step 3, including the timestamp, the lane information, and the recognized and predicted vehicle type and license plate number. After a vehicle enters the WIM system through the virtual detection area, the WIM system automatically records its weight, driving lane, vehicle type, license plate number, and other data. For vehicle i, after it has completely passed through the virtual detection area, the video pictures of the monitoring camera yield its timestamp t_i recorded when it reached the WIM system, its lane number n_i, its predicted vehicle type c_i, and its predicted license plate number p_i. During fusion, the time interval of the data to be matched is first determined from t_i as [t_i, t_i + t_d], where t_d is the estimated maximum delay time; beyond this interval the WIM system data are considered lost. Relative to the actual passing time of the vehicle, the time at which the WIM system generates the record for vehicle i can only be delayed, never advanced, and the specific maximum delay time t_d is determined by the specification of the particular WIM system. Among the WIM system data within this interval, the rows whose lane number equals n_i and whose vehicle type equals c_i are selected as candidates; if no data row in the interval satisfies these conditions, the WIM system data are judged to contain noise, and no further matching is performed for this vehicle object. The similarity between the predicted license plate number p_i and the license plate number recorded in each candidate data row is then calculated, namely the total number of identical characters at corresponding positions, and the candidate row with the highest similarity is taken as the WIM system data corresponding to vehicle i; if several rows have the same similarity, the row whose timestamp is closest to t_i is selected. After a successful match, the WIM system data row is marked as matched and is not considered in subsequent matching. Alternatively, the matching method of the invention may first determine the candidate data rows by the lane number and the license plate number p_i and then match the vehicle type; this also achieves the purpose of matching, is not limited here, and falls within the protection scope of the invention. The same method is used to match every vehicle i, and weight information can then be attached to the vehicles during subsequent video tracking, providing data support for bridge load analysis. The embodiment of the present invention is illustrated with one vehicle; the same approach applies to multiple vehicles.
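The fusion rule of Step 4 can be sketched as follows. The row representation (a dict with time, lane, type, and plate fields) and all names are assumptions for illustration; a real WIM feed would supply its own schema.

```python
def match_wim_row(rows, t_arrive, t_delay_max, lane, vtype, plate):
    """Pick the WIM data row best matching tracked vehicle i.
    Candidates must fall in [t_arrive, t_arrive + t_delay_max] and agree
    on lane number and vehicle type; among them, the row whose recorded
    plate shares the most characters (position by position) with the
    predicted plate wins, ties going to the timestamp closest to
    t_arrive. Returns None when no candidate exists (treated as WIM
    noise/loss), and marks the winner as matched so it is skipped later."""
    def similarity(a, b):
        return sum(x == y for x, y in zip(a, b))

    candidates = [r for r in rows
                  if not r.get("matched")
                  and t_arrive <= r["time"] <= t_arrive + t_delay_max
                  and r["lane"] == lane and r["type"] == vtype]
    if not candidates:
        return None
    best = max(candidates,
               key=lambda r: (similarity(r["plate"], plate),
                              -abs(r["time"] - t_arrive)))
    best["matched"] = True
    return best
```

Note that the predicted plate may contain the tie symbol "%" at undecidable positions; the character-wise similarity simply scores those positions as mismatches, so they neither help nor disqualify a candidate.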
Through the above steps, the matched WIM system data are combined during the subsequent continuous video tracking of the vehicles, enabling more specific monitoring and analysis of the load distribution on the bridge.
The method can accurately fuse WIM system data with the vehicle spatio-temporal information acquired by the video monitoring camera to obtain the traffic load information on the bridge or road. Whereas the prior art matches and fuses these two kinds of data only according to their time relation, the method fully considers multi-dimensional information, combining lane information, vehicle type, and license plate number, so as to perform the most reasonable data matching possible.
The steps of the method match the WIM system data accurately to the detected vehicle entity based on the information dimensions of timestamp, lane number, license plate number, and vehicle type. In the detection area at the entrance of the WIM system, the target vehicle is tracked, the relevant data are collected, and the related vehicle information is cached, including images of the vehicle body and license plate region and the timestamp and lane number when the vehicle passes the WIM system. The recorded timestamp is used to determine the time interval of the WIM system data to be matched. When the vehicle exits the detection area, its vehicle type and license plate number are classified and recognized from the cached image sequences. Finally, among the data rows whose vehicle type and lane number are fully consistent, the row with the highest license plate similarity is selected as the best match, which improves not only the accuracy and reliability of the matching but also its efficiency.
Preferably, an embodiment of the present invention further provides a system for matching WIM system information with surveillance video data, the system comprising: an acquisition unit, configured to track the vehicle and collect its data to obtain a video data set of the vehicle;
an obtaining unit, configured to obtain the timestamp and the lane information when the vehicle arrives at the WIM system; a recognition unit, configured to recognize the vehicle type and the license plate number in the video data set to obtain a predicted vehicle type and a predicted license plate number;
a fusion unit, configured to fuse the timestamp, the lane information, the predicted vehicle type and the predicted license plate number with the corresponding information of all vehicles collected by the WIM system in real time, to obtain the data row corresponding to the vehicle in the WIM system.
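For illustration, the lane number used by the obtaining unit could be derived from the x-coordinate of the bounding-box center, as in the sketch below. The ascending lane-line coordinates `lane_lines` are an assumed calibration input, not something specified by the invention.

```python
from bisect import bisect_right


def lane_number(center_x, lane_lines):
    """Map a bounding-box center x-coordinate to a lane number.

    lane_lines: ascending pixel x-coordinates of the lane boundary
    lines l_0 .. l_N, delimiting N lanes numbered 1 .. N.
    """
    if not lane_lines or center_x < lane_lines[0] or center_x >= lane_lines[-1]:
        return None  # center falls outside the marked lanes
    # bisect_right finds how many boundary lines lie at or left of
    # the center, which is exactly the 1-based lane index.
    return bisect_right(lane_lines, center_x)
```

A center at x = 150 with boundaries at 0, 100 and 200 pixels would map to lane 2, for example.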
Preferably, the embodiment of the present invention further includes a setting unit, configured to set a virtual detection area, where the virtual detection area covers a detection area of the WIM system disposed on the bridge.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The foregoing description shows and describes several preferred embodiments of the invention. As stated above, however, it is to be understood that the invention is not limited to the forms disclosed herein, is not to be construed as excluding other embodiments, and is capable of being used in various other combinations, modifications and environments and of being changed within the scope of the inventive concept described herein, in accordance with the above teachings or with the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the invention shall fall within the protection scope of the appended claims.

Claims (10)

1. A method for matching WIM system information with monitoring video data is characterized by comprising the following steps:
s1, tracking and collecting data of a vehicle to obtain a video data set of the vehicle;
s2, acquiring a timestamp and lane information when the vehicle arrives at the WIM system;
s3, identifying vehicle types and license plate numbers in the video data set to obtain predicted vehicle types and predicted license plate numbers;
and S4, fusing the time stamp and the lane information in the S2 and the prediction results of the vehicle type and the license plate number in the S3 with corresponding information of all vehicles acquired by the WIM system in real time, and obtaining a data row corresponding to the vehicle in the WIM system.
2. The method for matching WIM system information with surveillance video data according to claim 1, further comprising setting a virtual detection area before S1, the virtual detection area being set in a detection area of the WIM system.
3. The method for matching WIM system information with surveillance video data according to claim 2, wherein S1 comprises: after the vehicle enters the virtual detection area, detecting the vehicle in the video frames with a target detector to generate a target bounding box of the vehicle; during the passage of the vehicle through the virtual detection area, buffering the images enclosed by the target bounding box into a vehicle body picture queue and a license plate picture queue, respectively, at intervals of a certain number of pixels; and storing the vehicle body picture queue and the license plate picture queue as the video data set.
4. The method of claim 3, wherein said number of pixels is determined by the following formula:

n = L / S

wherein n is the number of pixels, L represents the length of the virtual detection area in the direction perpendicular to the WIM system, and S is the size of the vehicle body picture queue or the license plate picture queue.
5. The method for matching WIM system information with surveillance video data according to claim 1, wherein S2 comprises obtaining the lane number of the vehicle from the coordinates of the center of the target bounding box of the vehicle in the video frame, the lane number being determined by the following rule:

the vehicle i is assigned lane number j if l_(j-1) ≤ x_i < l_j, with j = 1, 2, …, N,

wherein x_i is the center coordinate of the target bounding box of vehicle i, N is the total number of lanes, and l_j denotes the coordinate of a lane line.
6. The method for matching WIM system information with surveillance video data according to claim 3, wherein S3 comprises performing feature extraction and fusion processing on a vehicle body picture queue by using a convolutional neural network to obtain a vehicle type prediction result of the vehicle.
7. The method of claim 3, wherein the step S3 comprises identifying all license plate images in the license plate image queue using a lightweight convolutional neural network to obtain the license plate prediction result of the vehicle.
8. The method of claim 2, wherein the step S4 specifically comprises: determining the time interval of the WIM system data to be matched as [t, t + Δt], wherein Δt is the time elapsed from the vehicle entering to leaving the virtual detection area, and t is the timestamp recorded when the vehicle arrives at the WIM system; selecting, among the WIM system data in this time interval, the data whose lane number is the same as the lane number acquired in S2 and whose recorded vehicle type is the same as the vehicle type prediction result; and finding, in the selected data, the data row whose recorded license plate number has the highest similarity to the predicted license plate number, this data row being the WIM system data corresponding to the vehicle.
9. A system for matching WIM system information with surveillance video data, characterized in that the system comprises: an acquisition unit, configured to track the vehicle and collect its data to obtain a video data set of the vehicle;
an obtaining unit, configured to obtain the timestamp and the lane information when the vehicle arrives at the WIM system; a recognition unit, configured to recognize the vehicle type and the license plate number in the video data set to obtain a predicted vehicle type and a predicted license plate number;
a fusion unit, configured to fuse the timestamp, the lane information, the predicted vehicle type and the predicted license plate number with the corresponding information of all vehicles collected by the WIM system in real time, to obtain the data row corresponding to the vehicle in the WIM system.
10. The system for matching WIM system information to surveillance video data of claim 9, further comprising a setting unit for setting a virtual detection area, the virtual detection area being set at a detection area of the WIM system.
CN202211258178.7A 2022-10-14 2022-10-14 Method and system for matching WIM system information with monitoring video data Pending CN115909223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211258178.7A CN115909223A (en) 2022-10-14 2022-10-14 Method and system for matching WIM system information with monitoring video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211258178.7A CN115909223A (en) 2022-10-14 2022-10-14 Method and system for matching WIM system information with monitoring video data

Publications (1)

Publication Number Publication Date
CN115909223A true CN115909223A (en) 2023-04-04

Family

ID=86477359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211258178.7A Pending CN115909223A (en) 2022-10-14 2022-10-14 Method and system for matching WIM system information with monitoring video data

Country Status (1)

Country Link
CN (1) CN115909223A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197760A (en) * 2023-09-06 2023-12-08 东南大学 Bridge vehicle load distribution long-term monitoring method based on video monitoring
CN117409379A (en) * 2023-10-17 2024-01-16 哈尔滨工业大学 Large-span bridge vehicle tracking and vehicle load spectrum intelligent recognition method based on computer vision

Similar Documents

Publication Publication Date Title
CN109784162B (en) Pedestrian behavior recognition and trajectory tracking method
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
Bas et al. Automatic vehicle counting from video for traffic flow analysis
CN109887281B (en) Method and system for monitoring traffic incident
US11380105B2 (en) Identification and classification of traffic conflicts
CN115909223A (en) Method and system for matching WIM system information with monitoring video data
CN108091142A (en) For vehicle illegal activities Tracking Recognition under highway large scene and the method captured automatically
CN113676702B (en) Video stream-based target tracking and monitoring method, system, device and storage medium
EP2313863A1 (en) Detection of vehicles in images of a night time scene
CN110619276B (en) Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring
CN110544271B (en) Parabolic motion detection method and related device
CN115034324B (en) Multi-sensor fusion perception efficiency enhancement method
CN110070729A (en) It is a kind of that vehicle detecting system and method are stopped based on the separated of mist calculating
CN111079621A (en) Method and device for detecting object, electronic equipment and storage medium
CN112257683A (en) Cross-mirror tracking method for vehicle running track monitoring
KR101089029B1 (en) Crime Preventing Car Detection System using Optical Flow
WO2020174916A1 (en) Imaging system
KR101161557B1 (en) The apparatus and method of moving object tracking with shadow removal moudule in camera position and time
Tsai et al. Multi-lane detection and road traffic congestion classification for intelligent transportation system
CN116311166A (en) Traffic obstacle recognition method and device and electronic equipment
CN111627224A (en) Vehicle speed abnormality detection method, device, equipment and storage medium
CN110021174A (en) A kind of vehicle flowrate calculation method for being applicable in more scenes based on video image
CN116152753A (en) Vehicle information identification method and system, storage medium and electronic device
JP2014241134A (en) Methods and systems of classifying vehicles using motion vectors
CN110942642B (en) Video-based traffic slow-driving detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination