CN112183204A - Method and device for detecting parking event - Google Patents
- Publication number: CN112183204A (application CN202010874151.5A)
- Authority: CN (China)
- Prior art keywords: image data, frame, vehicle, detection information, parking
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The embodiment of the invention provides a method and a device for detecting a parking event, wherein the method comprises the following steps: acquiring video data; in the detection period of the parking event, detecting first frame image data of the video data to obtain one or more vehicle objects, and determining first detection information of the one or more vehicle objects in the first frame image data; tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in second frame image data of the video data; wherein the second frame of image data is image data ordered after the first frame of image data in the video data; and determining the parking vehicle object with the parking event from the one or more vehicle objects according to the first detection information and the second detection information. By the embodiment of the invention, the judgment of the parking event is realized, the accuracy of the parking event detection method is improved, and the misjudgment is effectively avoided.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a method and a device for detecting a parking event.
Background
Parking events are the most common traffic events, and detecting them helps to avoid traffic accidents.
Among parking event detection methods, detection based on traffic surveillance video provides more intuitive and detailed visual information than other approaches. In a traffic-surveillance-video-based parking event detection method, vehicle behavior in the monitored video can be analyzed automatically by a detection algorithm, and when a parked vehicle appears, the algorithm reports the parking event information. By monitoring and analyzing vehicle behavior in traffic surveillance video, abnormal traffic behavior can thus be identified and handled with real-time early warning, helping to prevent traffic accidents.
Existing traffic-surveillance-video-based parking event detection methods do not judge vehicle parking events accurately enough and easily misjudge the vehicle state in the surveillance video.
Disclosure of Invention
In view of the above, it is proposed to provide a method and a device for detecting a parking event that overcome or at least partially solve the above problems, comprising:
a method of detecting a parking event, the method comprising:
acquiring video data;
in the detection period of the parking event, detecting first frame image data of the video data to obtain one or more vehicle objects, and determining first detection information of the one or more vehicle objects in the first frame image data;
tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in second frame image data of the video data, wherein the second frame image data is image data sequenced after the first frame image data in the video data;
and determining the parking vehicle object with the parking event from the one or more vehicle objects according to the first detection information and the second detection information.
Optionally, in the detection period of the parking event, the detecting the first frame image data of the video data to obtain one or more vehicle objects includes:
inputting first frame image data of the video data into a preset deep learning model; the deep learning model is used for detecting the first frame of image data by combining the environmental information of the first frame of image data;
receiving one or more vehicle objects for the first frame of image data output by the deep learning model.
Optionally, the tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in a second frame of image data of the video data includes:
for each vehicle object in the first frame of image data, sequentially tracking in image data in the video data that is ordered after the first frame of image data to determine the vehicle object in a second frame of image data of the video data;
second detection information of the vehicle object in the second frame image data is determined.
Optionally, there are Nth frame image data and (N+1)th frame image data between the first frame image data and the second frame image data, and for each vehicle object in the first frame image data, sequentially tracking in the image data ordered after the first frame image data in the video data to determine the vehicle object in the second frame image data of the video data includes:
acquiring a target vehicle object in the Nth frame of image data;
determining, for the target vehicle object, degree-of-association information with each vehicle object in the N +1 th frame of image data;
and determining the target vehicle object in the (N + 1) th frame of image data according to the association degree information.
Optionally, the determining, for the target vehicle object, the association degree information with each vehicle object in the N +1 th frame of image data includes:
for the target vehicle object, determining motion matching information and apparent matching information of each vehicle object in the (N + 1) th frame of image data;
and determining the association degree information of each vehicle object in the (N + 1) th frame of image data by adopting the motion matching information and the apparent matching information.
Optionally, the determining, from the one or more vehicle objects according to the first detection information and the second detection information, a parking vehicle object in which a parking event exists includes:
determining a change rate of a positioning frame overlapping area of the one or more vehicle objects according to the first detection information and the second detection information, wherein the change rate of the positioning frame overlapping area is a change rate of the positioning frame area of the one or more vehicle objects in the first frame image data and the positioning frame area in the second frame image data;
and determining the vehicle object with the positioning frame overlapping area change rate larger than the preset positioning frame overlapping area change rate as the parking vehicle object with the parking event.
Optionally, the first detection information or the second detection information includes any one or more of:
the coordinate of the positioning frame, the width parameter of the positioning frame and the height parameter of the positioning frame.
A device for detecting a parking event, the device comprising:
the video data acquisition module is used for acquiring video data;
the first frame image data detection module is used for detecting first frame image data of the video data in a detection period of the parking event to obtain one or more vehicle objects and determining first detection information of the one or more vehicle objects in the first frame image data;
the vehicle object tracking module is used for tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in second frame image data of the video data; wherein the second frame of image data is image data ordered after the first frame of image data in the video data;
and the parking vehicle object determining module is used for determining the parking vehicle object with the parking event from the one or more vehicle objects according to the first detection information and the second detection information.
An electronic device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing a method of detecting a parking event as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of detecting a parking event as set forth above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, by acquiring the video data, in the detection period of the parking event, the first frame image data of the video data can be detected to obtain one or more vehicle objects, and the first detection information of the one or more vehicle objects is determined, and then the one or more vehicle objects can be tracked to obtain the second detection information of the one or more vehicle objects in the second frame image data of the video data, and the parking vehicle object with the parking event is determined from the one or more vehicle objects according to the first detection information and the second detection information, so that the judgment of the parking event is realized, the accuracy of the parking event detection method is improved, and the misjudgment is effectively avoided.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flow chart illustrating steps of a method for detecting a parking event according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating steps of another method for detecting a parking event according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an appearance feature extraction network structure according to an embodiment of the present invention;
FIG. 4a is a flowchart of parking event detection in a YOLOV3 network according to an embodiment of the present invention;
FIG. 4b is a flow chart of a parking event detection provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a device for detecting a parking event according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a method for detecting a parking event according to an embodiment of the present invention is shown, which may specifically include the following steps:
in practical application, the traffic video surveillance video can record the motion state of vehicles on roads, and the existence of parking events of vehicle objects in the video data can be further confirmed by acquiring the video data of the traffic video surveillance video which is being monitored or recorded.
Step 102, in a detection period of the parking event, detecting first frame image data of the video data to obtain one or more vehicle objects, and determining first detection information of the one or more vehicle objects in the first frame image data;
after acquiring the video data, in a detection period of the parking event, a first frame of image data in the video data may be detected, so that one or more vehicle objects may be obtained in the first frame of image data, and first detection information of the vehicle objects in the first frame of image data may be further determined.
The first frame of image data is one frame of image data in the video data.
In an example, the first detection information may include any one or more of:
the coordinate of the positioning frame, the width parameter of the positioning frame and the height parameter of the positioning frame.
In practical application, the parking event detection method can detect a moving vehicle by integrating manual design features, and comprises the following steps:
(1) obtaining a moving target in a monitoring video image by a background modeling method;
(2) adopting a historical sample of a monitoring video image as training data, and extracting the manually designed characteristics of the training data to train a vehicle target detection classifier;
(3) when judging whether parking behaviors exist in the monitoring video image, adopting a vehicle target detection classifier trained by training data to predict and judge the extracted moving target;
(4) when the moving object is a vehicle, the vehicle is further detected to be stopped through a parking judgment strategy.
In image data, environmental factors such as illumination or shooting angle can cause significant differences in the appearance of the same vehicle object across two frames; the extracted hand-designed features are easily influenced by these environmental factors, so vehicle detection based on hand-designed features is prone to misjudgment.
In an embodiment of the present invention, in the detection period of the parking event, the detecting the first frame image data of the video data to obtain one or more vehicle objects includes:
inputting first frame image data of the video data into a preset deep learning model in a detection period of the parking event, wherein the deep learning model is used for detecting the first frame image data by combining environmental information of the first frame image data; receiving one or more vehicle objects for the first frame of image data output by the deep learning model.
After the video data is acquired, the preset deep learning model can be combined with the environmental information of the image data to detect the image data, so that the first frame of image data of the video data can be input into the deep learning model to be detected, and then one or more vehicle objects in the first frame of image data output by the deep learning model can be received.
Due to the fact that the deep learning model can be combined with the environment information of the image data, the vehicle object can be determined more accurately, misjudgment is avoided, and robustness of the deep learning model in the detection environment and the detection scene is improved.
In an example, vehicle samples from different times and different environments can be collected to construct a sample set containing parking behavior, and the deep learning model for detecting image data can then be trained on the constructed sample set; the vehicle samples from different times and environments may include samples collected during the day, at night, and on sunny, cloudy and rainy days.
because the constructed sample set comprises vehicle samples under different time and environments, the deep learning model trained by the sample set can be combined with the environmental information in the image data to detect the vehicle object.
In an example, the deep learning model may be trained over a YOLOV3 network:
the deep learning model is trained in the YOLOV3 network, a weight loading strategy, a freezing parameter strategy and a learning rate attenuation strategy can be adopted in the training process, and meanwhile, the total loss is calculated through a loss function.
1) Pre-weight loading strategy:
the weight trained by the related target data set is used as the initial loading weight, so that the convergence speed of the network can be increased, and the result of network identification can be more accurate under the same training round number.
2) Freezing parameter strategy:
the YOLOV3 network may be composed of two parts, darknet53 and YOLO, and darknet53 is a trained parameter, and sets freeze _ body to 2, which means: during the initial phase of training, the darknet53 part is not trained for the moment, and the YOLO part is trained first.
When the networks of the YOLO part converge to a certain degree (a certain degree may be when the total loss function slope gradually approaches to 0), the whole networks are unfrozen, and then the whole darknet53 and the YOLO part are finely adjusted together, so that a more accurate recognition effect can be achieved.
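As a minimal sketch of this strategy, assuming a tf.keras YOLOv3 model built in the style of the common keras-yolo3 implementation (where freeze_body=1 conventionally freezes the 185 darknet53 backbone layers and freeze_body=2 freezes everything except the three YOLO output layers; these layer counts are conventions of that implementation, not values stated here):

```python
import tensorflow as tf

def set_freeze(model: tf.keras.Model, freeze_body: int = 2) -> None:
    """Freeze part of a YOLOv3 model for the first training phase."""
    if freeze_body == 2:
        first_trainable = len(model.layers) - 3  # only the 3 YOLO heads train
    elif freeze_body == 1:
        first_trainable = 185                    # darknet53 backbone frozen
    else:
        first_trainable = 0                      # unfreeze the whole network
    for i, layer in enumerate(model.layers):
        layer.trainable = i >= first_trainable
```

After the YOLO part converges, calling set_freeze(model, 0) and recompiling the model corresponds to the unfreezing and joint fine-tuning step described above.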
3) Learning rate decay strategy:
the smaller the learning rate is, the higher the achievable accuracy of network convergence is, but the longer the required training time is, the larger learning rate needs to be set at the beginning of training, and the learning rate is gradually reduced in the training process, so that the network accuracy and the training time can be ensured.
In one example, if the loss does not decrease for three consecutive epochs, the learning rate is reduced to 0.1 times its current value.
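Assuming a Keras training setup, this rule maps directly onto the stock ReduceLROnPlateau callback; the monitored quantity and the commented-out fit call are illustrative assumptions:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Multiply the learning rate by 0.1 whenever the training loss has not
# improved for three consecutive epochs, as described above.
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.1, patience=3, verbose=1)

# model.fit(x_train, y_train, epochs=50, callbacks=[reduce_lr])  # hypothetical call
```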
4) Loss function:
loss function: the YOLOv3 loss function is mainly composed of target position loss, target size loss, confidence loss and category loss, and formula (1) represents the total loss of YOLOv 3.
Target position loss (the part of Error_coord determined by the predicted x and y): calculated by the binary cross-entropy method; the first line of formula (2) represents the target position loss.
Target size loss (the part of Error_coord determined by the predicted w and h): calculated by the total square error; the second line of formula (2) represents the target size loss.
Confidence loss (Error_iou, determined by the predicted confidence): calculated by the binary cross-entropy method; the first term on the right of formula (3) represents the confidence loss when a target to be detected is present in the image data, and the second term represents the confidence loss when no target is present.
Category loss (Error_cls, determined by the predicted probability of each category): calculated by the binary cross-entropy method, as shown in formula (4).
The total loss function is obtained by accumulating losses of all parts, and the specific calculation formula is as follows:
Loss = Error_coord + Error_iou + Error_cls (1)
After the deep learning model is trained as above, the model weight file obtained in the training step can be loaded to detect the input traffic scene video data, with the three YOLO convolutional layers responsible for outputting the detection result. The output sizes y1, y2 and y3 of these convolutional layers are 13 × 13 × 255, 26 × 26 × 255 and 52 × 52 × 255 respectively.
Here 13 × 13, 26 × 26 and 52 × 52 are the sizes of the three scales: 13 × 13 represents a total of 13 × 13 = 169 grid cells at that scale, 26 × 26 represents 26 × 26 = 676 grid cells, and 52 × 52 represents 52 × 52 = 2704 grid cells.
The meaning of 255 is as follows: YOLOV3 predicts 3 boxes per grid cell, each box requires five basic parameters (x, y, w, h, confidence) plus 80 class probabilities (COCO dataset), so 3 × (5 + 80) = 255.
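The arithmetic behind these output sizes can be checked with a short, purely illustrative sketch (not code from the patent):

```python
# Depth of each YOLOv3 output tensor: 3 anchor boxes per grid cell, each
# carrying (x, y, w, h, confidence) plus 80 COCO class scores.
num_anchors, num_box_params, num_classes = 3, 5, 80
depth = num_anchors * (num_box_params + num_classes)  # 3 * (5 + 80) = 255

for grid in (13, 26, 52):
    print(f"{grid} x {grid} x {depth} -> {grid * grid} grid cells")
# 13 x 13 x 255 -> 169 grid cells
# 26 x 26 x 255 -> 676 grid cells
# 52 x 52 x 255 -> 2704 grid cells
```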
Step 103, tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in second frame image data of the video data;

After obtaining the one or more vehicle objects in the first frame of image data, the vehicle objects may be tracked up to the second frame of image data, so that second detection information of those vehicle objects in the second frame of image data may be obtained, wherein the second frame of image data may be image data ordered after the first frame of image data in the video data and separated from it by a first preset number of frames.
In an example, the second detection information may include any one or more of:
the coordinate of the positioning frame, the width parameter of the positioning frame and the height parameter of the positioning frame.
In an embodiment of the present invention, the tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in a second frame of image data of the video data includes:
for each vehicle object in the first frame of image data, sequentially tracking in image data in the video data that is ordered after the first frame of image data to determine the vehicle object in a second frame of image data of the video data; second detection information of the vehicle object in the second frame image data is determined.
After obtaining one or more vehicle objects in the first frame of image data, since the vehicle objects may be in a moving state, a vehicle number may be established for each vehicle object in the first frame of image data, and each vehicle object may be tracked by its vehicle number through the image data ordered after the first frame in the video data, so that the same vehicle object can be identified across different frames.
When tracking reaches the second frame of image data, vehicle objects carrying these vehicle numbers exist among the vehicle objects detected in the second frame, so the vehicle object in the second frame with the same vehicle number as in the first frame can be identified, and the second detection information of that same-numbered vehicle object in the second frame of image data can then be obtained.
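A minimal bookkeeping sketch of this per-number lookup; the helper names and data layout are illustrative assumptions, not structures prescribed by the patent:

```python
# first_boxes maps a vehicle number to its positioning frame (x, y, w, h)
# recorded in the first frame image data of the detection period.
first_boxes = {}

def record_first_frame(tracks):
    """tracks: iterable of (vehicle_number, box) pairs from the tracker."""
    for vehicle_number, box in tracks:
        first_boxes[vehicle_number] = box

def pair_with_second_frame(tracks):
    """Return {vehicle_number: (first_box, second_box)} for every vehicle
    number seen both in the first frame and in the second frame K frames
    later; numbers that appear in only one of the frames are skipped."""
    return {vid: (first_boxes[vid], box)
            for vid, box in tracks if vid in first_boxes}
```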
Step 104, determining a parking vehicle object with a parking event from the one or more vehicle objects according to the first detection information and the second detection information.
After the second detection information is determined, the first detection information of the one or more vehicle objects in the first frame image data and the corresponding second detection information in the second frame image data are both available; this amounts to detection information for the same vehicle in two frames of the same video data separated by the first preset number of frames, so the parking vehicle object with the parking event can be determined from the first and second detection information of the same vehicle object in the two frames.
As an example, the first detection information or the second detection information includes any one or more of:
the coordinate of the positioning frame, the width parameter of the positioning frame and the height parameter of the positioning frame.
In practical application, a parking vehicle object can also be determined by analyzing the tracking trajectory of each vehicle object obtained by vehicle tracking; however, in a scene with dense vehicles the trajectories of many vehicle objects are incomplete and unreliable, which easily causes parking event misjudgment.
In an embodiment of the present invention, the determining, from the one or more vehicle objects, a parking vehicle object with a parking event according to the first detection information and the second detection information includes:
determining a change rate of a positioning frame overlapping area of the one or more vehicle objects according to the first detection information and the second detection information, wherein the change rate of the positioning frame overlapping area is a change rate of the positioning frame area of the one or more vehicle objects in the first frame image data and the positioning frame area in the second frame image data; and determining the vehicle object with the positioning frame overlapping area change rate larger than the preset positioning frame overlapping area change rate as the parking vehicle object with the parking event.
After the second detection information is determined, the change rate of the positioning frame overlapping area of one or more vehicle objects can be determined through the first detection information and the second detection information, and when the change rate of the positioning frame overlapping area of a certain vehicle object is larger than the preset change rate of the positioning frame overlapping area, the certain vehicle object can be determined to be a parking vehicle object with a parking event.
By determining the change rate of the overlapping area of the positioning frames of the same vehicle object in the first frame of image data and the second frame of image data, the multiple vehicle objects can be simultaneously detected in the same scene, and compared with the trajectory analysis, the method is simpler and more efficient, and the misjudgment of the parking event is effectively avoided.
In one example, the description of the parking event may include a physical description or an image description:
(1) physical description of parking event
The parking process of a normally running vehicle mainly consists of three phases: a deceleration phase, a stopping phase and an acceleration phase. Assuming that the acceleration of the vehicle is constant during the stopping and starting processes, the motion of a single stopping vehicle is described with a model in which v_0 is the initial speed of the vehicle, a_1 is the acceleration during the deceleration phase, and a_2 is the acceleration during the starting phase.
(2) Parking event definition
By analyzing the parking motion models of a single vehicle and a plurality of vehicles, the conditions of the vehicles for parking events are as follows:
parking = { v_b(t_i) = 0 },  t_i ∈ (T_{i-1}, T)

where v_b(t_i) is the speed of vehicle b at time t_i, and T is the parking time period.
(3) Image description of parking event
The motion characteristics of the parked vehicle object in the presence of a parking event are apparent, with the vehicle speed being about 0, while the detected position of the vehicle object is substantially unchanged in the video data.
When a parking detection algorithm is designed, a detection parameter which can be designed is vehicle running speed, and the running speed is calculated depending on vehicle number information and vehicle positioning frame position coordinates of a vehicle object obtained by vehicle target detection and vehicle tracking.
In the image representation, the calculation of vehicle speed can be converted into the calculation of the positioning frame overlap area change rate: the overlap area change rate between positioning frames of the same vehicle object separated by the first preset number of frames (i.e., by a first preset time) is computed, and this before-and-after overlap area change rate of the same vehicle object is defined in the algorithm as the IOU_CE parameter, calculated as:

IOU_CE = overlap_area1 / (area1 + area2 − overlap_area1)
where area1 is the positioning frame area computed from the positioning frame coordinates of a given vehicle in the first frame image data, area2 is the positioning frame area of the same vehicle in the second frame image data after time T (K frames later, i.e., separated by the first preset number of frames), and overlap_area1 is the overlap area between the vehicle's positioning frame in the first frame image data and its positioning frame in the second frame image data.
The parking event is expressed by image analysis as:
in the formula, is tiOverlap area, Thre, of vehicle b at time instant and preceding time instantCET is the time period of parking in order to determine the threshold value (i.e. the preset positioning frame overlap area change rate) for parking.
In an example, after determining the parking vehicle object, the vehicle number of the parking vehicle may be obtained, it may be further determined whether the vehicle number is a vehicle number recorded in a parking list, and if the vehicle number is not recorded, the vehicle number and the second detection information are recorded in the parking list, and a parking event is reported.
In the embodiment of the invention, by acquiring the video data, in the detection period of the parking event, the first frame image data of the video data can be detected to obtain one or more vehicle objects, and the first detection information of the one or more vehicle objects is determined, and then the one or more vehicle objects can be tracked to obtain the second detection information of the one or more vehicle objects in the second frame image data of the video data, and the parking vehicle object with the parking event is determined from the one or more vehicle objects according to the first detection information and the second detection information, so that the judgment of the parking event is realized, the accuracy of the parking event detection method is improved, and the misjudgment is effectively avoided.
Referring to fig. 2, a flowchart illustrating steps of another method for detecting a parking event according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 202, in a detection period of the parking event, detecting first frame image data of the video data to obtain one or more vehicle objects, and determining first detection information of the one or more vehicle objects in the first frame image data;
step 203, for each vehicle object in the first frame of image data, sequentially tracking in image data ordered after the first frame of image data in the video data to determine the vehicle object in a second frame of image data of the video data;
in an embodiment of the present invention, there are nth frame image data and N +1 th frame image data between the first frame image data and the second frame image data, and for each vehicle object in the first frame image data, tracking is performed in the image data ordered after the first frame image data in the video data in sequence to determine the vehicle object in the second frame image data of the video data, including:
acquiring a target vehicle object in the Nth frame of image data; determining, for the target vehicle object, degree-of-association information with each vehicle object in the N +1 th frame of image data; and determining the target vehicle object in the (N + 1) th frame of image data according to the association degree information.
After determining the one or more vehicle objects in the first frame image data, tracking may be performed for each vehicle object; through tracking, the same vehicle object can be identified in two adjacent frames, or in two frames separated by a second preset number of frames (the second preset number of frames being smaller than the first preset number of frames).
Starting from the first frame image data, association degree information is calculated between the vehicle objects detected in two frames separated by the second preset number of frames; when the association degree information between two vehicle objects in the two frames conforms to the preset association degree information, the two can be determined to be the same vehicle object in different frames.
Specifically, since the Nth frame image data and the (N+1)th frame image data exist between the first frame image data and the second frame image data, the target vehicle object in the Nth frame image data may be acquired, and the association degree information between the target vehicle object and each vehicle object in the (N+1)th frame image data may be determined; the target vehicle object can then be found in the (N+1)th frame image data according to the association degree information, realizing tracking of the target vehicle object from the Nth frame image data to the (N+1)th frame image data. By repeating this tracking between frames separated by the second preset number of frames, tracking from the first frame image data to the second frame image data can be accomplished for the one or more vehicle objects in the first frame image data, so as to determine the tracked vehicle objects in the second frame image data.
In practical application, the same vehicle object in two frames of image data can be determined by motion matching alone on adjacent frames; however, when parking event detection combines detection and tracking in this way, detection efficiency and tracking performance cannot both be satisfied at the same time.
In an embodiment of the present invention, target tracking is performed using both motion matching and appearance matching, wherein the determining, for the target vehicle object, the association degree information with each vehicle object in the (N+1)th frame of image data includes:
for the target vehicle object, determining motion matching information and apparent matching information of each vehicle object in the (N + 1) th frame of image data; and determining the association degree information of each vehicle object in the (N + 1) th frame of image data by adopting the motion matching information and the apparent matching information.
After the target vehicle object of the image data of the nth frame is acquired, the motion matching information and the appearance matching information of the target vehicle object and each vehicle object in the image data of the (N + 1) th frame can be determined for the target vehicle object, and then the association degree information of the target vehicle object and each vehicle object in the image data of the (N + 1) th frame can be determined through the motion matching information and the appearance matching information.
Because motion matching alone cannot provide both detection efficiency and tracking performance, combining motion matching with appearance matching improves tracking performance while preserving the detection efficiency of parking event detection.
In one example, multi-target tracking mainly addresses matching of the same target vehicle object from frame to frame, which may include motion matching, appearance matching, and composite matching.
(1) The matching algorithm is the Hungarian algorithm; the process can be briefly described as follows (a code sketch follows the list):
a. store the computed distances between every object track i and every object detection j in a cost matrix C, and store in a matrix B the judgment of whether track i may be associated with detection j;
b. initialize the association set M, and initialize the set U of not-yet-matched object detections;
c. loop over each successfully matched track, selecting the set of candidate tracks that satisfy the conditions (cascade matching first ensures that the most recently seen targets are given the highest priority);
d. compute the set of successful associations with object detection j according to the minimum-cost algorithm;
e. update M with the successfully matched pairs, and remove the successfully matched object detections from U;
f. return the two sets M and U, and repeat the above process.
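In practice, the minimum-cost assignment in step d is commonly computed with SciPy's Hungarian-algorithm implementation; a sketch under that assumption, with the gating threshold and the M/U bookkeeping mirroring the steps above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(cost: np.ndarray, max_cost: float):
    """Assign tracks (rows) to detections (columns) at minimum total cost.

    Returns the matched (track, detection) pairs (the set M above) and the
    indices of unmatched detections (the set U); pairs whose cost exceeds
    max_cost are gated out and left unmatched."""
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_dets = {c for _, c in matches}
    unmatched = [c for c in range(cost.shape[1]) if c not in matched_dets]
    return matches, unmatched
```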
(2) In practical application, motion matching information can be obtained as follows:
Kalman filtering is used to predict the motion of the existing target (the target vehicle object obtained from the Nth frame of image data), giving a predicted result y_i; the Mahalanobis distance between this prediction and the detection result d_j (the detection information of a vehicle object in the (N+1)th frame image data) is then calculated as follows:

d^(1)(i, j) = (d_j − y_i)^T S_i^(−1) (d_j − y_i)

where d_j is the position of the jth positioning frame, y_i is the predicted target position of the ith tracker, and S_i is the covariance matrix between the detected position and the average tracked position.
If the Mahalanobis distance of an association is less than a specified threshold, the association between the target vehicle object obtained from the Nth frame of image data and the vehicle object in the (N+1)th frame of image data is considered successful (the two are most likely the same vehicle).
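A direct NumPy transcription of this motion metric; the 9.4877 gate is the 0.95 chi-square quantile for 4 degrees of freedom used by the public Deep SORT implementation, an assumption rather than a value stated in this text:

```python
import numpy as np

def mahalanobis_sq(d_j: np.ndarray, y_i: np.ndarray, S_i: np.ndarray) -> float:
    """Squared Mahalanobis distance d(1)(i, j) between the jth detection
    position d_j and the Kalman-predicted position y_i of tracker i,
    with S_i the associated covariance matrix."""
    diff = d_j - y_i
    return float(diff.T @ np.linalg.inv(S_i) @ diff)

# The association is accepted when the distance is below the gate.
GATING_THRESHOLD = 9.4877  # chi2.ppf(0.95, df=4), per the Deep SORT paper
```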
(3) In practical application, apparent matching information can be obtained as follows:
Appearance feature vectors of detected objects can be generated using a cosine metric, which produces a clustering effect: the cosine distance between appearance feature vectors generated by the same object in different pictures is small.
In the Deep SORT algorithm, the appearance metric between the target detection result and the target motion prediction result can be calculated with the cosine metric:

d^(2)(i, j) = min{ 1 − r_j^T r_k^(i) : r_k^(i) ∈ R_i }

where r_j is the feature vector of the jth vehicle object detection result in the (N+1)th frame of image data, and R_i is the set of feature vectors from the last 100 successful associations of the ith tracker.
In the process of calculating the feature vector, a convolutional neural network shown in fig. 3 is adopted to perform 128-dimensional appearance feature extraction on the detection target.
(4) In practical application, the association degree information can be obtained through comprehensive matching as follows:
The association degree information may be an association metric formed as a weighted combination of the Mahalanobis distance metric and the cosine similarity metric:

c_(i,j) = λ·d^(1)(i, j) + (1 − λ)·d^(2)(i, j)

where λ is a weighting coefficient; λ may be set to zero when the camera capturing the video data is itself in motion.
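A sketch of the appearance distance and the weighted association cost, assuming L2-normalised feature vectors (which the Deep SORT appearance network produces) and a gallery array holding a track's recent features row by row:

```python
import numpy as np

def appearance_distance(r_j: np.ndarray, gallery: np.ndarray) -> float:
    """d(2)(i, j): smallest cosine distance between the detection's feature
    vector r_j and the track's gallery of up to 100 past feature vectors;
    all vectors are assumed to be L2-normalised."""
    return float(np.min(1.0 - gallery @ r_j))

def association_cost(d1: float, d2: float, lam: float = 0.0) -> float:
    """c(i, j) = lambda * d(1)(i, j) + (1 - lambda) * d(2)(i, j);
    lam = 0 falls back to pure appearance matching, as suggested above
    when the camera itself moves."""
    return lam * d1 + (1.0 - lam) * d2
```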
In an example, after the target vehicle object is determined in the (N+1)th frame of image data, whether the (N+1)th frame of image data is the second frame of image data is judged: if it is not, tracking of the target vehicle object continues from the (N+1)th frame of image data; if it is, tracking ends, and the one or more vehicle objects detected in the first frame of image data are determined in the second frame of image data.
In the embodiment of the invention, by acquiring the video data, in the detection period of the parking event, the first frame image data of the video data can be detected to obtain one or more vehicle objects, and the first detection information of the one or more vehicle objects is determined, and then for each vehicle object in the first frame image data, the image data sequenced after the first frame image data in the video data can be sequentially tracked to determine the vehicle object in the second frame image data of the video data, and then the second detection information of the vehicle object in the second frame image data can be determined, so that the parking vehicle object with the parking event can be determined from the one or more vehicle objects according to the first detection information and the second detection information, the judgment of the parking event is realized, and the accuracy of the parking event detection method is improved, and the misjudgment is effectively avoided.
Fig. 4a is an example of a flow chart of parking event detection in the YOLOV3 network, and the following describes an embodiment of the present invention with reference to fig. 4 a:
1. construction of parking event data sets
The method can be used for collecting vehicle samples in different time and different environments, the construction of a parking event data set is realized by constructing a sample set containing parking behaviors, and a deep learning model for detecting image data can be further trained through the constructed parking event data set (sample set), wherein the vehicle samples in different time and different environments can comprise vehicle samples collected in various conditions of day, night, sunny days, cloudy days and rainy days.
2. Target detection based on deep learning
After the parking event dataset is constructed, deep learning-based target detection including YOLOV3 vehicle detection model training (deep learning model training) and YOLOV3 vehicle detection model prediction (detection of the first frame of image data by the deep learning model) can be performed according to the constructed parking event dataset.
3. Multi-target tracking
And tracking a plurality of vehicle objects in the first frame of image data by using a Deep Sort multi-vehicle tracking algorithm.
4. Parking discrimination strategy
And determining the parking vehicle object with the parking event in the one or more vehicle objects through the calculation of the change rate of the positioning frame overlapping area between K frames (separated by a first preset number of frames) of the same moving object (the one or more vehicle objects in the first frame image data and the determined vehicle object in the second frame image data).
5. Outputting the coordinate position information (i.e., the second detection information) of the stopped vehicle (the stopped vehicle object)
After determining that a parking vehicle object with a parking event exists, the second detection information of that parking vehicle object may be output; the second detection information may include the positioning frame coordinates, the positioning frame width parameter and the positioning frame height parameter.
fig. 4b is an example of a flow chart of parking event detection, and an embodiment of the present invention is described below with reference to fig. 4 b:
1) vehicle detection
The first frame of image data in the video image data may be detected by a YOLOv3 target detection algorithm, to obtain one or more vehicle objects, and determine first detection information of the one or more vehicle objects, where the first detection information may include coordinates of an upper left corner of the positioning frame, a width parameter of the positioning frame, and a height parameter.
2) Vehicle tracking
And tracking the vehicle object (acquiring the target vehicle object in the Nth frame of image data) detected by the YOLOv3 target detection algorithm by using the Deep_Sort algorithm, and outputting the vehicle tracking ID (vehicle number) and the tracking detection frame position (positioning frame coordinates, positioning frame width parameter and positioning frame height parameter) in the current frame (the (N+1)th frame of image data in the video data).
3) Whether the current frame is the initial frame
Judging whether the current frame (the Nth frame image data) is the initial frame (the first frame image data): if it is, recording the tracking output information of each vehicle object in the current frame (the tracking output information can comprise the vehicle number, positioning frame coordinates, positioning frame width parameter and positioning frame height parameter; when the current frame is determined to be the first frame image data, the first detection information of the one or more vehicle objects in the first frame image data can be determined); otherwise, executing the fourth step.
4) Whether K frames have elapsed (judging whether the (N+1)th frame image data is the second frame image data)
And judging whether a current frame (the (N + 1) th frame of image data) is separated from an initial frame (the first frame of image data) by K frames (a first preset number of frames), if so, executing the fifth step, and otherwise, executing the first step.
5) Judging whether the target tracking ID is the same as the last record
Comparing the vehicle tracking ID information output in the current frame with the recorded vehicle tracking ID information: if they are the same (determining the vehicle object in the second frame image data and determining the second detection information of the vehicle object in the second frame image data), executing the sixth step; otherwise, executing the first step.
6) Compute IOU_CE
Calculating the positioning frame overlap area change rate (IOU_CE) based on the first detection information and the second detection information.
7) Judge whether IOU_CE is greater than thre
Judging whether any vehicle in the current frame has an IOU_CE value greater than the preset positioning frame overlap area change rate (thre): if so, executing the eighth step; otherwise, executing the first step.
8) Judging whether the vehicle is a recorded parking vehicle
And judging whether the vehicle information meeting the conditions is recorded in the parking list, if not, recording the parking vehicle and reporting the event information, otherwise, performing the first step.
In practical application, the vehicle object with the change rate of the overlapping area of the positioning frame larger than the change rate of the overlapping area of the preset positioning frame is determined as the parking vehicle object with the parking event, after the parking vehicle object is determined, the vehicle number of the parking vehicle can be obtained, whether the vehicle number is the vehicle number recorded in the parking list or not can be further determined, if the vehicle number is not recorded, the vehicle number and the second detection information are recorded in the parking list, and the parking event is reported.
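Tying the fig. 4b loop together, a condensed sketch of one detection period; it reuses the iou_ce sketch above, and the tracks_at helper, K and thre values are illustrative assumptions rather than values fixed by the patent:

```python
parking_list = set()   # vehicle numbers already reported as parked

def run_detection_period(tracks_at, K=100, thre=0.8):
    """One parking-detection period.

    tracks_at(frame_idx) -> {vehicle_number: (x, y, w, h)} stands in for
    the YOLOv3 detection + Deep SORT tracking stages described above.
    """
    first = tracks_at(0)     # first frame image data: record IDs and boxes
    second = tracks_at(K)    # second frame image data, K frames later
    for vid, box in second.items():
        if vid not in first:
            continue         # tracking ID differs from the last record
        if iou_ce(first[vid], box) > thre and vid not in parking_list:
            parking_list.add(vid)    # record in the parking list
            print(f"parking event reported: vehicle {vid} at {box}")
```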
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a schematic structural diagram of a parking event detection device according to an embodiment of the present invention is shown, and specifically includes the following modules:
a video data obtaining module 501, configured to obtain video data;
a first frame image data detection module 502, configured to, in a detection period of the parking event, detect first frame image data of the video data to obtain one or more vehicle objects, and determine first detection information of the one or more vehicle objects in the first frame image data;
a vehicle object tracking module 503, configured to track the one or more vehicle objects, so as to obtain second detection information of the one or more vehicle objects in a second frame of image data of the video data; wherein the second frame of image data is image data ordered after the first frame of image data in the video data;
a parking vehicle object determining module 504, configured to determine, from the one or more vehicle objects, a parking vehicle object in which a parking event exists according to the first detection information and the second detection information.
As an example, the first detection information or the second detection information includes any one or more of:
the coordinate of the positioning frame, the width parameter of the positioning frame and the height parameter of the positioning frame.
In an embodiment of the present invention, the first frame image data detection module 502 includes:
a first frame image data input sub-module, configured to input first frame image data of the video data into a preset deep learning model in a detection period of the parking event, where the deep learning model is configured to detect the first frame image data in combination with environmental information of the first frame image data;
a receiving sub-module to receive one or more vehicle objects for the first frame of image data output by the deep learning model.
In an embodiment of the present invention, the vehicle object tracking module 503 includes:
a vehicle object determination sub-module, configured to, for each vehicle object in the first frame of image data, sequentially track in the image data ordered after the first frame of image data in the video data to determine the vehicle object in the second frame of image data of the video data;
a second detection information determination sub-module for determining second detection information of the vehicular object in the second frame image data.
In an embodiment of the present invention, where the Nth frame image data and the (N+1)th frame image data exist between the first frame image data and the second frame image data, the vehicle object determination sub-module includes:
a target vehicle object acquisition unit configured to acquire a target vehicle object in the nth frame of image data;
a relevance information determining unit for determining relevance information with respect to each vehicle object in the N +1 th frame of image data for the target vehicle object;
and the target vehicle object determining unit is used for determining the target vehicle object in the (N + 1) th frame of image data according to the association degree information.
In an embodiment of the present invention, the association degree information determining unit includes:
a matching information determination subunit operable to determine, for the target vehicle object, motion matching information and apparent matching information with respect to each vehicle object in the N +1 th frame image data;
and the association degree information determining subunit is used for determining association degree information of each vehicle object in the (N + 1) th frame of image data by adopting the motion matching information and the appearance matching information.
In an embodiment of the present invention, the parking vehicle object determination module 504 includes:
a positioning frame overlapping area change rate determining sub-module, configured to determine, according to the first detection information and the second detection information, a positioning frame overlapping area change rate of the one or more vehicle objects, respectively, where the positioning frame overlapping area change rate is a change rate of a positioning frame area of the one or more vehicle objects in the first frame of image data and a positioning frame area of the second frame of image data;
and the parking vehicle object determining submodule is used for determining the vehicle object with the positioning frame overlapping area change rate larger than the preset positioning frame overlapping area change rate as the parking vehicle object with the parking event.
In the embodiment of the invention, by acquiring the video data, in the detection period of the parking event, the first frame image data of the video data can be detected to obtain one or more vehicle objects, and the first detection information of the one or more vehicle objects is determined, and then the one or more vehicle objects can be tracked to obtain the second detection information of the one or more vehicle objects in the second frame image data of the video data, and the parking vehicle object with the parking event is determined from the one or more vehicle objects according to the first detection information and the second detection information, so that the judgment of the parking event is realized, the accuracy of the parking event detection method is improved, and the misjudgment is effectively avoided.
An embodiment of the present invention further provides an electronic device, which may include a processor, a memory, and a computer program stored on the memory and capable of running on the processor, wherein the computer program, when executed by the processor, implements the method for detecting a parking event as described above.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the method for detecting a parking event as described above.
Since the device embodiment is substantially similar to the method embodiment, its description is kept brief; for relevant details, refer to the corresponding parts of the method embodiment description.
The embodiments in this specification are described in a progressive manner, each focusing on its differences from the other embodiments; for the parts that are the same or similar among the embodiments, reference may be made between them.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises that element.
The method and device for detecting a parking event provided above have been described in detail. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, based on the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. A method of detecting a parking event, the method comprising:
acquiring video data;
in the detection period of the parking event, detecting a first frame of image data of the video data to obtain one or more vehicle objects, and determining first detection information of the one or more vehicle objects in the first frame of image data;
tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in a second frame of image data of the video data; wherein the second frame of image data is image data ordered after the first frame of image data in the video data;
and determining, from the one or more vehicle objects, a parking vehicle object for which a parking event exists according to the first detection information and the second detection information.
2. The method of claim 1, wherein the detecting, in the detection period of the parking event, the first frame of image data of the video data to obtain one or more vehicle objects comprises:
inputting the first frame of image data of the video data into a preset deep learning model in the detection period of the parking event, wherein the deep learning model is used for detecting the first frame of image data in combination with environmental information of the first frame of image data;
receiving the one or more vehicle objects output by the deep learning model for the first frame of image data.
3. The method according to claim 1 or 2, wherein the tracking of the one or more vehicle objects to obtain the second detection information of the one or more vehicle objects in the second frame of image data of the video data comprises:
for each vehicle object in the first frame of image data, sequentially tracking in the image data of the video data that is ordered after the first frame of image data, to determine the vehicle object in the second frame of image data of the video data;
and determining second detection information of the vehicle object in the second frame of image data.
4. The method of claim 3, wherein an Nth frame of image data and an (N+1)th frame of image data exist between the first frame of image data and the second frame of image data, and wherein the sequentially tracking, for each vehicle object in the first frame of image data, in the image data ordered after the first frame of image data in the video data to determine the vehicle object in the second frame of image data of the video data comprises:
acquiring a target vehicle object in the Nth frame of image data;
determining, for the target vehicle object, association degree information with respect to each vehicle object in the (N+1)th frame of image data;
and determining the target vehicle object in the (N+1)th frame of image data according to the association degree information.
5. The method according to claim 4, wherein the determining, for the target vehicle object, the association degree information with respect to each vehicle object in the (N+1)th frame of image data comprises:
determining, for the target vehicle object, motion matching information and appearance matching information with respect to each vehicle object in the (N+1)th frame of image data;
and determining the association degree information of each vehicle object in the (N+1)th frame of image data using the motion matching information and the appearance matching information.
6. The method according to claim 1 or 2, wherein the determining of a parking vehicle object for which a parking event exists from the one or more vehicle objects according to the first detection information and the second detection information comprises:
determining, according to the first detection information and the second detection information, a positioning frame overlapping area change rate for each of the one or more vehicle objects, wherein the positioning frame overlapping area change rate is the rate of change between a vehicle object's positioning frame area in the first frame of image data and its positioning frame area in the second frame of image data;
and determining a vehicle object whose positioning frame overlapping area change rate is greater than a preset positioning frame overlapping area change rate as a parking vehicle object for which a parking event exists.
7. The method of claim 6, wherein the first detection information or the second detection information comprises any one or more of:
coordinates of the positioning frame, a width parameter of the positioning frame, and a height parameter of the positioning frame.
8. A device for detecting a parking event, the device comprising:
the video data acquisition module is used for acquiring video data;
the first frame image data detection module is used for detecting a first frame of image data of the video data in a detection period of the parking event to obtain one or more vehicle objects, and determining first detection information of the one or more vehicle objects in the first frame of image data;
the vehicle object tracking module is used for tracking the one or more vehicle objects to obtain second detection information of the one or more vehicle objects in a second frame of image data of the video data; wherein the second frame of image data is image data ordered after the first frame of image data in the video data;
and the parking vehicle object determining module is used for determining, from the one or more vehicle objects, a parking vehicle object for which a parking event exists according to the first detection information and the second detection information.
9. An electronic device comprising a processor, a memory, and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing the method of detecting a parking event according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of detecting a parking event according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010874151.5A CN112183204A (en) | 2020-08-26 | 2020-08-26 | Method and device for detecting parking event |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010874151.5A CN112183204A (en) | 2020-08-26 | 2020-08-26 | Method and device for detecting parking event |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112183204A true CN112183204A (en) | 2021-01-05 |
Family
ID=73925132
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010874151.5A Pending CN112183204A (en) | 2020-08-26 | 2020-08-26 | Method and device for detecting parking event |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112183204A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184271A (en) * | 2015-09-18 | 2015-12-23 | 苏州派瑞雷尔智能科技有限公司 | Automatic vehicle detection method based on deep learning |
CN108229366A (en) * | 2017-12-28 | 2018-06-29 | 北京航空航天大学 | Deep learning vehicle-installed obstacle detection method based on radar and fusing image data |
CN110472496A (en) * | 2019-07-08 | 2019-11-19 | 长安大学 | A kind of traffic video intelligent analysis method based on object detecting and tracking |
CN110543827A (en) * | 2019-08-07 | 2019-12-06 | 上海师范大学 | multi-class vehicle detection method based on Gaussian mixture model and deep learning |
CN110517506A (en) * | 2019-08-26 | 2019-11-29 | 重庆同济同枥信息技术有限公司 | Method, apparatus and storage medium based on traffic video image detection Parking |
CN111523447A (en) * | 2020-04-22 | 2020-08-11 | 北京邮电大学 | Vehicle tracking method, device, electronic equipment and storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114038232A (en) * | 2021-10-28 | 2022-02-11 | 超级视线科技有限公司 | Roadside parking management method and system based on edge end calculation and storage combination |
CN114038232B (en) * | 2021-10-28 | 2022-09-20 | 超级视线科技有限公司 | Roadside parking management method and system based on edge end calculation and storage combination |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107818571B (en) | Ship automatic tracking method and system based on deep learning network and average drifting | |
KR101995107B1 (en) | Method and system for artificial intelligence based video surveillance using deep learning | |
US20170344855A1 (en) | Method of predicting traffic collisions and system thereof | |
JP2021514498A (en) | Target tracking method and device, storage medium | |
CN106934817B (en) | Multi-attribute-based multi-target tracking method and device | |
JP2016219004A (en) | Multi-object tracking using generic object proposals | |
CN104303193A (en) | Clustering-based object classification | |
CN102782734A (en) | Video surveillance system | |
US20090319560A1 (en) | System and method for multi-agent event detection and recognition | |
CN111931582A (en) | Image processing-based highway traffic incident detection method | |
CN111862145A (en) | Target tracking method based on multi-scale pedestrian detection | |
CN116311063A (en) | Personnel fine granularity tracking method and system based on face recognition under monitoring video | |
CN115482489A (en) | Improved YOLOv 3-based power distribution room pedestrian detection and trajectory tracking method and system | |
CN117292321B (en) | Motion detection method and device based on video monitoring and computer equipment | |
CN112183204A (en) | Method and device for detecting parking event | |
CN117765416A (en) | Microscopic track data mining method for non-motor vehicle | |
CN110503663B (en) | Random multi-target automatic detection tracking method based on frame extraction detection | |
CN117078718A (en) | Multi-target vehicle tracking method in expressway scene based on deep SORT | |
Zhang et al. | Vehicle detection and tracking in remote sensing satellite vidio based on dynamic association | |
Jiang et al. | Abnormal event detection based on trajectory clustering by 2-depth greedy search | |
Yadav et al. | An Efficient Yolov7 and Deep Sort are Used in a Deep Learning Model for Tracking Vehicle and Detection | |
CN113392678A (en) | Pedestrian detection method, device and storage medium | |
Marsiano et al. | Deep Learning-Based Anomaly Detection on Surveillance Videos: Recent Advances | |
Triwibowo et al. | Analysis of Classification and Calculation of Vehicle Type at APILL Intersection Using YOLO Method and Kalman Filter | |
Moayed et al. | Surveillance-based collision-time analysis of road-crossing pedestrians |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |