CN108133172B - Method for classifying moving objects in video and method and device for analyzing traffic flow - Google Patents

Method for classifying moving objects in video and method and device for analyzing traffic flow

Info

Publication number: CN108133172B
Application number: CN201711138992.4A
Authority: CN (China)
Prior art keywords: similarity, time, space, motion, motion tracks
Legal status: Active (granted)
Other versions: CN108133172A (application publication)
Other languages: Chinese (zh)
Inventors: 宋景选, 曹黎俊
Assignee: Beijing Huadaoxing Technology Co ltd
Application filed by Beijing Huadaoxing Technology Co ltd
Priority to CN201711138992.4A
Publication of CN108133172A
Application granted; publication of CN108133172B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/231 Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for classifying moving objects in a video, and a method and a device for analyzing traffic flow. The method comprises the following steps: extracting the motion track of each target object in the video; performing similarity modeling of the space-time relation on the motion tracks of the target objects, and determining the space-time similarity between the motion tracks; and clustering the motion tracks of the target objects by using the space-time similarity between them, to obtain groups of target objects that are close in time and space. The method and the device ensure fast and accurate monitoring of video data and effective detection of abnormal events.

Description

Method for classifying moving objects in video and method and device for analyzing traffic flow
Technical Field
The invention relates to the field of pattern recognition, machine learning and computer vision, in particular to a method for classifying moving objects in a video and a method and a device for analyzing traffic flow.
Background
With the rapid development of digital network technology, video images have become an important carrier of information. By the end of 2011, the number of video surveillance cameras in Guangdong Province alone had exceeded 1.1 million, and the monitoring data generated by these cameras keeps growing. The large amount of rich motion information contained in video image sequences has therefore attracted wide interest.
The traditional monitoring method relies on humans watching the video, which incurs ever-growing labor costs. The number of video channels a person can monitor simultaneously is limited, so large systems with many channels require many people monitoring at once. A more critical problem is that watching surveillance video for a long time tires the eyes and scatters the attention of monitoring personnel, so that much of the visual information in the picture goes unseen; monitoring efficiency and accuracy are low, and missed or false reports occur. In addition, in a large-scale camera network with massive monitoring data, useful data account for only a small fraction, and finding anomalies in such masses of data by hand is almost impossible. Therefore, although the human eye can directly distinguish moving objects in a video image sequence and extract motion information, relying on natural human intelligence alone to acquire and process motion information can no longer meet the needs of social development.
Reducing the labor cost of video data monitoring by replacing human vision with computer vision, which extracts, analyzes and understands motion information from the video image sequence and improves the efficiency, accuracy and effectiveness of monitoring, has therefore become a technical problem to be urgently solved.
Disclosure of Invention
In view of the above, the present invention has been made to provide a method and an apparatus for classifying moving objects in a video that overcome, or at least partially solve, the above problems.
In a first aspect, an embodiment of the present invention provides a method for classifying moving objects in a video, including:
extracting the motion trail of each target object in the video;
performing similarity modeling of a space-time relation on the motion tracks of the target objects, and determining the space-time similarity between the motion tracks of the target objects;
and clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are similar in time and space.
In some optional embodiments, the extracting the motion trajectory of each target object in the video specifically includes:
and detecting each target object from the video sequence, tracking each target object in a time sequence to obtain the spatial position of each target object in the time sequence, and acquiring the motion tracks of all the target objects.
In some optional embodiments, after obtaining the motion trajectories of all the target objects, the method further includes:
and preprocessing the motion trail of the target object to remove noise and the motion trail which does not meet the preset requirement.
In some optional embodiments, the modeling of the similarity of the spatio-temporal relationship of the motion trajectories of the target objects specifically includes:
respectively analyzing the space similarity and the time sequence relation information between every two target objects according to the motion tracks of the target objects;
and fusing the time sequence relationship information between every two motion tracks of each target object into the space similarity between every two motion tracks of each target object, and establishing a similarity model of the space-time relationship between every two motion tracks of each target object.
In some optional embodiments, analyzing the spatial similarity between each two motion trajectories of each target object specifically includes:
for a given track A and a given track B, calculating a spatial distance f (A, B) between the track A and the track B, and normalizing the spatial distance f (A, B) to obtain a final spatial similarity between the track A and the track B:
F(A,B)=exp(-f(A,B)/σ) (1)
in the above formula (1), σ is a normalized scale parameter.
In some optional embodiments, analyzing the time-series relationship information between each two motion trajectories of each target object specifically includes:
calculate the timing weight W between given track A and track B:
W=1/(1+exp(-C)) (2)
in the above formula (2), the calculation formula of the parameter C is as follows:
[Formula (3), giving the expression for the parameter C, appears only as an image in the original publication.]
In the above formula (3), Δd is the time-series coincidence degree between track A and track B, η is the ratio of the timing length of the shorter track to that of the longer track among track A and track B, η_t is the timing length ratio threshold between track A and track B, T_A and T_B are the timing lengths of motion tracks A and B respectively, and K is an exponential parameter.
In some optional embodiments, establishing a similarity model of a space-time relationship between each two motion trajectories of each target object specifically includes:
weighting the spatial similarity between every two target objects by utilizing the time sequence relation information between every two target objects to obtain a space-time similarity model between two motion tracks as follows:
[Formula (4), the space-time similarity model combining F and w, appears only as an image in the original publication.]
in the above equation (4), F and w represent the spatial similarity and the timing weight between the trajectories, respectively, and λ is a scale factor.
In a second aspect, an embodiment of the present invention provides a method for analyzing traffic flow, including:
extracting the motion trail of each vehicle object in the video;
carrying out similarity modeling of a space-time relation on the motion tracks of the vehicles, and determining the space-time similarity between the motion tracks of the vehicles;
clustering the motion tracks of the vehicles by using the space-time similarity between the motion tracks of the vehicles to obtain a vehicle group with similar time and space;
and carrying out statistical analysis on the vehicles in each vehicle group to obtain the traffic flow information in the preset area.
In a third aspect, an embodiment of the present invention provides an apparatus for classifying a moving object in a video, including:
the acquisition module is used for extracting the motion trail of each target object in the video;
the modeling module is used for carrying out similarity modeling of a space-time relation on the motion tracks of the target objects and determining the space-time similarity between the motion tracks of the target objects;
and the clustering module is used for clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are similar in time and space.
In some optional embodiments, the obtaining module is specifically configured to:
and detecting each target object from the video sequence, tracking each target object in a time sequence to obtain the spatial position of each target object in the time sequence, and acquiring the motion tracks of all the target objects.
In some optional embodiments, the obtaining module is further configured to:
and preprocessing the motion trail of the target object to remove noise and the motion trail which does not meet the preset requirement.
In some optional embodiments, the modeling module comprises:
the spatial similarity analysis submodule is used for respectively analyzing the spatial similarity between every two target objects according to the motion tracks of the target objects;
the time sequence relation information analysis submodule is used for respectively analyzing the time sequence relation information between every two target objects for the motion tracks of the target objects;
and the modeling submodule is used for fusing the time sequence relation information between every two motion tracks of each target object into the space similarity between every two motion tracks of each target object and establishing a similarity model of the space-time relation between every two motion tracks of each target object.
In some optional embodiments, the spatial similarity analysis submodule is specifically configured to:
for a given track A and a given track B, calculating a spatial distance f (A, B) between the track A and the track B, and normalizing the spatial distance f (A, B) to obtain a final spatial similarity between the track A and the track B:
F(A,B)=exp(-f(A,B)/σ) (1)
in the above formula (1), σ is a normalized scale parameter.
In some optional embodiments, the timing relationship information analysis sub-module is specifically configured to:
calculate the timing weight W between given track A and track B:
W=1/(1+exp(-C)) (2)
in the above formula (2), the calculation formula of the parameter C is as follows:
[Formula (3), giving the expression for the parameter C, appears only as an image in the original publication.]
In the above formula (3), Δd is the time-series coincidence degree between track A and track B, η is the ratio of the timing length of the shorter track to that of the longer track among track A and track B, η_t is the timing length ratio threshold between track A and track B, T_A and T_B are the timing lengths of motion tracks A and B respectively, and K is an exponential parameter.
In some optional embodiments, the modeling submodule is specifically configured to:
weighting the spatial similarity between every two target objects by utilizing the time sequence relation information between every two target objects to obtain a space-time similarity model between two motion tracks as follows:
[Formula (4), the space-time similarity model combining F and w, appears only as an image in the original publication.]
in the above equation (4), F and w represent the spatial similarity and the timing weight between the trajectories, respectively, and λ is a scale factor.
In a fourth aspect, an embodiment of the present invention provides an apparatus for analyzing a traffic flow, including:
the acquisition module is used for extracting the motion trail of each vehicle object in the video;
the modeling module is used for carrying out similarity modeling of the space-time relationship on the motion tracks of the vehicles and determining the space-time similarity between the motion tracks of the vehicles;
the clustering module is used for clustering the motion tracks of the vehicles by utilizing the space-time similarity among the motion tracks of the vehicles to obtain a vehicle group similar in time and space;
and the analysis module is used for carrying out statistical analysis on the vehicles in each vehicle group to obtain the traffic flow information in the preset area.
In a fifth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the steps of:
extracting the motion trail of each target object in the video;
performing similarity modeling of a space-time relation on the motion tracks of the target objects, and determining the space-time similarity between the motion tracks of the target objects;
and clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are similar in time and space.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
the spatial-temporal relation between the motion tracks is analyzed to obtain a spatial-temporal similarity model of the motion tracks, the motion tracks are clustered, and the dynamic group is detected and analyzed according to the clustering group, so that the labor cost is saved, and the rapidness, accuracy and effectiveness of dynamic group analysis can be better guaranteed.
The time sequence information between the motion tracks is fused into the space similarity, and the time-space similarity modeling is carried out, so that the motion tracks appearing in the same time sequence section and the motion tracks existing in different time sequence sections can be measured through a designed unified time-space similarity model, the time-space dynamic relation between the motion tracks in a longer time sequence range can be mined, more global dynamic analysis in the longer time sequence range can be better carried out on the dynamic group, and the more robust and higher-precision dynamic group detection performance can be obtained.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a method for classifying moving objects in a video according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a motion trajectory according to an embodiment of the present invention;
FIG. 3 is a flowchart of a similarity modeling method for a motion trajectory spatiotemporal relationship according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary timing relationship between motion trajectories according to an embodiment of the present invention;
fig. 5 is a flowchart of a specific implementation flow of the method for analyzing traffic flow according to the second embodiment of the present invention;
FIG. 6 is a flowchart of a method for analyzing spatial similarity of motion trajectories according to a second embodiment of the present invention;
FIG. 7 is a diagram illustrating a process of clustering motion trajectories according to a second embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an apparatus for classifying moving objects in a video according to an embodiment of the present invention;
FIG. 9 is a sub-module structure diagram of a modeling module in an embodiment of the invention;
fig. 10 is a schematic structural diagram of an analysis apparatus for traffic flow according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the problems in the prior art that manual detection of dynamic groups wastes manpower, is slow and inaccurate, is prone to missed and false reports, and that some tasks are almost impossible to complete manually, the embodiments of the present invention provide a dynamic group analysis method, which can implement dynamic group monitoring and analysis quickly and accurately and ensure the efficiency, reliability and effectiveness of the analysis.
Example one
An embodiment of the present invention provides a method for classifying moving objects in a video, a flow of which is shown in fig. 1, and the method includes the following steps:
step S101: and extracting the motion trail of each target object in the video.
And detecting each target object from the video sequence, tracking each target object in a time sequence to obtain the spatial position of each target object in the time sequence, and acquiring the motion tracks of all the target objects.
The target object may be an object appearing in the video to be analyzed, such as a vehicle or a crowd, or any other specific object to be analyzed.
In the above step S101, all moving target objects, which are key points, are detected from the video sequence, and the selection of the moving target object detection method takes into consideration the following factors:
1. the motion characteristics of the target object comprise the track characteristics and the speed characteristics of the motion of the target object;
2. the background characteristics comprise the speed of background change, the intensity of background light, the existence of shielding and the like;
3. the requirements of the detection system include real-time detection, detection precision, calculation amount and the like.
The detection method for moving target objects is selected by weighing the above factors; commonly used methods include the frame difference method, the background difference method and the optical flow method. For example, when the target object moves at a moderate speed (neither too fast nor too slow), the background changes quickly and the real-time requirement is high, the frame difference method, which uses a simple algorithm, can be selected; when the background changes slowly, the background difference method can be selected; when high accuracy is required and a large amount of computation is acceptable, the optical flow method can be selected.
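As an illustration of the simplest of these options, the following is a minimal frame-difference sketch in Python using OpenCV; the threshold, dilation and minimum-area values are illustrative assumptions, not parameters from the patent:

```python
import cv2

def detect_moving_objects(prev_frame, cur_frame, thresh=25, min_area=100):
    """Frame-difference detection: pixels that changed between consecutive
    frames are treated as foreground belonging to moving objects."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cur_gray, prev_gray)               # per-pixel change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)           # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of sufficiently large moving regions
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```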
The embodiment of the present invention may also implement detection on the target object by using other target object detection methods, which are not limited herein.
A detection method is selected according to the above analysis; each target object is detected from the video sequence as a key point, the key points are tracked in time sequence to obtain their spatial positions over time, and the motion tracks of all target objects are acquired.
FIG. 2 is an exemplary diagram of a motion trajectory. Track A is regarded as a combination of a series of observation points, A = {a_1, a_2, ..., a_N}, where a_i = (x_i, y_i, t_i) represents the spatio-temporal position of the i-th observation point: (x_i, y_i) are the coordinates of the i-th observation point in the spatial dimension, and t_i is its coordinate in the time dimension.
Due to illumination changes, occlusion and the like, long motion trajectories are difficult to obtain and the extracted trajectories contain a certain amount of noise. Therefore, after the motion trajectories of all target objects are obtained, they are preferably preprocessed to remove noise and trajectories that do not meet preset requirements, which makes the detection result more accurate.
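As a concrete sketch of the observation-point representation and of this preprocessing step (the minimum-length and minimum-displacement criteria below are illustrative assumptions, not values from the patent):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Trajectory:
    xy: np.ndarray  # (N, 2) spatial coordinates (x_i, y_i) of the observation points
    t: np.ndarray   # (N,) time coordinate t_i (frame index) of each observation point

def preprocess(tracks, min_len=10, min_disp=5.0):
    """Remove noisy tracks: too few observations, or near-zero net displacement."""
    kept = []
    for tr in tracks:
        if len(tr.t) < min_len:
            continue  # too short to be a reliable motion track
        if np.linalg.norm(tr.xy[-1] - tr.xy[0]) < min_disp:
            continue  # wandering/stationary track, treated as noise
        kept.append(tr)
    return kept
```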
Step S102: and performing similarity modeling of the space-time relationship on the motion tracks of the target objects, and determining the space-time similarity between the motion tracks of the target objects.
This step S102 can be implemented in the following manner: respectively analyzing the spatial similarity between every two target objects and the time sequence relation information between every two target objects according to the motion tracks of the target objects; and fusing the time sequence relationship information between every two motion tracks of each target object into the space similarity between every two motion tracks of each target object, and establishing a similarity model of the space-time relationship between every two motion tracks of each target object.
The similarity modeling method of the motion trajectory space-time relationship can be shown by referring to fig. 3, and comprises the following steps:
step S1021: and analyzing the spatial similarity between every two motion tracks of each target object.
For a given track A and a given track B, calculating a spatial distance f (A, B) between the track A and the track B, and normalizing the spatial distance f (A, B) to obtain a final spatial similarity between the track A and the track B:
F(A,B)=exp(-f(A,B)/σ) (1)
in the above formula (1), σ is a normalized scale parameter.
The spatial similarity between two motion trajectories is measured by their spatial distance. Commonly used spatial distances include the Euclidean distance, the Mahalanobis distance, the Minkowski distance, the Hausdorff distance and the cosine distance, among others. An appropriate similarity measure is selected based on the type and characteristics of the motion trajectories and the particular needs of the analysis problem.
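A minimal sketch of this normalization, with the spatial distance f left pluggable; the mean Euclidean distance over resampled points below is only an illustrative stand-in (the second embodiment of this patent uses a modified Hausdorff distance instead):

```python
import numpy as np

def spatial_similarity(f_ab, sigma=1.0):
    """Formula (1): F(A, B) = exp(-f(A, B) / sigma), a similarity in (0, 1]."""
    return float(np.exp(-f_ab / sigma))

def mean_euclidean(xy_a, xy_b, n=20):
    """Illustrative spatial distance: mean distance over n resampled point pairs."""
    ia = np.linspace(0, len(xy_a) - 1, n).astype(int)
    ib = np.linspace(0, len(xy_b) - 1, n).astype(int)
    return float(np.mean(np.linalg.norm(xy_a[ia] - xy_b[ib], axis=1)))
```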
Step S1022: and analyzing the time sequence relation information between every two motion tracks of each target object.
In step S1022, the timing relationship information may be represented by timing weights between the motion trajectories; further, the timing weight W between a given track A and a given track B may be calculated by:
W=1/(1+exp(-C)) (2)
in the above formula (2), the calculation formula of the parameter C is as follows:
[Formula (3), giving the expression for the parameter C, appears only as an image in the original publication.]
In the above formula (3), Δd is the time-series coincidence degree between track A and track B, η is the ratio of the timing length of the shorter track to that of the longer track among track A and track B, η_t is the timing length ratio threshold between track A and track B, T_A and T_B are the timing lengths of motion tracks A and B respectively, and K is an exponential parameter.
The timing relationship between motion trajectories can be divided into two parts: a time-sequence overlapping part and a time-sequence non-overlapping part. Referring to FIG. 4, which shows three motion tracks A, B and C: track A and track B have both overlapping and non-overlapping parts in time sequence, track A and track C have no overlapping part in time sequence, and track B and track C have both overlapping and non-overlapping parts in time sequence.
Here Δd is defined as the time-sequence coincidence degree between the motion tracks. In the specific calculation, the track that appears first is denoted A and the track that appears later is denoted B; then
Δd = t_A_end − t_B_start (4)
In the above formula (4), t_A_end and t_B_start respectively represent the end instant of track A and the start instant of track B. The instants are measured in frames; for example, t_A_end = 27 indicates that track A ends at frame 27.
Δd > 0 indicates that the two motion trajectories overlap in time sequence; for example, in FIG. 4 the time-sequence coincidence degree between track A and track B is Δd = 27 − 9 = 18 > 0, and that between track B and track C is Δd = 42 − 34 = 8 > 0. Δd < 0 indicates that there is no timing overlap between the two motion trajectories; for example, in FIG. 4 the time-sequence coincidence degree between track A and track C is Δd = 27 − 34 = −7 < 0.
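The sketch below derives Δd and η from the frame spans of two tracks and feeds a parameter C into the sigmoid of formula (2). Since the exact expression for C in formula (3) survives only as an image, the combination of Δd, η, η_t and K used here is a stand-in assumption:

```python
import numpy as np

def timing_weight(t_a, t_b, eta_t=0.5, K=2.0):
    """Timing weight of formula (2): W = 1 / (1 + exp(-C))."""
    first, second = (t_a, t_b) if t_a[0] <= t_b[0] else (t_b, t_a)
    delta_d = first[-1] - second[0]          # formula (4): overlap in frames
    TA = t_a[-1] - t_a[0] + 1                # timing length of track A
    TB = t_b[-1] - t_b[0] + 1                # timing length of track B
    eta = min(TA, TB) / max(TA, TB)          # shorter-to-longer length ratio
    # Stand-in for formula (3): overlap scaled by the length ratio relative
    # to the threshold eta_t (an assumption; the patent's expression differs).
    C = delta_d * (eta / eta_t) ** K
    return 1.0 / (1.0 + np.exp(-C))
```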
It should be noted that the steps S1021 and S1022 are independent of each other, and there is no strict chronological order.
Step S1023: and establishing a similarity model of the space-time relationship between every two motion tracks of each target object.
In step S1023, the temporal relationship information between each two motion trajectories of each target object may be used to weight the spatial similarity between each two motion trajectories, so as to obtain a spatial-temporal similarity model between two motion trajectories:
[Formula (5), the space-time similarity model combining F and w, appears only as an image in the original publication.]
in the above equation (5), F and w represent the spatial similarity and the timing weight between the trajectories, respectively, and λ is a scale factor.
According to the method, a similarity model of the space-time relationship between every two motion tracks of all the target objects is established.
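Because formula (5) is likewise only available as an image, the following sketch fuses F and w with a simple exponential weighting as a stand-in, with λ in its stated role as a scale factor:

```python
import numpy as np

def spatiotemporal_similarity(F, w, lam=1.0):
    """Stand-in for formula (5): the timing weight w modulates the spatial
    similarity F; the patent's exact fusion is not recoverable from the text."""
    return F * np.exp(-lam * (1.0 - w))  # w -> 1 keeps F intact, w -> 0 damps it
```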
Step S103: and clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are close in time and space.
The motion trajectories of the target objects are clustered using the space-time similarity between them. Commonly used motion trajectory clustering methods include partitional clustering, hierarchical clustering, density clustering, neural network clustering and statistical clustering; a suitable clustering method can be selected according to the actual situation of the target objects to be analyzed, and other suitable clustering methods may also be chosen.
With the selected clustering method, motion trajectories with similar behaviors, i.e. trajectories that are close in space and short in time-sequence interval, are clustered into one class, while motion trajectories with different behaviors, i.e. trajectories that are far apart in space or long in time-sequence interval, are separated. This yields groups of target objects that are close in time and space, for use in group analysis of the moving objects.
Step S104: and performing group analysis on the target object according to the group of the target object.
For example, the target objects may be subjected to global group analysis according to the group of target objects obtained by the above method, such as analyzing the group density, the moving speed, the motion law, and the like of the target objects. Meanwhile, abnormal events including position abnormality, azimuth abnormality, speed abnormality and the like can be detected through analysis.
In the method of the embodiment, the spatiotemporal relationship between the motion tracks is analyzed to obtain the spatiotemporal similarity model of the motion tracks, the motion tracks are clustered, and the dynamic group is detected and analyzed according to the clustering group, so that the labor cost is saved, and the rapidity, the accuracy and the effectiveness of the analysis of the dynamic group can be better ensured.
The time sequence information between the motion tracks is fused into the space similarity, and the time-space similarity modeling is carried out, so that the motion tracks appearing in the same time sequence section and the motion tracks existing in different time sequence sections can be measured through a designed unified time-space similarity model, the time-space dynamic relation between the motion tracks in a longer time sequence range can be mined, more global dynamic analysis in the longer time sequence range can be better carried out on the dynamic group, and the more robust and higher-precision dynamic group detection performance can be obtained.
Example two
The second embodiment of the present invention provides a specific implementation in which the method for classifying moving objects in a video is applied to the application scenario of traffic flow analysis. The flow of the traffic flow analysis method, shown in FIG. 5, specifically includes the following steps:
step S501: an input video sequence is acquired.
And inputting a video sequence of traffic flow information shot by the road camera. The video sequence may be input in real time or may be integrated. The traffic flow analysis device acquires the video sequence of the traffic flow information shot by the input road camera, and further determines the time sequence length of the video sequence analyzed each time, wherein the time sequence length of the video sequence analyzed each time can be determined according to the following modes:
the first method is as follows: according to the requirement of the real-time performance of analysis, if the requirement of the real-time performance of analysis is high, for example, a vehicle close to a violation state, a violation vehicle or an accident vehicle needs to be detected in real time so as to be processed in time, the shorter time sequence length of the video sequence analyzed each time is set on the premise of ensuring that the number of motion tracks has enough representativeness; if only the density characteristic, the moving speed, the motion rule and other characteristics of the vehicle need to be analyzed, but the requirement on the real-time performance of the analysis is not high, the longer time sequence length of the video sequence analyzed each time can be set.
The second method comprises the following steps: determining according to the change speed of an actual video sequence, for example, if the change speed of the video sequence is high, the motion trajectory in the video sequence with a shorter time sequence length can meet the representative requirement, and the shorter time sequence length of the video sequence analyzed each time can be set; if the video sequence changes slowly, the time sequence length of each analyzed video sequence is set to be longer so as to meet the requirement that the motion track of the video sequence is representative.
The third method comprises the following steps: and comprehensively determining the time sequence length of the video sequence analyzed each time according to the real-time requirement and the change speed of the actual video sequence.
In this embodiment, the time sequence length of the traffic flow video analyzed each time can be set to 200 frames, the first 200 frames of video sequences are extracted, the motion tracks of the vehicles are extracted from the video sequences, preprocessing is performed, pairwise temporal-spatial similarity analysis is performed on the preprocessed motion tracks, and then clustering analysis is performed on all the tracks; and then, performing the same operation on the subsequent 200 frames of video sequences, and so on until all the video sequences are analyzed.
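A sketch of this chunked analysis loop follows; all step functions are placeholders named after the steps described below, not a fixed API:

```python
CHUNK = 200  # time-sequence length (frames) analyzed per pass in this embodiment

def analyze_traffic_video(frames):
    results = []
    for start in range(0, len(frames), CHUNK):
        chunk = frames[start:start + CHUNK]
        tracks = extract_vehicle_tracks(chunk)               # step S502
        tracks = preprocess(tracks)                          # step S503
        sim = pairwise_spatiotemporal_similarity(tracks)     # steps S504-S506
        groups = hierarchical_cluster(sim)                   # step S507
        results.append(analyze_groups(tracks, groups))       # step S508
    return results
```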
Step S502: and extracting the motion trail of each vehicle object in the video.
Selection of the vehicle detection method: since a vehicle may move fast, move slowly, or even stand still, and the background changes with light, occlusion and the like, the optical flow method is selected to detect the vehicles in the video sequence.
The change of the gray level of each pixel of the image is regarded as motion, the rate of change of the gray level is a velocity vector, and all the velocity vectors form an optical flow field. If there is no moving object in the image, the optical flow vector of the entire image varies uniformly. If a moving object exists in the image, its optical flow vector necessarily differs from the optical flow vectors of the neighboring background, so the position of the moving object can be detected. The optical flow method can detect vehicles moving slowly or even standing still, and is less affected by environmental changes.
And detecting the vehicles in the video sequence by an optical flow method, tracking the vehicles on a time sequence to obtain the spatial positions of the vehicles on the time sequence, and acquiring the motion tracks of all the vehicles.
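A minimal sketch of optical-flow-based detection and tracking using OpenCV's pyramidal Lucas-Kanade tracker (one common optical flow implementation; the patent does not prescribe a specific one, and the feature-detector parameters are illustrative):

```python
import cv2
import numpy as np

def extract_vehicle_tracks(frames):
    """Track feature points across a frame sequence with LK optical flow."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    tracks = [[p.ravel()] for p in pts]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        for track, p, ok in zip(tracks, nxt, status.ravel()):
            if ok:  # point successfully followed into the current frame
                track.append(p.ravel())
        pts, prev = nxt, gray
    # each track is the spatial position of one key point over the time sequence
    return [np.array(track) for track in tracks]
```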
Step S503: and preprocessing the motion trail of each vehicle.
The motion trajectories of the vehicles are preprocessed to remove noise and trajectories that do not meet preset requirements. For example, a wandering trajectory does not accord with the actual situation and is preset as a trajectory that does not meet the requirements.
Step S504: and analyzing the spatial similarity between every two motion tracks of each vehicle.
Because the motion trajectories of the target vehicles differ in length and shape, the modified Hausdorff distance can be used to measure the spatial distance between them. The spatial similarity of motion trajectories is mainly expressed by their spatial neighbor relation and their speed relation, so the spatial Euclidean distance, related to the spatial neighbor relation, and the cosine distance, related to speed, are introduced into the calculation of the modified Hausdorff distance.
For a given trajectory a and trajectory B, the method for analyzing the spatial similarity is shown in fig. 6, and comprises the following steps:
step S5041: and searching the nearest point on the track B corresponding to each observation point on the track A.
The trajectory is regarded as a combination of a series of observation points A = {a_1, a_2, ..., a_{N_A}}, where a_i = (p_i, t_i) represents the spatio-temporal position of the i-th observation point and p_i its spatial coordinates. For a given trajectory A and trajectory B, for each observation point p_i^A on motion trail A, the point ε(i) closest to it on motion trail B is searched in the following way:
ε(i) = argmin_j ||p_i^A − p_j^B|| (6)
In the above formula (6), p_i^A and p_j^B represent the coordinate point of the i-th observation point on motion trajectory A and the j-th point on motion trajectory B, respectively.
Step S5042: the modified Hausdorff distance from trajectory A to trajectory B is calculated.
After formula (6) is used to find, on trajectory B, the closest point ε(i) of every observation point p_i^A on trajectory A, the modified Hausdorff distance from trajectory A to trajectory B is given by formula (7).
[Formula (7) appears only as an image in the original publication; it combines the spatial Euclidean distance between corresponding points with the cosine distance between their speeds.]
In the above formula (7), N_A is the number of observation points in trajectory A, β is the coefficient balancing the spatial Euclidean distance and the cosine distance in the formula, and v_i and v_ε(i) are the moving speeds of p_i^A and p_ε(i)^B, respectively.
Step S5043: and searching the nearest point on the track A corresponding to each observation point on the track B.
Since the modified Hausdorff distance d(A, B) from trajectory A to trajectory B and the modified Hausdorff distance d(B, A) from trajectory B to trajectory A are asymmetric, d(A, B) and d(B, A) need to be calculated separately when computing the modified Hausdorff distance between trajectory A and trajectory B.
Using the method of step S5041, for each observation point on trajectory B, the closest point on trajectory A is searched.
Step S5044: the modified Hausdorff distance from trajectory B to trajectory A is calculated.
Using the method of step S5042, the modified Hausdorff distance d(B, A) from trajectory B to trajectory A is calculated.
Steps S5041 to S5042 and steps S5043 to S5044 have no fixed order: either pair may be performed first, or both may be performed simultaneously.
Step S5045: the spatial distance between trajectory a and trajectory B is calculated.
The minimum of the modified Hausdorff distance d(A, B) from trajectory A to trajectory B and the modified Hausdorff distance d(B, A) from trajectory B to trajectory A is taken as the spatial distance between trajectory A and trajectory B:
f(A,B)=min(d(A,B),d(B,A)) (8)
step S5046: and calculating the spatial similarity between the track A and the track B.
The spatial distance f(A, B) between trajectory A and trajectory B is exponentially normalized according to formula (1) to obtain the spatial similarity F(A, B) between trajectory A and trajectory B.
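Putting steps S5041 to S5046 together, the sketch below follows the structure described above; because formula (7) is only described in prose, the additive Euclidean-plus-cosine combination is a reconstruction, not necessarily the patent's exact expression:

```python
import numpy as np

def modified_hausdorff(xy_a, v_a, xy_b, v_b, beta=1.0):
    """Directed distance d(A, B): for each point on A, take its closest point on B
    (formula (6)) and combine spatial distance with the velocity cosine distance."""
    total = 0.0
    for i in range(len(xy_a)):
        j = int(np.argmin(np.linalg.norm(xy_b - xy_a[i], axis=1)))  # epsilon(i)
        cos = np.dot(v_a[i], v_b[j]) / (
            np.linalg.norm(v_a[i]) * np.linalg.norm(v_b[j]) + 1e-9)
        total += np.linalg.norm(xy_a[i] - xy_b[j]) + beta * (1.0 - cos)
    return total / len(xy_a)

def spatial_distance(xy_a, v_a, xy_b, v_b, beta=1.0):
    """Formula (8): f(A, B) = min(d(A, B), d(B, A))."""
    return min(modified_hausdorff(xy_a, v_a, xy_b, v_b, beta),
               modified_hausdorff(xy_b, v_b, xy_a, v_a, beta))
```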
Step S505: and analyzing the time sequence relation information between every two motion tracks of each vehicle.
Step S504 and step S505 have no fixed order; either may be performed first, or they may be performed simultaneously.
The timing-weight influence parameter C between two tracks is calculated according to formula (3), and the timing weight between the two tracks is then calculated using formula (2).
Step S506: and performing similarity modeling of the space-time relationship on the motion tracks of the vehicles to determine the space-time similarity between the motion tracks of the vehicles.
The motion-trajectory space-time similarity model is established according to formula (5).
Step S507: and clustering the motion tracks of the vehicles by utilizing the space-time similarity between the motion tracks of the vehicles to obtain a vehicle group with similar time and space.
The movement-track data of traffic flow is the result of slow aggregation and evolution from small groups into large groups, so this embodiment adopts a bottom-up hierarchical clustering method to obtain the cluster groups of traffic flow movement tracks.
The embodiment of the invention can also realize clustering of the motion tracks of the vehicles by other clustering methods, which is not limited herein.
The specific clustering process of hierarchical clustering is shown in fig. 7a and 7b, that is, a single motion trajectory is taken as one class at the beginning of clustering, then corresponding trajectories are gradually fused into the same class according to the spatio-temporal compactness degree between the trajectories, and as the clustering hierarchy increases, smaller clusters are fused into larger clusters.
In order to stop clustering automatically in different scenes according to the conditions of each scene, that is, to determine the cluster number of hierarchical clustering automatically from the compactness among motion tracks, the criterion of minimum intra-class difference and maximum inter-class difference is introduced: the clustering effect is optimal when the inter-class difference is maximum and the intra-class difference is minimum. The optimal cluster number is calculated as follows:
C_N = S_B / S_W (9)
In the above formula (9), S_B and S_W respectively represent the inter-class difference and the intra-class difference of the cluster groups in the current hierarchical clustering state.
As the clustering hierarchy increases, both the inter-class difference and the intra-class difference become larger. C_N is calculated at each clustering level; the clustering effect is optimal when C_N is maximal, and clustering stops automatically when C_N reaches its maximum.
FIG. 7a shows the lower levels of the hierarchical clustering of motion trajectories, and FIG. 7b the higher levels:
At the first level of hierarchical clustering, each single motion track t_1, t_2, t_3, ... is taken as a class of its own.
At the second level, according to the degree of closeness between every two motion tracks, two tracks with high space-time similarity are fused into the same class: according to the space-time similarity model between every two motion tracks obtained in step S506, i.e. the space-time similarity calculated by formula (5), two tracks whose space-time similarity meets the threshold set for the current level are classified into one class. For example, the space-time similarity between t_1 and t_2 meets the threshold, so they are merged into class C_{1,2}; no track whose space-time similarity with t_3 lies within the specified range is found, so t_3 forms a class by itself.
At the third level, the clusters obtained at the second level are compared pairwise: it is judged whether the space-time similarity between every track contained in one cluster and every track contained in the other cluster meets the space-time similarity threshold of the current level. If so, the two clusters are merged into one class; if any pair of tracks fails the threshold of the current level, the two clusters cannot be merged. For example, cluster C_{1,2} contains tracks t_1 and t_2, whose similarities with t_3 are analyzed respectively; if both the space-time similarity between t_1 and t_3 and that between t_2 and t_3 meet the threshold of the current level, cluster C_{1,2} and t_3 are merged into C_{1,2,3}, and so on.
By analogy, higher-level hierarchical clustering continues, and at each level the ratio of the inter-class difference to the intra-class difference of the cluster groups in the current state is calculated.
At level N−2, clusters D_1, D_2, D_3, ... are obtained according to the method of the third level.
At level N−1, following the same method, the space-time similarity between all tracks contained in cluster D_1 and all tracks contained in cluster D_2 is found to meet the threshold of the current level, so D_1 and D_2 are merged into class D_{1,2}; no cluster is found whose tracks all meet the current threshold with all tracks of cluster D_3, so D_3 remains a class by itself.
At level N, the space-time similarity between every pair of tracks in cluster D_{1,2} and cluster D_3 is found to meet the threshold of the current level, so D_{1,2} and D_3 are merged into one class, giving the cluster group track path_1. The ratio C_N of the inter-class difference to the intra-class difference of the cluster groups at this level is calculated and found to be maximal, i.e. the clustering effect is optimal; clustering stops, and the optimal cluster grouping is obtained.
In the above method, the space-time similarity threshold set for the current level is gradually reduced as the clustering level increases.
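A sketch of this bottom-up clustering with the automatic stop of formula (9). The intra-class and inter-class difference measures (mean within-cluster and between-cluster dissimilarity, with dissimilarity taken as 1 minus similarity) and the complete-linkage merge rule are illustrative choices, since the patent does not spell them out:

```python
import numpy as np

def ratio_inter_intra(sim, clusters):
    """Illustrative C_N = S_B / S_W on a pairwise similarity matrix."""
    within = [1 - sim[i, j] for c in clusters for i in c for j in c if i < j]
    between = [1 - sim[i, j]
               for ai, a in enumerate(clusters) for b in clusters[ai + 1:]
               for i in a for j in b]
    if not within or not between:
        return 0.0
    return float(np.mean(between) / np.mean(within))

def hierarchical_cluster(sim):
    """Bottom-up clustering; keep the level at which C_N of formula (9) peaks."""
    clusters = [[i] for i in range(sim.shape[0])]  # level 1: one track per class
    best, best_cn = list(clusters), 0.0
    while len(clusters) > 1:
        # Merge the pair of clusters whose least-similar track pair is highest
        # (complete linkage on similarity).
        pairs = [(min(sim[i, j] for i in a for j in b), a, b)
                 for ai, a in enumerate(clusters) for b in clusters[ai + 1:]]
        _, a, b = max(pairs, key=lambda p: p[0])
        clusters = [c for c in clusters if c is not a and c is not b] + [a + b]
        cn = ratio_inter_intra(sim, clusters)
        if cn > best_cn:
            best, best_cn = list(clusters), cn
    return best
```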
Step S508: and carrying out statistical analysis on the vehicles in each vehicle group to obtain the traffic flow information in the preset area.
With the traffic-flow movement-track cluster groups obtained by the above method, global group analysis is performed on the traffic flow to learn traffic information such as traffic flow density, speed, movement direction and queuing degree; by detecting abnormal behaviors through analysis and giving alarms in time, violations can be effectively monitored and traffic accidents prevented.
For the traffic flow analysis task, as urbanization accelerates, traffic problems become increasingly serious, and ultra-large-scale traffic flow data can no longer be handled by manually watching videos. In this embodiment, space-time similarity modeling is performed on vehicles moving on the road according to the space-time relation of their movement tracks; the movement tracks are clustered with a hierarchical clustering method based on the space-time similarity model, and the cluster number is determined automatically in each scene from the tightness among the movement tracks. Intelligent analysis of massive traffic flow data is thus realized: global information of the traffic flow is obtained, specific traffic flow information is extracted from the monitoring video quickly and accurately, and abnormal events are detected through analysis. The efficiency, accuracy and effectiveness of traffic flow detection are thereby ensured.
Based on the same inventive concept, the embodiment of the invention also provides a device for classifying moving objects in videos, which can be applied to various fields of human behavior patterns, traffic logistics, emergency evacuation management, animal habit analysis, marketing, computer geometry, simulation and the like.
The structure of the device is shown in fig. 8, and comprises:
an obtaining module 801, configured to extract a motion trajectory of each target object in a video;
the modeling module 802 is configured to perform similarity modeling of a space-time relationship on the motion trajectories of the target objects, and determine a space-time similarity between the motion trajectories of the target objects;
the clustering module 803 is configured to cluster the motion trajectories of the target objects by using the spatial-temporal similarity between the motion trajectories of the target objects, so as to obtain a group of target objects that are similar in time and space.
Preferably, the obtaining module 801 is specifically configured to detect each target object from the video sequence, track each target object in a time sequence, obtain a spatial position of each target object in the time sequence, and obtain motion trajectories of all target objects.
Preferably, as shown in fig. 9, the modeling module 802 includes:
the spatial similarity analysis submodule 8021 is configured to analyze spatial similarities between each two of the motion trajectories of the target objects;
the time sequence relation information analysis submodule 8022 is configured to analyze time sequence relation information between each two of the motion tracks of the target objects;
the modeling submodule 8023 is configured to fuse the time-sequence relationship information between each two of the motion trajectories of each target object into the spatial similarity between each two of the motion trajectories, and establish a similarity model of the time-space relationship between each two of the motion trajectories of each target object.
Preferably, the spatial similarity analysis submodule 8021 is specifically configured to: for a given track A and a given track B, calculating a spatial distance f (A, B) between the track A and the track B, and normalizing the spatial distance f (A, B) to obtain a final spatial similarity between the track A and the track B:
F(A,B)=exp(-f(A,B)/σ) (1)
in the above formula (1), σ is a normalized scale parameter.
Preferably, the timing relationship information analysis sub-module 8022 is specifically configured to:
calculate the timing weight W between given track A and track B:
W=1/(1+exp(-C)) (2)
in the above formula (2), the calculation formula of the parameter C is as follows:
[Formula (3), giving the expression for the parameter C, appears only as an image in the original publication.]
In the above formula (3), Δd is the time-series coincidence degree between track A and track B, η is the ratio of the timing length of the shorter track to that of the longer track among track A and track B, η_t is the timing length ratio threshold between track A and track B, T_A and T_B are the timing lengths of motion tracks A and B respectively, and K is an exponential parameter.
Preferably, the modeling submodule 8023 is specifically configured to:
weighting the spatial similarity between every two target objects by utilizing the time sequence relation information between every two target objects to obtain a space-time similarity model between two motion tracks as follows:
[Formula (5), the space-time similarity model combining F and w, appears only as an image in the original publication.]
in the above equation (5), F and w represent the spatial similarity and the timing weight between the trajectories, respectively, and λ is a scale factor.
Based on the same inventive concept, an embodiment of the present invention further provides an apparatus for analyzing a traffic flow, where the apparatus is shown in fig. 10, and includes:
an obtaining module 1001, configured to extract a motion trajectory of each vehicle object in a video;
the modeling module 1002 is configured to perform similarity modeling of a space-time relationship on the motion trajectories of the vehicles, and determine a space-time similarity between the motion trajectories of the vehicles;
the clustering module 1003 is configured to cluster the motion trajectories of the vehicles by using the spatial-temporal similarity between the motion trajectories of the vehicles to obtain a vehicle group with similar time and space;
the analysis module 1004 is configured to perform statistical analysis on vehicles in each vehicle group to obtain traffic flow information in a preset area.
With regard to the apparatus for classifying moving objects in video and the apparatus for analyzing traffic flow in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be elaborated herein.
Based on the same inventive concept, an embodiment of the present invention further provides a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor, the processor is enabled to perform the above-described method for classifying moving objects in a video, including:
extracting the motion trail of each target object in the video;
performing similarity modeling of a space-time relation on the motion tracks of the target objects, and determining the space-time similarity between the motion tracks of the target objects;
and clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are similar in time and space.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems or similar devices that manipulates and transforms data represented as physical (e.g., electronic) quantities within the processing system's registers and memories into other data similarly represented as physical quantities within the processing system's memories, registers or other such information storage, transmission or display devices. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to how the term "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a non-exclusive "or".

Claims (8)

1. A method for classifying moving objects in a video, comprising:
extracting the motion trail of each target object in the video;
respectively analyzing, according to the motion tracks of the target objects, the spatial similarity and the time-sequence relation information between every two of the motion tracks;
performing an exponential operation on the spatial similarity between every two motion tracks by utilizing the time-sequence relation information of the motion tracks of the target objects according to the following formula (1), so as to obtain a space-time similarity model between the two motion tracks and determine the space-time similarity between the motion tracks of the target objects:
[Formula (1), rendered as an image in the original, expressing S as an exponential operation on F governed by w and λ]
in the above formula (1), S, F and w represent the space-time similarity, the space similarity and the time sequence weight between two motion tracks, respectively, and λ is a scale factor;
and clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are similar in time and space.
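The analytic form of formula (1) survives only as an image in the source, so the sketch below adopts one form consistent with the surrounding description — an exponential operation on the spatial similarity F in which a small timing weight w sharply suppresses the result — namely S = F ** (λ / w). This specific expression is an assumption for illustration, not the patent's formula.

```python
def space_time_similarity(spatial_sim: float, timing_weight: float,
                          lam: float = 1.0) -> float:
    """Illustrative stand-in for formula (1): S = F ** (lam / w).
    With F in (0, 1] and w in (0, 1], weak temporal overlap (small w)
    enlarges the exponent and drives S toward 0, while w close to 1
    leaves S near F ** lam. The exact formula (1) is not reproduced
    in the source text; this shape is assumed."""
    return spatial_sim ** (lam / timing_weight)
```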
2. The method according to claim 1, wherein the extracting the motion trajectory of each target object in the video specifically comprises:
and detecting each target object from the video sequence, tracking each target object in a time sequence to obtain the spatial position of each target object in the time sequence, and acquiring the motion tracks of all the target objects.
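Claim 2 does not fix a particular detector or tracker; as one minimal illustration of how per-frame detections accumulate into motion tracks, the sketch below uses greedy nearest-neighbour association. This is an assumed stand-in, not the patent's tracking method.

```python
from typing import Dict, List, Tuple

def track_detections(frames: List[List[Tuple[float, float]]],
                     max_jump: float = 40.0) -> Dict[int, List[Tuple[int, float, float]]]:
    """Greedy nearest-neighbour tracker: each detection (x, y) in a frame
    extends the closest track that was updated in the previous frame and
    lies within max_jump pixels; otherwise it starts a new track."""
    tracks: Dict[int, List[Tuple[int, float, float]]] = {}
    next_id = 0
    for t, detections in enumerate(frames):
        taken = set()
        for (x, y) in detections:
            best, best_d = None, max_jump
            for tid, tr in tracks.items():
                lt, lx, ly = tr[-1]
                d = ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5
                if lt == t - 1 and tid not in taken and d < best_d:
                    best, best_d = tid, d
            if best is None:
                tracks[next_id] = [(t, x, y)]
                next_id += 1
            else:
                tracks[best].append((t, x, y))
                taken.add(best)
    return tracks
```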
3. The method according to claim 1, wherein analyzing spatial similarity between each two of the motion trajectories of the target objects specifically comprises:
for a given track A and a given track B, calculating a spatial distance f (A, B) between the track A and the track B, and normalizing the spatial distance f (A, B) to obtain a final spatial similarity between the track A and the track B:
F(A,B)=exp(-f(A,B)/σ) (2)
in the above formula (2), σ is a normalized scale parameter.
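Formula (2) fixes the normalization but leaves the spatial distance f(A, B) unspecified at this point; the sketch below assumes, purely for illustration, the mean Euclidean distance between time-aligned track points.

```python
import math
from typing import List, Tuple

def spatial_similarity(track_a: List[Tuple[int, float, float]],
                       track_b: List[Tuple[int, float, float]],
                       sigma: float = 50.0) -> float:
    """F(A, B) = exp(-f(A, B) / sigma), as in formula (2). Here f(A, B)
    is assumed to be the mean Euclidean distance over the aligned prefix
    of the two tracks; sigma is the normalized scale parameter."""
    n = min(len(track_a), len(track_b))
    if n == 0:
        return 0.0
    f = sum(math.hypot(ax - bx, ay - by)
            for (_, ax, ay), (_, bx, by) in zip(track_a, track_b)) / n
    return math.exp(-f / sigma)
```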
4. The method according to claim 1, wherein analyzing the time-series relationship information between each two of the motion trajectories of the target objects specifically comprises:
calculating the timing weight w between a given track A and a track B:
w=1/(1+exp(-C)) (3)
in the above formula (3), the calculation formula of the parameter C is as follows:
[Formula (4), rendered as an image in the original, defining C in terms of Δd, η, η_t and k]
in the above formula (4), Δd is the time-series coincidence ratio between trajectory A and trajectory B, η is the ratio of the time-series length of the shorter of trajectory A and trajectory B to that of the longer, η_t is the timing length ratio threshold between trajectory A and trajectory B, T_A and T_B (shown only as images in the original) are the time-series lengths of motion tracks A and B respectively, and k is an exponential parameter.
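Formula (3) is a logistic squashing of C into (0, 1); formula (4) for C itself survives only as an image, so the sketch below substitutes an assumed C that grows with the temporal coincidence ratio Δd and is damped when the length ratio η falls below the threshold η_t, raised to the exponent k — the qualitative shape the claim describes, not the patent's exact expression.

```python
import math

def timing_weight(delta_d: float, eta: float, eta_t: float, k: float = 2.0) -> float:
    """w = 1 / (1 + exp(-C)), as in formula (3). C is an assumed stand-in
    for the unreproduced formula (4): a larger temporal overlap delta_d
    and a length ratio eta at or above the threshold eta_t push C (and
    hence w) up; eta below eta_t damps it via the exponent k."""
    c = delta_d * min(eta / eta_t, 1.0) ** k
    return 1.0 / (1.0 + math.exp(-c))
```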
5. A method for analyzing a traffic flow, comprising:
extracting the motion trail of each vehicle object in the video;
respectively analyzing, according to the motion tracks of the vehicles, the spatial similarity and the time-sequence relation information between every two of the motion tracks;
performing an exponential operation on the spatial similarity between every two motion tracks by utilizing the time-sequence relation information of the motion tracks of the vehicles according to the following formula (1), so as to obtain a space-time similarity model between the two motion tracks and determine the space-time similarity between the motion tracks of the vehicles:
[Formula (1), rendered as an image in the original, expressing S as an exponential operation on F governed by w and λ]
in the above formula (1), S, F and w represent the space-time similarity, the space similarity and the time sequence weight between two motion tracks, respectively, and λ is a scale factor;
clustering the motion tracks of the vehicles by using the space-time similarity between the motion tracks of the vehicles to obtain a vehicle group with similar time and space;
and carrying out statistical analysis on the vehicles in each vehicle group to obtain the traffic flow information in the preset area.
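As one hedged reading of the final statistics step, the sketch below reports vehicle count and mean speed per clustered group; the choice of these two statistics and the pixel-per-frame units are assumptions, since the claim does not enumerate the traffic flow metrics.

```python
import math
from typing import Dict, List, Tuple

Trajectory = List[Tuple[int, float, float]]  # (frame index, x, y)

def traffic_flow_stats(vehicle_groups: List[List[Trajectory]],
                       fps: float = 25.0) -> List[Dict[str, float]]:
    """Per-group vehicle count and mean speed (pixels per second),
    computed from consecutive trajectory samples; an illustrative
    reading of 'statistical analysis', not the patent's exact metrics."""
    stats = []
    for group in vehicle_groups:
        speeds = []
        for track in group:
            for (f0, x0, y0), (f1, x1, y1) in zip(track, track[1:]):
                dt = (f1 - f0) / fps
                if dt > 0:
                    speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        stats.append({
            "vehicle_count": float(len(group)),
            "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        })
    return stats
```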
6. An apparatus for classifying moving objects in a video, comprising:
the acquisition module is used for extracting the motion trail of each target object in the video;
the modeling module is used for respectively analyzing, according to the motion tracks of the target objects, the spatial similarity and the time-sequence relation information between every two of the motion tracks; and for performing an exponential operation on the spatial similarity between every two motion tracks by utilizing the time-sequence relation information of the motion tracks of the target objects according to formula (1) [rendered as an image in the original], so as to obtain a space-time similarity model between the two motion tracks and determine the space-time similarity between the motion tracks of the target objects, wherein S, F and w in the formula respectively represent the space-time similarity, the spatial similarity and the time-sequence weight between two motion tracks, and λ is a scale factor;
and the clustering module is used for clustering the motion tracks of the target objects by utilizing the space-time similarity between the motion tracks of the target objects to obtain a group of the target objects which are similar in time and space.
7. The apparatus of claim 6, wherein the modeling module comprises:
the spatial similarity analysis submodule is used for respectively analyzing the spatial similarity between every two of the motion tracks according to the motion tracks of the target objects;
the time-sequence relation information analysis submodule is used for respectively analyzing the time-sequence relation information between every two of the motion tracks according to the motion tracks of the target objects;
and the modeling submodule is used for fusing the time sequence relation information between every two motion tracks of each target object into the space similarity between every two motion tracks of each target object and establishing a similarity model of the space-time relation between every two motion tracks of each target object.
8. An apparatus for analyzing a traffic flow, comprising:
the acquisition module is used for extracting the motion trail of each vehicle object in the video;
the modeling module is used for respectively analyzing, according to the motion tracks of the vehicles, the spatial similarity and the time-sequence relation information between every two of the motion tracks; and for performing an exponential operation on the spatial similarity between every two motion tracks by utilizing the time-sequence relation information of the motion tracks of the vehicles according to formula (1) [rendered as an image in the original], so as to obtain a space-time similarity model between the two motion tracks and determine the space-time similarity between the motion tracks of the vehicles, wherein S, F and w in the formula respectively represent the space-time similarity, the spatial similarity and the time-sequence weight between two motion tracks, and λ is a scale factor;
the clustering module is used for clustering the motion tracks of the vehicles by utilizing the space-time similarity among the motion tracks of the vehicles to obtain a vehicle group similar in time and space;
and the analysis module is used for carrying out statistical analysis on the vehicles in each vehicle group to obtain the traffic flow information in the preset area.
CN201711138992.4A 2017-11-16 2017-11-16 Method for classifying moving objects in video and method and device for analyzing traffic flow Active CN108133172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711138992.4A CN108133172B (en) 2017-11-16 2017-11-16 Method for classifying moving objects in video and method and device for analyzing traffic flow

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711138992.4A CN108133172B (en) 2017-11-16 2017-11-16 Method for classifying moving objects in video and method and device for analyzing traffic flow

Publications (2)

Publication Number Publication Date
CN108133172A CN108133172A (en) 2018-06-08
CN108133172B (en) 2022-04-05

Family

ID=62389151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711138992.4A Active CN108133172B (en) 2017-11-16 2017-11-16 Method for classifying moving objects in video and method and device for analyzing traffic flow

Country Status (1)

Country Link
CN (1) CN108133172B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109461106A (en) * 2018-10-11 2019-03-12 浙江公共安全技术研究院有限公司 A kind of multidimensional information perception processing method
EP3853789A1 (en) * 2018-10-16 2021-07-28 Huawei Technologies Co., Ltd. Improved trajectory matching based on use of quality indicators empowered by weighted confidence values
CN111209769B (en) * 2018-11-06 2024-03-08 深圳市商汤科技有限公司 Authentication system and method, electronic device and storage medium
CN109684916B (en) * 2018-11-13 2020-01-07 恒睿(重庆)人工智能技术研究院有限公司 Method, system, equipment and storage medium for detecting data abnormity based on path track
CN109784260A (en) * 2019-01-08 2019-05-21 深圳英飞拓科技股份有限公司 A kind of zone flow real-time statistical method and system based on video structural
CN110751164B (en) * 2019-03-01 2022-04-12 西安电子科技大学 Old man travel abnormity detection method based on location service
CN110727756A (en) * 2019-10-18 2020-01-24 北京明略软件系统有限公司 Management method and device of space-time trajectory data
CN112037245B (en) * 2020-07-22 2023-09-01 杭州海康威视数字技术股份有限公司 Method and system for determining similarity of tracked targets
CN111898592B (en) * 2020-09-29 2020-12-29 腾讯科技(深圳)有限公司 Track data processing method and device and computer readable storage medium
CN112562315B (en) * 2020-11-02 2022-04-01 鹏城实验室 Method, terminal and storage medium for acquiring traffic flow information
CN113112857A (en) * 2020-11-05 2021-07-13 包赛花 Intelligent parking lot vehicle parking guiding method and artificial intelligence server
CN112925948A (en) * 2021-02-05 2021-06-08 上海依图网络科技有限公司 Video processing method and device, medium, chip and electronic equipment thereof
CN113255518B (en) * 2021-05-25 2021-09-24 神威超算(北京)科技有限公司 Video abnormal event detection method and chip
CN113971782B (en) * 2021-12-21 2022-04-19 云丁网络技术(北京)有限公司 Comprehensive monitoring information management method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134222A (en) * 2014-07-09 2014-11-05 郑州大学 Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
CN104657424A (en) * 2015-01-21 2015-05-27 段炼 Clustering method for interest point tracks under multiple temporal and spatial characteristic fusion
CN106383868A (en) * 2016-09-05 2017-02-08 电子科技大学 Road network-based spatio-temporal trajectory clustering method
CN107301254A (en) * 2017-08-24 2017-10-27 电子科技大学 A kind of road network hot spot region method for digging

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855361B2 (en) * 2010-12-30 2014-10-07 Pelco, Inc. Scene activity analysis using statistical and semantic features learnt from object trajectory data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134222A (en) * 2014-07-09 2014-11-05 郑州大学 Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
CN104657424A (en) * 2015-01-21 2015-05-27 段炼 Clustering method for interest point tracks under multiple temporal and spatial characteristic fusion
CN106383868A (en) * 2016-09-05 2017-02-08 电子科技大学 Road network-based spatio-temporal trajectory clustering method
CN107301254A (en) * 2017-08-24 2017-10-27 电子科技大学 A kind of road network hot spot region method for digging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Classification and Recognition of Moving Object Trajectories; Pan Qiming; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series; 15 July 2006 (No. 7); I138-354 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2788481C1 (en) * 2022-06-16 2023-01-19 Александр Сергеевич Потапов Method for automatic analysis of visual data and an intelligent portable video system for its implementation

Also Published As

Publication number Publication date
CN108133172A (en) 2018-06-08

Similar Documents

Publication Publication Date Title
CN108133172B (en) Method for classifying moving objects in video and method and device for analyzing traffic flow
KR101995107B1 (en) Method and system for artificial intelligence based video surveillance using deep learning
KR102189262B1 (en) Apparatus and method for collecting traffic information using edge computing
Henschel et al. Multiple people tracking using body and joint detections
Basharat et al. Learning object motion patterns for anomaly detection and improved object detection
Xu et al. Video anomaly detection and localization based on an adaptive intra-frame classification network
Ryan et al. Scene invariant multi camera crowd counting
Bansod et al. Crowd anomaly detection and localization using histogram of magnitude and momentum
US9811755B2 (en) Object monitoring system, object monitoring method, and monitoring target extraction program
CN107194360B (en) Reverse current object identifying method, apparatus and system
CN106295598A (en) A kind of across photographic head method for tracking target and device
KR101472674B1 (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
CN112989962A (en) Track generation method and device, electronic equipment and storage medium
Soleimanitaleb et al. Single object tracking: A survey of methods, datasets, and evaluation metrics
KR20190088087A (en) method of providing categorized video processing for moving objects based on AI learning using moving information of objects
CN116311063A (en) Personnel fine granularity tracking method and system based on face recognition under monitoring video
Abdulghafoor et al. A novel real-time multiple objects detection and tracking framework for different challenges
KR101214858B1 (en) Moving object detecting apparatus and method using clustering
Orru et al. Detecting anomalies from video-sequences: a novel descriptor
Ryan et al. Scene invariant crowd counting and crowd occupancy analysis
Yadav et al. Video anomaly detection for pedestrian surveillance
Uke et al. Motion tracking system in video based on extensive feature set
Song et al. A low false negative filter for detecting rare bird species from short video segments using a probable observation data set-based EKF method
Huang et al. Motion characteristics estimation of animals in video surveillance
Yang et al. Anomalous behavior detection in crowded scenes using clustering and spatio-temporal features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant