CN110503032B - Individual important place detection method based on track data of monitoring camera - Google Patents


Info

Publication number
CN110503032B
Authority
CN
China
Prior art keywords
stay
individual
camera
time
staying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910775022.8A
Other languages
Chinese (zh)
Other versions
CN110503032A (en)
Inventor
邓敏
谌恺祺
罗靓
石岩
刘慧敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN201910775022.8A
Publication of CN110503032A (application)
Application granted
Publication of CN110503032B (grant)
Legal status: Active
Anticipated expiration

Classifications

    • G06F 18/2321 — Pattern recognition; analysing; clustering techniques; non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06V 20/41 — Scenes; scene-specific elements in video content; higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/52 — Context or environment of the image; surveillance or monitoring of activities, e.g. for recognising suspicious objects


Abstract

The invention provides a method for detecting an individual's important places based on the trajectory data of monitoring cameras, comprising the following steps: step 1, according to the camera time sequence and geographical position information recorded in the individual's monitoring-camera trajectory data, extract geographical areas that represent the individual staying for a period of time and abstract each area into an individual stay point; step 2, spatially cluster the stay points by geographical position, abstract each cluster into a stay region, and calculate the characteristic values of the stay region; step 3, calculate each stay region's Mahalanobis distance from its characteristic values, test the Mahalanobis distances for abnormal values, and take each stay region judged abnormal as an important place of the individual; and step 4, classify each important place of the individual according to the time periods during which the individual is active there. The individual important-place detection method provided by the invention improves the accuracy of individual important-place detection.

Description

Individual important place detection method based on track data of monitoring camera
Technical Field
The invention relates to the field of space-time data mining and space-time statistics, in particular to a method for detecting an individual important place based on track data of a monitoring camera.
Background
In modern smart cities, a variety of sensors constantly record individuals' position information. In the prior art, a geographical position is usually expressed numerically, calibrated by coordinates (longitude and latitude). However, the semantic information carried by a place is clearly more valuable to the user than a bare coordinate: different categories of places have different social significance in a person's daily life. For example, a "residential area" reflects where the individual rests and lives, while an "office" reflects where the individual works. It may even imply a social relationship: if the "offices" of two individuals are in close proximity, some connection may exist between them. Therefore, the information and value contained in an individual's places are far richer than those of a single geographical position.
Currently, many geographic location-based services, whether governmental or commercial, are being developed, and identifying an individual's important places is essential to building such services. For example, a social service may recommend other individuals who may have a social relationship based on the location of the individual's workplace or primary entertainment venue; an interest-recommendation service can suggest nearby living facilities based on the individual's home address and push advertisements based on places the individual frequently visits; an intelligent-assistant service may automatically mute the phone at work, or pre-start home facilities before the individual arrives home. Detection techniques for individual places thus serve daily life in many ways.
The following first analyses, at the data level, the feasibility of individual important-place detection and the new opportunities and challenges brought by developments in trajectory-data acquisition, and then analyses, at the level of technical methods, some defects of current individual important-place detection methods.
In terms of data, to meet the requirements of urban security control and urban management, about 20 million monitoring cameras have been deployed in China's core urban areas and key security areas, carrying out real-time monitoring and information recording and providing reliable image data for strengthening comprehensive urban management and preventing crime and sudden security incidents.
However, in practice, when a case occurs the police may need to manually review a month of video from hundreds of cameras to determine a suspect's travel or escape route, and doing this purely by hand is an extremely time-consuming and labour-intensive project. Now, thanks to the rapid development of computer vision and artificial intelligence, particularly breakthroughs in Multi-Target Multi-Camera Tracking (MTMC Tracking) and pedestrian Re-Identification (Person Re-Identification), pedestrians can be recognized from characteristic information such as clothing, posture and hair style, and individual tracking can be achieved by combining methods such as particle filtering and Bayesian filtering with spatio-temporal information, so that an individual's behaviour trajectory can be extracted automatically, efficiently and reliably from hundreds of monitoring cameras. Individual track data acquired in this way from large amounts of video image data is referred to as video track data.
This data acquisition process belongs to implicit reasoning: the precise geographic position is not read directly from the originally recorded data but is obtained by inference from implicit position information. Compared with traditional Global Positioning System (GPS) track data, video track data has the following two characteristics:
First, high spatial and temporal resolution. Video track data can produce position information with ultra-high temporal and spatial resolution within the range covered by a camera. Surveillance video usually records 24 frames per second, i.e. the individual's geographical position can be updated 24 times per second, far higher than the recording frequency of GPS, and the individual's behaviour can be recorded in much more detail within camera coverage. When the individual is outside camera coverage, the temporal and spatial resolution of the video track is inferior to a traditional GPS track, but monitoring blind areas in urban core areas are now few, so the overall quality of the video track data is not greatly affected.
Secondly, low cost and protection of individual privacy. Compared with GPS track data, video track data is easier to acquire and more convenient to analyze, with lower cost and higher feasibility for city management, security management and similar applications. Conventional GPS track acquisition requires the individual to carry a GPS terminal, which is impossible in many application scenarios, such as tracking a criminal suspect; video track data, by contrast, can track any individual under the existing monitoring-camera deployment network without violating individual privacy. Both the acquisition cost and the degree of freedom are clearly superior to GPS.
These characteristics suggest that video track data may in future replace GPS track data and be widely applied across industries. However, this new kind of data also presents challenges to conventional trajectory analysis and mining techniques.
In terms of technical methods, the mainstream techniques for detecting individual important places from trajectory data mostly use GPS trajectories. Existing detection methods fall roughly into three types:
First, methods based on the regularity of individual movement patterns, which model an individual's historical location visits with a probability model. For example, researchers have used an improved hidden Markov model to divide time into equal intervals and, combined with POI information, detect important places from the individual's transitions between destinations across intervals. Besides hidden Markov models, other probabilistic models such as Relational Markov Networks have also been studied for recognizing individual activities and places.
Second, methods based on the spatial distribution of individual track points, which adopt the idea of density clustering: a neighborhood analysis is first performed on each GPS point to build a neighborhood table recording each point's adjacent points; the neighborhood density of each GPS point is then computed from this table, and important places are screened by comparing neighborhood density against a preset threshold.
Third, methods based on the physical characteristics of individual movement. Researchers have found that individuals show significantly different movement characteristics in different places, so, conversely, the place of an activity can be inferred from those movement characteristics. Some researchers therefore analyze the speed and acceleration vectors, inferring the places an individual visits from the magnitudes and direction changes of these vectors; for example, when an individual is in a store, speed should be low and direction largely random. Based on such movement rules, researchers identify important places by extracting low-speed track segments and computing various physical quantities.
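The low-speed-segment idea described above can be illustrated with a short sketch (illustrative only; the function name, the `(t, x, y)` track layout and the threshold are our own, not taken from any cited work):

```python
import math

def low_speed_segments(track, speed_thresh):
    """Group consecutive low-speed samples into candidate place-visit
    segments, in the spirit of the physical-characteristics methods
    described above. track: list of (t, x, y) samples ordered by time.
    Returns (start_time, end_time) pairs of low-speed runs."""
    segments, current = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        speed = math.dist((x0, y0), (x1, y1)) / (t1 - t0)
        if speed < speed_thresh:
            current.append((t0, t1))  # low-speed interval, keep accumulating
        elif current:
            segments.append((current[0][0], current[-1][1]))
            current = []
    if current:
        segments.append((current[0][0], current[-1][1]))
    return segments
```

Each returned (start, end) pair is a candidate visit whose physical quantities (mean speed, direction change, and so on) could then be analyzed as in the cited studies.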
The core of the movement-pattern methods is training a probability model with strong generalization ability; training such a model requires a large amount of historical data, so applicability is poor when historical data are insufficient. The spatial-distribution methods consider only the individual's movement regularity in space, not in time, so the detected places lack sufficient semantic information and are of low reliability. The physical-characteristics methods consider the individual's in-place movement from a microscopic viewpoint but ignore the temporal frequency of visits from a global viewpoint, so the importance of the detected places cannot be guaranteed. Because of these defects, the accuracy of individual important-place detection remains low.
Disclosure of Invention
The invention provides a method for detecting an individual's important places based on the trajectory data of monitoring cameras, aiming to solve the problem that existing individual important-place detection has low precision.
In order to achieve the above object, an embodiment of the present invention provides a method for detecting important places of individuals based on trajectory data of a monitoring camera, including:
step 1, calculating time difference and distance difference between track points according to a camera time sequence and geographical position information recorded by track data of an individual monitoring camera, identifying the time-space change of the track points, extracting a geographical area which can represent that the individual stays for a period of time, and abstracting the geographical area into individual stay points;
step 2, performing spatial clustering according to the geographical position of the individual dwell point, abstracting the dwell point into a dwell area, and calculating a characteristic value of the dwell area;
step 3, calculating the Mahalanobis distance corresponding to each stay region based on the region's characteristic values, judging abnormal Mahalanobis-distance values with a box plot, and taking each stay region judged to be an abnormal value as an important place of the individual;
and 4, classifying each important place of the individual according to the time period characteristics of the individual moving in different important places.
Wherein the step 1 comprises:
step 1.1, according to the camera time sequence recorded in the individual's monitoring-camera trajectory data and the order of the time information recorded when each camera detects the individual, scanning from time t_i forward to the next time t_j, wherein 1 ≤ i ≤ n, i ≤ j ≤ n, and n denotes the total number of extracted timestamps;
step 1.2, calculating the Euclidean distance between the camera c_ti recording the individual at time t_i and the camera c_tj recording the individual at time t_j;
step 1.3, comparing the Euclidean distance between cameras c_ti and c_tj with a preset distance threshold distThreh: when the distance between c_ti and c_tj is smaller than distThreh, the movement of the individual is within the allowed range, so t_i is kept fixed and t_j advances to the next time t_(j+1) for scanning (let t_j = t_(j+1)) before returning to step 1.2; when the distance between c_ti and c_tj is greater than distThreh, calculating the length of the time period between t_i and t_j by the formula Δt_(i,j) = t_j − t_i;
step 1.4, comparing the time-period length Δt_(i,j) between t_i and t_j with a preset time threshold timeThreh: when Δt_(i,j) is greater than timeThreh, the individual's activity between cameras c_ti and c_tj satisfies the condition for forming a stay point, and the set Stay_k = {c_ti, c_t(i+1), ..., c_tj} is taken as the k-th stay point; when Δt_(i,j) is smaller than timeThreh, the individual's activity between cameras c_ti and c_tj does not satisfy the condition for forming a stay point, and no stay point is generated;
step 1.5, letting i = j and comparing i with n: if i < n, return to step 1.1; otherwise all of the individual's stay points have been extracted;
step 1.6, calculating the average position (X_kmean, Y_kmean) of the cameras in each stay point Stay_k and taking this average position as the geographic location of the corresponding stay point Stay_k.
Wherein the step 1.2 comprises:
calculating the Euclidean distance between cameras c_ti and c_tj by the formula
Dist(c_ti, c_tj) = sqrt((x_ti − x_tj)^2 + (y_ti − y_tj)^2);
wherein Dist(c_ti, c_tj) denotes the Euclidean distance between cameras c_ti and c_tj, x_ti and x_tj respectively denote the x values of cameras c_ti and c_tj in spatial coordinates, and y_ti and y_tj respectively denote the y values of cameras c_ti and c_tj in spatial coordinates.
Wherein the step 1.6 comprises:
calculating the average x coordinate of the cameras in stay point Stay_k by the formula X_kmean = sum(x_ck1, ..., x_ckm) / Num(Stay_k), and the average y coordinate by the formula Y_kmean = sum(y_ck1, ..., y_ckm) / Num(Stay_k);
wherein X_kmean denotes the average x coordinate of the cameras in stay point Stay_k, Y_kmean denotes the average y coordinate of the cameras in stay point Stay_k, c_km denotes the m-th camera in the k-th stay point, Num() returns the number of cameras in the corresponding stay point, and sum() returns the sum of the corresponding coordinates.
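Steps 1.1 through 1.6 can be sketched as a single scan over the camera record (a minimal illustration only; the `track` layout `(t, x, y)`, with each camera reduced to its spatial coordinates, and all function and field names are our assumptions, not the patent's notation):

```python
import math

def extract_stay_points(track, dist_thresh, time_thresh):
    """Extract stay points from a camera track (steps 1.1-1.6).

    track: list of (t, x, y) tuples ordered by time, where (x, y) is the
    position of the camera recording the individual at time t.
    dist_thresh / time_thresh correspond to distThreh / timeThreh.
    """
    stays = []
    n = len(track)
    i = 0
    while i < n:
        j = i + 1
        # Steps 1.2-1.3: advance t_j while the individual stays within distThreh.
        while j < n and math.dist(track[i][1:], track[j][1:]) < dist_thresh:
            j += 1
        # Step 1.4: the dwell forms a stay point only if it outlasts timeThreh.
        if track[j - 1][0] - track[i][0] > time_thresh:
            cams = track[i:j]
            # Step 1.6: the stay point's location is the mean camera position.
            x_mean = sum(p[1] for p in cams) / len(cams)
            y_mean = sum(p[2] for p in cams) / len(cams)
            stays.append({"start": track[i][0], "end": track[j - 1][0],
                          "cams": cams, "pos": (x_mean, y_mean)})
        i = j  # Step 1.5: resume scanning from the first point that moved away.
    return stays
```

The sketch mirrors the claim's dual-threshold structure: a stay point forms only when the individual remains within the distance threshold for longer than the time threshold.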
Wherein the step 2 comprises:
step 2.1, clustering all of the individual's stay points with the density-based DBSCAN algorithm, the clustering result containing a plurality of clusters Cluster_m and noise points Noise_n;
step 2.2, taking each cluster Cluster_m as a stay region StayRe_m, calculating the average position (X_kmean, Y_kmean) of the stay points Stay_k in each cluster, and taking this average position as the geographic location of the corresponding stay region StayRe_m;
step 2.3, dividing the total stay time of all stay points in a stay region by the scale of the stay region to obtain the region's stay degree, and dividing the visit counts of all cameras in the stay region by the scale of the stay region to obtain the region's access degree.
Wherein the step 2.1 comprises:
step 2.1.1, looping over each stay point Stay_k in the data set: if the number of other stay points Stay_i (i ≠ k) whose distance to Stay_k is less than a preset distance threshold Eps exceeds a preset count threshold MinPts, take Stay_k as a core point of a cluster Cluster_j; otherwise mark Stay_k as a noise point Noise_v;
step 2.1.2, for a given cluster Cluster_j, if a stay point Stay_k lies within distance Eps of a core point of Cluster_j, adding Stay_k to Cluster_j;
step 2.1.3, iterating step 2.1.2 until no new stay point is added to any cluster.
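Steps 2.1.1 through 2.1.3 describe classic DBSCAN. A plain-Python sketch over stay-point positions (simplified and illustrative; the label convention and names are ours, and a production system would likely use an indexed implementation such as scikit-learn's):

```python
import math

def dbscan(points, eps, min_pts):
    """Density-based clustering of stay-point positions (step 2.1).

    points: list of (x, y) stay-point locations. Returns one label per
    point: 0, 1, 2, ... for clusters, -1 for noise.
    """
    n = len(points)
    labels = [None] * n
    cluster_id = -1
    for k in range(n):
        if labels[k] is not None:
            continue
        # Step 2.1.1: Stay_k is a core point if enough neighbours lie within Eps.
        neighbours = [i for i in range(n)
                      if i != k and math.dist(points[i], points[k]) < eps]
        if len(neighbours) < min_pts:
            labels[k] = -1  # provisionally noise; may later join a cluster edge
            continue
        cluster_id += 1
        labels[k] = cluster_id
        # Steps 2.1.2-2.1.3: grow the cluster until no new stay point joins.
        seeds = list(neighbours)
        while seeds:
            i = seeds.pop()
            if labels[i] == -1:
                labels[i] = cluster_id  # former noise becomes an edge point
            if labels[i] is not None:
                continue
            labels[i] = cluster_id
            nb = [j for j in range(n)
                  if j != i and math.dist(points[j], points[i]) < eps]
            if len(nb) >= min_pts:  # i is itself a core point, keep expanding
                seeds.extend(j for j in nb if labels[j] in (None, -1))
    return labels
```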
Wherein the step 2.2 comprises:
taking each cluster Cluster_m as a stay region StayRe_m by the formula StayRe_m = Cluster_m = {Stay_m1, Stay_m2, ..., Stay_mn};
calculating the average position (X_kmean, Y_kmean) of the stay points Stay_k in each cluster by the formulas X_kmean = sum(x coordinates of the stay points in StayRe_m) / NumS(StayRe_m) and Y_kmean = sum(y coordinates of the stay points in StayRe_m) / NumS(StayRe_m);
wherein StayRe_m denotes the m-th stay region, Cluster_m denotes the m-th cluster, Stay_mn denotes the n-th stay point of the m-th stay region, X_kmean denotes the average x coordinate of the stay points in StayRe_m, Y_kmean denotes the average y coordinate of the stay points in StayRe_m, NumS() returns the number of stay points in the corresponding stay region, and sum() returns the sum of the corresponding coordinates.
Wherein the step 2.3 comprises:
calculating the stay degree of the stay region by the formula
StayRatio_m = Σ_n (Stay_mn.LevT − Stay_mn.AriT) / Σ_n Num(Stay_mn);
calculating the access degree of the stay region by the formula
AccessRatio_m = Σ_n Σ_k Cam_mnk.Fre / Σ_n Num(Stay_mn);
wherein StayRatio_m denotes the stay degree of the stay region, Stay_mn denotes the n-th stay point in the m-th stay region, Stay_mn.AriT denotes the time of arriving at stay point Stay_mn, Stay_mn.LevT denotes the time of leaving stay point Stay_mn, Num(Stay_mn) denotes the number of cameras in stay point Stay_mn, AccessRatio_m denotes the access degree of the stay region, Cam_mnk denotes the k-th camera in stay point Stay_mn, and Cam_mnk.Fre denotes the access count of camera Cam_mnk.
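Under the reading that the "scale" of a stay region is its total camera count, step 2.3's two characteristic values can be sketched as follows (the field names `ariT`, `levT` and `cams` are illustrative, not the patent's):

```python
def region_features(region):
    """Compute a stay region's stay degree and access degree (step 2.3).

    region: list of stay points, each a dict with arrival time 'ariT',
    leaving time 'levT', and 'cams', a list of per-camera access counts.
    """
    scale = sum(len(sp["cams"]) for sp in region)          # total camera count
    total_stay = sum(sp["levT"] - sp["ariT"] for sp in region)
    total_access = sum(sum(sp["cams"]) for sp in region)
    return total_stay / scale, total_access / scale        # (StayRatio, AccessRatio)
```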
Wherein the step 3 comprises:
step 3.1, establishing for each stay region a binary vector x = [StayRatio_m, AccessRatio_m] composed of the stay degree and the access degree, taking each binary vector as a sample, and taking the binary vectors of all of the individual's stay regions as the sample space;
step 3.2, for the sample space, calculating the Mahalanobis distance MaDist_m from each sample point to the sample center by the formulas
MaDist_m = sqrt((x − μ_X) S^(−1) (x − μ_X)^T),
μ_X = E(μ_X1, μ_X2) and S = E{(X − μ_X)^T (X − μ_X)};
wherein x denotes a sample binary vector, μ_X denotes the sample center, S denotes the sample covariance matrix and S^(−1) its inverse, E() returns the expected value, μ_X1 and μ_X2 denote the expected values of each dimension of the sample binary vector, and X denotes the original sample;
step 3.3, performing box-plot analysis on all of the individual's Mahalanobis distance values, regarding values exceeding the upper fence of the box plot as abnormal values, screening the stay regions whose Mahalanobis distance is abnormal, and taking the corresponding stay regions as the individual's important places ImPlace_l.
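Step 3 can be sketched end-to-end for the two-dimensional feature vectors (a plain-Python illustration; a real implementation would likely use numpy/scipy for the covariance algebra, and the 1.5 × IQR upper fence used here is the standard box-plot outlier rule):

```python
import statistics

def important_places(features):
    """Flag stay regions whose [StayRatio, AccessRatio] vector is an
    outlier by Mahalanobis distance and a box-plot upper fence (step 3).

    features: list of (stay_ratio, access_ratio) pairs, one per region.
    Returns the indices of regions judged to be important places.
    """
    n = len(features)
    mx = sum(f[0] for f in features) / n
    my = sum(f[1] for f in features) / n
    # Entries of the 2x2 sample covariance matrix S.
    sxx = sum((f[0] - mx) ** 2 for f in features) / n
    syy = sum((f[1] - my) ** 2 for f in features) / n
    sxy = sum((f[0] - mx) * (f[1] - my) for f in features) / n
    det = sxx * syy - sxy * sxy
    # Closed-form inverse of the 2x2 covariance matrix.
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    dists = []
    for fx, fy in features:
        dx, dy = fx - mx, fy - my
        d2 = dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy
        dists.append(d2 ** 0.5)                 # Mahalanobis distance MaDist_m
    # Box-plot rule: values above Q3 + 1.5 * IQR are abnormal.
    q1, _, q3 = statistics.quantiles(dists, n=4)
    upper = q3 + 1.5 * (q3 - q1)
    return [i for i, d in enumerate(dists) if d > upper]
```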
Wherein the step 4 comprises:
step 4.1, among all of the individual's important places, finding the one in which the most total time is spent and marking it as the first important place;
step 4.2, among all of the individual's important places except the first, finding the one in which the most daytime is spent and marking it as the second important place;
step 4.3, among all of the individual's important places except the first and second, finding the one in which the most night-time is spent and marking it as the third important place.
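Step 4's three-way labelling can be sketched as follows (illustrative only; the 8:00-20:00 daytime window and all field names are our assumptions, since the patent does not fix the day/night boundary here):

```python
def classify_places(places):
    """Rank important places by time-of-day usage (steps 4.1-4.3).

    places: dict mapping place id -> list of (start_hour, end_hour) visits,
    hours in [0, 24). Returns (first, second, third): the place with the
    most total time, then the most daytime among the rest, then the most
    night-time among the remainder.
    """
    def total(visits):
        return sum(e - s for s, e in visits)

    def daytime(visits):  # overlap with a nominal 8:00-20:00 daytime window
        return sum(max(0.0, min(e, 20) - max(s, 8)) for s, e in visits)

    first = max(places, key=lambda p: total(places[p]))           # step 4.1
    rest = {p: v for p, v in places.items() if p != first}
    second = max(rest, key=lambda p: daytime(rest[p]))            # step 4.2
    rest2 = {p: v for p, v in rest.items() if p != second}
    third = max(rest2, key=lambda p: total(rest2[p]) - daytime(rest2[p]))  # 4.3
    return first, second, third
```

Intuitively the three labels correspond to home (most total time), workplace (most daytime) and a night-time venue, which is how the semantic classification in step 4 is motivated.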
The scheme of the invention has at least the following beneficial effects:
in the embodiment of the invention, the motion rule of an individual under a monitoring camera is fully considered by utilizing a stop point extraction and clustering technology, so that the dependence on historical data volume is reduced; fully considering the time distribution characteristics of various activities of the individual at the site through the characteristic values of the staying areas; the characteristics of high space-time resolution and low space-time resolution of the video track data are combined to detect the individual important places, and the detected individual important places are classified to give semantic information to the individual important places, so that the accuracy of detecting the individual important places is effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
FIG. 1 is a flowchart of a method for detecting important places of individuals based on trajectory data of a monitoring camera according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for detecting important places of individuals according to an embodiment of the present invention;
FIG. 3 is a spatial distribution plot of a portion of individual dwell points in an embodiment of the present invention;
FIG. 4 is a distribution graph of stay degree and access degree in an embodiment of the invention;
FIG. 5 is a statistical distribution of the features of the dwell region in one embodiment of the present invention;
fig. 6 is a schematic diagram of a detection result of an important place in a specific example in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for detecting important places of individuals based on trajectory data of a monitoring camera, including:
step 1, calculating time difference and distance difference between track points according to a camera time sequence and geographical position information recorded by track data of an individual monitoring camera, identifying the time-space change of the track points, extracting a geographical area which can represent that the individual stays for a period of time, and abstracting the geographical area into individual stay points.
The trajectory data here may be video track data, from which the individual's important places are detected.
Step 2: perform spatial clustering according to the geographical positions of the individual's stay points, abstract each cluster into a stay region, and calculate the characteristic values of the stay region.
The characteristic values may be the stay degree and the access degree.
Step 3: calculate the Mahalanobis distance corresponding to each stay region based on the region's characteristic values, judge abnormal Mahalanobis-distance values with a box plot, and take each stay region judged to be an abnormal value as an important place of the individual.
Step 4: classify each important place of the individual according to the time periods during which the individual is active at the different important places.
In the embodiment of the invention, stay-point extraction and clustering fully account for the individual's movement regularity under monitoring cameras, reducing the dependence on the amount of historical data; the stay-region characteristic values fully account for the temporal distribution of the individual's various activities at a place; and the individual's important places are detected by combining the high and low spatio-temporal resolution characteristics of video track data, with the detected places classified to give them semantic information, effectively improving the accuracy of individual important-place detection.
In an embodiment of the present invention, step 1 includes:
Step 1.1: according to the camera time sequence recorded in the individual's monitoring-camera trajectory data and the order of the time information recorded when each camera detects the individual, scan from time t_i forward to the next time t_j, where 1 ≤ i ≤ n, i ≤ j ≤ n, and n denotes the total number of timestamp samples.
Step 1.2: calculate the Euclidean distance between the camera c_ti recording the individual at time t_i and the camera c_tj recording the individual at time t_j.
Specifically, the Euclidean distance between cameras c_ti and c_tj can be calculated by the formula
Dist(c_ti, c_tj) = sqrt((x_ti − x_tj)^2 + (y_ti − y_tj)^2);
wherein Dist(c_ti, c_tj) denotes the Euclidean distance between cameras c_ti and c_tj, x_ti and x_tj respectively denote the x values of cameras c_ti and c_tj in spatial coordinates, and y_ti and y_tj respectively denote the y values of cameras c_ti and c_tj in spatial coordinates.
Step 1.3, compare the Euclidean distance between cameras c_ti and c_tj with a preset distance threshold distThreh. When dist(c_ti, c_tj) is smaller than distThreh, the movement of the individual is within the allowed range: t_i is kept fixed, t_j advances to the next time t_{j+1} (let t_j = t_{j+1}), and the procedure returns to step 1.2. When dist(c_ti, c_tj) is greater than distThreh, the length of the time span between t_i and t_j is calculated by the formula

Δt_ij = t_j − t_i.
Step 1.4, compare the length Δt_ij of the time span between t_i and t_j with a preset time threshold timeThreh. When Δt_ij is greater than timeThreh, the activity of the individual between cameras c_ti and c_tj satisfies the condition for forming a stop point, and the set Stay_k = {c_ti, c_t(i+1), ..., c_t(j−1)} is taken as the k-th stop point. When Δt_ij is smaller than timeThreh, the activity of the individual between cameras c_ti and c_tj does not satisfy the condition for forming a stop point, and no stop point is generated.
Step 1.5, let i = j and compare i with n. If i < n, return to step 1.1; otherwise all of the individual's stop points have been extracted.
Step 1.6, for each stop point Stay_k, calculate the average position (X_kmean, Y_kmean) of its cameras and take this average position as the geographic location of the corresponding stop point Stay_k.

In particular, the average x coordinate of the cameras in stop point Stay_k can be computed by the formula

X_kmean = sum(c_km.x) / Num(Stay_k)

and the average y coordinate by the formula

Y_kmean = sum(c_km.y) / Num(Stay_k)

wherein X_kmean denotes the average x coordinate of the cameras in stop point Stay_k, Y_kmean denotes their average y coordinate, c_km denotes the m-th camera in the k-th stop point, Num() returns the number of cameras in the corresponding stop point, and sum() returns the sum of the corresponding coordinates.
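The scan of steps 1.1 to 1.6 can be sketched in Python. This is a minimal illustration, not the patented implementation: the record layout `(timestamp, x, y)` per camera sighting, the function name, and the default thresholds (200 m, 30 minutes, matching the worked example later in the description) are all assumptions made for the sketch.

```python
from math import hypot

def extract_stay_points(track, dist_threh=200.0, time_threh=30 * 60):
    """Sketch of step 1: scan the time-ordered camera records and emit a
    stop point whenever the individual lingers within dist_threh of the
    anchor camera for longer than time_threh seconds.
    Each record is a hypothetical (timestamp, x, y) tuple."""
    stays = []
    n = len(track)
    i = 0
    while i < n:
        # step 1.2/1.3: advance t_j while the individual stays near t_i's camera
        j = i + 1
        while j < n and hypot(track[j][1] - track[i][1],
                              track[j][2] - track[i][2]) < dist_threh:
            j += 1
        # step 1.4: long enough dwell -> cameras c_ti .. c_t(j-1) form a stop point
        if j < n and track[j][0] - track[i][0] > time_threh:
            members = track[i:j]
            x_mean = sum(p[1] for p in members) / len(members)  # step 1.6
            y_mean = sum(p[2] for p in members) / len(members)
            stays.append((x_mean, y_mean, track[i][0], track[j][0]))
        i = j                                                   # step 1.5
    return stays
```

The returned tuples carry the averaged location (step 1.6) plus arrival and departure timestamps, which the later characteristic-value computation needs.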
In an embodiment of the present invention, the step 2 includes the following steps:
Step 2.1, cluster all of the individual's stop points with the density-based DBSCAN algorithm, obtaining a result containing a number of clusters Cluster_m and noise points Noise_n.
Specifically, the implementation manner of step 2.1 includes the following steps:
Step 2.1.1, iterate over each stop point Stay_k in the data set. If the number of other stop points Stay_i (i ≠ k) whose distance to Stay_k is less than a preset distance threshold Eps exceeds a preset count threshold MinPts, take Stay_k as a core point of a cluster Cluster_j; otherwise take Stay_k as a noise point Noise_v.

Step 2.1.2, given a cluster Cluster_j, if the distance between a stop point Stay_k and any core point of Cluster_j is less than the distance threshold Eps, add Stay_k to Cluster_j.

Step 2.1.3, repeat step 2.1.2 until no new stop point is added to any cluster.
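Steps 2.1.1 to 2.1.3 amount to standard DBSCAN over the stop-point coordinates. A minimal pure-Python sketch follows, with assumed defaults (Eps = 300 m, MinPts = 2, as in the worked example later in the description); the function name and the label convention (-1 for noise) are illustrative, not taken from the patent.

```python
from math import hypot

def dbscan_stays(points, eps=300.0, min_pts=2):
    """Minimal DBSCAN over stop-point (x, y) coordinates (step 2.1 sketch).
    Returns one label per point: 0, 1, ... for clusters, -1 for noise."""
    n = len(points)
    labels = [None] * n  # None = unvisited, -1 = noise, >= 0 = cluster id

    def neighbors(k):
        return [i for i in range(n) if i != k and
                hypot(points[i][0] - points[k][0],
                      points[i][1] - points[k][1]) < eps]

    cluster_id = -1
    for k in range(n):
        if labels[k] is not None:
            continue
        nbrs = neighbors(k)
        if len(nbrs) < min_pts:
            labels[k] = -1                 # step 2.1.1: tentatively noise
            continue
        cluster_id += 1                    # Stay_k is a core point
        labels[k] = cluster_id
        seeds = list(nbrs)
        while seeds:                       # steps 2.1.2/2.1.3: expansion
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster_id     # noise reclaimed as border point
            if labels[q] is not None:
                continue
            labels[q] = cluster_id
            q_nbrs = neighbors(q)
            if len(q_nbrs) >= min_pts:     # q is also core: keep expanding
                seeds.extend(q_nbrs)
    return labels
```

Each resulting cluster then becomes one staying area StayRe_m in step 2.2.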
Step 2.2, take each cluster Cluster_m as a staying area StayRe_m, calculate the average position (X_kmean, Y_kmean) of the stop points Stay_k in each cluster, and take this average position as the geographic location of the corresponding staying area StayRe_m.
Specifically, the implementation manner of step 2.2 includes the following steps:
By the formula StayRe_m = Cluster_m = {Stay_m1, Stay_m2, ..., Stay_mn}, take each cluster Cluster_m as a staying area StayRe_m.

By the formulas

X_kmean = sum(Stay_mn.x) / NumS(StayRe_m)

and

Y_kmean = sum(Stay_mn.y) / NumS(StayRe_m)

calculate the average position (X_kmean, Y_kmean) of the stop points Stay_k in each cluster.

Wherein StayRe_m denotes the m-th staying area, Cluster_m denotes the m-th cluster, Stay_mn denotes the n-th stop point of the m-th staying area, X_kmean denotes the average x coordinate of the stop points in staying area StayRe_m, Y_kmean denotes their average y coordinate, NumS() returns the number of stop points in the corresponding staying area, and sum() returns the sum of the corresponding coordinates.
And 2.3, dividing the total staying time of all staying points in the staying area by the scale of the staying area to obtain the staying degree of the staying area, and dividing the visit times of all cameras in the staying area by the scale of the staying area to obtain the visit degree of the staying area.
Specifically, the implementation manner of step 2.3 includes the following steps:
By the formula

StayRatio_m = Σ_n (Stay_mn.LevT − Stay_mn.AriT) / NumS(StayRe_m)

calculate the degree of stay of the staying area;

by the formula

AccessRatio_m = Σ_n Σ_{k=1..Num(Stay_mn)} Cam_mnk.Fre / NumS(StayRe_m)

calculate the degree of visit of the staying area.

Wherein StayRatio_m denotes the degree of stay of the staying area, Stay_mn denotes the n-th stop point in the m-th staying area, Stay_mn.AriT denotes the time of arriving at stop point Stay_mn, Stay_mn.LevT denotes the time of leaving stop point Stay_mn, Num(Stay_mn) denotes the number of cameras in stop point Stay_mn, AccessRatio_m denotes the degree of visit of the staying area, Cam_mnk denotes the k-th camera in stop point Stay_mn, and Cam_mnk.Fre denotes the number of visits to camera Cam_mnk.
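Under one reading of step 2.3 (the "scale" of a staying area taken as its number of stop points, NumS), the two characteristic values can be computed as below. The dict keys `ari_t`, `lev_t` and `cam_freq` are hypothetical field names introduced for the sketch, not identifiers from the patent.

```python
def region_features(region):
    """Degree of stay and degree of visit for one staying area (step 2.3 sketch).
    Each stop point is a dict with arrival time 'ari_t', leave time 'lev_t'
    and a list of per-camera visit counts 'cam_freq' (illustrative names)."""
    num_stays = len(region)                              # NumS(StayRe_m)
    total_stay = sum(s['lev_t'] - s['ari_t'] for s in region)
    total_visits = sum(sum(s['cam_freq']) for s in region)
    stay_ratio = total_stay / num_stays                  # degree of stay
    access_ratio = total_visits / num_stays              # degree of visit
    return stay_ratio, access_ratio
```

The pair (stay_ratio, access_ratio) is exactly the binary feature vector that step 3 feeds into the Mahalanobis-distance screening.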
In an embodiment of the present invention, the step 3 includes the following steps:
Step 3.1, for each staying area, establish a binary vector x = [StayRatio_m, AccessRatio_m] composed of its degree of stay and degree of visit; take the binary vector as a sample and the binary vectors of all of the individual's staying areas as the sample space.
Step 3.2, for the sample space, calculate the Mahalanobis distance MaDist_m from each sample point to the sample center by the formulas

MaDist_m = sqrt((x − μ_X) S⁻¹ (x − μ_X)^T),

μ_X = E(X) = (μ_X1, μ_X2) and S = E{(X − μ_X)^T (X − μ_X)}.

Wherein x denotes a sample binary vector, μ_X denotes the sample center, S denotes the covariance matrix and S⁻¹ its inverse, E() returns the expected value, μ_X1 and μ_X2 denote the expected values of the two dimensions of the sample binary vector, and X denotes the original sample.

Thus, each staying area corresponds to a Mahalanobis distance value from the center of the feature samples.
Step 3.3, perform boxplot analysis on all of the individual's Mahalanobis distance values, treat the values exceeding the upper fence of the boxplot as outliers, screen out the staying areas whose Mahalanobis distances are outliers, and take the staying areas corresponding to the screened distances as the individual's important places ImPlace_l.
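Steps 3.1 to 3.3 can be sketched as follows for the two-dimensional feature vectors [StayRatio_m, AccessRatio_m]. The 2x2 covariance inverse is written out by hand; the quartile estimate and the Q3 + 1.5·IQR upper fence are the usual boxplot conventions, and the function name is an assumption. A non-singular covariance (at least two distinct, non-collinear samples) is assumed.

```python
from math import sqrt

def mahalanobis_flags(features):
    """Steps 3.1-3.3 sketch for 2-D samples x = [StayRatio_m, AccessRatio_m]:
    Mahalanobis distance of every sample to the sample center, then the
    boxplot upper fence Q3 + 1.5*IQR to flag important-place candidates."""
    m = len(features)
    mx = sum(f[0] for f in features) / m               # sample center mu_X
    my = sum(f[1] for f in features) / m
    sxx = sum((f[0] - mx) ** 2 for f in features) / m  # covariance matrix S
    syy = sum((f[1] - my) ** 2 for f in features) / m
    sxy = sum((f[0] - mx) * (f[1] - my) for f in features) / m
    det = sxx * syy - sxy * sxy                        # assumes det != 0
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det   # S^-1, written out
    dists = []
    for f in features:
        dx, dy = f[0] - mx, f[1] - my
        dists.append(sqrt(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy))
    s = sorted(dists)
    q1, q3 = s[int(0.25 * (m - 1))], s[int(0.75 * (m - 1))]  # crude quartiles
    upper = q3 + 1.5 * (q3 - q1)                       # boxplot upper fence
    return dists, [d > upper for d in dists]
```

Staying areas whose flag is True are the important-place candidates ImPlace_l of step 3.3.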
In an embodiment of the present invention, the step 4 includes the following steps:
Step 4.1, among all of the individual's important places, find the one in which the most time is spent overall and mark it as the first important place.

Step 4.2, among the individual's important places other than the first important place, find the one in which the most time is spent during the daytime and mark it as the second important place.

Step 4.3, among the individual's important places other than the first and second important places, find the one in which the most time is spent at night and mark it as the third important place.
It should be noted that, according to theories from criminology and sociology, the social activities of individuals exhibit clear temporal patterns, so each of an individual's important places can be classified according to the time periods in which the individual is active there, assigning semantic information to the important places. As a preferred example, the first important place may be labelled "home", the second "work", and the third "entertainment".
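The classification of step 4 can be sketched as a ranking by total, daytime and night-time dwell. The [8, 20) daytime window, the visit representation (lists of same-day `(arrive_hour, leave_hour)` pairs), the requirement of at least three places, and the function name are all illustrative assumptions; the patent itself only specifies "daytime" and "night".

```python
def classify_places(places):
    """Step 4 sketch. `places` maps a place id to a list of
    (arrive_hour, leave_hour) visits, hours in [0, 24), same-day visits
    only (a simplifying assumption). Returns (first, second, third):
    longest overall, longest daytime, longest night-time dwell."""
    def dwell(place, lo=0, hi=24):
        # hours spent inside the [lo, hi) window, summed over visits
        return sum(max(0, min(b, hi) - max(a, lo)) for a, b in places[place])

    first = max(places, key=lambda p: dwell(p))            # e.g. "home"
    rest = [p for p in places if p != first]
    second = max(rest, key=lambda p: dwell(p, 8, 20))      # daytime, e.g. "work"
    rest2 = [p for p in rest if p != second]
    third = max(rest2,                                      # night, e.g. "entertainment"
                key=lambda p: dwell(p, 20, 24) + dwell(p, 0, 8))
    return first, second, third
```

Visits that cross midnight would need to be split into two same-day segments before calling this sketch.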
Here, a specific implementation of the invention is described using trajectory data from 50 surveillance cameras in Beijing, China. As shown in fig. 2, the implementation steps for this example are as follows:

Step 21, extract the stop points according to the spatio-temporal distance between trajectory points. For the surveillance-camera trajectory data, with 200 meters as the distance threshold and 30 minutes as the time threshold, the individual's stop points are computed by the method of step 1 in fig. 1; the result is shown in fig. 3.

Step 22, cluster the stop points with the DBSCAN algorithm and extract the staying areas. With 300 meters as the distance threshold and 2 as the neighbourhood sample-count threshold, DBSCAN clustering is performed on the stop points and the staying areas are extracted by the method of step 2 in fig. 1.

Step 23, calculate the characteristic values of the staying areas. The characteristic values (degree of stay and degree of visit) are computed by the method of step 2 in fig. 1; the results are shown in fig. 4.

Step 24, calculate the Mahalanobis distances and extract the important places. The Mahalanobis distance of each staying area is computed by the method of step 3 in fig. 1, and outliers are identified with a boxplot, extracting the individual's important places; the result is shown in fig. 5.

Step 25, classify all of the individual's important places by the method of step 4 in fig. 1; the result is shown in fig. 6.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method for detecting an individual important place based on track data of a monitoring camera is characterized by comprising the following steps:
step 1, calculating time difference and distance difference between track points according to a camera time sequence and geographical position information recorded by track data of an individual monitoring camera, identifying the time-space change of the track points, extracting a geographical area which can represent that the individual stays for a period of time, and abstracting the geographical area into individual stay points;
step 2, performing spatial clustering according to the geographical position of the individual dwell point, abstracting the dwell point into a dwell area, and calculating a characteristic value of the dwell area;
step 3, calculating the Mahalanobis distance corresponding to each staying area based on its characteristic values, judging outliers among the Mahalanobis distances using a boxplot, and taking the staying areas judged to be outliers as the individual's important places;
and 4, classifying each important place of the individual according to the time period characteristics of the individual moving in different important places.
2. The method of claim 1, wherein step 1 comprises:
step 1.1, according to the camera time sequence recorded by the individual's surveillance-camera trajectory data and the order of the time information recorded when each camera detects the individual, scanning from time t_i forward to the next time t_j; wherein 1 ≤ i ≤ n, i ≤ j ≤ n, and n denotes the total number of timestamps;
step 1.2, calculating the Euclidean distance between the camera c_ti recording the individual at time t_i and the camera c_tj recording the individual at time t_j;
step 1.3, comparing the Euclidean distance between cameras c_ti and c_tj with a preset distance threshold distThreh; when the distance is smaller than distThreh, the movement of the individual is within the allowed range, t_i is kept fixed, t_j advances to the next time t_{j+1} (let t_j = t_{j+1}), and the procedure returns to step 1.2; when the distance is greater than distThreh, calculating the length of the time span between t_i and t_j by the formula Δt_ij = t_j − t_i;
step 1.4, comparing the length Δt_ij of the time span between t_i and t_j with a preset time threshold timeThreh; when Δt_ij is greater than timeThreh, the activity of the individual between cameras c_ti and c_tj satisfies the condition for forming a stop point, and the set Stay_k = {c_ti, c_t(i+1), ..., c_t(j−1)} is taken as the k-th stop point; when Δt_ij is smaller than timeThreh, the activity of the individual between cameras c_ti and c_tj does not satisfy the condition for forming a stop point, and no stop point is generated;
step 1.5, letting i = j and comparing i with n; if i < n, returning to step 1.1, otherwise all of the individual's stop points have been extracted;

step 1.6, calculating for each stop point Stay_k the average position (X_kmean, Y_kmean) of its cameras and taking the average position as the geographic location of the corresponding stop point Stay_k.
3. The method according to claim 2, characterized in that said step 1.2 comprises:
by the formula

dist(c_ti, c_tj) = sqrt((x_ti − x_tj)² + (y_ti − y_tj)²)

calculating the Euclidean distance between camera c_ti and camera c_tj;

wherein dist(c_ti, c_tj) denotes the Euclidean distance between camera c_ti and camera c_tj, x_ti and x_tj respectively denote the x values of the spatial coordinates of cameras c_ti and c_tj, and y_ti and y_tj respectively denote their y values.
4. The method according to claim 2, characterized in that said step 1.6 comprises:
by the formula X_kmean = sum(c_km.x) / Num(Stay_k) calculating the average x coordinate of the cameras in stop point Stay_k, and by the formula Y_kmean = sum(c_km.y) / Num(Stay_k) calculating the average y coordinate of the cameras in stop point Stay_k;

wherein X_kmean denotes the average x coordinate of the cameras in stop point Stay_k, Y_kmean denotes their average y coordinate, c_km denotes the m-th camera in the k-th stop point, Num() returns the number of cameras in the corresponding stop point, and sum() returns the sum of the corresponding coordinates.
5. The method of claim 1, wherein the step 2 comprises:
step 2.1, clustering all of the individual's stop points with the density-based DBSCAN algorithm, obtaining a result containing a number of clusters Cluster_m and noise points Noise_n;
step 2.2, taking each cluster Cluster_m as a staying area StayRe_m, calculating the average position (X_kmean, Y_kmean) of the stop points Stay_k in each cluster, and taking the average position as the geographic location of the corresponding staying area StayRe_m;
and 2.3, dividing the total staying time of all staying points in the staying area by the scale of the staying area to obtain the staying degree of the staying area, and dividing the visit times of all cameras in the staying area by the scale of the staying area to obtain the visit degree of the staying area.
6. The method according to claim 5, characterized in that said step 2.1 comprises:
step 2.1.1, iterating over each stop point Stay_k in the data set; if the number of other stop points Stay_i (i ≠ k) whose distance to Stay_k is less than a preset distance threshold Eps exceeds a preset count threshold MinPts, taking Stay_k as a core point of a cluster Cluster_j, otherwise taking Stay_k as a noise point Noise_v;

step 2.1.2, given a cluster Cluster_j, if the distance between a stop point Stay_k and any core point of Cluster_j is less than the distance threshold Eps, adding Stay_k to Cluster_j;

step 2.1.3, repeating step 2.1.2 until no new stop point is added to any cluster.
7. The method according to claim 5, characterized in that said step 2.2 comprises:
by the formula StayRe_m = Cluster_m = {Stay_m1, Stay_m2, ..., Stay_mn}, taking each cluster Cluster_m as a staying area StayRe_m;

by the formulas

X_kmean = sum(Stay_mn.x) / NumS(StayRe_m)

and

Y_kmean = sum(Stay_mn.y) / NumS(StayRe_m)

calculating the average position (X_kmean, Y_kmean) of the stop points Stay_k in each cluster;

wherein StayRe_m denotes the m-th staying area, Cluster_m denotes the m-th cluster, Stay_mn denotes the n-th stop point of the m-th staying area, X_kmean denotes the average x coordinate of the stop points in staying area StayRe_m, Y_kmean denotes their average y coordinate, NumS() returns the number of stop points in the corresponding staying area, and sum() returns the sum of the corresponding coordinates.
8. The method according to claim 5, wherein the step 2.3 comprises:
by the formula

StayRatio_m = Σ_n (Stay_mn.LevT − Stay_mn.AriT) / NumS(StayRe_m)

calculating the degree of stay of the staying area;

by the formula

AccessRatio_m = Σ_n Σ_{k=1..Num(Stay_mn)} Cam_mnk.Fre / NumS(StayRe_m)

calculating the degree of visit of the staying area;

wherein StayRatio_m denotes the degree of stay of the staying area, Stay_mn denotes the n-th stop point in the m-th staying area, Stay_mn.AriT denotes the time of arriving at stop point Stay_mn, Stay_mn.LevT denotes the time of leaving stop point Stay_mn, Num(Stay_mn) denotes the number of cameras in stop point Stay_mn, AccessRatio_m denotes the degree of visit of the staying area, Cam_mnk denotes the k-th camera in stop point Stay_mn, and Cam_mnk.Fre denotes the number of visits to camera Cam_mnk.
9. The method of claim 1, wherein step 3 comprises:
step 3.1, establishing for each staying area a binary vector x = [StayRatio_m, AccessRatio_m] composed of the degree of stay and the degree of visit, taking the binary vector as a sample and the binary vectors of all of the individual's staying areas as the sample space;

step 3.2, for the sample space, calculating the Mahalanobis distance MaDist_m from each sample point to the sample center by the formulas MaDist_m = sqrt((x − μ_X) S⁻¹ (x − μ_X)^T), μ_X = E(X) = (μ_X1, μ_X2) and S = E{(X − μ_X)^T (X − μ_X)};

wherein x denotes a sample binary vector, μ_X denotes the sample center, S denotes the covariance matrix and S⁻¹ its inverse, E() returns the expected value, μ_X1 and μ_X2 denote the expected values of the two dimensions of the sample binary vector, and X denotes the original sample;

step 3.3, performing boxplot analysis on all of the individual's Mahalanobis distance values, treating values exceeding the upper fence of the boxplot as outliers, screening the staying areas whose Mahalanobis distances are outliers, and taking the staying areas corresponding to the screened distances as the individual's important places ImPlace_l.
10. The method of claim 1, wherein the step 4 comprises:
step 4.1, among all of the individual's important places, calculating the one in which the most time is spent overall and marking it as the first important place;

step 4.2, among the individual's important places other than the first important place, calculating the one in which the most time is spent during the daytime and marking it as the second important place;

step 4.3, among the individual's important places other than the first and second important places, calculating the one in which the most time is spent at night and marking it as the third important place.
CN201910775022.8A 2019-08-21 2019-08-21 Individual important place detection method based on track data of monitoring camera Active CN110503032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910775022.8A CN110503032B (en) 2019-08-21 2019-08-21 Individual important place detection method based on track data of monitoring camera


Publications (2)

Publication Number Publication Date
CN110503032A CN110503032A (en) 2019-11-26
CN110503032B true CN110503032B (en) 2021-08-31

Family

ID=68588501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910775022.8A Active CN110503032B (en) 2019-08-21 2019-08-21 Individual important place detection method based on track data of monitoring camera

Country Status (1)

Country Link
CN (1) CN110503032B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340331B (en) * 2020-02-10 2023-11-14 泰华智慧产业集团股份有限公司 Analysis method and system for residence behavior of supervisor in city management work
CN112380461A (en) * 2020-11-20 2021-02-19 华南理工大学 Pedestrian retrieval method based on GPS track
CN112465078B (en) * 2021-02-03 2021-04-16 成都点泽智能科技有限公司 Cross-camera pedestrian track processing method, computer equipment and readable storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106339417A (en) * 2016-08-15 2017-01-18 浙江大学 Detection method for user group behavior rules based on stay places in mobile trajectory
CN107122461A (en) * 2017-04-27 2017-09-01 东软集团股份有限公司 One kind trip method of trajectory clustering, device and equipment
CN107589435A (en) * 2017-09-05 2018-01-16 成都新橙北斗智联有限公司 A kind of Big Dipper GPS track stops analysis method
CN108509434A (en) * 2017-02-23 2018-09-07 中国移动通信有限公司研究院 A kind of method for digging and device of group of subscribers
CN109005515A (en) * 2018-09-05 2018-12-14 武汉大学 A method of the user behavior pattern portrait based on motion track information
CN109784422A (en) * 2019-01-31 2019-05-21 南京邮电大学 A kind of user trajectory method for detecting abnormality of internet of things oriented mobile terminal device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8612134B2 (en) * 2010-02-23 2013-12-17 Microsoft Corporation Mining correlation between locations using location history
US20150339371A1 (en) * 2012-06-28 2015-11-26 Nokia Corporation Method and apparatus for classifying significant places into place categories


Non-Patent Citations (3)

Title
Automatic Trip-separation Method using Sensor Data Continuously Collected by Smartphone;Hiroki Ohashi et al.;《2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC)》;20141011;第2984-2990页 *
Prediction of Destinations and Routes in Urban Trips with Automated Identification of Place Types and Stay Points;Francisco Dantas N. Neto et al.;《Proceedings XVI GEOINFO》;20151202;第80-91页 *
Application of spatial clustering of stop points in hotspot analysis of scenic areas; Zhang Wenyuan et al.; Computer Engineering and Applications; 2018-12-31; Vol. 54, No. 4; pp. 263-270 *


Similar Documents

Publication Publication Date Title
Zhang et al. Detecting urban anomalies using multiple spatio-temporal data sources
CN110503032B (en) Individual important place detection method based on track data of monitoring camera
US20150379355A1 (en) A surveillance system
Wang et al. Estimating traffic flow in large road networks based on multi-source traffic data
CN105574506A (en) Intelligent face tracking system and method based on depth learning and large-scale clustering
JP2004531842A (en) Method for surveillance and monitoring systems
JP2004537790A (en) Moving object evaluation system and method
JP2004534315A (en) Method and system for monitoring moving objects
Lau et al. Extracting point of interest and classifying environment for low sampling crowd sensing smartphone sensor data
Huo et al. Short-term estimation and prediction of pedestrian density in urban hot spots based on mobile phone data
Wirz et al. Towards an online detection of pedestrian flocks in urban canyons by smoothed spatio-temporal clustering of GPS trajectories
Athanesious et al. Detecting abnormal events in traffic video surveillance using superorientation optical flow feature
Minnikhanov et al. Detection of traffic anomalies for a safety system of smart city
Aryal et al. Discovery of patterns in spatio-temporal data using clustering techniques
Jiang et al. A framework of travel mode identification fusing deep learning and map-matching algorithm
Caceres et al. Supervised land use inference from mobility patterns
Markou et al. Use of taxi-trip data in analysis of demand patterns for detection and explanation of anomalies
Ren et al. Detecting and locating of traffic incidents in a road segment based on lane-changing characteristics
Guo et al. Spatial-temporal trajectory anomaly detection based on an improved spectral clustering algorithm
Wang et al. Detection Anomaly in Video Based on Deep Support Vector Data Description
Pokusaev et al. Anomalies in transport data
Cui et al. Perspectives on stability and mobility of passenger's travel behavior through smart card data
Kalantari et al. Developing a fractal model for spatial mapping of crime hotspots
Liu et al. Towards Efficient Traffic Incident Detection via Explicit Edge-Level Incident Modeling
Zhou et al. Regional Crowd Status Analysis based on GeoVideo and Multimedia Data Collaboration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant