CN109064484A - Crowd motion behavior recognition method based on sub-group division and momentum feature fusion - Google Patents

Crowd motion behavior recognition method based on sub-group division and momentum feature fusion

Info

Publication number
CN109064484A
CN109064484A (application CN201810236397.2A)
Authority
CN
China
Prior art keywords
point
group
sub
signature tracking
indicate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810236397.2A
Other languages
Chinese (zh)
Other versions
CN109064484B (en
Inventor
陈志
陈璐
岳文静
周传
刘玲
龚凯
掌静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201810236397.2A priority Critical patent/CN109064484B/en
Publication of CN109064484A publication Critical patent/CN109064484A/en
Application granted granted Critical
Publication of CN109064484B publication Critical patent/CN109064484B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection

Abstract

The invention discloses a crowd motion behavior recognition method based on sub-group division and momentum feature fusion. The method first uses corner tracking and background modeling to obtain the spatio-temporal information of moving targets in video frames. Using the spatial distribution of the crowd in the foreground, spatially adjacent people are divided into several sub-groups, which are further refined by their motion correlation over a period of time, yielding sub-groups with motion consistency. On the basis of this sub-group division, three momentum features of crowd motion are extracted and fused. Finally, the fused features and the pixel features of the video frames are fed as input to a differential recurrent convolutional neural network for training. Training video clips are labeled by hand with different description words, and the network's output is adjusted against the labeled data to obtain a well-trained model that can effectively recognize the motion behavior of a crowd.

Description

Crowd motion behavior recognition method based on sub-group division and momentum feature fusion
Technical field
The present invention relates to a crowd motion behavior recognition method based on sub-group division and momentum feature fusion. It mainly uses the Harris corner detection algorithm to extract crowd motion trajectories and mixture-of-Gaussians background modeling to extract the foreground features of the scene, and then performs sub-group division. Momentum features are extracted on the basis of the sub-groups, and the video data with the three extracted momentum features is input to a differential recurrent convolutional neural network for training and converted into crowd behavior labels, achieving the goal of crowd motion behavior recognition. The invention belongs to the cross-disciplinary application field of image processing, video detection and artificial intelligence.
Background technique
The purpose of crowd motion behavior recognition is to divide a dense crowd into sub-groups from an image sequence, via motion trajectories and foreground extraction, and to recognize crowd motion behavior on the basis of those sub-groups. Activity recognition at the group level has increasingly become a hot issue in computer vision, with wide applications in intelligent video surveillance, public safety, sports and other areas. The main algorithms for crowd motion behavior recognition in video frames are the Harris corner detection algorithm, the mixture-of-Gaussians background modeling method, and momentum feature fusion.
(1) Harris corner detection: the algorithm slides a fixed window over the image in every direction and compares the degree of gray-level change of the pixels inside the window before and after the slide. If the gray level changes significantly for sliding in any direction, the window is considered to contain a corner. While preserving the important graphical features of the image, corners effectively reduce the amount of data while keeping its information content high, which speeds up computation and facilitates reliable image matching, making real-time processing possible.
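The window-sliding intuition above corresponds to the standard Harris response R = det(M) - k * trace(M)^2, where M is the structure tensor summed over the window. Below is a minimal NumPy illustration; the window size, k = 0.04 and the synthetic test image are illustrative choices, not values from the patent:

```python
import numpy as np

def harris_response(img, window=3, k=0.04):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2.

    M is the gradient structure tensor summed over a square window; a
    large R means the intensity changes for sliding in every direction,
    i.e. the window contains a corner.
    """
    iy, ix = np.gradient(img.astype(float))          # row and column gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    pad = window // 2
    resp = np.zeros_like(img, dtype=float)
    for r in range(pad, img.shape[0] - pad):
        for c in range(pad, img.shape[1] - pad):
            sl = (slice(r - pad, r + pad + 1), slice(c - pad, c + pad + 1))
            sxx, syy, sxy = ixx[sl].sum(), iyy[sl].sum(), ixy[sl].sum()
            det = sxx * syy - sxy * sxy
            resp[r, c] = det - k * (sxx + syy) ** 2
    return resp

# A white square on black: its corners score high, its edges negative,
# and flat regions stay near zero.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

On this synthetic image, the square's corner at (5, 5) dominates both the flat interior and the edge midpoints, matching the "gray change in every sliding direction" criterion described in the text.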
(2) Mixture-of-Gaussians background modeling: the basic idea is to compare the input image with a background model and, from the difference, to flag pixels that do not fit the model as abnormal, thereby separating foreground pixels from background pixels. The method represents the distribution of gray values in an image with a gray-level histogram and, using this statistic, assumes that the gray value of each pixel in the image sequence follows a normal distribution, on which basis the image is segmented.
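The matching rule described here can be sketched as a simple per-mode test: a pixel is background if it lies within a few standard deviations of some Gaussian mode's mean. The two background modes below are hypothetical values for illustration; λ = 2.5 is the value the patent later fixes in step 1.2):

```python
def is_background(pixel, means, stds, lam=2.5):
    """Mixture-of-Gaussians matching sketch: the pixel value is
    background if |I - mu_s| <= lam * sigma_s holds for at least one
    Gaussian mode of the background model."""
    return any(abs(pixel - mu) <= lam * sigma
               for mu, sigma in zip(means, stds))

# Two hypothetical background modes, e.g. asphalt (~50) and shadow (~90).
means, stds = [50.0, 90.0], [5.0, 8.0]
```

A gray value of 52 matches the first mode and 95 the second, while 200 matches neither and would be treated as foreground.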
(3) Momentum feature fusion: starting from the crowd's sub-group level, this method takes gathered groups of people as the research object and constructs momentum features in units of sub-groups. All sub-groups in the scene are detected with an expectation-maximization optimization algorithm, and for each detected group, momentum features such as consistency, stability and conflict are extracted, forming group-oriented momentum features. Group momentum features reduce the scene dependence of individual momentum features; building scene-independent momentum features in units of groups improves the robustness and scalability of crowd motion behavior analysis.
Summary of the invention
The present invention aims at crowd motion behavior recognition. It proposes a crowd motion behavior recognition method based on sub-group division and momentum feature fusion, which solves the problems of segmentation errors caused by overlapping people in fine-grained segmentation of dense crowd scenes and of crowd behavior details being ignored when the granularity of coarse segmentation is too large. On this basis, a training model with fused group-motion momentum features is proposed that can effectively recognize the motion behavior of a crowd.
The crowd motion behavior recognition method based on sub-group division and momentum feature fusion of the present invention comprises the following steps:
Step 1): the user inputs a continuous video, which is divided into consecutive video frames. In each video frame, every single pedestrian is taken as a feature tracking point P, whose motion information is expressed as a four-dimensional vector P = (Px, Py, Pv, Pd), where Px and Py are the spatial coordinates of the tracking point, Pv is its displacement, and Pd is its direction of motion, taking one of the twelve direction labels D1, ..., D12 defined in step 1.3.2). The set of all feature tracking points of an image frame is denoted OI = {P1, P2, P3, P4};
Step 2): the motion characteristics of a sub-group are determined by its momentum features. Based on the feature tracking points inside each sub-group, three different momentum features are defined: direction-of-motion consistency, spatial stability, and crowd conflict (collision) interference. Each sub-group contains H feature tracking points, i.e. Ck = (P1, P2, ..., PH);
Step 3): the average values of the three description factors over 5 consecutive frames are computed, and a vector is built from the three averages. Together they form a three-channel image, yielding 224 × 224 × 3 data that is input to a differential recurrent convolutional neural network DRCNN for training and converted into 4096-dimensional feature vectors. The differential recurrent convolutional neural network connects a VGG-16 (Visual Geometry Group-16) model and a 3-layer stacked long short-term memory recurrent neural network LSTM into an end-to-end model. Finally, an output function converts the feature vector into a crowd behavior label. By manual annotation, training video clips are labeled with different description words according to the behavior subject, the scene in which the behavior occurs, and the behavior itself; the output of the differential recurrent convolutional neural network is adjusted against the labeled data, realizing crowd motion behavior recognition.
Wherein,
Step 1) is specifically:
Step 1.1): the position information of the feature tracking points in consecutive video frames is obtained with the Harris corner detection algorithm, yielding the foreground features of the target group. The Harris corner tracking algorithm slides a fixed window over the image in every direction and compares the degree of gray-level change of the pixels inside the window before and after the slide; if the gray level changes significantly for sliding in any direction, the window is considered to contain a corner. The positions of each feature tracking point in consecutive video frames are connected in series to obtain the motion trajectory T of each tracking point; the set of all trajectories is TI = {T1, T2, T3, T4};
Step 1.2): foreground extraction is performed with the mixture-of-Gaussians background modeling method. For the gray value of a pixel in the current video frame, when its difference from the mean of the S-th Gaussian distribution in the Gaussian mixture background satisfies |I(x, y, t) − μS,t| ≤ λ·σS,t, it is regarded as a successful match, i.e. the pixel is background, where I(x, y, t) is the value of pixel (x, y) at time t, μS,t is the mean gray value of the S-th Gaussian distribution at time t, λ is the standard-deviation multiple coefficient, and σ²S,t is the variance of the gray value of the S-th Gaussian distribution at time t. Foreground extraction yields the spatial extent of the target group and its distance relation to surrounding groups, called a foreground patch; the set of patches is denoted BI = {B1, B2, B3, ..., Bk}, and spatially adjacent individuals are separated out according to the spatial relationships;
Step 1.3): the crowd is divided using the two kinds of spatio-temporal information obtained above;
Step 1.3) is specifically:
Step 1.3.1): the set OI of feature tracking points is divided into several subsets, expressed as OI = {CI, FI}, where CI = {C1, C2, ..., CK} is the collection of point sets of the image frame that have motion consistency after division, forming the sub-groups into which the crowd is divided, and FI is the set of removed points. The spatial information of a patch is represented by a rectangular region: a rectangle is delimited from the coordinate values of the patch's boundary pixels, giving the coordinates of the rectangle's lower-right and upper-left corners. If the coordinate position of a feature tracking point P lies within the outline of a patch, the point is assigned to that patch; otherwise the point is rejected;
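Step 1.3.1) amounts to a point-in-rectangle partition. A minimal sketch follows, with each patch represented by its upper-left and lower-right corners as the step describes; the helper name and the (x0, y0, x1, y1) tuple layout are assumptions:

```python
def assign_to_patches(points, patches):
    """Partition tracking points among foreground patches.

    Each patch is an axis-aligned rectangle (x0, y0, x1, y1) built from
    its boundary-pixel coordinates; points falling inside no patch are
    removed (they go to the discarded set F_I of the text).
    """
    kept = {i: [] for i in range(len(patches))}
    removed = []
    for (x, y) in points:
        for i, (x0, y0, x1, y1) in enumerate(patches):
            if x0 <= x <= x1 and y0 <= y <= y1:
                kept[i].append((x, y))
                break
        else:                      # no patch contained the point
            removed.append((x, y))
    return kept, removed
```

For example, with patches (0, 0, 10, 10) and (20, 20, 30, 30), the point (5, 5) lands in the first patch, (25, 25) in the second, and (50, 50) is discarded.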
Step 1.3.2): the attribute Pd of a feature tracking point P expresses the motion relation of the point between frames. The cosine of the angle between the displacement vector of P and the X axis is calculated to obtain the direction angle θ; the range 0 to 2π is divided into 12 equal parts, each interval labeled Di (i = 1, 2, 3, ..., 12), which is assigned to Pd. The specific division rule is: Pd = Di when (i − 1)·π/6 ≤ θ < i·π/6;
Through the division by patch extent and the constraint of motion direction, feature tracking points with similar motion trends are divided into one sub-group;
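The 12-bin direction quantization of step 1.3.2) can be sketched as follows; the function name is hypothetical, and the binning follows the rule of splitting [0, 2π) into equal π/6 sectors:

```python
import math

def direction_label(dx, dy):
    """Quantize a frame-to-frame displacement into one of 12 direction
    bins D1..D12: the circle [0, 2*pi) is split into 12 equal sectors
    of pi/6 each, and Pd = D_i when theta falls in the i-th sector."""
    theta = math.atan2(dy, dx) % (2 * math.pi)   # angle in [0, 2*pi)
    return int(theta // (math.pi / 6)) + 1       # 1-based bin index
```

A displacement along +X falls in D1, a 45-degree displacement in D2, and so on around the circle.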
Step 1.3.3): abnormal points are corrected. Among the K nearest points of a feature tracking point P, the frequency of occurrence of each neighbor's attribute value Pd is counted; the most frequent label is Di, while the Pd of P itself is denoted Dj. For every feature tracking point, if i + 1 = j or i − 1 = j, the Pd value of P is corrected to Di. Abnormal-point rejection: the number I of points among the L nearest points of P whose motion trend is the same as P's is counted; a threshold M with M ≤ L is set, and when I < M, P is regarded as an abnormal point and is removed from its sub-group Ck.
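The two passes of step 1.3.3), label correction against the nearest neighbours and outlier rejection with threshold M, can be sketched as below. The embodiment's value M = 5 is used as the default, and the adjacency test |i − j| = 1 follows the text literally, without wrap-around between bins D12 and D1:

```python
def correct_and_filter(labels, neighbor_idx, min_agree=5):
    """Sketch of step 1.3.3): smooth direction labels, drop outliers.

    labels[i] is the 12-bin direction label of point i; neighbor_idx[i]
    lists the indices of its nearest points.  A point whose label is
    adjacent (i +/- 1) to the most frequent neighbour label is snapped
    to it; a point whose trend agrees with fewer than min_agree (the
    text's M) neighbours is rejected as abnormal.
    """
    corrected = list(labels)
    for i, nbrs in enumerate(neighbor_idx):
        votes = {}
        for j in nbrs:
            votes[labels[j]] = votes.get(labels[j], 0) + 1
        best = max(votes, key=votes.get)
        if abs(corrected[i] - best) == 1:   # adjacent bin: correct it
            corrected[i] = best
    keep = []
    for i, nbrs in enumerate(neighbor_idx):
        agree = sum(1 for j in nbrs if labels[j] == corrected[i])
        if agree >= min_agree:
            keep.append(i)
    return corrected, keep
```

With six points labeled D2, one labeled D3 and one labeled D9 all neighbouring each other, the D3 point is snapped to D2 and kept, while the D9 point stays uncorrected and is rejected.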
Step 2) is specifically:
Step 2.1): direction-of-motion consistency feature extraction: the coordinates and displacements of the feature tracking points in a sub-group are averaged to obtain the centroid coordinates and average displacement of each sub-group over several consecutive frames, from which the overall motion-trend vector of each sub-group is derived. The motion correlation between each sub-group's overall motion-trend vector and the trend vectors of its k-th adjacent points is computed over the N sub-groups divided; the larger the value, the higher the motion correlation;
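The patent's consistency formula is not reproduced legibly in this text, so the sketch below implements the quantity the step describes: the average cosine similarity between a sub-group's overall trend vector and neighbouring trend vectors. Function and argument names are assumptions:

```python
import math

def consistency(trend, neighbor_trends):
    """Direction-consistency sketch: mean cosine similarity between a
    sub-group's overall motion-trend vector and its neighbours' trend
    vectors; values near 1 mean coherent motion, near -1 opposition."""
    def cos(a, b):
        dot = a[0] * b[0] + a[1] * b[1]
        return dot / (math.hypot(*a) * math.hypot(*b))
    return sum(cos(trend, t) for t in neighbor_trends) / len(neighbor_trends)
```

For instance, a group whose neighbours all move the same way scores 1.0, while a neighbour moving exactly opposite scores -1.0.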
Step 2.2): spatial-stability momentum feature extraction: spatial stability means that each feature tracking point keeps stable neighbors within a certain time range, maintaining a specific topological structure over time. The stability of each feature tracking point Pi in a sub-group at time t is obtained from the average number of points among its adjacent points that keep a stable, unchanged adjacency with it from time 1 to time T, where N is the number of sub-groups divided, Pi is the i-th feature tracking point, and K is the number of nearest adjacent points. The stability of each tracking point's distance to the adjacent points in its neighborhood is expressed through the average distance between the tracking point and its k nearest points. The two kinds of stability are added to form the overall stability of the sub-group, denoted ω(Ck);
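The topological half of this stability feature, the fraction of the K nearest neighbours that remain neighbours from time 1 to time T, can be sketched as follows; neighbour sets are given as dicts of index sets, and the naming is hypothetical:

```python
def topological_stability(neigh_t1, neigh_tT, K):
    """Spatial-stability sketch: for each point, the fraction of its K
    nearest neighbours at time 1 that are still neighbours at time T,
    averaged over all points.  Stable sub-groups keep their topology,
    so the value stays close to 1."""
    per_point = []
    for p in neigh_t1:
        kept = len(neigh_t1[p] & neigh_tT[p])   # neighbours that persisted
        per_point.append(kept / K)
    return sum(per_point) / len(per_point)
```

A sub-group whose neighbour sets are unchanged scores 1.0; if one point swaps one of its two neighbours, the score drops to (1 + 1 + 0.5) / 3.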
Step 2.3): crowd conflict-interference momentum feature extraction: the conflict measure is computed from each sub-group's motion-trend vector and the trend vectors of the k-th adjacent points, where N is the number of sub-groups divided, Pi is the i-th feature tracking point, avrg(Nother(Pi)) is the average over the feature tracking points from other sub-groups contained in the adjacency of a tracking point within a sub-group, and α and β are weight coefficients.
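The exact conflict formula is not legible in this text, so the sketch below is a hypothetical variant only: it combines, with the unspecified weights α and β, the directional disagreement with neighbouring points that belong to other sub-groups and the fraction of such foreign neighbours.

```python
import math

def conflict(point_trend, foreign_trends, k_neighbors, alpha=0.5, beta=0.5):
    """Conflict-feature sketch (hypothetical form, NOT the patent's
    exact formula): alpha weights how much the point's trend opposes
    the trends of neighbours from OTHER sub-groups (1 - cosine), beta
    weights what fraction of its k neighbours are foreign."""
    if not foreign_trends:
        return 0.0
    def cos(a, b):
        return (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
    disagreement = sum(1.0 - cos(point_trend, t) for t in foreign_trends)
    avg_disagreement = disagreement / len(foreign_trends)
    return alpha * avg_disagreement + beta * len(foreign_trends) / k_neighbors
```

Head-on foreign neighbours thus yield a larger conflict value than aligned ones, and a point with no foreign neighbours has zero conflict.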
In step 1.2), the standard-deviation multiple coefficient λ is set to 2.5.
Beneficial effects: the crowd motion behavior recognition method based on sub-group division and momentum feature fusion proposed by the present invention has the following beneficial effects:
1. By dividing the crowd in dense scenes into sub-groups and analyzing it from the sub-group perspective, the present invention not only overcomes the difficulty of separating moving individuals from the crowd, but also captures the internal characteristics that are ignored when the group is treated as a single whole for study.
2. The present invention proposes an algorithm based on spatio-temporal constraints, which uses the temporal motion relations between individuals in a moving group and their spatial proximity as the basis for group division. The two conditions constrain each other, dividing the group into sub-groups with motion consistency. The algorithm is applicable to surveillance video streams of different crowd densities and viewing angles, is simple to implement, and runs fast.
3. The present invention trains on the extracted three-dimensional video data with a differential recurrent convolutional neural network, separating the steps of feature extraction and parameter learning, and is applicable to group videos of various resolutions and moving backgrounds.
Brief description of the drawings
Fig. 1 is subgroup cluster partition algorithm flow chart.
Fig. 2 is momentum Feature Fusion Algorithm flow chart.
Specific embodiment
Some embodiments of the present invention are described in more detail below with reference to the accompanying drawings.
The user inputs a 15-second continuous video, which is divided into an image frame every 0.5 seconds, i.e. DI = {D1, D2, D3, ..., D30}, where Dt is the image frame at time t. In the dense-crowd scene, the crowd with motion consistency is divided into sub-groups; the algorithm flow is shown in Fig. 1.
The first 15 video frames are taken, with every single pedestrian as one feature tracking point, represented by a four-dimensional vector P = (Px, Py, Pv, Pd) giving the motion information of tracking point P, where Px and Py are the spatial coordinates of the point, Pv is its displacement, and Pd is its direction of motion. The coordinate positions of the feature tracking points in the 15 video frames are obtained with the Harris corner detection algorithm; connecting the coordinate positions of each tracking point across consecutive frames yields its motion trajectory T. The set of all trajectories is TI = {T1, T2, T3, T4}, and each trajectory comprises a set of feature tracking points Ti = {P1, P2, P3, ..., Pk}; the continuous motion trajectories embody the motion relations within the group over a period of time.
Foreground extraction is performed with the mixture-of-Gaussians background modeling method. For the gray value of a pixel in the current video frame, if its difference from the mean of the k-th Gaussian distribution in the Gaussian mixture background satisfies |I(x, y, t) − μk,t| ≤ λ·σk,t, it is regarded as a successful match, i.e. the pixel is background, where λ is the standard-deviation multiple coefficient, set to 2.5. Foreground extraction yields the spatial extent of the target group and its distance relation to surrounding groups, called a foreground patch; the set of patches is BI = {B1, B2, B3, ..., Bk}, and spatially adjacent individuals are separated out according to the spatial relationships.
The crowd is divided using the two kinds of spatio-temporal information obtained above. The set OI of feature tracking points is divided into several subsets, expressed as OI = {CI, FI}, where CI = {C1, C2, ..., CK} is the collection of point sets of the image frame that have motion consistency after division, forming the sub-groups into which the crowd is divided, and FI is the set of removed points. The spatial information of a patch is represented by a rectangular frame: a rectangle is delimited from the coordinate values of the patch's boundary pixels, giving the coordinates of the rectangle's lower-right and upper-left corners. If the coordinate position of a feature tracking point P lies within the outline of a patch, the point is assigned to that patch; otherwise it is rejected. The attribute Pd of a feature tracking point P expresses the motion relation of the point between frames. First, the cosine of the angle between the motion vector of P and the X axis is calculated to obtain the direction angle θ. The range 0 to 2π is divided into 12 equal parts, each interval labeled Di (i = 1, 2, 3, ..., 12), which is assigned to Pd; the division rule is: Pd = Di when (i − 1)·π/6 ≤ θ < i·π/6.
Through the division by patch extent and the constraint of motion direction, feature tracking points with similar motion trends are divided into one sub-group.
Abnormal points are corrected and rejected. First, abnormal-point correction: among the K nearest points of a feature tracking point P, the most frequent attribute value Pd of the neighbors is labeled Di, while the Pd of P itself is denoted Dj. For every feature tracking point, if i + 1 = j or i − 1 = j, the Pd value of P is corrected to Di. Second, abnormal-point rejection: the number I of points among the L nearest points of P whose motion trend is the same as P's is counted. A threshold M (M ≤ L) is set, with M = 5; if I < M, P is regarded as an abnormal point and is removed from its sub-group Ck.
The motion characteristics of a sub-group are determined by its momentum features. Based on the feature tracking points inside each sub-group, three different momentum features are defined: direction-of-motion consistency, spatial stability, and crowd conflict (collision) interference. Each sub-group is assumed to contain H feature tracking points, i.e. Ck = (P1, P2, ..., PH).
Direction-of-motion consistency feature extraction: the coordinates and displacements of the feature tracking points in a sub-group are averaged to obtain the centroid coordinates and average displacement of each sub-group over several consecutive frames, from which the overall motion-trend vector of each sub-group is derived. The motion correlation between each sub-group's overall motion-trend vector and the trend vectors of its k-th adjacent points is computed over the N sub-groups divided; the larger the value, the higher the motion correlation.
Spatial-stability momentum feature extraction: spatial stability means that each feature tracking point keeps stable neighbors within a certain time range, maintaining a specific topological structure over time. For each feature tracking point Pi in a sub-group, the neighborhood formed by its K nearest adjacent points at time t is Nt(Pi). The topological stability is obtained from the average number of points among a point's adjacent points in the sub-group that keep a stable, unchanged adjacency with it from time 1 to time T, where N is the number of sub-groups divided, Pi is the i-th feature tracking point, and K is the number of nearest adjacent points. The stability of each tracking point's distance to the adjacent points in its neighborhood is expressed through the average distance between the point and its k nearest points. The two kinds of stability are added to form the overall stability of the sub-group, denoted ω(Ck).
Group conflict-interference momentum feature extraction: the conflict measure is computed from each sub-group's motion-trend vector and the trend vectors of the k-th adjacent points, where N is the number of sub-groups divided, Pi is the i-th feature tracking point, avrg(Nother(Pi)) is the average over the feature tracking points from other sub-groups contained in the adjacency of a tracking point within a sub-group, and α and β are weight coefficients.
The averages of the description factors over the first 5 frames are computed, and a vector is built from the three averages. Analogous to the RGB values of a pixel, they jointly form a three-channel image, yielding 224 × 224 × 3 data that is input to the differential recurrent convolutional neural network (DRCNN) for training and converted into 4096-dimensional feature vectors. The differential recurrent convolutional neural network connects a VGG-16 (Visual Geometry Group-16) model with a 3-layer stacked long short-term memory recurrent neural network (Long Short-Term Memory, LSTM) into an end-to-end model, which improves training accuracy. Finally, an output function converts the feature vectors into crowd behavior labels. By manual labeling, training video clips are marked with different description words according to the behavior subject, the scene in which the behavior occurs, and the behavior itself; the output of the differential recurrent convolutional neural network is adjusted against the labeled data, realizing crowd motion behavior recognition.
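The input construction of this step can be sketched as follows. The patent does not spell out the spatial layout of the three averaged factors inside the 224 × 224 × 3 array, so filling each channel with a constant value, one factor per channel like R, G and B, is an illustrative assumption:

```python
import numpy as np

def build_drcnn_input(phi, omega, rho, size=224):
    """Sketch of the 224 x 224 x 3 network input: the three per-sub-group
    feature averages over 5 frames (consistency phi, stability omega,
    conflict rho) are broadcast into one channel each, analogous to the
    RGB values of a pixel."""
    img = np.empty((size, size, 3), dtype=np.float32)
    img[..., 0] = phi     # channel 0: direction-consistency average
    img[..., 1] = omega   # channel 1: spatial-stability average
    img[..., 2] = rho     # channel 2: conflict-interference average
    return img

x = build_drcnn_input(0.9, 0.8, 0.1)   # hypothetical factor averages
```

The resulting array has exactly the shape the text feeds to the DRCNN; the VGG-16 plus stacked-LSTM network itself is omitted here.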

Claims (5)

1. A crowd motion behavior recognition method based on sub-group division and momentum feature fusion, characterized in that the method comprises the following steps:
Step 1): the user inputs a continuous video, which is divided into consecutive video frames. In each video frame, every single pedestrian is taken as a feature tracking point P, whose motion information is expressed as a four-dimensional vector P = (Px, Py, Pv, Pd), where Px and Py are the spatial coordinates of the tracking point, Pv is its displacement, and Pd is its direction of motion, taking one of the twelve direction labels D1, ..., D12 defined in step 1.3.2); the set of all feature tracking points of an image frame is denoted OI = {P1, P2, P3, P4};
Step 2): the motion characteristics of a sub-group are determined by its momentum features. Based on the feature tracking points inside each sub-group, three different momentum features are defined: direction-of-motion consistency, spatial stability, and crowd conflict (collision) interference; each sub-group contains H feature tracking points, i.e. Ck = (P1, P2, ..., PH);
Step 3): the average values of the three description factors over 5 consecutive frames are computed, and a vector of the three averages (consistency, stability ω(C), conflict ρ(C)) is constructed. Together they form a three-channel image, yielding 224 × 224 × 3 data that is input to a differential recurrent convolutional neural network DRCNN for training and converted into 4096-dimensional feature vectors. The differential recurrent convolutional neural network connects a VGG-16 model and a 3-layer stacked long short-term memory recurrent neural network LSTM into an end-to-end model; finally an output function converts the feature vector into a crowd behavior label. By manual annotation, training video clips are labeled with different description words according to the behavior subject, the scene in which the behavior occurs, and the behavior itself, and the output of the differential recurrent convolutional neural network is adjusted against the labeled data, realizing crowd motion behavior recognition.
2. The crowd motion behavior recognition method based on sub-group division and momentum feature fusion according to claim 1, characterized in that step 1) is specifically:
Step 1.1): the position information of the feature tracking points in consecutive video frames is obtained with the Harris corner detection algorithm, yielding the foreground features of the target group. The Harris corner tracking algorithm slides a fixed window over the image in every direction and compares the degree of gray-level change of the pixels inside the window before and after the slide; if the gray level changes significantly for sliding in any direction, the window is considered to contain a corner. The positions of each feature tracking point in consecutive video frames are connected in series to obtain the motion trajectory T of each tracking point; the set of all trajectories is TI = {T1, T2, T3, T4};
Step 1.2): foreground extraction is performed with the mixture-of-Gaussians background modeling method. For the gray value of a pixel in the current video frame, when its difference from the mean of the S-th Gaussian distribution in the Gaussian mixture background satisfies |I(x, y, t) − μS,t| ≤ λ·σS,t, it is regarded as a successful match, i.e. the pixel is background, where I(x, y, t) is the value of pixel (x, y) at time t, μS,t is the mean gray value of the S-th Gaussian distribution at time t, λ is the standard-deviation multiple coefficient, and σ²S,t is the variance of the gray value of the S-th Gaussian distribution at time t. Foreground extraction yields the spatial extent of the target group and its distance relation to surrounding groups, called a foreground patch; the set of patches is denoted BI = {B1, B2, B3, ..., Bk}, and spatially adjacent individuals are separated out according to the spatial relationships;
Step 1.3): the crowd is divided using the two kinds of spatio-temporal information obtained above;
3. a kind of crowd movement Activity recognition side divided based on subgroup with momentum Fusion Features according to claim 1 Method, which is characterized in that the step 1.3) specifically:
Step 1.3.1): by the set O comprising signature tracking pointIIf being divided into the subset comprising doing, it is expressed as OI={ CI, FI, the CI={ C1, C2..., CKIt is that picture frame divides after dividing with the point set of Movement consistency, composition crowd Sub-group, point set FIIt is then the point being removed, the spatial information of patch is indicated with rectangular area, with patch boundary location of pixels seat Scale value marks off a rectangular area, the coordinate of the rectangle lower right corner and upper left angle point is obtained, if the coordinate position of signature tracking point P In profile in patch, then the point is divided into the point of the patch, otherwise rejects the point;
Step 1.3.2): the attribute P_d of a feature tracking point P indicates the motion relation of the feature tracking point between frames; calculate the displacement vector of the feature tracking point P and the cosine of its angle with the X axis to obtain the direction angle θ; divide the direction-angle range 0 ~ 2π into 12 equal parts and label each section D_i (i = 1, 2, 3, ..., 12), which is the value assigned to P_d, i.e. P_d = D_i when (i − 1)·π/6 ≤ θ < i·π/6;
Through the division by patch extent and the constraint of motion direction, feature tracking points with similar motion tendency are divided into one sub-group;
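The 12-sector direction quantization can be sketched as below; each sector is π/6 wide, and since the claim's formula image is not reproduced, the boundary convention (closed on the left) is an assumption.

```python
import math

def direction_label(dx, dy):
    """Quantize a displacement vector into one of 12 equal direction
    sectors D_1..D_12 over [0, 2*pi), as in step 1.3.2)."""
    theta = math.atan2(dy, dx) % (2 * math.pi)   # direction angle in [0, 2*pi)
    return int(theta // (math.pi / 6)) + 1       # each sector is pi/6 wide
```

For example, a displacement of (1, 1) has θ = π/4 and falls in the second sector, while (−1, −1) has θ = 5π/4 and falls in the eighth.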
Step 1.3.3): correct anomalous points: among the K nearest neighbours of a feature tracking point P, count the frequency of occurrence of each neighbour's attribute value P_d; let the label with the highest frequency be D_i and let the P_d of the feature tracking point itself be D_j; for all feature tracking points, when i + 1 = j or i − 1 = j, revise the P_d value of P to D_i; reject anomalous points: count the number I of points with the same motion tendency among the L nearest neighbours of the feature tracking point P, and set a critical value M with M ≤ L; when I < M, the point P is regarded as an anomalous point and is removed from the sub-group C_k.
4. The crowd movement behavior recognition method based on sub-group division and momentum feature fusion according to claim 1, characterized in that step 2) specifically comprises:
Step 2.1): motion-direction-consistency feature extraction: average the coordinates and displacements of the feature tracking points in each sub-group to obtain the centroid coordinates and mean displacement of each sub-group over several consecutive frames, which give the overall motion-tendency vector v of each sub-group; the velocity correlation between the overall motion-tendency vector of each sub-group and that of its adjacent points is calculated by the formula corr = (1/N)·Σ_k (v·v_k)/(|v||v_k|), where v is the motion-tendency vector of each sub-group, v_k is the motion-tendency vector of the k-th adjacent point, and N is the number of sub-groups marked off; a larger value indicates higher velocity correlation;
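The velocity correlation described here amounts to an average cosine similarity between tendency vectors. A minimal numpy sketch (function name and input layout are illustrative; a zero-length vector is assigned similarity 0 as a guard not stated in the claim):

```python
import numpy as np

def velocity_correlation(v, neighbour_vs):
    """Average cosine similarity between a sub-group's overall
    motion-tendency vector v and its neighbours' tendency vectors."""
    v = np.asarray(v, dtype=float)
    sims = []
    for vk in neighbour_vs:
        vk = np.asarray(vk, dtype=float)
        denom = np.linalg.norm(v) * np.linalg.norm(vk)
        sims.append(v @ vk / denom if denom else 0.0)
    return float(np.mean(sims))
```

Parallel vectors give 1, orthogonal vectors give 0, so a coherent sub-group scores close to 1.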
Step 2.2): spatial-stability momentum feature extraction: spatial stability means that each feature tracking point keeps stable neighbours within a certain time range, i.e. maintains a specific topological structure over time; the stability of each feature tracking point P_i in a sub-group at time T is obtained from the formula ω_1 = (1/N)·Σ_i |N_1(P_i)\N_T(P_i)|/K, where N is the number of sub-groups marked off, P_i is the i-th feature tracking point, |N_1(P_i)\N_T(P_i)| is the mean number of points, among the adjacent points of a feature tracking point in the sub-group, that keep a stable, unchanged adjacency relation with it from time 1 to time T, and K is the maximum number of adjacent points; the stability of the distances between each feature tracking point in a sub-group and the adjacent points in its neighbourhood is expressed by the formula ω_2 = (1/N)·Σ_i d̄(P_i), where N is the number of sub-groups marked off, P_i is the i-th feature tracking point, and d̄(P_i) is the average distance between a feature tracking point and its k nearest points; the above two stabilities are added to form the overall sub-group stability, written ω(C_k);
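The neighbour-persistence term of this step can be sketched as the fraction of a point's initial K nearest neighbours that are still adjacent at time T. This is a per-point sketch under assumed names; averaging over points and sub-groups, and the distance term, are left out.

```python
def neighbour_stability(neigh_t1, neigh_tT, K):
    """Fraction of a point's K nearest neighbours at time 1 that are
    still among its neighbours at time T (one per-point stability term
    of step 2.2).  neigh_t1 / neigh_tT: sets of neighbour ids."""
    kept = len(set(neigh_t1) & set(neigh_tT))   # adjacency preserved 1 -> T
    return kept / K
```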
Step 2.3): crowd friction-interference momentum feature extraction: the conflict is computed, over the N sub-groups marked off, from the i-th feature tracking point P_i, the motion-tendency vector of each sub-group, the motion-tendency vector of the k-th adjacent point, and avrg(N_other(P_i)), which represents the average number of feature tracking points belonging to other sub-groups contained among the adjacent points of a feature tracking point in a sub-group, where α and β are weight coefficients combining the two terms.
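Since the claim's conflict formula is given only as an image, the following is a hedged sketch of one plausible reading: weight (with α and β) the direction disagreement between a point's sub-group tendency and its neighbours' tendencies against the fraction of neighbours from other sub-groups. All names, the 1 − cosine disagreement term, and the default weights are assumptions.

```python
import numpy as np

def conflict_momentum(v, neighbour_vs, n_other, n_total, alpha=0.5, beta=0.5):
    """Hedged sketch of step 2.3): combine direction disagreement
    (1 - cosine similarity) with the cross-sub-group neighbour fraction
    avrg(N_other), weighted by alpha and beta."""
    v = np.asarray(v, dtype=float)
    disagree = []
    for vk in neighbour_vs:
        vk = np.asarray(vk, dtype=float)
        denom = np.linalg.norm(v) * np.linalg.norm(vk)
        cos = v @ vk / denom if denom else 0.0
        disagree.append(1.0 - cos)
    return alpha * float(np.mean(disagree)) + beta * (n_other / n_total)
```

A sub-group moving with its neighbours and surrounded only by its own members scores 0; opposing motion and many foreign neighbours both raise the score.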
5. The crowd movement behavior recognition method based on sub-group division and momentum feature fusion according to claim 2, characterized in that the standard-deviation multiple coefficient λ in step 1.2) is set to 2.5.
CN201810236397.2A 2018-03-21 2018-03-21 Crowd movement behavior identification method based on fusion of subgroup component division and momentum characteristics Active CN109064484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810236397.2A CN109064484B (en) 2018-03-21 2018-03-21 Crowd movement behavior identification method based on fusion of subgroup component division and momentum characteristics

Publications (2)

Publication Number Publication Date
CN109064484A true CN109064484A (en) 2018-12-21
CN109064484B CN109064484B (en) 2022-02-08

Family

ID=64819919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810236397.2A Active CN109064484B (en) 2018-03-21 2018-03-21 Crowd movement behavior identification method based on fusion of subgroup component division and momentum characteristics

Country Status (1)

Country Link
CN (1) CN109064484B (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529465A (en) * 2016-11-07 2017-03-22 燕山大学 Pedestrian cause and effect relation identification method based on momentum kinetic model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAKAN BILEN et al.: "Dynamic Image Networks for Action Recognition", CVPR *
CHENG Jingeng et al.: "Crowd behavior analysis combining group momentum features and convolutional neural networks", Science Technology and Engineering *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829382A (en) * 2018-12-30 2019-05-31 北京宇琪云联科技发展有限公司 The abnormal object early warning tracing system and method for Behavior-based control feature intelligent analysis
CN109829382B (en) * 2018-12-30 2020-04-24 北京宇琪云联科技发展有限公司 Abnormal target early warning tracking system and method based on intelligent behavior characteristic analysis
CN109918996A (en) * 2019-01-17 2019-06-21 平安科技(深圳)有限公司 The illegal action identification method of personnel, system, computer equipment and storage medium
CN110263690A (en) * 2019-06-12 2019-09-20 成都信息工程大学 A kind of group behavior feature extraction based on small group and description method and system
CN110443789A (en) * 2019-08-01 2019-11-12 四川大学华西医院 A kind of foundation and application method of immunofixation electrophoresis figure automatic identification model
CN110443789B (en) * 2019-08-01 2021-11-26 四川大学华西医院 Method for establishing and using immune fixed electrophoretogram automatic identification model
CN110472604B (en) * 2019-08-20 2021-05-14 中国计量大学 Pedestrian and crowd behavior identification method based on video
CN110472604A (en) * 2019-08-20 2019-11-19 中国计量大学 A kind of pedestrian based on video and crowd behaviour recognition methods
CN112850436A (en) * 2019-11-28 2021-05-28 宁波微科光电股份有限公司 Pedestrian trend detection method and system of elevator intelligent light curtain
CN111160150A (en) * 2019-12-16 2020-05-15 盐城吉大智能终端产业研究院有限公司 Video monitoring crowd behavior identification method based on depth residual error neural network convolution
CN111429185A (en) * 2020-03-27 2020-07-17 京东城市(北京)数字科技有限公司 Crowd portrait prediction method, device, equipment and storage medium
CN111429185B (en) * 2020-03-27 2023-06-02 京东城市(北京)数字科技有限公司 Crowd figure prediction method, device, equipment and storage medium
CN111933298A (en) * 2020-08-14 2020-11-13 医渡云(北京)技术有限公司 Crowd relation determination method, device, electronic equipment and medium
CN111933298B (en) * 2020-08-14 2024-02-13 医渡云(北京)技术有限公司 Crowd relation determining method and device, electronic equipment and medium
CN113283387A (en) * 2021-06-23 2021-08-20 华东交通大学 Group abnormal behavior detection method and device

Also Published As

Publication number Publication date
CN109064484B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN109064484A (en) Crowd movement&#39;s Activity recognition method with momentum Fusion Features is divided based on subgroup
CN104077605B (en) A kind of pedestrian&#39;s search recognition methods based on color topological structure
CN104867161B (en) A kind of method for processing video frequency and device
CN105528794B (en) Moving target detecting method based on mixed Gauss model and super-pixel segmentation
CN109035293B (en) Method suitable for segmenting remarkable human body example in video image
CN104268583B (en) Pedestrian re-recognition method and system based on color area features
CN105893936B (en) A kind of Activity recognition method based on HOIRM and Local Feature Fusion
CN105320917B (en) A kind of pedestrian detection and tracking based on head-shoulder contour and BP neural network
CN107103326A (en) The collaboration conspicuousness detection method clustered based on super-pixel
CN101599179A (en) Method for automatically generating field motion wonderful scene highlights
CN103929685A (en) Video abstract generating and indexing method
CN107153824A (en) Across video pedestrian recognition methods again based on figure cluster
CN107273905A (en) A kind of target active contour tracing method of combination movable information
CN105046720B (en) The behavior dividing method represented based on human body motion capture data character string
CN107341445A (en) The panorama of pedestrian target describes method and system under monitoring scene
CN107230188A (en) A kind of method of video motion shadow removing
CN109766822A (en) Gesture identification method neural network based and system
CN103853794B (en) Pedestrian retrieval method based on part association
Komorowski et al. Deepball: Deep neural-network ball detector
CN108830882A (en) Video abnormal behaviour real-time detection method
CN110363197A (en) Based on the video area-of-interest exacting method for improving visual background extraction model
CN105956604A (en) Action identification method based on two layers of space-time neighborhood characteristics
Fang et al. Pedestrian attributes recognition in surveillance scenarios with hierarchical multi-task CNN models
CN110322479B (en) Dual-core KCF target tracking method based on space-time significance
Zhang et al. An Improved Computational Approach for Salient Region Detection.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant