CN101835034A - Crowd characteristic counting system - Google Patents

Crowd characteristic counting system

Info

Publication number
CN101835034A
CN101835034A CN201010184253A CN101835034B
Authority
CN
China
Prior art keywords
video
audio
crowd
characteristic
counting system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010184253
Other languages
Chinese (zh)
Other versions
CN101835034B (en)
Inventor
王巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN2010101842530A priority Critical patent/CN101835034B/en
Publication of CN101835034A publication Critical patent/CN101835034A/en
Application granted granted Critical
Publication of CN101835034B publication Critical patent/CN101835034B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the fields of computer vision and artificial intelligence, and in particular to intelligent video analysis. The invention provides a crowd characteristic counting system to solve the problems of low accuracy and frequent target loss encountered when a conventional crowd characteristic counting system counts in analysis scenes with a wide field of view and a large number of individuals. The system comprises a video/audio acquisition module, a video/audio quality enhancement module, a video/audio feature extraction module, a crowd characteristic statistics module and a video/audio real-time management and control platform. During feature counting, a trajectory segment counting method is used when the crowd density is below a set threshold, and a motion block matching method is used when the crowd density is above the set threshold. Through video/audio quality enhancement, real-time depth-of-field updating, the trajectory segment counting method and the motion block matching method, the system counts crowd characteristics accurately and overcomes the target loss and matching/tracking difficulties common to conventional systems.

Description

Crowd characteristic counting system
Technical field
The present invention relates to the fields of computer vision and artificial intelligence, and in particular to intelligent video surveillance; it proposes a crowd characteristic counting system.
Background Art
In recent years, with the development of science and technology, public-safety video surveillance systems have become a powerful means of proactively monitoring and combating threats to urban public security. Deploying such systems at stations, docks, airports, harbours, urban traffic arteries and entrances/exits brings the advantages of modern technical prevention into play and is of great significance for maintaining social, political and public-security stability.
In some application scenarios, crowd characteristics such as crowd density, instantaneous head count, people counting and crowd flow speed need to be analysed statistically, in order to guarantee public safety or to enable better management.
For guaranteeing public safety, entrances/exits, school teaching buildings and subway entrances can be monitored so that an alarm is raised in time when trampling or congestion occurs, avoiding security incidents; government buildings and important squares can be monitored so that an alarm is raised in time when crowds gather, allowing the situation to be controlled earlier.
For better management, a crowd characteristic counting system can be set up at parks, ticket offices and scenic spots. By collecting crowd statistics over a period of time, the variation of crowd density and pedestrian flow with season, working days versus holidays and weather conditions can be obtained, and better control measures can be formulated from these statistics: for example, opening more entrances when the number of visitors at a park ticket office surges on holidays, adding staff at certain times to help direct people and vehicles within a scenic area, or carrying out more rational upgrades of existing hardware facilities.
A traditional video surveillance system is essentially only a video acquisition browser: operators must watch for long periods and decide subjectively whether to take emergency measures, yet they can rarely stay attentive for long, and such manual monitoring cannot provide information such as crowd density, instantaneous head count in a region, pedestrian flow or flow speed.
An intelligent video surveillance system replaces the human brain with a computer, making better use of human resources, raising the level of management and achieving efficient monitoring. However, the vast majority of current video surveillance systems perform intelligent analysis by matching and tracking moving targets, whereas typical crowd-analysis scenes have a wide field of view, small individual target areas and a large number of individuals, so distinguishing between targets is very difficult and the target matching and tracking used in conventional video surveillance is hard to apply effectively.
Summary of the invention
The object of the present invention is to solve the above technical problems by providing a crowd characteristic counting system, so as to address the low accuracy and frequent target loss encountered when current crowd characteristic counting systems count in analysis scenes with a wide field of view and a large number of individuals.
In order to solve the above technical problems, according to the specific embodiments provided by the invention, the present invention discloses the following technical solution:
A crowd characteristic counting system comprises:
a video/audio acquisition module, used for processing the acquired video/audio signals to obtain a digital video sequence and a digital audio sequence;
a video/audio quality enhancement module, used for enhancing the quality of the digital video and audio sequences;
a video/audio feature extraction module, used for feature extraction and feature matching on the digital video sequence, and for feature extraction, feature matching and feature classification on the digital audio sequence;
a crowd characteristic statistics module, comprising a depth-of-field update submodule and a feature statistics submodule, used for real-time depth-of-field updating and crowd characteristic statistics;
a video/audio real-time management and control platform, used for receiving the video/audio analysis results and issuing management and control commands according to those results;
wherein the feature statistics submodule is used to compute the instantaneous density of a region, the instantaneous head count of the region, the bidirectional people count values and the bidirectional flow speed values, etc.;
wherein, when counting the bidirectional people count values and the crowd density is below a set threshold, the moving targets are screened and their trajectories are processed in segments: the length of the counting region is defined as LengthofRegion, targets moving in direction A are denoted TargetA[n] (0 < n ≤ Na), where Na is the number of targets in direction A, targets moving in direction B are denoted TargetB[n] (0 < n ≤ Nb), where Nb is the number of targets in direction B, and Target[n].dy denotes the length of a target's trajectory;
The bidirectional people count values NODA and NODB are updated as follows:
NODA = NODA + Σ_{n=1}^{Na} ( TargetA[n].dy / LengthofRegion )
NODB = NODB + Σ_{n=1}^{Nb} ( TargetB[n].dy / LengthofRegion )
When a single moving target has passed completely through the region, its count value is 1, and 1 is added to the people count.
When the crowd density is above the set threshold, the image is divided into small blocks of m*n pixels and these blocks are taken as the processing objects. Each foreground block is matched against the eight blocks around it to determine its direction of motion; the number of blocks moving in each direction in the picture is then counted and combined with the depth of field, i.e. the area occupied by a single person in the video picture, to obtain the final count value.
The above depth-of-field update submodule screens qualifying motion units and updates the depth of field in real time according to the trend of sample target area sizes over different time periods.
In the depth-of-field update submodule, a single individual is defined as the combination of a person and a shadow; the physical meaning of the depth of field is the area occupied by an individual target in the video picture.
In the video/audio acquisition module, the original input video signal may be an analog video signal of arbitrary resolution from a camera, a video recorder or other equipment, or an encoded video stream transmitted over a network; the original input audio signal may be analog audio or a digital audio stream;
the video/audio acquisition module processes the video and audio signals separately: if the input is an analog signal, it is first converted to a digital signal by A/D conversion; if the input is an encoded stream, it is decoded by a decoder and converted to the required format.
The above video/audio quality enhancement module further comprises:
a noise removal submodule, which uses an adjustable alpha-trimmed mean filter to remove noise from the video sequence and the audio sequence;
a signal enhancement submodule, which uses an adjustable power-law transform to enhance the video sequence and the audio sequence.
The above video/audio feature extraction module is divided into two parallel branches:
for the video sequence, the video/audio feature extraction module further comprises:
a video image foreground extraction submodule, used for extracting the foreground of the video image;
a background model is built based on a Gaussian model or the codebook method, and each input frame is compared with the background frame to obtain the foreground;
a video object matching and tracking submodule, used for matching objects in the video sequence, combining contour features with multivariate joint histogram features for effective and accurate object matching;
for the audio sequence, the video/audio feature extraction module further comprises:
a speech extraction submodule, used for extracting speech features;
a speech matching submodule, used for matching objects in the audio sequence: the features extracted from the audio sequence are matched with previous speech objects to obtain the speech objects and update the speech object features;
a speech classification submodule, used for classifying the audio objects.
The above video/audio real-time management and control platform receives the video/audio analysis results and issues various management and control commands according to those results; at the same time, the platform is responsible for issuing video/audio acquisition commands, configuring system parameters and rule parameters for terminal intelligent analysis, and browsing, storing and retrieving the video/audio.
In the above crowd characteristic counting system, after the front end acquires the video/audio signal, data processing, quality enhancement, feature extraction and feature statistics can be performed on the audiovisual information at the front end, and the analysis results are sent to the back end, which issues management and control commands according to the analysis results.
Alternatively, the audiovisual information sent by the front end is subjected to data processing, quality enhancement, feature extraction and feature statistics at the back end, which issues management and control commands according to the analysis results.
Alternatively, after the front end acquires the video/audio signal, it performs video/audio processing and feature extraction and sends the feature stream to the back end; after receiving the front-end data stream, the back end performs further statistics, completes the analysis, and issues management and control commands according to the analysis results.
Compared with the prior art, the present invention has the following advantages:
First, the present invention proposes the concepts of the trajectory segment counting method and of a counting method based on motion block search. Unlike general intelligent video analysis, crowd-analysis scenes generally have a wide field of view, small individual target areas and a large number of individuals, so distinguishing between targets is very difficult and the target matching and tracking of conventional video surveillance is hard to apply effectively.
When counting the bidirectional people count values, if the density is below a certain value the foreground targets are considered to be few and far apart, object matching and tracking can be carried out by the conventional methods of an intelligent audiovisual analysis and control system, and the trajectory segment counting method is used for crowd characteristic analysis; if the density is above that value, the foreground targets are considered to be numerous and very close to each other, and merging, separation and occlusion frequently occur between targets, so the present invention proposes a crowd characteristic counting method based on motion block search.
Second, the present invention updates the depth of field rationally and in real time, making the crowd statistics more accurate. In the present invention a single individual is defined as the combination of a person and his or her shadow; as the light changes during the day, especially outdoors, the number of pixels occupied by a single individual in the picture changes continuously (for example the shadow is short at noon and very long in the morning and at dusk). The present invention therefore keeps updating the depth of field rationally while computing the statistics, making the crowd characteristic statistics more accurate.
Third, in the tracking and matching process of the present invention, contour features and multivariate joint histogram features of the moving targets are mainly used. It is first assumed that the contours of the same moving target intersect in two frames separated by a short interval; this assumption is reasonable and simple. If the contours of two moving targets intersect in two successive frames, they are regarded as the same target. However, because moving targets in the video picture often occlude each other, several object contours frequently intersect across the two frames; in that case, to find the correct match, the multivariate joint histogram is used for further elimination, which guarantees the confidence of the matching result. Both the contour feature and the histogram feature are translation invariant, and practice shows that this is a very effective and accurate way of matching features.
Fourth, the present invention records the target merging process, making the algorithm more intelligent. Target occlusion often occurs in surveillance pictures, and such occlusion directly breaks the trajectory of the target object, causing great inconvenience for intelligent monitoring. Therefore, when the positions of several targets in the video picture overlap, they are defined as a new target, and the trajectories and attributes of the overlapping targets are recorded, so that when these targets separate at some later moment their matches can be picked up again from these records; this greatly improves the trajectory continuity of the target objects.
Fifth, before analysing the video/audio signal, the present invention first performs noise removal, enhancement and other early processing to improve the value of the signal and prepare it for later analysis, which effectively reduces false alarms and missed alarms.
Signal acquisition (digitisation) and transmission inevitably introduce noise (noise arising from environmental conditions and from the quality of the sensing components, and interference mainly caused by noise pollution of the transmission channel); noise removal is the process of restoring the signal.
The purpose of signal enhancement is to reveal details that have been blurred, especially in low-quality, overly dark or overly bright signals, and to highlight the features of interest in the signal.
The ultimate purpose of both noise removal and signal enhancement is to improve the signal, which contributes to the effective operation of the whole crowd characteristic counting system.
Sixth, the system offers three working modes, back-end analysis, front-end analysis and distributed analysis, solving the problem that existing intelligent video analysis systems have a single working mode and cannot transmit and store on demand.
Back-end analysis allows a traditional surveillance system to be upgraded easily: the present invention only needs to be connected in series between the video/audio signal and the display screen.
To save network bandwidth resources, the present invention includes front-end analysis and distributed analysis modes.
Front-end analysis only needs to transmit alarm signals, greatly saving network bandwidth resources.
Distributed analysis only needs to transmit the feature stream (less than 1/50 of the video/audio stream volume); while saving bandwidth, it distributes the tasks between the front end and the back end, giving the whole system efficient analysis capability; the back end carries no heavy processing tasks and does not require a large hardware investment;
front-end and distributed analysis realise "on-demand" monitoring: only when an alarm occurs may the relevant video/audio signal need to be sent to the back end for recording or storage, and in general only a small amount of data needs to be transmitted.
Depending on the application environment, the available network bandwidth and the intended investment, any of the three modes can be chosen.
Finally, the present invention can be realised either in pure software or as a combination of software and hardware. In the software/hardware combination, an embedded video crowd characteristic statistics server is provided, which is simple to install and replaces computer computation with DSP computation to ensure that the surveillance system is reliable and stable. Through low-level and algorithmic optimisation, the video analysis speed is improved and multifunctional high-speed intelligent video analysis of massive video data in real time is realised, effectively resolving the sharp contradiction between the massive growth of video surveillance information and the limited energy of the monitoring staff.
Description of drawings
Fig. 1 Logical structure of the system
Fig. 2 Video/audio acquisition module
Fig. 3 Video/audio quality enhancement module
Fig. 4 Video/audio feature extraction module
Fig. 5 Crowd characteristic statistics module
Fig. 6 Bidirectional count statistics
Fig. 7 Block search diagram
Fig. 8 People-flow statistics based on block search
Fig. 9 Front-end analysis working mode of the system
Fig. 10 Back-end analysis working mode of the system
Fig. 11 Distributed analysis working mode of the system
Embodiment
To make the above objects, features and advantages of the present invention clearer, the present invention is explained in further detail below with reference to the drawings and specific embodiments.
The crowd characteristic counting system comprises the following parts, as shown in Fig. 1:
a video/audio acquisition module, used for processing the acquired video/audio signals to obtain a digital video sequence and a digital audio sequence;
a video/audio quality enhancement module, used for enhancing the quality of the digital video and audio sequences;
a video/audio feature extraction module, used for feature extraction and feature matching on the digital video sequence, and for feature extraction, feature matching and feature classification on the digital audio sequence;
a crowd characteristic statistics module, comprising a depth-of-field update submodule and a feature statistics submodule, used for real-time depth-of-field updating and crowd characteristic statistics;
a video/audio real-time management and control platform, used for receiving the video/audio analysis results and issuing management and control commands according to those results.
The system includes a video/audio acquisition module, used to obtain the digital video sequence and the digital audio sequence. The original input video signal may be an analog video signal of arbitrary resolution from a camera, a video recorder or other equipment, or an encoded video stream transmitted over a network; the original input audio signal may likewise be analog audio or a digital audio stream. Depending on the source, video acquisition is divided into two parts, A/D conversion or decoding, followed by format conversion; similarly, an analog audio signal undergoes A/D digitisation while an encoded audio stream requires decoding, as shown in Fig. 2.
When the video signal is acquired, A/D conversion and decoding come first: if the input is an analog signal it is first converted to a digital signal by A/D conversion; if the input is a stream encoded with MPEG-4/H.264/H.263/AVS it is first decoded. The digital video signal obtained after decoding or A/D conversion is then converted, according to the analysis requirements, into a YUV 4:2:2/RGB digital image sequence of QCIF/CIF/D1 size, ready for use.
When the audio signal is acquired, if the input is an analog signal it is first converted to a digital signal by A/D conversion; if the input is a stream encoded with MPEG-1/MPEG-2/MPEG-4 AAC it is first decoded.
The video/audio quality enhancement module is used to enhance the quality of the video and audio sequences before feature extraction, as shown in Fig. 3.
Before analysing the video/audio signal, the present invention first performs noise removal, enhancement and other early processing to improve the value of the signal and prepare it for later analysis.
Signal acquisition (digitisation) and transmission inevitably introduce noise, such as noise arising from environmental conditions and from the quality of the sensing components, and interference mainly caused by noise pollution of the transmission channel; noise removal is the process of restoring the signal.
The purpose of signal enhancement is to reveal details that have been blurred, especially in low-quality, overly dark or overly bright signals, and to highlight the features of interest in the signal.
The ultimate purpose of both noise removal and signal enhancement is to improve the signal, which contributes to the effective operation of the whole crowd characteristic counting system.
The video/audio quality enhancement module further comprises:
a noise removal submodule, which uses an adjustable alpha-trimmed mean filter to remove noise from the video sequence and the audio sequence;
a signal enhancement submodule, which uses an adjustable power-law transform to enhance the video sequence and the audio sequence.
A. Noise removal with the adjustable alpha-trimmed mean filter:
f(x, y) = (1 / (mn − d)) Σ G_r(i), where 0 ≤ d ≤ mn − 1 is adjustable.
For the video signal, f(x, y) denotes the pixel grey value at point (x, y) after noise removal, N denotes the rectangular sub-image window of size m × n centred at (x, y), and G(i) denotes the grey values of the pixels in that sub-window. The meaning of the formula is: within the neighbourhood N, discard the d/2 highest and the d/2 lowest grey values G(i); G_r(i) denotes the remaining mn − d pixels, and their mean is taken as the denoised grey value at (x, y).
For the audio signal, f(t) denotes the amplitude at time t after noise removal, N denotes the audio segment of length n centred at t, and G(i) denotes the amplitude at moment i. The meaning of the formula is: within the neighbourhood N, discard the d/2 highest and the d/2 lowest amplitudes G(i); G_r(i) denotes the remaining n − d moments, and their mean is taken as the denoised amplitude at time t (with mn replaced by n above).
When d = 0 the alpha-trimmed mean filter degenerates into the arithmetic mean filter, which is effective against Gaussian and uniformly distributed random noise; when d = mn − 1 it degenerates into a median filter, which is effective against salt-and-pepper noise. For other values of d, the adjusted alpha-trimmed mean filter is well suited to cases containing several kinds of noise, for example a mixture of Gaussian and salt-and-pepper noise.
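As an illustration, a minimal Python/NumPy sketch of the adjustable alpha-trimmed mean filter described above is given below; the window sizes, the edge padding and the function names are assumptions of this example, not values fixed by the invention.

```python
import numpy as np

def alpha_trimmed_mean_filter(image, m=3, n=3, d=2):
    """2-D alpha-trimmed mean filter: in each m x n window, drop the d/2 largest
    and d/2 smallest values and average the rest (d=0: arithmetic mean,
    d=m*n-1: median)."""
    h, w = image.shape
    pad_y, pad_x = m // 2, n // 2
    padded = np.pad(image.astype(np.float64), ((pad_y, pad_y), (pad_x, pad_x)), mode='edge')
    lo, hi = d // 2, d - d // 2
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            window = np.sort(padded[y:y + m, x:x + n].ravel())
            out[y, x] = window[lo:m * n - hi].mean()    # mean of the remaining mn-d values
    return out

def alpha_trimmed_mean_1d(signal, n=9, d=2):
    """Same idea for a 1-D audio sequence: trim within a length-n window centred at t."""
    pad = n // 2
    padded = np.pad(np.asarray(signal, dtype=np.float64), pad, mode='edge')
    lo, hi = d // 2, d - d // 2
    out = np.empty(len(signal))
    for t in range(len(signal)):
        window = np.sort(padded[t:t + n])
        out[t] = window[lo:n - hi].mean()
    return out
```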
B. Signal enhancement with the adjustable power-law transform
The basic form of the power-law transform is:
S = c·R^γ, where c and γ are positive constants,
R is the original signal (a two-dimensional image or a one-dimensional audio signal) and S is the enhanced signal; adjusting the parameter γ enhances the signal. Taking images as an example, a dark image (e.g. at night) gains contrast when γ > 1 and a washed-out image (e.g. in fog) gains contrast when γ < 1. Taking speech as an example, speech of small amplitude (scene far from the audio sensor) is enhanced well when γ > 1, and speech of larger amplitude (scene close to the audio sensor) is enhanced well when γ < 1.
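A minimal sketch of the adjustable power-law transform, assuming the signal is normalised to [0, 1] before the transform and mapped back afterwards; the normalisation step and the function name are assumptions of this example.

```python
import numpy as np

def power_law_enhance(signal, gamma, c=1.0):
    """Power-law transform S = c * R**gamma applied to a signal normalised to
    [0, 1]; how the contrast shifts depends on gamma relative to 1 (see text)."""
    r = np.asarray(signal, dtype=np.float64)
    r_min, r_max = r.min(), r.max()
    if r_max > r_min:
        r = (r - r_min) / (r_max - r_min)      # normalise to [0, 1]
    s = c * np.power(r, gamma)
    return s * (r_max - r_min) + r_min          # map back to the original range
```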
The video/audio feature extraction module is one of the core modules of the present invention. It is divided into two parallel branches, which perform feature extraction, feature matching or feature classification on the digital video and audio sequences respectively, as shown in Fig. 4.
For the video sequence, the video/audio feature extraction module further comprises:
a video image foreground extraction submodule, used for extracting the foreground of the video image; a background model is built based on a Gaussian model or the codebook method, and each input frame is compared with the background frame to obtain the foreground;
a video object matching and tracking submodule, used for matching objects in the video sequence, combining contour features with multivariate joint histogram features for effective and accurate object matching;
For the audio sequence, the video/audio feature extraction module further comprises:
a speech extraction submodule, used for extracting speech features;
a speech matching submodule, used for matching objects in the audio sequence: the features extracted from the audio sequence are matched with previous speech objects to obtain the speech objects and update the speech object features;
a speech classification submodule, used for classifying the audio objects.
The image foreground extraction submodule uses Gaussian modelling to build the background image and obtains the foreground by subtracting the background image from each input frame. The first 200 frames of the input video stream are not used for detection but only for building the background model. Let the background image be B(x, y). Assuming that the distribution of each pixel's grey value over a period of time is Gaussian, and considering the influence of a dynamic background (such as a waving flag or swaying branches), K Gaussian models are built for each pixel; each Gaussian model has three parameters, the mean μ_k, the variance σ_k and the weight ω_k, 1 ≤ k ≤ K.
a) Model parameter initialisation: the initial variance σ_1(x, y) of each pixel's first model is set to a relatively large value and the weight ω_1(x, y) to a relatively small value, 0 < ω_1(x, y) < 1; the initial mean μ_1(x, y) is the value of the first input frame I_0(x, y):
μ_1(x, y) = I_0(x, y)
b) Model building and updating: when the n-th frame arrives, the grey value I(x, y) of the input image at (x, y) is matched against the existing models; if a match is found, that model's mean, variance and priority are updated with this value; otherwise a new model is built at this point, with the grey value of the input image as its initial mean, a relatively large variance and a small weight, until k reaches the upper limit K; if k > K, the newly built model replaces the model with the lowest priority.
The Model Matching rule is:
abs(μ_k(x, y) − I_t(x, y)) ≤ 2.5·σ_k(x, y), 1 ≤ k ≤ K
The model update formulas are:
μ_{t+1}^k(x, y) = (1 − α)·μ_t^k(x, y) + α·I_t(x, y)
σ_{t+1}^k²(x, y) = (1 − α)·σ_t^k²(x, y) + α·(I_t(x, y) − μ_t^k(x, y))²
ω_{t+1}^k(x, y) = (1 − α)·ω_t^k(x, y) + α·M_k(x, y)
where α is the update rate, 0 < α < 1, 1 ≤ k ≤ K, and M_k(x, y) = 1 when model k is the first model satisfying the matching condition, otherwise M_k(x, y) = 0.
c) Model ordering: when a pixel has k models and k > 1, the k models are sorted by priority, where priority is computed as ω_k(x, y)/σ_k(x, y). During matching, the model with the highest priority is tried first; if the first model satisfying the matching condition is k, then k is the matching model of this point at this moment, and models with lower priority than k need not be matched.
d) Foreground extraction: when the input video stream exceeds 200 frames, detection begins. The mean μ_k(x, y) of the matched model is taken as the grey value of the background point, i.e. B(x, y), giving the background image:
B(x, y) = μ_k(x, y)
The foreground image is then:
F(x, y) = 1 if abs(I(x, y) − B(x, y)) > 2.5·σ_k(x, y), and F(x, y) = 0 otherwise,
where I(x, y) is the input image, B(x, y) is the background image and σ_k(x, y) is the variance of the matching model at point (x, y).
Finally, simple morphological processing is applied to the resulting binary foreground image, in order to bridge interrupted regions and eliminate irrelevant details.
It should be noted that model building is carried out in the first 200 frames, whereas model updating is applied throughout the crowd statistics process, which ensures that an accurate real-time background image is still obtained when the lighting changes.
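The following Python sketch illustrates the per-pixel Gaussian mixture background model and the foreground rule described above. It loops over pixels for clarity rather than speed; the initial variance/weight values and the fallback background used for unmatched pixels are assumptions of this example, while K, alpha and the 2.5-sigma test follow the text.

```python
import numpy as np

class PixelGaussianBackground:
    """Per-pixel mixture of up to K Gaussians (mean, variance, weight)."""

    def __init__(self, shape, K=3, alpha=0.01, init_var=900.0, init_w=0.05):
        self.K, self.alpha = K, alpha
        self.init_var, self.init_w = init_var, init_w
        self.mu = np.zeros((K,) + shape)
        self.var = np.full((K,) + shape, init_var)
        self.w = np.full((K,) + shape, init_w)
        self.n_models = np.zeros(shape, dtype=int)   # models created so far per pixel

    def update(self, frame):
        """Match each pixel against its models in priority order (w/sigma), update
        the matched model or create/replace one, and return the background B(x, y)
        together with the matched model's standard deviation."""
        frame = frame.astype(np.float64)
        background = np.copy(frame)
        sigma = np.full(frame.shape, np.sqrt(self.init_var))
        for idx in np.ndindex(frame.shape):
            I = frame[idx]
            n = max(int(self.n_models[idx]), 1)
            order = sorted(range(n),
                           key=lambda k: -self.w[(k,) + idx] / np.sqrt(self.var[(k,) + idx]))
            for k in order:
                if abs(self.mu[(k,) + idx] - I) <= 2.5 * np.sqrt(self.var[(k,) + idx]):
                    a, mu_old = self.alpha, self.mu[(k,) + idx]
                    self.mu[(k,) + idx] = (1 - a) * mu_old + a * I
                    self.var[(k,) + idx] = (1 - a) * self.var[(k,) + idx] + a * (I - mu_old) ** 2
                    self.w[(k,) + idx] = (1 - a) * self.w[(k,) + idx] + a   # M_k = 1 for the match
                    background[idx] = self.mu[(k,) + idx]
                    sigma[idx] = np.sqrt(self.var[(k,) + idx])
                    break
            else:
                # no model matched: fall back to the highest-priority model as background,
                # then create a new model (replacing the lowest-priority one if full)
                background[idx] = self.mu[(order[0],) + idx]
                sigma[idx] = np.sqrt(self.var[(order[0],) + idx])
                k_new = int(self.n_models[idx]) if self.n_models[idx] < self.K else order[-1]
                self.mu[(k_new,) + idx] = I
                self.var[(k_new,) + idx] = self.init_var
                self.w[(k_new,) + idx] = self.init_w
                self.n_models[idx] = min(int(self.n_models[idx]) + 1, self.K)
        return background, sigma


def foreground_mask(frame, background, sigma):
    """F(x, y) = 1 where the pixel deviates from the background by more than 2.5 sigma."""
    return (np.abs(frame.astype(np.float64) - background) > 2.5 * sigma).astype(np.uint8)
```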
The object matching and tracking submodule is used for object matching in the video/audio sequence. The foreground detected in each frame is matched against the foreground detected in the next frame using indices such as contour/bounding-box intersection and grey-level histogram; if the same foreground is detected in several consecutive frames, it is defined as a target and assigned a number, and it continues to be matched against the foreground images, at which point motion attributes of the target such as its trajectory and moving speed can be obtained. Similarly, the features extracted from the audio sequence are matched with previous speech objects to obtain the speech objects and update their features.
In the tracking and matching process of the present invention, contour features and multivariate joint histogram features of the moving targets are mainly used.
First, it is assumed that the contours of the same object intersect in two frames separated by a short interval; this assumption is reasonable and simple.
Then, if several object contours intersect across the two frames, the multivariate joint histogram is used for further elimination; the multivariate joint histogram guarantees the confidence of the matching result.
Both the contour feature and the histogram feature are translation invariant, and practice shows that this is a very effective and accurate way of matching features.
Object matching (a simplified sketch follows this list):
1. Suppose there are N existing targets T1, T2, T3, ..., Tn, and the current frame detects M foregrounds F1, F2, F3, ..., Fm;
2. Check whether F1 intersects any of the N target contours: if F1 intersects only one target Ti and the histograms of F1 and Ti match successfully, F1 is considered to be Ti and the attributes of Ti are updated with those of F1; if F1 does not intersect any Ti, a new target T(n+1) is created from F1; if F1 intersects several targets, a target merge is considered to have occurred, a new target is created, its attribute is marked as "merged", and the numbers of the targets before the merge are recorded.
3. Step 2 is repeated for F2, F3, ..., Fm. If several foregrounds intersect one target T, a target separation is considered to have occurred, a new target is created, its attribute is marked as "separated", and the numbers of the targets before the separation are recorded.
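A simplified Python sketch of the matching rules above, using bounding-box overlap as a stand-in for contour intersection and histogram intersection as a stand-in for the multivariate joint histogram; the target/foreground dictionary layout and the similarity threshold are assumptions of this example, and split handling (several foregrounds hitting one target) is omitted for brevity.

```python
import numpy as np

def boxes_intersect(a, b):
    """Overlap test for axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def histogram_similarity(h1, h2):
    """Histogram intersection of two normalised grey-level histograms."""
    return float(np.minimum(h1, h2).sum())

def match_foregrounds(targets, foregrounds, sim_threshold=0.6):
    """One matching pass: a foreground updates the single target it intersects,
    becomes a new target if it intersects none, or creates a merge record if it
    intersects several targets."""
    next_id = max((t['id'] for t in targets), default=0) + 1
    for fg in foregrounds:
        inter = [t for t in targets if boxes_intersect(t['box'], fg['box'])]
        if len(inter) == 1 and histogram_similarity(inter[0]['hist'], fg['hist']) >= sim_threshold:
            t = inter[0]                               # same target: update its attributes
            t['box'], t['hist'] = fg['box'], fg['hist']
            t['trajectory'].append(fg['box'][:2])
        elif len(inter) > 1:                           # several targets overlap: record a merge
            targets.append({'id': next_id, 'box': fg['box'], 'hist': fg['hist'],
                            'trajectory': [fg['box'][:2]],
                            'merged_from': [t['id'] for t in inter]})
            next_id += 1
        else:                                          # unseen foreground: new target
            targets.append({'id': next_id, 'box': fg['box'], 'hist': fg['hist'],
                            'trajectory': [fg['box'][:2]], 'merged_from': []})
            next_id += 1
    return targets
```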
The classification submodule is used to classify the audio objects.
After quality enhancement, speech features are extracted from the windowed audio sequence, yielding the zero-crossing rate, short-time amplitude and short-time energy of the speech signal, the most basic set of speech parameters, which are used by the management and control platform for early-warning decisions.
Let the input speech be V(n) and the window function W(n), with window length [0, N−1]; let V_w(n) = V(n)·W(n). Then:
Short-time energy: E = Σ_{n=0}^{N−1} V_w(n)²
Short-time amplitude: M = Σ_{n=0}^{N−1} |V_w(n)|
Zero-crossing rate: Z = (1/2)·Σ_{n=1}^{N−1} |sgn(V_w(n)) − sgn(V_w(n−1))|
where sgn(·) is the sign function.
The speech objects are also classified according to speech attributes such as frequency and amplitude:
according to the zero-crossing rate, they can be divided into low, medium and high frequency bands (50 Hz/100 Hz/500 Hz/1000 Hz/10000 Hz);
according to the energy, they can be divided into different energy grades.
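A short Python sketch of the short-time speech parameters named above (energy, amplitude, zero-crossing rate) and of a rough band label derived from the zero-crossing rate; the frame length, hop size, Hamming window and band thresholds are assumptions of this example.

```python
import numpy as np

def short_time_features(v, frame_len=256, hop=128):
    """Short-time energy, short-time amplitude and zero-crossing rate per frame,
    with V_w(n) = V(n) * W(n) and W a Hamming window."""
    v = np.asarray(v, dtype=np.float64)
    w = np.hamming(frame_len)
    feats = []
    for start in range(0, len(v) - frame_len + 1, hop):
        vw = v[start:start + frame_len] * w
        energy = np.sum(vw ** 2)                                   # short-time energy
        amplitude = np.sum(np.abs(vw))                             # short-time amplitude
        zcr = 0.5 * np.sum(np.abs(np.diff(np.sign(vw))))           # zero-crossing rate
        feats.append((energy, amplitude, zcr))
    return feats

def band_label(zcr, frame_len, sample_rate):
    """Rough frequency band from the zero-crossing rate (thresholds assumed)."""
    freq = zcr * sample_rate / (2.0 * frame_len)   # approximate dominant frequency
    if freq < 100:
        return 'low'
    if freq < 1000:
        return 'mid'
    return 'high'
```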
The feature statistics module is a core module of the present invention. It performs the following statistical analysis on the extracted foreground in combination with the security region Region set by the management and control module (detection can be restricted to a delimited security region or applied to the full frame), as shown in Fig. 5.
The feature statistics module is divided into two parts: depth-of-field updating and feature statistics.
1. Depth-of-field updating
As the light changes during the day, especially outdoors, the number of pixels occupied by a single person in the picture changes continuously (for example the shadow is short at noon and very long in the morning and at dusk). Because the depth of field is defined in the present invention as the combination of a person and his or her shadow, it must be updated continuously as conditions change. In the present invention the depth of field is updated concurrently with the feature statistics, and this updating is essential. The depth-of-field update steps are as follows (an illustrative sketch of these rules is given after this list):
depth-of-field initialisation is performed by user calibration;
target samples used for the depth-of-field update are selected; the selection rule is that the sample area must be comparable to the initial area calibrated by the user;
when the areas of several consecutive samples are on the small side and the number of such samples reaches a certain threshold, the current depth-of-field area is reduced; when the areas of several consecutive samples are on the large side and the number of such samples reaches a certain threshold, the current depth-of-field area is increased; in addition, when no qualifying sample is obtained for a continuous period of time, the existing depth-of-field value is replaced by the initial value assigned by the user.
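A Python sketch of the depth-of-field update rules, under assumed values for the streak lengths, the qualifying-sample band and the reset period; none of these numbers comes from the invention.

```python
def update_depth(depth_area, sample_areas, init_area,
                 streak_threshold=5, reset_after=50):
    """Shrink the depth area after a run of under-sized qualifying samples, grow it
    after a run of over-sized ones, and reset it to the user-calibrated initial
    value after a long stretch without qualifying samples."""
    small_streak = big_streak = dry_spell = 0
    for area in sample_areas:
        # qualifying sample: comparable to the user-calibrated initial area (assumed band)
        if area is None or not (0.5 * init_area <= area <= 2.0 * init_area):
            dry_spell += 1
            small_streak = big_streak = 0
            if dry_spell >= reset_after:
                depth_area, dry_spell = init_area, 0
            continue
        dry_spell = 0
        if area < 0.9 * depth_area:
            small_streak, big_streak = small_streak + 1, 0
            if small_streak >= streak_threshold:
                depth_area, small_streak = area, 0     # reduce the current depth area
        elif area > 1.1 * depth_area:
            big_streak, small_streak = big_streak + 1, 0
            if big_streak >= streak_threshold:
                depth_area, big_streak = area, 0       # increase the current depth area
        else:
            small_streak = big_streak = 0
    return depth_area
```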
2. Feature statistics. The feature statistics are carried out in three steps and mainly produce six values.
Computing the instantaneous density Density of the region and the instantaneous head count NumberOfRegion (NOR) of the region
Density is the ratio of the foreground area AreaOfForeground (AOF) within the region to the total region area AreaOfRegion (AOR):
Density = AOF / AOR × 100%
The instantaneous head count NOR of the region is the quotient of the total foreground area AOF within the region and the depth-of-field area Depth.Area, that is:
NOR = AOF / Depth.Area
Computing the bidirectional people count values NumberOfDirectA (NODA) and NumberOfDirectB (NODB)
Different processing is applied according to the density computed above. When the density value is below 30%, the foreground targets are considered to be far apart from each other, effective object matching and tracking can be carried out by the conventional methods of an intelligent audiovisual analysis and control system, and, given the particularities of the scene, trajectory segment statistics are computed for each target on that basis. When the density value is above 30%, distinguishing between targets is very difficult and object matching and tracking is hard to apply effectively, so the present invention proposes a crowd characteristic counting method based on motion block search. As shown in Fig. 6.
A. Trajectory segment counting method
Because target trajectories are often tracked discontinuously in common crowd-analysis scenes, the trajectory segment counting method divides a target's trajectory into segments of k frames each; a target trajectory only needs to be continuous for k frames to contribute to the bidirectional people count. In the present invention, the trajectory segment counting method performs statistics once every k frames.
Targets moving in direction A are denoted TargetA[n] (0 < n ≤ Na), where Na is the number of targets in direction A; targets moving in direction B are denoted TargetB[n] (0 < n ≤ Nb), where Nb is the number of targets in direction B; Target[n].dy denotes the length of the target's trajectory.
The bidirectional people count values NODA and NODB are updated as follows:
NODA = NODA + Σ_{n=1}^{Na} ( TargetA[n].dy / LengthofRegion )
NODB = NODB + Σ_{n=1}^{Nb} ( TargetB[n].dy / LengthofRegion )
The physical meaning of the fractions is: assuming that a person who passes through the whole security region counts as 1, then after walking Target[n].dy the person should count Target[n].dy / LengthofRegion. Taking direction A as an example, Σ_{n=1}^{Na} ( TargetA[n].dy / LengthofRegion ) represents the increment in the people count contributed by all Na targets over the past k frames.
Therefore, in this method, when a person has walked 1/4 of the region the count increases by 1/4; when the person has walked 1/2 of the region the count increases by 1/2, and so on. The counting of a single individual begins as soon as the individual enters the region, and by the time it finally leaves the region its count value has increased to exactly 1.
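A minimal sketch of the trajectory-segment count update and a usage example; the dictionary field name 'dy' mirrors Target[n].dy from the text, everything else is illustrative.

```python
def update_directional_counts(noda, nodb, targets_a, targets_b, length_of_region):
    """Every k frames, each target adds the fraction of the counting region it has
    crossed (dy / LengthofRegion), so a full traversal contributes exactly 1."""
    noda += sum(t['dy'] for t in targets_a) / float(length_of_region)
    nodb += sum(t['dy'] for t in targets_b) / float(length_of_region)
    return noda, nodb

# Example: one A-direction target that moved a quarter of the region adds 0.25.
noda, nodb = update_directional_counts(0.0, 0.0, [{'dy': 25.0}], [], length_of_region=100.0)
```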
B. Crowd characteristic counting method based on motion block search
The block-matching-based method is suitable for scenes with higher density. The image is divided into small blocks of m*n pixels, each carrying YUV colour information. Every block of the n-th frame is colour-matched against the eight neighbouring blocks at the corresponding position in the (n−1)-th frame to determine the direction of motion of each block; the directions of motion of all blocks in the picture are then counted and combined with the depth of field set by the user (i.e. the area occupied by a single person in the video picture) to obtain the bidirectional people counts. As shown in Fig. 7.
After obtaining the total number of blocks BlockNumOfA (BNOA) whose direction of motion is A and the total number of blocks BlockNumOfB (BNOB) whose direction of motion is B, the statistics are computed by the following formulas:
NODA = NODA + ( BNOA × m × n / Depth.Area ) × ( LengthOfBlock / LengthOfRegion )
NODB = NODB + ( BNOB × m × n / Depth.Area ) × ( LengthOfBlock / LengthOfRegion )
where m and n are the block dimensions, Depth.Area is the current depth-of-field area, LengthOfBlock is the length of a block along the AB direction, and LengthOfRegion is the total length of the security region along the AB direction. In the first formula, in the first fraction ( BNOA × m × n / Depth.Area ), the numerator is the total area of blocks whose direction of motion is A and the denominator, the depth-of-field area, represents the number of pixels occupied by one person, so the quotient represents the increment in the number of people moving in direction A. The physical meaning of the second fraction ( LengthOfBlock / LengthOfRegion ) is: assuming that a person who passes through the whole security region counts as 1, then after walking LengthOfBlock the person should count LengthOfBlock / LengthOfRegion, as shown in Fig. 8. Multiplying the two fractions gives the instantaneous counting increment.
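An illustrative Python sketch of the block-search count: each foreground block is matched against the co-located block and its eight neighbours in the previous frame, and the block totals are converted into a count increment with the formulas above. The cost function, the foreground threshold and the mapping from vertical offset to directions A/B are assumptions of this example.

```python
import numpy as np

def block_direction(prev_frame, cur_frame, by, bx, m, n):
    """Return the (dy, dx) offset (multiples of the block size) of the best-matching
    block among the co-located block and its eight neighbours in the previous frame."""
    block = cur_frame[by:by + m, bx:bx + n].astype(np.float64)
    best_cost, best_off = None, (0, 0)
    for dy in (-m, 0, m):
        for dx in (-n, 0, n):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + m > prev_frame.shape[0] or x + n > prev_frame.shape[1]:
                continue
            cand = prev_frame[y:y + m, x:x + n].astype(np.float64)
            cost = np.abs(block - cand).sum()          # colour difference
            if best_cost is None or cost < best_cost:
                best_cost, best_off = cost, (dy, dx)
    return best_off

def block_count_update(noda, nodb, prev_frame, cur_frame, fg_mask, m, n,
                       depth_area, length_of_block, length_of_region):
    """Count BNOA/BNOB foreground blocks by vertical motion direction and apply
    NOD += (BNO * m * n / Depth.Area) * (LengthOfBlock / LengthOfRegion)."""
    h, w = cur_frame.shape[:2]
    bnoa = bnob = 0
    for by in range(0, h - m + 1, m):
        for bx in range(0, w - n + 1, n):
            if fg_mask[by:by + m, bx:bx + n].mean() < 0.5:   # skip background blocks
                continue
            dy, _ = block_direction(prev_frame, cur_frame, by, bx, m, n)
            if dy < 0:
                bnoa += 1          # labelled direction A (assumed sign convention)
            elif dy > 0:
                bnob += 1          # labelled direction B
    noda += (bnoa * m * n / depth_area) * (length_of_block / length_of_region)
    nodb += (bnob * m * n / depth_area) * (length_of_block / length_of_region)
    return noda, nodb
```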
Computing the bidirectional flow speed values VelocityOfDirectA (VODA) and VelocityOfDirectB (VODB)
Speed is the distance an object covers in a period of time. In the present invention, flow speed statistics are computed once every k frames, i.e. the flow speed is updated every k frames. The flow speed is computed as follows (see the sketch after these formulas):
targets with K trajectory points are chosen as sample objects and their directions are determined; suppose the numbers of sample objects in directions A and B are Na and Nb respectively;
the total distances SA and SB moved by the Na targets in direction A and the Nb targets in direction B are computed;
the processing time of the k frames is known to be TimeDiff, in seconds (s); from distance and time, the average flow speeds in directions A and B are given by the following formulas, in metres per second (m/s):
VelocityOfDirectA = SA / (Na × TimeDiff)
VelocityOfDirectB = SB / (Nb × TimeDiff)
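A one-function sketch of the flow-speed update; the distances SA/SB are assumed to already be in metres.

```python
def directional_flow_speeds(sa_metres, sb_metres, na, nb, time_diff_s):
    """Average flow speed per direction in m/s: total distance of the sampled
    targets divided by (number of samples * elapsed time)."""
    voda = sa_metres / (na * time_diff_s) if na else 0.0
    vodb = sb_metres / (nb * time_diff_s) if nb else 0.0
    return voda, vodb
```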
The management and control platform receives the video analysis results and, making full use of the auxiliary role of the audio, issues various management and control commands.
For example: the pedestrian flow and flow speed of a monitored scenic spot over a period of time are recorded statistically, and the data are provided to the user for formulating better control measures;
when the density value in the detected picture rises sharply at a certain moment and is accompanied by abnormal speech (high frequency, high short-time amplitude or high energy), a crowd-gathering incident is considered possible and an alarm is raised;
when abnormal speech is detected (high frequency, high short-time amplitude or high energy) but no video abnormality is present, the video detection sensitivity is automatically increased;
at the same time, the management and control platform is responsible for issuing video/audio acquisition commands, configuring system parameters and rule parameters for terminal intelligent analysis, and browsing, storing and retrieving the video/audio. Specific functions include: remote browsing of multi-channel real-time surveillance video in multiple display modes (multi-picture split display/full-screen display), multi-channel video selection, equipment query, pan/tilt/zoom control (PTZ control/preset position setting/patrol setting, etc.), real-time display of alarm information, playing or stopping alarm video, viewing alarm snapshots, querying alarm information by condition (equipment/time/event/state, etc.), recording (real-time recording/alarm-linked recording/manual recording/cyclic recording/scheduled recording), recording retrieval, recording playback, recording export, electronic map, and operation log query.
The crowd characteristic counting system has three working modes:
1. Front-end analysis: the crowd characteristic counting system performs intelligent analysis and management immediately after the video/audio acquisition device. After the front end acquires the video/audio signal, data processing, quality enhancement, feature extraction and feature statistics are performed on the audiovisual information at the front end, and the analysis results are sent to the back end, which issues management and control commands according to the analysis results. As shown in Fig. 9.
2. Back-end analysis: the crowd characteristic counting system performs intelligent analysis and management on the audiovisual information before the display screen. The audiovisual information sent by the front end is subjected to data processing, quality enhancement, feature extraction and feature statistics at the back end, which issues management and control commands according to the analysis results. As shown in Fig. 10.
3. Distributed analysis: the crowd characteristic counting system performs video/audio acquisition and feature extraction after the video/audio acquisition device and sends the feature stream to the back end; after receiving the front-end data stream, the back end performs further recognition, completes the analysis, and issues management and control commands according to the analysis results. As shown in Fig. 11.
Back-end analysis allows a traditional surveillance system to be upgraded easily: the present invention only needs to be connected in series between the video/audio signal and the display screen.
To save network bandwidth resources, the present invention includes front-end analysis and distributed analysis modes.
Front-end analysis only needs to transmit alarm signals, greatly saving network bandwidth resources.
Distributed analysis only needs to transmit the feature stream (less than 1/50 of the video/audio stream volume); while saving bandwidth, it distributes the tasks between the front end and the back end, giving the whole system efficient analysis capability; the back end carries no heavy processing tasks and does not require a large hardware investment;
front-end and distributed analysis realise "on-demand" monitoring: only when an alarm occurs may the relevant video/audio signal need to be sent to the back end for recording or storage, and in general only a small amount of data needs to be transmitted.
Depending on the application environment, the available network bandwidth and the intended investment, any of the three modes can be chosen.
The invention discloses a crowd characteristic counting system comprising a video/audio acquisition module, a video/audio quality enhancement module, a video/audio feature extraction module, a crowd characteristic statistics module and a video/audio real-time management and control platform. When the crowd density is below a set threshold, the trajectory segment counting method is used for feature statistics; when the crowd density is above the set threshold, the motion block matching method is used. The system has three working modes: front-end analysis, back-end analysis and distributed analysis. Through the application of video/audio quality enhancement, real-time depth-of-field updating, the trajectory segment counting method and the motion block matching method, the system achieves accurate crowd characteristic statistics and overcomes the target loss and matching/tracking difficulties to which conventional systems are prone. The three working modes provided by the invention solve the problem that existing intelligent analysis systems have a single working mode and cannot transmit and store on demand, and give the system excellent adaptability.

Claims (10)

1. A crowd characteristic counting system, characterized in that it comprises:
a video/audio acquisition module, used for processing the acquired video/audio signals to obtain a digital video sequence and a digital audio sequence;
a video/audio quality enhancement module, used for enhancing the quality of the digital video and audio sequences;
a video/audio feature extraction module, used for feature extraction and feature matching on the digital video sequence, and for feature extraction, feature matching and feature classification on the digital audio sequence;
a crowd characteristic statistics module, comprising a depth-of-field update submodule and a feature statistics submodule, used for real-time depth-of-field updating and crowd characteristic statistics;
a video/audio real-time management and control platform, used for receiving the video/audio analysis results and issuing management and control commands according to those results;
wherein said feature statistics submodule is used to compute the instantaneous density of a region, the instantaneous head count of the region, the bidirectional people count values and the bidirectional flow speed values, etc.;
wherein, when counting the bidirectional people count values and the crowd density is below a set threshold, the trajectories of the moving targets are processed in segments: the length of the counting region is defined as LengthofRegion, targets moving in direction A are denoted TargetA[n] (0 < n ≤ Na), where Na is the number of targets in direction A, targets moving in direction B are denoted TargetB[n] (0 < n ≤ Nb), where Nb is the number of targets in direction B, and Target[n].dy denotes the length of the target's trajectory within a specific period of time;
the bidirectional people count values NODA and NODB are updated as follows:
NODA = NODA + Σ_{n=1}^{Na} ( TargetA[n].dy / LengthofRegion )
NODB = NODB + Σ_{n=1}^{Nb} ( TargetB[n].dy / LengthofRegion )
when a single moving target has passed completely through the region, its count value is 1, and 1 is added to the people count;
when the crowd density is above the set threshold, the image is divided into small blocks of m*n pixels and these blocks are taken as the processing objects; each foreground block is matched against the eight blocks around it to determine its direction of motion; the number of blocks moving in each direction in the picture is then counted and combined with the depth of field, i.e. the area occupied by a single person in the video picture, to obtain the final count value.
2. The crowd characteristic counting system according to claim 1, characterized in that:
said depth-of-field update submodule screens qualifying motion units and updates the depth of field in real time according to the trend of sample target area sizes over different time periods.
3. The crowd characteristic counting system according to claim 1 or 2, characterized in that:
in said depth-of-field update submodule, a single individual is defined as the combination of a person and a shadow, and the physical meaning of the depth of field is the area occupied by an individual target in the video picture.
4. The crowd characteristic counting system according to claim 1, characterized in that:
in said video/audio acquisition module, the original input video signal may be an analog video signal of arbitrary resolution from a camera, a video recorder or other equipment, or an encoded video stream transmitted over a network, and the original input audio signal may be analog audio or a digital audio stream;
said video/audio acquisition module processes the video and audio signals separately: if the input is an analog signal, it is first converted to a digital signal by A/D conversion; if the input is an encoded stream, it is decoded by a decoder and converted to the required format.
5. The crowd characteristic counting system according to claim 1, characterized in that said video/audio quality enhancement module further comprises:
a noise removal submodule, which uses an adjustable alpha-trimmed mean filter to remove noise from the video sequence and the audio sequence;
a signal enhancement submodule, which uses an adjustable power-law transform to enhance the video sequence and the audio sequence.
6. The crowd characteristic counting system according to claim 1, characterized in that said video/audio feature extraction module is divided into two parallel branches:
for the video sequence, said video/audio feature extraction module further comprises:
a video image foreground extraction submodule, used for extracting the foreground of the video image; a background model is built based on a Gaussian model or the codebook method, and each input frame is compared with the background frame to obtain the foreground;
a video object matching and tracking submodule, used for matching objects in the video sequence, combining contour features with multivariate joint histogram features for effective and accurate object matching;
for the audio sequence, said video/audio feature extraction module further comprises:
a speech extraction submodule, used for extracting speech features;
a speech matching submodule, used for matching objects in the audio sequence: the features extracted from the audio sequence are matched with previous speech objects to obtain the speech objects and update the speech object features;
a speech classification submodule, used for classifying the audio objects.
7. The crowd characteristic counting system according to claim 1, characterized in that said video/audio real-time management and control platform receives the video/audio analysis results and issues various management and control commands according to those results; at the same time, the platform is responsible for issuing video/audio acquisition commands, configuring system parameters and rule parameters for terminal intelligent analysis, and browsing, storing and retrieving the video/audio.
8. crowd characteristic counting system according to claim 1, it is characterized in that, described crowd characteristic counting system, after front end carries out the video/audio signal collection, can carry out processing such as data processing, quality lifting, feature extraction, characteristic statistics to audiovisual information at front end, and analysis result sent to the rear end, the rear end is according to analysis result issue management and control order.
9. The crowd characteristic counting system according to claim 1, characterized in that data processing, quality improvement, feature extraction and characteristic counting are performed at the back end on the audiovisual information sent by the front end, and management and control commands are issued according to the analysis results.
10. The crowd characteristic counting system according to claim 1, characterized in that, after the front end acquires the video and audio signals, the front end performs video and audio processing and feature extraction and sends the feature stream to the back end; the back end receives the front-end data stream, performs the further counting to complete the analysis, and issues management and control commands according to the analysis results.
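Claims 8-10 describe three ways of splitting the work between the front end and the back end but do not specify a wire format. Purely as an illustration of the claim-10 variant (front end extracts features, back end counts), the sketch below pushes one newline-delimited JSON feature record over a TCP socket; every field name and the back-end address are assumptions, not taken from the patent.

import json
import socket
import time

def send_feature_record(sock: socket.socket, camera_id: str, target_areas: list, density: float) -> None:
    """Send one frame's extracted features to the back end as a
    newline-delimited JSON record (all field names are illustrative)."""
    record = {
        "camera_id": camera_id,
        "timestamp": time.time(),
        "target_areas": target_areas,   # per-blob pixel areas from the foreground extraction step
        "density": density,             # lets the back end choose between the two counting methods
    }
    sock.sendall((json.dumps(record) + "\n").encode("utf-8"))

# Usage sketch (back-end host and port are assumptions):
# with socket.create_connection(("backend.example", 9000)) as sock:
#     send_feature_record(sock, "cam-01", target_areas=[812, 790, 1043], density=0.42)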
CN2010101842530A 2010-05-27 2010-05-27 Crowd characteristic counting system Expired - Fee Related CN101835034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101842530A CN101835034B (en) 2010-05-27 2010-05-27 Crowd characteristic counting system

Publications (2)

Publication Number Publication Date
CN101835034A true CN101835034A (en) 2010-09-15
CN101835034B CN101835034B (en) 2011-12-14

Family

ID=42718939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101842530A Expired - Fee Related CN101835034B (en) 2010-05-27 2010-05-27 Crowd characteristic counting system

Country Status (1)

Country Link
CN (1) CN101835034B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663491B (en) * 2012-03-13 2014-09-03 浙江工业大学 Method for counting high density population based on SURF characteristic

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1352780A (en) * 1999-11-03 2002-06-05 特许科技有限公司 Image processing techniques for a video based traffic monitoring system and methods therefor
CN101366045A * 2005-11-23 2009-02-11 实物视频影像公司 Object density estimation in video
CN101431664A (en) * 2007-11-06 2009-05-13 同济大学 Automatic detection method and system for intensity of passenger flow based on video image
JP2010079419A (en) * 2008-09-24 2010-04-08 Mitsubishi Electric Corp Number-of-person counter and number-of-person counting method

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254401A (en) * 2011-05-11 2011-11-23 北京城市系统工程研究中心 Intelligent analyzing method for passenger flow motion
CN102868874A (en) * 2012-09-21 2013-01-09 浙江宇视科技有限公司 Intelligent analysis service migration method and device
CN102868874B (en) * 2012-09-21 2016-02-03 浙江宇视科技有限公司 A kind of intellectual analysis business migration method and device
CN104658007A (en) * 2013-11-25 2015-05-27 华为技术有限公司 Identifying method and device for actual moving targets
CN105335782A (en) * 2014-05-26 2016-02-17 富士通株式会社 Image-based target object counting method and apparatus
CN106464846A (en) * 2014-06-02 2017-02-22 派视尔株式会社 Camera, DVR, and image monitoring system comprising same
CN106888365A (en) * 2017-02-22 2017-06-23 成都华安视讯科技有限公司 A kind of dynamic environment monitoring system and method
CN108388838B (en) * 2018-01-26 2021-07-09 重庆邮电大学 Unmanned aerial vehicle ground crowd monitoring system and monitoring method
CN108388838A (en) * 2018-01-26 2018-08-10 重庆邮电大学 Unmanned plane population surveillance system and monitoring method over the ground
CN112989865B (en) * 2019-12-02 2023-05-30 山东浪潮科学研究院有限公司 Crowd attention focus judging method based on head gesture judgment
CN112989865A (en) * 2019-12-02 2021-06-18 山东浪潮人工智能研究院有限公司 Crowd attention focus judgment method based on human head posture judgment
CN110971826A (en) * 2019-12-06 2020-04-07 长沙千视通智能科技有限公司 Video front-end monitoring device and method
CN112305945A (en) * 2020-10-21 2021-02-02 泰州程顺制冷设备有限公司 Protection mechanism malfunction avoiding platform
CN115240142A (en) * 2022-07-28 2022-10-25 杭州海宴科技有限公司 Cross-media-based abnormal behavior early warning system and method for crowd in outdoor key places
CN115240142B (en) * 2022-07-28 2023-07-28 杭州海宴科技有限公司 Outdoor key place crowd abnormal behavior early warning system and method based on cross media
CN116248830A (en) * 2022-12-17 2023-06-09 航天行云科技有限公司 Wild animal identification method, terminal and system based on space-based Internet of things
CN117058627A (en) * 2023-10-13 2023-11-14 阳光学院 Public place crowd safety distance monitoring method, medium and system
CN117058627B (en) * 2023-10-13 2023-12-26 阳光学院 Public place crowd safety distance monitoring method, medium and system

Also Published As

Publication number Publication date
CN101835034B (en) 2011-12-14

Similar Documents

Publication Publication Date Title
CN101835034B (en) Crowd characteristic counting system
CN101799876B (en) Video/audio intelligent analysis management control system
CN101846576B (en) Video-based liquid leakage analyzing and alarming system
CN101859436B (en) Large-amplitude regular movement background intelligent analysis and control system
CN107229894B (en) Intelligent video monitoring method and system based on computer vision analysis technology
CN101833838B (en) Large-range fire disaster analyzing and early warning system
CN102136059B (en) Video- analysis-base smoke detecting method
CN101635835A (en) Intelligent video monitoring method and system thereof
CN101727672A (en) Method for detecting, tracking and identifying object abandoning/stealing event
CN111462488A (en) Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN103871082A (en) Method for counting people stream based on security and protection video image
CN104063883A (en) Surveillance video abstract generating method based on combination of object and key frames
CN104244113A (en) Method for generating video abstract on basis of deep learning technology
CN101221663A (en) Intelligent monitoring and alarming method based on movement object detection
CN102254394A (en) Antitheft monitoring method for poles and towers in power transmission line based on video difference analysis
CN102902960A (en) Leave-behind object detection method based on Gaussian modelling and target contour
Rabbouch et al. A vision-based statistical methodology for automatically modeling continuous urban traffic flows
CN104463125A (en) DSP-based automatic face detecting and tracking device and method
CN113450573A (en) Traffic monitoring method and traffic monitoring system based on unmanned aerial vehicle image recognition
CN103500456B (en) A kind of method for tracing object based on dynamic Bayesian network network and equipment
Telagarapu et al. A novel traffic-tracking system using morphological and Blob analysis
CN113255550B (en) Pedestrian garbage throwing frequency counting method based on video
CN105095891A (en) Human face capturing method, device and system
CN105930814A (en) Method for detecting personnel abnormal gathering behavior on the basis of video monitoring platform
Faisal et al. Automated traffic detection system based on image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20100915

Assignee: Beijing Boostiv Technology Co., Ltd.

Assignor: Wang Wei

Contract record no.: 2012110000149

Denomination of invention: Crowd characteristic counting system

Granted publication date: 20111214

License type: Exclusive License

Record date: 20120829

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111214

Termination date: 20180527

CF01 Termination of patent right due to non-payment of annual fee