CN103839085B - Detection method for abnormal crowd density in train compartments - Google Patents
Detection method for abnormal crowd density in train compartments
- Publication number: CN103839085B (application CN201410094075.0A)
- Authority
- CN
- China
- Prior art keywords
- crowd density
- compartment
- feature
- image
- density
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a detection method for abnormal crowd density in train compartments, comprising the following steps: collect several compartment sample images with different crowd density grades and compute their multi-modal fusion features; train a crowd density classifier; compute the multi-modal fusion feature of an image to be detected; use the crowd density classifier to obtain the crowd density grade of the compartment corresponding to the image to be detected, and judge accordingly whether that compartment's crowd density is abnormal; automatically record the related crowd-density anomaly information. Using multi-modal fusion features, the invention automatically learns and recognizes abnormal crowd-density scenes, solving the problem of identifying and recording abnormal crowd density in real time and automatically while a train is running. The method is insensitive to crowd occlusion, illumination changes and slight camera distortion in train scenes, and is suitable for abnormal crowd-density detection with either 360-degree cameras or fixed bullet cameras.
Description
Technical field
The invention belongs to the technical field of video processing, and in particular relates to a method for analyzing, in real time and automatically, whether abnormal crowd density exists in a train compartment based on compartment video.
Background technology
At present, almost all domestic subway trains are equipped with video surveillance systems. While a train is running, the surveillance system automatically records conditions in the compartments and stores the corresponding video. The current situation of train monitoring is as follows: the passenger flow of a subway train varies greatly across the periods of the day; compartments are relatively enclosed and their environment is relatively complex; video image acquisition is constrained; illumination changes quickly with environmental conditions; cameras include both 360-degree cameras and fixed bullet cameras; and the volume of stored image data is large — the video produced by one train running for 3 hours in a day exceeds 10 Gbit, a city or region often operates many trains at once, and multiple compartments share one video image system, so the amount of video data to be stored each day is very large. Once an accident occurs, the stored video must be retrieved and searched manually for abnormal situations, which consumes considerable manpower and material resources.
International terrorist incidents occur from time to time, public safety awareness keeps rising, and improving the safety and anti-terrorism capability of public transport has become a domestic consensus. Identifying abnormal train conditions accurately and in real time is an important link in handling train incidents quickly and ensuring public safety, and an urgent demand of modern intelligent subway trains. Detecting abnormal compartment crowd density in real time, in particular, is a primary requirement of train management tasks such as directing passengers, improving ride comfort and preventing crowd crushes. Modern video image processing technology is developing rapidly — especially image processing, computer vision and artificial intelligence — making real-time analysis of abnormal compartment states feasible. Although image processing methods for automatic crowd-density recognition have steadily improved in recent years, there is as yet no suitable image processing method that can automatically judge abnormal crowd density in a train compartment.
Currently used crowd-density judgment methods fall into two major classes: methods based on detecting pedestrians or body parts (the head, the upper body, etc.) and methods based on statistical learning.
Methods based on detecting pedestrians or body parts require bodies or body parts with salient features — a pedestrian, a head or an upper body — to be visible in the image. Under compartment conditions, however, the crowd is often severely occluded and even a complete head cannot be guaranteed to be visible, so such methods are rather difficult to use for judging whether the crowd density in a compartment is abnormal.
Among statistical-learning methods, there is a video-based subway crowd-density detection method that extracts the foreground mainly with Gaussian background modeling and estimates crowd density from the foreground area. During train operation, however, the compartment background is complex, illumination varies, passenger movement is irregular and occlusion is severe, so Gaussian modeling works poorly and crowd-density recognition accuracy is low.
Also among statistical-learning methods is a global crowd-density estimation approach based on spatio-temporal local binary pattern dynamic textures and support vector machines. This approach has some effect for crowd-density analysis and has seen some application in estimating the density of public crowds. However, once crowd density reaches medium-high levels and above, the discriminative power of a single texture feature drops markedly: judgment accuracy is low and a very large number of training samples is needed to obtain adequate results. A compartment may stay at medium-high density for long stretches, during rush hours or other periods, and abnormality within it is then hard to detect.
Summary of the invention
To overcome the above deficiencies of the prior art, the present invention provides a detection method for abnormal crowd density in train compartments. Based on the fact that, in a subway environment, multiple compartments share one video image system, the method estimates whether abnormal crowd density exists in a compartment from the density difference between two or more compartments (a density-grade difference greater than 1 grade).
The detection method for abnormal compartment crowd density provided by the invention includes the following steps:
Step 1: collect and store several compartment sample images with different crowd density grades, and label each sample image with its corresponding crowd density grade;
Step 2: extract each sample image's texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature;
Step 3: fuse the extracted texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature of a given sample image to generate a multi-modal fusion feature;
Step 4: train a crowd density classifier on the multi-modal fusion features of the sample images;
Step 5: capture an image to be detected from the compartment surveillance video, extract in turn its texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature, and fuse these features to obtain the multi-modal fusion feature of the image to be detected;
Step 6: input the multi-modal fusion feature of the image to be detected into the crowd density classifier to obtain the crowd density grade of the compartment corresponding to the image to be detected;
Step 7: judge from the crowd density grade obtained in step 6 whether the compartment's crowd density is abnormal;
Step 8: automatically record the crowd-density anomaly information of any abnormal compartment.
The advantages of the adopted technical solution include:
1. Based on the fact that multiple compartments share one video image system in a subway environment, the invention estimates whether abnormal crowd density exists from the density difference between two or more compartments (a density-grade difference greater than 1 grade);
2. The invention uses statistical machine learning in place of the currently widespread practice of manually reviewing video, enabling real-time, automatic detection of whether abnormal crowd density exists in subway compartments, and recording the compartment number and time of occurrence of any abnormal compartment along with the corresponding anomaly pictures;
3. The invention introduces Surf, Fast and Harris feature points and an optical-flow density feature, remedying the low detection accuracy of existing methods that detect subway compartment crowd density solely from foreground images or texture features;
4. The invention blends the LBP texture feature vector, the Surf/Fast/Harris feature-point counts, the foreground-image area ratio and the optical-flow density feature into a multi-modal fusion feature vector, so that the features act jointly to distinguish crowd density, improving detection accuracy;
5. The invention uses the efficient random-forest learning algorithm to learn a random-forest classifier from compartment image samples of different density grades, and applies the learned classifier to real-time crowd-density grade classification.
Compared with the prior art, the beneficial effects of the invention include:
1. The invention provides, for the first time, a method for detecting abnormal crowd density in real time and automatically while a subway train is running; real-time, automatic detection and recording of abnormal compartment crowd density can be realized with modest adjustments to the existing compartment video surveillance system;
2. Addressing the absence of a suitable image processing method for automatically judging abnormal subway compartment crowd density, the invention combines the actual conditions of the subway compartment environment and proposes identifying abnormal crowd density from the grade difference between two or more compartments (a density-grade difference greater than 1 grade), reducing the false alarms produced by relying on density level alone. The invention can help subway staff direct passengers (guiding passengers from high-density to low-density compartments), prevent crowding and trampling, and improve passenger comfort and satisfaction;
3. The invention provides a method for effectively fusing foreground-image, texture, Surf/Fast/Harris feature-point and optical-flow density features, highlighting crowd-density characteristics and letting the features act jointly in the subway compartment environment, improving crowd-density recognition accuracy.
Description of the drawings
Fig. 1 is a flowchart of the detection method for abnormal compartment crowd density proposed by the invention.
Fig. 2 is a flowchart of joint abnormal crowd-density estimation over multiple compartments according to an embodiment of the invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
The invention computes, in advance, the texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature of compartment sample images of different density grades collected beforehand, and generates multi-modal fusion features from these modal features; a crowd density classifier is trained on the resulting multi-modal fusion features. While the train is running, a detection image is captured from the surveillance video and its multi-modal fusion feature is computed in the same way; the crowd density classifier classifies this feature to judge the crowd density grade of the corresponding compartment, from which it is estimated whether the compartment has abnormal crowd density; the compartment number and time of occurrence of any anomaly are recorded automatically, and the corresponding anomaly picture is captured and stored.
Fig. 1 is a flowchart of the detection method for abnormal compartment crowd density proposed by the invention. As shown in Fig. 1, the method comprises the following steps:
Step 1: collect and store several compartment sample images with different crowd density grades, and label each sample image with its corresponding crowd density grade.
The crowd density grades comprise low, medium, high and ultra-high crowd density. For convenience, low crowd density may be labeled density grade 0, medium crowd density grade 1, high crowd density grade 2, and ultra-high crowd density grade 3.
Step 2: extract each sample image's texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature.
How to extract the texture feature, the Surf/Fast/Harris feature-point features, the foreground-image area-ratio feature and the optical-flow density feature is briefly explained below.
1) Texture feature
In an embodiment of the invention, the texture feature of the image is extracted using LBP operators over multiple sub-regions of the image. The LBP operator is defined on an m × m window (usually 3 × 3): taking the pixel value of the window's central pixel as a threshold, the values of the m × m − 1 pixels adjacent to the central pixel are each compared with it; if a surrounding pixel's value is greater than the central pixel's value, that position is marked 1, otherwise 0. After comparison with the m × m − 1 points in the m × m neighborhood, an (m × m − 1)-bit binary number corresponding to the central pixel is produced; it is usually converted to a decimal number, the LBP code (256 possible values for a 3 × 3 window), giving the LBP value of the window's central pixel, which reflects the texture information of the window area.
Specifically, extracting the LBP texture feature of an image includes the following steps:
Step 211: divide the image into multiple n × n sub-regions (where n > m; preferably n > 12, e.g. n = 16);
Step 212: for each pixel in every sub-region, compare its value with those of the m × m − 1 (8) pixels in its m × m (3 × 3) neighborhood; a surrounding position whose value is greater than the pixel's value is marked 1, otherwise 0; after comparison with the m × m − 1 (8) points in the m × m (3 × 3) neighborhood, an (m × m − 1)-bit (8-bit) binary number corresponding to the pixel is produced — this binary number is the pixel's LBP value;
Step 213: compute the statistical histogram of each sub-region from the LBP values of its pixels, i.e. the frequency with which each (decimal) LBP value appears within the sub-region, and normalize the resulting histogram;
Step 214: concatenate the normalized histograms of all sub-regions to obtain one feature vector, the LBP texture feature of the image;
Step 215: using the maximum and minimum values in the LBP texture feature vector, divide its value range into p intervals (the value of p can be set and adjusted according to the needs of the application, e.g. p = 9), count the frequency with which elements of the LBP texture feature vector fall into each interval to obtain a p-dimensional feature vector, and normalize it to obtain the p-dimensional LBP texture feature.
Of course, instead of the LBP texture feature, other features that can represent image texture may also be used, such as a gray-level co-occurrence matrix feature normalized to a feature vector of fixed dimension.
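The LBP extraction steps above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the 3 × 3 window, 16 × 16 sub-regions and 256-bin histograms are the example values mentioned in the text, and the final p-interval re-binning of step 215 is omitted.

```python
import numpy as np

def lbp_image(gray):
    """8-bit LBP code for each interior pixel of a 2-D grayscale array (3x3 window)."""
    c = gray[1:-1, 1:-1]                       # central pixels
    # 8 neighbours, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb > c).astype(np.uint8) << bit   # neighbour > centre -> 1
    return codes

def lbp_feature(gray, n=16):
    """Concatenated, per-region-normalized 256-bin LBP histograms of n x n sub-regions."""
    codes = lbp_image(gray)
    hists = []
    for y in range(0, codes.shape[0] - n + 1, n):
        for x in range(0, codes.shape[1] - n + 1, n):
            h, _ = np.histogram(codes[y:y + n, x:x + n], bins=256, range=(0, 256))
            hists.append(h / h.sum())
    return np.concatenate(hists)
```

Each sub-region contributes a 256-dimensional normalized histogram, so the vector length grows with the number of sub-regions; the patent's step 215 then compresses this vector into p dimensions.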
2) Surf, Fast and Harris feature-point features
In this step, various prior-art methods can be used to extract the Surf, Fast and Harris feature points; for example, the Surf, Fast and Harris features of an image can be generated with OpenCV library functions.
In an embodiment of the invention, for convenience of computation, the Surf, Fast and Harris feature-point counts are also normalized — for example, for an image of size 704 × 576, they may be normalized by 1000 — to generate a 3-dimensional feature-point vector composed of the three counts.
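The normalization described above reduces to the following sketch; the detector calls themselves are left to the prior art (e.g. OpenCV), and the divisor 1000 follows the example in the text.

```python
import numpy as np

def point_feature(n_surf, n_fast, n_harris, norm=1000.0):
    """3-dim feature-point vector from raw keypoint counts (e.g. obtained with
    OpenCV detectors); each count is normalized by `norm`."""
    return np.array([n_surf, n_fast, n_harris], dtype=float) / norm
```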
3) Foreground-image area-ratio feature
In step 2, extracting the foreground-image area-ratio feature includes the following steps:
Step 221: store the empty-compartment image of each compartment as the background image;
Step 222: subtract the background image from the current image to obtain the corresponding foreground image;
Step 223: apply Gaussian filtering to the foreground image (e.g. a 3 × 3 Gaussian filter), obtain the sum of the areas of the bounding rectangles of the foreground image's connected regions, and divide it by the total number of pixels of the current image to obtain the 1-dimensional foreground-image area-ratio feature.
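Steps 221–223 can be sketched with SciPy's connected-component labeling; this is an assumed stand-in for the patent's implementation, and the foreground threshold `thresh` is an illustrative parameter the text does not specify.

```python
import numpy as np
from scipy import ndimage

def foreground_area_ratio(current, background, thresh=25):
    """Sum of bounding-rectangle areas of the connected foreground regions,
    divided by the total pixel count of the current image (1-dim feature)."""
    diff = np.abs(current.astype(int) - background.astype(int))
    smooth = ndimage.gaussian_filter(diff.astype(float), sigma=1)  # stand-in for the 3x3 Gaussian
    mask = smooth > thresh
    labels, _ = ndimage.label(mask)                 # connected regions
    total = 0
    for sl in ndimage.find_objects(labels):         # one slice pair per region
        total += (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
    return total / current.size
```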
4) Optical-flow density feature
The denser the crowd in a region, the greater the optical-flow density between two adjacent frames, and vice versa. In this step, various prior-art methods can be used to extract the optical-flow density feature. For example, take the current image and the stored previous frame and, for the Surf feature points extracted earlier, use the prior-art KLT (Kanade-Lucas-Tomasi) algorithm to count the Surf feature points that have undergone displacement; this count is the optical-flow value, which is then normalized, e.g. divided by 1000, to give the 1-dimensional optical-flow density feature.
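Assuming point correspondences are already available (e.g. from a KLT tracker such as OpenCV's calcOpticalFlowPyrLK), the counting and normalization step reduces to the following; the displacement threshold `min_disp` is an illustrative parameter, and the divisor 1000 follows the example in the text.

```python
import numpy as np

def optical_flow_density(prev_pts, curr_pts, min_disp=1.0, norm=1000.0):
    """Count tracked feature points that moved by at least `min_disp` pixels
    between two adjacent frames, normalized by `norm` (1-dim feature)."""
    d = np.linalg.norm(np.asarray(curr_pts, float) - np.asarray(prev_pts, float), axis=1)
    return float(np.sum(d >= min_disp)) / norm
```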
Step 3: fuse the extracted texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature of a given sample image to generate a multi-modal fusion feature.
In an embodiment of the invention, the multi-modal fusion feature can be obtained by concatenating, in turn, the texture feature vector, the feature-point vector, the foreground-image area-ratio feature vector and the optical-flow density feature vector; other methods can of course also be used, for example appropriately increasing or decreasing the dimension of each feature vector, or increasing or decreasing the weight of each feature vector.
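The concatenation described above can be sketched as follows; the optional per-modality weights correspond to the weighting variant mentioned in the text, and the feature dimensions are only illustrative.

```python
import numpy as np

def fuse(texture, points, area_ratio, flow_density, weights=(1.0, 1.0, 1.0, 1.0)):
    """Concatenate the four modal features into one multi-modal fusion vector."""
    parts = [np.atleast_1d(np.asarray(p, dtype=float)) * w
             for p, w in zip((texture, points, area_ratio, flow_density), weights)]
    return np.concatenate(parts)
```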
Step 4: train a crowd density classifier on the multi-modal fusion features of the sample images.
In this step, various prior-art methods can be used to obtain the crowd density classifier, for example the random-forest learning algorithm or the support-vector-machine learning algorithm.
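With fusion vectors and grade labels in hand, training reduces to fitting an off-the-shelf classifier. A sketch with scikit-learn's RandomForestClassifier — an assumed stand-in, since the patent names only the algorithm, not a library — on synthetic vectors in place of real compartment features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for labeled fusion features: 4 density grades (0..3),
# 50 samples each, 14-dim vectors whose mean rises with the grade.
X = np.vstack([rng.normal(loc=g, scale=0.3, size=(50, 14)) for g in range(4)])
y = np.repeat(np.arange(4), 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new "image to be detected", represented by its fusion vector.
grade = clf.predict(np.full((1, 14), 2.0))[0]
```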
Step 5: capture an image to be detected from the compartment surveillance video, extract in turn its texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature, and fuse these features to obtain the multi-modal fusion feature of the image to be detected.
In this step, extracting the features of the image to be detected and fusing them into its multi-modal fusion feature proceed as described in steps 2 and 3 and are not repeated here.
Step 6: input the multi-modal fusion feature of the image to be detected into the crowd density classifier to obtain the crowd density grade of the compartment corresponding to the image to be detected.
Step 7: judge from the crowd density grade obtained in step 6 whether the compartment's crowd density is abnormal.
In this step, when false-alarm requirements are loose, a compartment whose crowd density grade is ultra-high can be directly estimated as having abnormal crowd density. When false-alarm requirements are strict, whether a compartment's crowd density is abnormal can instead be judged from the joint density-grade state of two or more compartments: when there is a notable difference between the crowd density grades of two or more compartments, the compartment with the higher grade is estimated as an abnormal compartment and the compartment with the lower grade as a normal one.
Judging whether a compartment's crowd density is abnormal from the joint density-grade state further comprises the following steps:
Step 71: take K compartments as one group of joint-detection objects, where the value of K can be set according to the needs of the application; in an embodiment of the invention, K is 2 or 3;
Step 72: compute the crowd density grades of the K compartments in turn according to step 6: k1, k2, …, kK;
Step 73: if at least one of the K grades equals 3, i.e. is judged ultra-high (the highest) crowd density, and the grade difference between adjacent compartments exceeds 1, i.e. at least one of |k1 − k2| > 1, |k2 − k3| > 1, …, |kK − k1| > 1 holds, then the compartment whose density grade is 3 is estimated as an abnormal compartment.
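The joint decision rule of step 73 can be written directly; this is a minimal sketch, assuming the compartments are ordered cyclically (the last pair is kK and k1, matching |kK − k1| in the text).

```python
def abnormal_compartments(grades, ultra=3):
    """Joint decision over K compartments with density grades k1..kK (cyclic):
    flag the ultra-high-density compartments as abnormal when at least one
    adjacent pair differs by more than 1 grade."""
    K = len(grades)
    has_ultra = any(g == ultra for g in grades)
    big_gap = any(abs(grades[i] - grades[(i + 1) % K]) > 1 for i in range(K))
    if has_ultra and big_gap:
        return [i for i, g in enumerate(grades) if g == ultra]
    return []
```

For example, grades (3, 1, 2) over three compartments flag the first compartment, while (3, 3, 3) — a uniformly full rush-hour train — raises no alarm, which is the false-alarm reduction the joint rule is designed for.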
Taking three compartments as an example, the flowchart of joint abnormal crowd-density estimation over multiple compartments is shown in Fig. 2.
Step 8: automatically record the crowd-density anomaly information of any abnormal compartment.
When step 7 detects that a compartment's crowd density is abnormal, the compartment number and time of occurrence are also recorded automatically, the corresponding anomaly picture is captured and stored, and the recorded information is bound together for later review and download.
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the invention in detail. It should be understood that the above are only specific embodiments of the invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the invention shall fall within the scope of protection of the invention.
Claims (8)
1. A detection method for abnormal crowd density in train compartments, characterized in that the method includes the following steps:
Step 1: collect and store several compartment sample images with different crowd density grades, and label each sample image with its corresponding crowd density grade;
Step 2: extract each sample image's texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature;
Step 3: fuse the extracted texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature of a given sample image, obtaining the multi-modal fusion feature by concatenating in turn the texture feature vector, the feature-point vector, the foreground-image area-ratio feature vector and the optical-flow density feature vector, or by increasing/decreasing the dimension of each feature vector, or by increasing/decreasing the weight of each feature vector;
Step 4: train a crowd density classifier on the multi-modal fusion features of the sample images;
Step 5: capture an image to be detected from the compartment surveillance video, extract in turn its texture feature, Surf/Fast/Harris feature-point features, foreground-image area-ratio feature and optical-flow density feature, and fuse these features to obtain the multi-modal fusion feature of the image to be detected;
Step 6: input the multi-modal fusion feature of the image to be detected into the crowd density classifier to obtain the crowd density grade of the compartment corresponding to the image to be detected;
Step 7: judge from the crowd density grade obtained in step 6 whether the compartment's crowd density is abnormal;
Step 8: automatically record the crowd-density anomaly information of any abnormal compartment;
wherein the extraction of the optical-flow density feature includes the following steps:
take the current image and the stored previous frame;
for the Surf feature points extracted earlier, use the KLT algorithm to count the Surf feature points that have undergone displacement;
normalize the count to obtain the optical-flow density feature.
2. The method according to claim 1, characterized in that the crowd density grades comprise low, medium, high and ultra-high crowd density.
3. The method according to claim 1, characterized in that the texture feature is an LBP texture feature or a gray-level co-occurrence matrix feature normalized to a feature vector of fixed dimension.
4. The method according to claim 3, wherein, when the texture feature is an LBP texture feature, extracting the LBP texture feature comprises the following steps:
Step 211, dividing the image into multiple n × n sub-regions;
Step 212, for each pixel in each sub-region, comparing its pixel value with the pixel values of the 8 pixels in its 3 × 3 neighborhood; if a surrounding pixel's value is greater than that of the center pixel, that position is marked as 1, otherwise 0; after comparison with all 8 points in the 3 × 3 neighborhood, an 8-bit binary number corresponding to the pixel is obtained, i.e., the LBP value of the pixel;
Step 213, calculating the statistical histogram of each sub-region from the LBP values of its pixels, and normalizing the resulting histogram;
Step 214, concatenating the normalized histograms of all sub-regions to obtain the LBP texture feature of the image;
Step 215, finding the maximum and minimum values in the LBP texture feature vector, dividing the range of the LBP texture feature vector equally into p intervals, counting the frequency with which each element of the vector falls into each interval to obtain a p-dimensional feature vector, and normalizing it to obtain the p-dimensional LBP texture feature.
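Steps 212 and 213 can be sketched with plain numpy as follows. This is a minimal illustration, not the patented implementation: the bit ordering of the 8 neighbours is an assumption (the claim does not fix one), and the strict `>` comparison follows the claim's wording:

```python
import numpy as np

def lbp_values(gray):
    """Step 212 sketch: 8-bit LBP code for every interior pixel of a
    2-D grayscale array. A neighbour strictly greater than the centre
    pixel contributes a 1 bit."""
    g = np.asarray(gray, dtype=int)
    c = g[1:-1, 1:-1]  # centre pixels (border pixels have no full 3x3 neighbourhood)
    # 8 neighbour offsets; the bit order chosen here is an arbitrary assumption
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb > c).astype(int) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Step 213 sketch: normalized histogram of LBP values for one sub-region."""
    h, _ = np.histogram(lbp_values(gray), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)
```

A dark centre pixel surrounded by brighter neighbours yields the code 255 (all eight bits set); a bright centre surrounded by darker neighbours yields 0.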
5. The method according to claim 1, wherein, in the step 2, the extraction of the foreground image area ratio feature comprises the following steps:
Step 221, storing an image of each empty compartment as a background image;
Step 222, subtracting the background image from the current image to obtain the corresponding foreground image;
Step 223, applying Gaussian filtering to the foreground image, computing the sum of the areas of the bounding rectangles of the connected regions of the foreground image, and dividing that sum by the total number of pixels of the current image to obtain the foreground image area ratio feature.
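A minimal sketch of steps 221-223, under stated simplifications: the Gaussian filtering step is omitted for brevity, the difference threshold `thresh` is a hypothetical parameter, and connected regions are found with a simple 4-connected breadth-first search rather than a library routine:

```python
import numpy as np
from collections import deque

def foreground_area_ratio(current, background, thresh=25):
    """Subtract the background, find 4-connected foreground regions, sum
    the areas of their bounding rectangles, and divide by the image's
    total pixel count."""
    cur = np.asarray(current, dtype=int)
    bg = np.asarray(background, dtype=int)
    mask = np.abs(cur - bg) > thresh        # foreground mask
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    total_rect = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # BFS over one connected region, tracking its bounding box
                q = deque([(y, x)])
                seen[y, x] = True
                y0 = y1 = y
                x0 = x1 = x
                while q:
                    cy, cx = q.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                total_rect += (y1 - y0 + 1) * (x1 - x0 + 1)
    return total_rect / (h * w)
```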
6. The method according to claim 1, wherein in the step 4 a random forest learning algorithm or a support vector machine learning algorithm is used to train the crowd density classifier.
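The claim names random forest or SVM as the learning algorithm; a minimal sketch using scikit-learn's `RandomForestClassifier` is given below. The feature dimension, the training data, and the four density-level labels (0 = low .. 3 = ultra-high) are all hypothetical stand-ins for the fused feature vectors the method would actually produce:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical fused feature vectors (texture + optical flow density +
# foreground area ratio) with density-level labels 0..3.
rng = np.random.default_rng(0)
X = rng.random((80, 10))          # 80 training samples, 10-dim features
y = rng.integers(0, 4, size=80)   # density levels: 0=low .. 3=ultra-high

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(rng.random((5, 10)))  # density levels for 5 new samples
```

Swapping in `sklearn.svm.SVC` would give the claim's SVM alternative with the same `fit`/`predict` interface.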
7. The method according to claim 1, wherein in the step 7, a compartment whose crowd density level is ultra-high crowd density is directly judged to be a compartment with abnormal crowd density; or whether the crowd density of the corresponding compartments is abnormal is judged according to the joint density level state of two or more compartments: when the crowd density levels of two or more compartments differ significantly, the compartment with the higher crowd density level is judged to be a compartment with abnormal crowd density, and the compartment with the lower crowd density level is judged to be a compartment with normal crowd density.
8. The method according to claim 7, wherein, if the step 7 judges whether the crowd density of the corresponding compartments is abnormal according to the joint density level state, the judgment further comprises the following steps:
Step 71, taking K compartments as one group of joint detection objects;
Step 72, calculating the crowd density level of each of the K compartments in turn according to the step 6;
Step 73, if at least one of the crowd density levels of the K compartments is the highest crowd density level, and the difference in crowd density level between adjacent compartments is no more than 1, judging the compartment with the highest crowd density level to be a compartment with abnormal crowd density.
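The joint-detection rule of steps 71-73 reduces to a small predicate over the list of K density levels. A minimal sketch, assuming integer levels 0..3 with 3 as the highest (the function name and level encoding are illustrative, not from the patent):

```python
def abnormal_compartments(levels, highest=3):
    """Steps 71-73 sketch: given the density levels of K adjacent
    compartments, return the indices of compartments flagged abnormal.
    Flagging happens only when (a) at least one compartment is at the
    highest level and (b) adjacent compartments never differ by more
    than one level."""
    if highest not in levels:
        return []  # step 73's first condition not met
    if any(abs(a - b) > 1 for a, b in zip(levels, levels[1:])):
        return []  # adjacent-level difference exceeds 1
    return [i for i, lv in enumerate(levels) if lv == highest]
```

For example, levels `[2, 3, 3, 2]` flag the two middle compartments, while `[1, 3, 2, 2]` flags none because the first adjacent pair differs by 2.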
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410094075.0A CN103839085B (en) | 2014-03-14 | 2014-03-14 | A kind of detection method of compartment exception crowd density |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410094075.0A CN103839085B (en) | 2014-03-14 | 2014-03-14 | A kind of detection method of compartment exception crowd density |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103839085A CN103839085A (en) | 2014-06-04 |
CN103839085B true CN103839085B (en) | 2018-06-19 |
Family
ID=50802563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410094075.0A Active CN103839085B (en) | 2014-03-14 | 2014-03-14 | A kind of detection method of compartment exception crowd density |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103839085B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268898A (en) * | 2014-09-15 | 2015-01-07 | 郑州天迈科技股份有限公司 | Method for detecting density of passengers in bus on basis of image analysis |
CN106033548B (en) * | 2015-03-13 | 2021-04-20 | 中国科学院西安光学精密机械研究所 | Crowd abnormity detection method based on improved dictionary learning |
CN105184245B (en) * | 2015-08-28 | 2018-12-21 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | A kind of crowd density estimation method of multiple features fusion |
CN106651878B (en) * | 2016-12-21 | 2019-06-11 | 福建师范大学 | A method of for extracting straight line from local invariant feature point |
CN107016696A (en) * | 2017-03-31 | 2017-08-04 | 广州地理研究所 | A kind of passenger flow density detection method and device |
CN107295318A (en) * | 2017-08-23 | 2017-10-24 | 无锡北斗星通信息科技有限公司 | Colour projection's platform based on image procossing |
CN107371004A (en) * | 2017-08-23 | 2017-11-21 | 无锡北斗星通信息科技有限公司 | A kind of method of colour image projection |
CN109086801A (en) * | 2018-07-06 | 2018-12-25 | 湖北工业大学 | A kind of image classification method based on improvement LBP feature extraction |
CN109117791A (en) * | 2018-08-14 | 2019-01-01 | 中国电子科技集团公司第三十八研究所 | A kind of crowd density drawing generating method based on expansion convolution |
CN109919066B (en) * | 2019-02-27 | 2021-05-25 | 湖南信达通信息技术有限公司 | Method and device for detecting density abnormality of passengers in rail transit carriage |
CN111460246B (en) * | 2019-12-19 | 2020-12-08 | 南京柏跃软件有限公司 | Real-time activity abnormal person discovery method based on data mining and density detection |
CN112990272B (en) * | 2021-02-19 | 2022-08-16 | 上海理工大学 | Sensor optimization selection method for fault diagnosis of water chilling unit |
CN113158854B (en) * | 2021-04-08 | 2022-03-22 | 东北大学秦皇岛分校 | Automatic monitoring train safety operation method based on multi-mode information fusion |
CN113538401B (en) * | 2021-07-29 | 2022-04-05 | 燕山大学 | Crowd counting method and system combining cross-modal information in complex scene |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093198A (en) * | 2013-01-15 | 2013-05-08 | 信帧电子技术(北京)有限公司 | Crowd density monitoring method and device |
CN103218816A (en) * | 2013-04-18 | 2013-07-24 | 中山大学 | Crowd density estimation method and pedestrian volume statistical method based on video analysis |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093198A (en) * | 2013-01-15 | 2013-05-08 | 信帧电子技术(北京)有限公司 | Crowd density monitoring method and device |
CN103218816A (en) * | 2013-04-18 | 2013-07-24 | 中山大学 | Crowd density estimation method and pedestrian volume statistical method based on video analysis |
Non-Patent Citations (1)
Title |
---|
Nan Dong et al., "Crowd Density Estimation Using Sparse Texture Features," Journal of Convergence Information Technology, vol. 5, no. 6, pp. 125-137, Aug. 31, 2010. *
Also Published As
Publication number | Publication date |
---|---|
CN103839085A (en) | 2014-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103839085B (en) | A kind of detection method of compartment exception crowd density | |
KR101869442B1 (en) | Fire detecting apparatus and the method thereof | |
CN106056079B (en) | A kind of occlusion detection method of image capture device and human face five-sense-organ | |
CN102348128B (en) | Surveillance camera system having camera malfunction detection function | |
CN110689054A (en) | Worker violation monitoring method | |
CN111738342B (en) | Pantograph foreign matter detection method, storage medium and computer equipment | |
CN107622258A (en) | A kind of rapid pedestrian detection method of combination static state low-level image feature and movable information | |
CN106228137A (en) | A kind of ATM abnormal human face detection based on key point location | |
CN105426820B (en) | More people's anomaly detection methods based on safety monitoring video data | |
CN102982313B (en) | The method of Smoke Detection | |
CN107911663A (en) | A kind of elevator passenger hazardous act intelligent recognition early warning system based on Computer Vision Detection | |
CN110781853B (en) | Crowd abnormality detection method and related device | |
Arif et al. | Counting of people in the extremely dense crowd using genetic algorithm and blobs counting | |
CN108804987B (en) | Door opening and closing state detection method and device and people flow detection system | |
CN108668109A (en) | Image monitoring method based on computer vision | |
CN108898042B (en) | Method for detecting abnormal user behavior in ATM cabin | |
CN112163572A (en) | Method and device for identifying object | |
Malhi et al. | Vision based intelligent traffic management system | |
CN105005773A (en) | Pedestrian detection method with integration of time domain information and spatial domain information | |
CN112417955A (en) | Patrol video stream processing method and device | |
CN109359593A (en) | A kind of sleet environment fuzzy pictures monitoring and pre-alarming method based on image local grid | |
CN113378648A (en) | Artificial intelligence port and wharf monitoring method | |
CN105095891A (en) | Human face capturing method, device and system | |
WO2016019973A1 (en) | Method for determining stationary crowds | |
CN103400148A (en) | Video analysis-based bank self-service area tailgating behavior detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |