CN109446920B - Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network - Google Patents
Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network
- Publication number
- CN109446920B CN201811162491.4A
- Authority
- CN
- China
- Prior art keywords
- degree
- convolutional neural
- crowding
- neural networks
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/40—Business processes related to the transportation industry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
Abstract
The invention discloses a method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network. The video to be detected is preprocessed and segmented, and motion residual images are extracted; the original image and the motion residual images are combined as the input of the convolutional neural network; feature extraction blocks containing at least a convolutional layer and a maximum pooling layer are established to process the original image and the motion residual images and to compute the crowd state features they contain; the crowd state features and the motion features are then combined, and a feature fusion block containing at least a convolutional layer, a maximum pooling layer and a fully connected layer is established to fuse them; a classifier is built, and the convolutional neural network is trained with a pre-built training set carrying crowding degree labels so that the classifier correctly detects the passenger crowding degree in the video to be detected. The method characterizes the passenger flow in the surveillance video more comprehensively, realizes detection of the crowding degree, and improves the detection accuracy of the algorithm.
Description
Field
The invention belongs to the technical field of rail transit, and in particular relates to a method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network.
Background
With the continuous acceleration of urbanization and the gradual completion of metro networks, urban rail transit has become the main carrier of urban public transport. The rapid growth of passenger flow places higher requirements on daily operation and management. On the one hand, in order to formulate reasonable operation plans and passenger organization schemes, use operating resources efficiently, and meet the rapidly changing travel demand on a large-scale network, passenger flow states and passenger flow data must be grasped accurately. On the other hand, rail transit stations are usually enclosed underground structures or elevated tracks, and concourse areas are relatively limited; when the commuting rush hour arrives, a large influx of passengers easily causes crowding in the concourse and blockage of passages. High-density crowds are not only difficult to evacuate but are also more likely to trigger large-scale crowd safety accidents and cause undesirable social impact. A convenient and efficient method is therefore needed to obtain the passenger distribution in real time and monitor the passenger flow state in stations, providing strong technical support for passenger organization and ensuring passenger safety and the normal operation of rail transit.
Existing rail transit stations are generally equipped with complete video surveillance systems, and the surveillance video clearly reflects the passenger crowding degree within the monitored area, containing a large amount of useful passenger flow information. In the past, due to technical limitations, the speed and precision with which information could be extracted from video images could hardly meet the needs of practical applications. With the continuous development of image processing, machine learning and computer hardware, intelligent image recognition technology has emerged. By combining image recognition with the video surveillance systems installed in public places, a computer can process the passenger targets contained in the images captured by surveillance cameras, automatically detect and assess the passenger flow state, and raise an alarm in time when abnormal targets or abnormal scenes are found, thereby realizing automated, intelligent monitoring of the urban rail transit passenger crowding degree.
Since crowds have obvious visual features, early crowd density estimation methods mostly reflect the crowding degree by extracting the visual features of aggregated crowds. Such methods fall into two major classes: pixel-based and texture-based. Pixel-based detection is simple in principle and achieves good recognition results in scenes of moderate crowd density, but in crowded places such as stations and shopping malls the excessive crowd density causes severe mutual occlusion and the performance drops noticeably; texture-based features perform poorly in real time. Neither class of methods is satisfactory in practical operation, so a method with better performance is needed to detect the urban rail transit passenger crowding degree.
Summary of the invention
In view of the problems in the prior art, the present invention provides a method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network, which overcomes the large measurement error and poor real-time performance of the prior art. Based on the surveillance video collected by the video monitoring system and using the strong recognition capability of convolutional neural networks, a multi-stage convolutional neural network is constructed to extract composite features from the crowd images and the motion residual images, characterizing the passenger flow in the surveillance video more comprehensively, realizing detection of the crowding degree and improving the detection accuracy of the algorithm.
To achieve the above goals, the technical solution adopted by the present invention is a method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network, comprising the following steps:
S1, obtaining the traffic surveillance video to be detected, preprocessing and segmenting the video, and extracting motion residual images;
S2, combining the original image and the motion residual images as the input of the convolutional neural network, establishing feature extraction blocks containing at least one convolutional layer and one maximum pooling layer, processing the original image and the motion residual images, and separately computing the crowd state features contained in the original image and in the motion residual images;
S3, combining the crowd state features and the motion features, constructing a feature fusion block containing at least one convolutional layer, one maximum pooling layer and fully connected layers, and fusing the feature maps obtained in step S2;
S4, constructing a classifier containing crowding degree grades;
S5, training the convolutional neural network with a pre-built training set carrying crowding degree labels, so that the classifier correctly detects the passenger crowding degree in the video to be detected.
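For orientation, the following is a minimal sketch of how steps S1-S5 could be strung together in code; the function and variable names (extract_residuals, model, label_names) are illustrative assumptions of this sketch, not part of the patent.

```python
# Sketch of the overall S1-S5 pipeline; names are illustrative, not from the patent.
import torch

def detect_crowding(video_path, model, label_names):
    # S1: segment the video into detection cycles and extract motion residual images
    p1, residuals = extract_residuals(video_path, cycle_seconds=3)   # hypothetical helper, sketched further below
    # S2/S3: the feature extraction blocks and the fusion block run inside the trained model
    with torch.no_grad():
        logits = model([p1] + residuals)      # one original image plus the motion residual images
    # S4: the classifier output is one of the crowding degree grades
    grade = int(torch.argmax(logits, dim=1))
    return label_names[grade]

# S5 (training with the labelled training set and the penalised SGD loss) is sketched further below.
```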
As an improvement of the present invention, step S1 further comprises:
S11, obtaining the traffic surveillance video to be detected;
S12, setting a detection cycle T and dividing the video to be detected into video clips of length T according to the detection cycle T, the first frame of each video clip serving as the reference image;
S13, taking the other images in the video clip and differencing each of them with the reference image to obtain the motion residual images.
As an improvement of the present invention, the number of each feature extraction block in step S2 is 1, and the convolutional layers and pooling layers are connected alternately.
As another improvement of the invention, the feature fusion block in step S3 takes the feature maps output by the feature extraction blocks as input, and the number of feature fusion blocks is 1.
As another improvement of the invention, the number of fully connected layers in step S3 is 3 and they are the last three layers; the convolutional layers and pooling layers are connected alternately and are all located before the fully connected layers.
As another improvement of the invention, the classifier in step S4 is divided into ten grades: spacious, grades 0-2; comfortable, grades 3-5; crowded, grades 5-8; dangerous, grades 9-10.
As a further improvement of the present invention, during the training of the convolutional neural network in step S5, a stochastic gradient descent algorithm is used to correct the parameters of the convolutional neural network. The stochastic gradient descent formulas are as follows:
g(θ) = Σ θ·x_i
θ_m = θ_{m-1} − η·∇_θ h(θ)
where g(θ) is the hypothesis function of the network, θ the parameter weights of the convolutional neural network, h(θ) the loss function, x_i the network input of the i-th sample, y_i the sample value of the i-th sample, m the total number of iterations of the algorithm, σ the penalty coefficient, ∇_θ the gradient, and η the learning rate of gradient descent.
As a further improvement of the present invention, in step S5, when the detection result is lower than the actual crowding degree, the penalty coefficient is σ = 1 + log(y_i − g_θ(x_i)); otherwise σ = 1.
Compared with the prior art, the method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network proposed by the present invention has the following beneficial effects: combined with the images acquired by the video surveillance system, the detection result reflects the actual passenger flow within the monitored area in real time and effectively; the grading criterion of the crowding degree is combined with the actual scene and meets the operational needs of the site; real-time detection minimizes the workload of manually checking surveillance video and guiding passenger flow, improving the safety and service quality of urban rail transit operation.
Description of the drawings
Fig. 1 is a schematic diagram of the steps of the detection method of the invention;
Fig. 2 is a schematic diagram of the connection of the convolutional layers and maximum pooling layers in Embodiment 2 of the present invention;
Fig. 3 shows the connection of the convolutional layer, maximum pooling layer and fully connected layers in the feature fusion block of Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and embodiments.
Embodiment 1
A method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network, as shown in Fig. 1, comprises the following steps:
S1, obtaining the traffic surveillance video to be detected, preprocessing and segmenting the video, and extracting motion residual images;
S11, obtaining the traffic surveillance video to be detected;
S12, setting a detection cycle T and dividing the video to be detected into video clips of length T according to the detection cycle T, the first frame of each video clip serving as the reference image, denoted p1;
S13, taking other images in the video clip, for example the frames at 1/3 T, 2/3 T and T of the detection unit, denoted p2, p3 and p4, and differencing each of p2, p3 and p4 with the reference image p1 to obtain the motion residual images of the crowd in the video clip to be detected, denoted p12, p13 and p14.
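As a concrete illustration of S11-S13, the following is a minimal OpenCV sketch of segmenting one detection cycle and computing the motion residual images by absolute differencing; the helper name and the exact frame positions are assumptions of this sketch.

```python
# Minimal sketch of S11-S13: one detection cycle, reference image p1 and residuals p12, p13, p14.
import cv2

def extract_residuals(video_path, cycle_seconds=3):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames_per_cycle = int(fps * cycle_seconds)

    frames = []
    for _ in range(frames_per_cycle):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    p1 = frames[0]                                            # reference image: first frame of the cycle
    picks = [min(i, len(frames) - 1)
             for i in (frames_per_cycle // 3, 2 * frames_per_cycle // 3, frames_per_cycle - 1)]  # ~1/3 T, 2/3 T, T
    residuals = [cv2.absdiff(frames[i], p1) for i in picks]   # motion residual images p12, p13, p14
    return p1, residuals
```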
S2, combining the reference image p1 and the motion residual images p12, p13 and p14 into one group as the input of the passenger crowding degree convolutional neural network. For the input reference image p1 and motion residual images p12, p13 and p14, first, second, third and fourth feature extraction blocks are constructed to process the four input images respectively, computing the crowd features contained in the reference image and the motion features contained in the residual images; each feature extraction block contains at least one convolutional layer and one maximum pooling layer.
The first, second, third and fourth feature extraction blocks form the input stage of the convolutional neural network; the number of each of the first, second, third and fourth processing blocks is 1, and the convolutional layers and maximum pooling layers are connected alternately.
S3, combining the crowd state features and the motion features: a feature fusion block containing at least one convolutional layer, one maximum pooling layer and fully connected layers is constructed to fuse the feature maps obtained in step S2. The fusion method is to add the feature maps output by the first, second, third and fourth processing blocks and then input the result into the fusion block for the convolution operation.
The feature fusion block takes the feature maps output by the first, second, third and fourth feature extraction blocks as input; the number of feature fusion blocks is 1, the number of fully connected layers is 3 and they are the last three layers; the convolutional layers and pooling layers are connected alternately and all lie before the fully connected layers, the connection order being convolutional layer - maximum pooling layer - convolutional layer - maximum pooling layer - ... - fully connected layer - fully connected layer - fully connected layer.
S4, constructing a classifier containing crowding degree grades. The classifier is divided into ten grades: spacious, grades 0-2; comfortable, grades 3-5; crowded, grades 5-8; dangerous, grades 9-10, so that the convolutional neural network has the ability to classify the passenger crowding degree contained in the video to be detected.
S5, training the convolutional neural network with a pre-built training set carrying crowding degree labels and updating the network parameters with an improved gradient descent method; finally, the video to be detected is input into the trained convolutional neural network, and the classifier makes a reasonable judgement of the passenger crowding degree in the images.
Embodiment 2
The difference between this embodiment and Embodiment 1 is that in step S5 a stochastic gradient descent algorithm is used to correct the parameters of the convolutional neural network during training. The stochastic gradient descent formulas are as follows:
g(θ) = Σ θ·x_i
θ_m = θ_{m-1} − η·∇_θ h(θ)
where g(θ) is the hypothesis function of the network, θ the parameter weights of the convolutional neural network, h(θ) the loss function, x_i the network input of the i-th sample, y_i the sample value of the i-th sample, m the total number of iterations of the algorithm, σ the penalty coefficient, ∇_θ the gradient, and η the learning rate of gradient descent.
The network parameter update method used here is an improved stochastic gradient descent. Compared with the traditional method, a penalty coefficient is added to the loss function in accordance with the actual operational needs. In actual operation, a crowding degree detection result that is lower than the actual crowding degree may delay subsequent management measures and thus affect the normal operation of urban rail transit; its adverse effect is much greater than that of a detection result higher than the actual crowding degree. Therefore, to reduce the number of detection results that fall below the actual crowding degree, a penalty coefficient is added to the loss function so that the adjustment amplitude of the parameters increases when such a result occurs. When the detection result is lower than the actual crowding degree, σ = 1 + log(y_i − g_θ(x_i)); otherwise σ = 1.
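The following PyTorch-style sketch illustrates one training step with this asymmetric penalty. The patent does not give the exact form of the loss h(θ), so the squared-error loss, the regression-style numeric grade prediction and all names here are assumptions used only to show where σ enters.

```python
# Sketch of one SGD step with the asymmetric penalty coefficient sigma (loss form is assumed).
import torch

def penalised_sgd_step(model, optimizer, x, y):
    optimizer.zero_grad()
    pred = model(x).squeeze()                  # g_theta(x_i): numeric crowding grade (assumed output form)
    with torch.no_grad():                      # sigma is treated as a constant penalty coefficient
        sigma = torch.ones_like(y, dtype=torch.float)
        under = pred < y                       # detection result lower than the actual crowding degree
        sigma[under] = 1.0 + torch.log((y - pred).clamp(min=1e-6))[under]
    loss = (sigma * (y.float() - pred) ** 2).mean()    # assumed squared-error loss scaled by sigma
    loss.backward()
    optimizer.step()                           # theta_m = theta_{m-1} - eta * grad_theta h(theta)
    return loss.item()

# optimizer would be torch.optim.SGD(model.parameters(), lr=eta) in this sketch.
```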
Embodiment 3
Step 1: a detection cycle T is set; this embodiment takes T = 3 s as an example. The video to be detected is divided into video clips of length T according to the detection cycle T, hereinafter called detection units. The first frame of the detection unit is taken as the reference image, denoted p1; the frames at t = 1 s, t = 2 s and t = 3 s of the detection unit are taken and denoted p2, p3 and p4; p2, p3 and p4 are each differenced with the reference image p1 to obtain the motion residual images of the crowd in the detection unit, denoted p12, p13 and p14. The reference image p1 and the motion residual images p12, p13 and p14 are combined into one group as the input of the passenger crowding degree detection algorithm. According to the actual operation of urban rail transit, the crowding degree is divided into 10 grades: spacious (grades 0-2), comfortable (grades 3-5), crowded (grades 5-8) and dangerous (grades 9-10). Images of the surveillance video under the different crowding degrees are captured and labelled with the corresponding crowding degree to form the training set of the neural network.
Step 2: for the input reference image p1 and motion residual images p12, p13 and p14, first, second, third and fourth processing blocks are constructed to process the four input images respectively, computing the crowd features contained in the reference image and the motion features contained in the residual images. Each processing block contains 4 convolutional layers and maximum pooling layers, connected as shown in Fig. 2. Assuming the input image is a 224*224 colour image, convolutional layer C1 applies 11*11*3 convolution kernels with a stride of 4 and produces 48 feature maps of 55*55 pixels; maximum pooling layer MP1, with a pooling scale of 3*3 and a stride of 2, produces 96 feature maps of 27*27 pixels; convolutional layer C2 and maximum pooling layer MP2 produce 256 feature maps of 13*13; convolutional layer C3 and maximum pooling layer MP3 produce 384 feature maps of 13*13; convolutional layer C4 and maximum pooling layer MP4 produce 384 feature maps of 13*13.
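A minimal PyTorch sketch of one such feature extraction block is given below. The kernel sizes, paddings and strides not stated in the patent (for example the C2-C4 kernels, and the pooling settings needed to keep the 13*13 maps), as well as the channel counts, are assumptions chosen to roughly reproduce the stated output shapes in the spirit of an AlexNet-style stem.

```python
# Sketch of one feature extraction block (C1/MP1 ... C4/MP4); unstated hyperparameters are assumptions.
import torch
import torch.nn as nn

class FeatureExtractionBlock(nn.Module):
    """Maps one 3x224x224 input (reference image or motion residual) to a 384x13x13 feature map."""
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),  # C1 -> 96x55x55
            nn.MaxPool2d(kernel_size=3, stride=2),                                          # MP1 -> 96x27x27
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),            # C2 (assumed 5x5)
            nn.MaxPool2d(kernel_size=3, stride=2),                                          # MP2 -> 256x13x13
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),           # C3
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),                               # MP3 -> 384x13x13
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),           # C4
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),                               # MP4 -> 384x13x13
        )

    def forward(self, x):
        return self.block(x)

# Example: FeatureExtractionBlock()(torch.randn(1, 3, 224, 224)).shape == (1, 384, 13, 13)
```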
Step 3: a feature fusion block is constructed to fuse the four groups of 13*13*384 feature maps, containing the crowd features and motion features, output by the first, second, third and fourth processing blocks. The feature fusion block contains one convolutional layer, one maximum pooling layer and 3 fully connected layers, connected as shown in Fig. 3. The convolution kernel of convolutional layer C5 has a size of 3*3 and a stride of 1, and the convolution produces four groups of 13*13*256 feature maps; the maximum pooling layer MP5, with a pooling scale of 3*3 and a stride of 2, then produces 4 groups of 6*6*256 feature maps; three fully connected layers with 4096 neurons each finally produce 4096 outputs.
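A matching PyTorch sketch of the fusion block follows. How the four groups are merged is not fully specified here (Embodiment 1 speaks of adding the feature maps before the convolution), so the summation, paddings and flattening used below are assumptions of this sketch.

```python
# Sketch of the feature fusion block (C5/MP5 + three fully connected layers); merging strategy is assumed.
import torch
import torch.nn as nn

class FeatureFusionBlock(nn.Module):
    """Fuses four 384x13x13 feature maps (reference image + three residuals) into a 4096-d vector."""
    def __init__(self):
        super().__init__()
        self.c5 = nn.Sequential(
            nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),  # C5 -> 256x13x13
            nn.MaxPool2d(kernel_size=3, stride=2),                                           # MP5 -> 256x6x6
        )
        self.fc = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
        )

    def forward(self, feats):                          # feats: list of four tensors, each Nx384x13x13
        fused = torch.stack(feats, dim=0).sum(dim=0)   # assumed "addition" fusion of the four groups
        x = self.c5(fused)
        return self.fc(torch.flatten(x, start_dim=1))  # 4096 outputs per sample
```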
Step 4: a classifier containing 10 crowding degree grades, i.e. spacious (grades 0-2), comfortable (grades 3-5), crowded (grades 5-8) and dangerous (grades 9-10), is constructed so that the network has the ability to classify the passenger crowding degree contained in the detection unit. The 4096 outputs of the fully connected layers are fully connected to the 10 neurons of the classifier.
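A corresponding sketch of the classifier head is shown below. The patent only states the full connection from the 4096 outputs to 10 neurons; the softmax and the mapping of grade numbers to the four named levels are assumptions of this sketch.

```python
# Sketch of the classifier head: 4096 fused features -> 10 crowding degree grades (softmax is assumed).
import torch
import torch.nn as nn

classifier = nn.Linear(4096, 10)            # full connection from the 4096 outputs to 10 grade neurons

def grade_to_level(grade: int) -> str:
    # grade ranges as stated in the patent; the boundary handling is an assumption
    if grade <= 2:
        return "spacious"
    if grade <= 5:
        return "comfortable"
    if grade <= 8:
        return "crowded"
    return "dangerous"

# Example: probs = torch.softmax(classifier(fused_4096), dim=1); level = grade_to_level(int(probs.argmax()))
```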
Step 5: the previously constructed convolutional neural network is trained with the preset training set, and the network parameters are corrected according to the improved gradient descent method. On the basis of the conventional method and according to the actual operational needs, the improved stochastic gradient descent adds a penalty coefficient to the loss function: when the detection result is lower than the actual crowding degree, σ = 1 + log(y_i − g_θ(x_i)); otherwise σ = 1. After training is completed, the unit to be detected is input into the convolutional neural network, and the classifier makes a reasonable judgement of the video to be detected.
The activation function of the convolutional layers in this embodiment is the ReLU function.
The basic principles, main features and advantages of the present invention have been shown and described above. Those skilled in the art should appreciate that the present invention is not limited by the above embodiments; the embodiments and the description only illustrate the principles of the invention, and various changes and improvements may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.
Claims (7)
1. A method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network, characterized by comprising the following steps:
S1, obtaining the traffic surveillance video to be detected, preprocessing and segmenting the video, and extracting motion residual images;
S2, combining the original image and the motion residual images as the input of the convolutional neural network, establishing feature extraction blocks containing at least one convolutional layer and one maximum pooling layer, processing the original image and the motion residual images, and separately computing the crowd state features contained in the original image and in the motion residual images;
S3, combining the crowd state features and the motion features, constructing a feature fusion block containing at least one convolutional layer, one maximum pooling layer and fully connected layers, and fusing the feature maps obtained in step S2;
S4, constructing a classifier containing crowding degree grades;
S5, training the convolutional neural network with a pre-built training set carrying crowding degree labels, using a stochastic gradient descent algorithm to correct the parameters of the convolutional neural network, so that the classifier correctly detects the passenger crowding degree in the video to be detected, the stochastic gradient descent formulas being as follows:
g(θ) = Σ θ·x_i
θ_m = θ_{m-1} − η·∇_θ h(θ)
where g(θ) indicates the hypothesis function of the network; θ indicates the parameter weights of the convolutional neural network; h(θ) indicates the loss function; x_i indicates the network input of the i-th sample; y_i indicates the sample value of the i-th sample; m indicates the total number of iterations of the algorithm; σ indicates the penalty coefficient; ∇_θ indicates the gradient; η indicates the learning rate of gradient descent.
2. The method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network according to claim 1, characterized in that step S1 further comprises:
S11, obtaining the traffic surveillance video to be detected;
S12, setting a detection cycle T and dividing the video to be detected into video clips of length T according to the detection cycle T, the first frame of each video clip serving as the reference image;
S13, taking the other images in the video clip and differencing each of them with the reference image to obtain the motion residual images.
3. The method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network according to claim 1 or 2, characterized in that the number of each feature extraction block in step S2 is 1, and the convolutional layers and pooling layers are connected alternately.
4. The method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network according to claim 3, characterized in that the feature fusion block in step S3 takes the feature maps output by the feature extraction blocks as input, and the number of feature fusion blocks is 1.
5. The method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network according to claim 4, characterized in that the number of fully connected layers in step S3 is 3 and they are the last three layers; the convolutional layers and pooling layers are connected alternately and are all located before the fully connected layers.
6. The method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network according to claim 3, characterized in that the classifier in step S4 is divided into ten grades: spacious, grades 0-2; comfortable, grades 3-5; crowded, grades 5-8; dangerous, grades 9-10.
7. The method for detecting the passenger crowding degree of urban rail transit based on a convolutional neural network according to claim 6, characterized in that in step S5, when the detection result is lower than the actual crowding degree, the penalty coefficient σ = 1 + log(y_i − g_θ(x_i)); otherwise σ = 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811162491.4A CN109446920B (en) | 2018-09-30 | 2018-09-30 | Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109446920A CN109446920A (en) | 2019-03-08 |
CN109446920B true CN109446920B (en) | 2019-08-06 |
Family
ID=65546153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811162491.4A Active CN109446920B (en) | 2018-09-30 | 2018-09-30 | Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109446920B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018097384A1 (en) * | 2016-11-24 | 2018-05-31 | 한화테크윈 주식회사 | Crowdedness notification apparatus and method |
CN110276398A (en) * | 2019-06-21 | 2019-09-24 | 北京滴普科技有限公司 | A kind of video abnormal behaviour automatic judging method |
CN112347814A (en) * | 2019-08-07 | 2021-02-09 | 中兴通讯股份有限公司 | Passenger flow estimation and display method, system and computer readable storage medium |
CN110598630A (en) * | 2019-09-12 | 2019-12-20 | 江苏航天大为科技股份有限公司 | Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network |
CN111723690B (en) * | 2020-06-03 | 2023-10-20 | 北京全路通信信号研究设计院集团有限公司 | Method and system for monitoring state of circuit equipment |
CN111582251B (en) * | 2020-06-15 | 2021-04-02 | 江苏航天大为科技股份有限公司 | Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network |
CN111859717B (en) * | 2020-09-22 | 2020-12-29 | 北京全路通信信号研究设计院集团有限公司 | Method and system for minimizing regional multi-standard rail transit passenger congestion coefficient |
CN112396587B (en) * | 2020-11-20 | 2024-01-30 | 重庆大学 | Method for detecting congestion degree in bus compartment based on collaborative training and density map |
CN113553921B (en) * | 2021-07-02 | 2022-06-10 | 兰州交通大学 | Convolutional neural network-based subway carriage congestion degree identification method |
CN116596731A (en) * | 2023-05-25 | 2023-08-15 | 北京贝能达信息技术股份有限公司 | Rail transit intelligent operation and maintenance big data management method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069400A (en) * | 2015-07-16 | 2015-11-18 | 北京工业大学 | Face image gender recognition system based on stack type sparse self-coding |
CN105184271A (en) * | 2015-09-18 | 2015-12-23 | 苏州派瑞雷尔智能科技有限公司 | Automatic vehicle detection method based on deep learning |
CN106203331A (en) * | 2016-07-08 | 2016-12-07 | 苏州平江历史街区保护整治有限责任公司 | A kind of crowd density evaluation method based on convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
Single-Image Crowd Counting via Multi-Column Convolutional Neural Network; Yingying Zhang et al.; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016-12-12; pp. 589-596
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||