CN111723739A - Congestion state monitoring method in bus based on convolutional neural network - Google Patents


Info

Publication number
CN111723739A
CN111723739A (application CN202010565599.9A)
Authority
CN
China
Prior art keywords
congestion
layer
monitoring model
congestion state
data set
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202010565599.9A
Other languages
Chinese (zh)
Inventor
李锋林 (Li Fenglin)
宋晓伟 (Song Xiaowei)
刘雄 (Liu Xiong)
刘彬 (Liu Bin)
Current Assignee
Esso Information Co ltd
Original Assignee
Esso Information Co ltd
Priority date
Filing date
Publication date
Application filed by Esso Information Co., Ltd.
Priority application: CN202010565599.9A
Publication: CN111723739A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a method for monitoring the congestion state in a bus based on a convolutional neural network, comprising the following steps: 1. formulate an image annotation criterion; 2. annotate the acquired images according to the criterion in step 1 and divide them into a training data set and a test data set at a ratio of 8:2; 3. apply image processing to the training data set to obtain an expanded training data set; 4. design a convolutional-neural-network-based model for monitoring the congestion state in the bus; 5. optimize the parameters of the model designed in step 4 by training on the expanded training data set from step 3 to obtain a congestion state monitor; 6. test the congestion state monitor with the test data set from step 2. While maintaining the accuracy of the state judgment, the invention completes the monitoring judgment by sending back only a picture of the current vehicle interior to the back-end server, using the cameras already installed in the vehicle.

Description

Congestion state monitoring method in bus based on convolutional neural network
Technical Field
The invention relates to the field of bus system management and electronic information, in particular to a method for monitoring a congestion state in a bus based on a convolutional neural network.
Background
The convolutional neural network evolved under the inspiration of biological visual information processing: the connections between its basic units resemble those of the visual cortex of animals, which gives the model the capability to process complex information. Advances in convolutional neural network models have driven great progress in recent years in fields such as face recognition, autonomous driving, and speech and video processing.
As urban populations grow year by year, crowding in buses has become part of everyday life, and traffic management departments need more accurate and faster means to schedule buses. Providing them with timely and accurate information on the in-vehicle congestion state is therefore a precondition for reasonable scheduling.
Existing bus congestion monitoring systems judge congestion by counting the passengers boarding and alighting while the bus stops at each stop. Counting boarding and alighting passengers requires assuming a simple target background. First, a cascade classifier is trained with the AdaBoost adaptive algorithm on Haar-like features and used to detect passengers' heads; next, a nine-square-grid layout of target movement is drawn from empirical data and combined with a hash table to derive a hash-table-based target tracking algorithm; finally, boarding and alighting are counted by distinguishing the direction in which each detected target moves. This approach must track each target in order to distinguish boarding from alighting, and to avoid losing targets during tracking, the video of every stop must either be sent back to a back-end server for computation or processed by high-performance computing equipment added to the bus. Transmitting large amounts of video places high demands on the back-end server, while installing high-performance computing equipment raises hardware costs. In addition, high-definition cameras must be added at the front and rear doors of the bus, which further increases installation cost.
Disclosure of Invention
Addressing the shortcomings of existing methods for identifying the congestion state in a bus, the invention aims to provide a simple and effective monitoring method based on a convolutional neural network. While maintaining the accuracy of the state judgment, it avoids the heavy video-transmission requirements of existing methods: the monitoring judgment is completed by sending back a single picture of the current vehicle interior to the back-end server, using the cameras already installed in the vehicle, with no additional cameras required.
The technical scheme for realizing the purpose of the invention is as follows:
a method for monitoring congestion states in a bus based on a convolutional neural network comprises the following steps:
the method comprises the following steps that firstly, an in-car image is collected in real time by using an existing camera in the car;
step two, transmitting the images in the vehicle collected in real time in the step one to a monitoring system,
inputting the current images in the vehicle into a congestion state monitoring model of the monitoring system, wherein the congestion state monitoring model is designed and trained based on a convolutional neural network, and the congestion state monitoring model outputs a judgment result of the congestion state;
the congestion state monitoring model comprises the following criteria:
the congestion status monitoring model determines that the congestion status is "congested" in accordance with all of the following conditions: a) after the visible vacant seats are offset, the number of passengers standing in the whole vehicle is more than or equal to 10; b) because the standing passenger blocks, the whole vehicle has no visible carriage floor, but because of the perspective effect, although the visible carriage floor is absent, the actual visible carriage floor is inferred to exist, and the visible carriage floor does not belong to the category;
the congestion status monitoring model determines that the congestion status is "moderate" subject to all of the following conditions: a) after the visible vacant seats are offset, the number of passengers standing in the whole vehicle is less than or equal to 5; b) because the standing passengers are not completely shielded, the whole vehicle has visible carriage floors; or, because of perspective effect, although there is no visible car floor, it is inferred that there is actually a visible car floor;
the congestion status monitoring model determines that the congestion status is "available seats": neither all conditions judged to be "moderate" nor all conditions judged to be "crowded" are met.
The perspective effect in the invention refers to the way the camera renders a scene: near objects appear large, wide, and distinct, while far objects appear small, narrow, and indistinct.
As a further improvement of the present invention, after the current in-vehicle image is input, the congestion state monitoring model judges in the order "crowded", then "moderate", then "available seats", and stops as soon as one judgment succeeds.
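The sequential judgment order described above can be sketched as a simple cascade. The helper below is purely illustrative: the function name, its arguments, and the boolean encoding of floor visibility are assumptions, not part of the patent.

```python
def judge_congestion(standing_count, floor_visible, floor_inferred_visible):
    """Sketch of the patent's sequential judgment order: check "crowded"
    first, then "moderate", then fall through to "available seats"."""
    # Perspective caveat: a floor hidden only by perspective but inferred
    # to be visible counts as visible for these rules.
    effectively_visible = floor_visible or floor_inferred_visible
    # "crowded": at least 10 standing AND no effectively visible floor.
    if standing_count >= 10 and not effectively_visible:
        return "crowded"
    # "moderate": at most 5 standing AND an effectively visible floor.
    if standing_count <= 5 and effectively_visible:
        return "moderate"
    # Everything else falls into the remaining class.
    return "available seats"
```

In the method itself this cascade is learned by the CNN from labeled images; the function only makes the label semantics explicit.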
As a further improvement of the present invention, the congestion status monitoring model is represented in the form of:
CSMM = [I, C, R, DW1, DW2, DW3, DW4, DW5, DW6, AP, FC, L]
wherein the symbol "[ ]" indicates that the congestion state monitoring model is built from the computation layers listed in brackets, in order: I is the data input layer, C a standard convolutional layer, and R an activation function layer; DW1-DW6 are channel-separation convolutional layers and AP an average pooling layer; FC is the fully connected layer and L the loss function layer, from which the state judgment result is obtained.
As a further improvement of the present invention, the normal convolutional layer is composed of 32 convolution kernels of size 3 x 3 and includes a batch normalization layer.
As a further development of the invention, the parameter of the fully connected layer is 512 x 3.
As a further improvement of the invention, the channel separation convolutional layer is represented in the form: DW = [ DC, BN, R, PC, BN, R ];
the channel separation convolution layer is formed by sequentially combining a space convolution layer DC, a batch normalization layer BN, an activation function layer R, a fusion convolution layer PC, the batch normalization layer BN and the activation function layer R, the space convolution layer DC is formed by convolution kernels with the size of 3 x 3, and the fusion convolution layer PC is formed by convolution kernels with the size of 1 x 1.
As a further improvement of the present invention, after the congestion status monitoring model is designed, training is required, and the specific process of the training is as follows:
a) collect N pictures, N > 10, and annotate their congestion states according to the criteria for judging the congestion state;
b) divide all N annotated pictures into a training data set and a test data set at a ratio of 8:2;
c) expand the training data set and train the congestion state monitoring model on the expanded set.
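The annotate-then-split procedure in steps a)-c) might look like the following minimal sketch; the shuffle and the fixed seed are assumptions, since the patent specifies only the 8:2 ratio.

```python
import random

def split_8_2(samples, seed=0):
    """Shuffle labeled samples and split them 8:2 into train/test sets."""
    rng = random.Random(seed)   # fixed seed for reproducibility (assumption)
    items = list(samples)
    rng.shuffle(items)
    cut = int(len(items) * 0.8)  # 80% train, 20% test
    return items[:cut], items[cut:]
```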
As a further improvement of the present invention, the training data set is expanded by:
(c11) randomly mirroring pictures horizontally, simulating differing camera mounting positions;
(c12) randomly jittering colors, simulating illumination changes inside the vehicle;
(c13) varying the image compression rate, simulating differing camera models in the vehicle;
(c14) applying an illumination simulation function, simulating conditions at all times of day;
(c15) adjusting the color channels numerically to simulate the imaging of various cameras, again covering differing in-vehicle cameras;
(c16) randomly converting input images to grayscale, reducing the impact of camera photosensitivity on hard cases;
(c17) normalizing input images, reducing the impact of imaging noise on the monitoring model;
(c18) applying Gaussian filtering, reducing the impact of image noise on the monitoring model.
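One possible realization of transforms (c11)-(c18) using Pillow is sketched below. The pairing of each step with a specific Pillow call, and all jitter ranges, are assumptions: the patent names the transforms but no library or parameters.

```python
import io
import random
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

def augment(img, rng=random):
    """Return one randomly augmented copy of an in-vehicle picture."""
    out = img.convert("RGB")
    if rng.random() < 0.5:                      # (c11) random horizontal mirror
        out = ImageOps.mirror(out)
    out = ImageEnhance.Color(out).enhance(      # (c12) random color jitter
        rng.uniform(0.7, 1.3))
    out = ImageEnhance.Brightness(out).enhance( # (c14) crude illumination change
        rng.uniform(0.6, 1.4))
    buf = io.BytesIO()                          # (c13) vary JPEG compression rate
    out.save(buf, format="JPEG", quality=rng.randint(40, 95))
    out = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")
    if rng.random() < 0.2:                      # (c16) occasional grayscale
        out = ImageOps.grayscale(out).convert("RGB")
    out = out.filter(ImageFilter.GaussianBlur(  # (c18) Gaussian filtering
        radius=rng.uniform(0.0, 1.0)))
    # (c15) per-channel numeric adjustment and (c17) normalization are
    # typically applied on the tensor just before the model, so they are
    # omitted from this image-level sketch.
    return out
```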
As a further improvement of the present invention, the congestion state monitoring model is trained as follows:
(c21) feed the expanded training data set into the congestion state monitoring model CSMM and read its judgment result from the output of the fully connected layer FC;
(c22) feed the output of step (c21) and the label of the corresponding input image into the loss function layer L to compute a loss value;
(c23) update the parameters of CSMM by stochastic gradient descent using the loss value from step (c22);
(c24) check whether the current iteration count has reached a preset learning-rate adjustment point; if so, adjust the learning rate and return to step (c21) to continue training;
(c25) stop when the iteration count reaches the preset maximum, yielding the congestion state monitor CSMT.
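Steps (c21)-(c25) amount to a standard stochastic-gradient-descent loop with milestone-based learning-rate decay. The PyTorch sketch below shows one way to arrange it; every hyperparameter (epochs, learning rate, momentum, milestones, decay factor) is an assumption, as the patent fixes none of them.

```python
import torch
import torch.nn as nn

def train_csmm(model, loader, epochs=30, lr=0.1, milestones=(15, 25)):
    """Train a congestion-state model per steps (c21)-(c25), returning CSMT."""
    criterion = nn.CrossEntropyLoss()           # stands in for loss layer L
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=list(milestones), gamma=0.1)  # LR adjustment points
    for _ in range(epochs):                     # (c25) stop at max iteration count
        for images, labels in loader:
            logits = model(images)              # (c21) forward, FC output
            loss = criterion(logits, labels)    # (c22) loss vs. image labels
            optimizer.zero_grad()
            loss.backward()                     # (c23) SGD update iteration
            optimizer.step()
        scheduler.step()                        # (c24) adjust LR at milestones
    return model                                # trained monitor CSMT
```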
As a further improvement of the present invention, the congestion state monitoring model must be tested after training; the test method is:
(1) input the annotated test data set into the congestion state monitor CSMT and obtain the corresponding judgment results from the output of the fully connected layer FC;
(2) compare the judgment results with the label files of the test data to compute the accuracy of the current congestion state monitor.
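The accuracy computation in the test phase reduces to comparing the monitor's judgments with the label files, as in this minimal sketch:

```python
def accuracy(predictions, labels):
    """Fraction of monitor judgments that match the annotation labels."""
    assert predictions and len(predictions) == len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```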
Compared with the prior art, the invention has the beneficial effects that:
1. While maintaining the accuracy of the state judgment, the method avoids the heavy video-transmission requirements of existing methods: the monitoring judgment is completed by sending back a picture of the current vehicle interior to the back-end server, and no additional camera is needed because the cameras already in the vehicle suffice.
2. The invention uses the cameras already in vehicles to collect interior images across different vehicles, time periods, and crowds, formulates an image annotation criterion according to the target task, and annotates the collected image data accordingly. A convolutional-neural-network-based in-bus congestion state monitoring model CSMM (Crowd State Monitor Model) is then designed for the target task, and its parameters are optimized on the annotated image data to obtain an optimal parameter set.
3. The monitoring flow is simple and fast. Existing methods for monitoring the congestion state in a bus must run head detection and target tracking algorithms and judge each tracked target's direction of travel to determine the current congestion state; that flow is complex and slow. The present method obtains the congestion judgment simply by feeding the current in-vehicle image into the trained state monitor CSMT.
4. The invention requires no additional camera. Existing bus congestion monitoring methods add cameras at the boarding and alighting doors to count passengers, and the head detection and target tracking algorithms impose strict requirements on camera installation, with mounting position affecting accuracy. The present method needs no extra camera: the cameras already in the vehicle are enough to judge the in-vehicle congestion state.
Drawings
Fig. 1 is a flow chart of a method for monitoring congestion status in a bus;
fig. 2 is a schematic diagram of a congestion status monitoring model;
FIG. 3 is a schematic diagram of the per-channel separated convolution;
FIG. 4 is a schematic view of a conventional convolutional layer;
FIG. 5 is a schematic view of a channel separation convolutional layer;
FIG. 6 is a flow diagram of a feature extraction module;
FIG. 7 is a schematic representation of a data set.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
The invention provides a method for monitoring congestion states in a bus based on a convolutional neural network, which comprises the following steps:
the method comprises the following steps that firstly, an in-car image is collected in real time by using an existing camera in the car;
step two, transmitting the images in the vehicle collected in real time in the step one to a monitoring system,
inputting the current images in the vehicle into a congestion state monitoring model of the monitoring system, wherein the congestion state monitoring model is designed and trained based on a convolutional neural network, and the congestion state monitoring model outputs a judgment result of the congestion state;
the congestion state monitoring model comprises the following criteria:
the congestion status monitoring model determines that the congestion status is "congested" in accordance with all of the following conditions: a) after the visible vacant seats are offset, the number of passengers standing in the whole vehicle is more than or equal to 10; b) because the standing passenger blocks, the whole vehicle has no visible carriage floor, but because of the perspective effect, although the visible carriage floor is absent, the actual visible carriage floor is inferred to exist, and the visible carriage floor does not belong to the category;
the congestion status monitoring model determines that the congestion status is "moderate" subject to all of the following conditions: a) after the visible vacant seats are offset, the number of passengers standing in the whole vehicle is less than or equal to 5; b) because the standing passengers are not completely shielded, the whole vehicle has visible carriage floors; or, because of perspective effect, although there is no visible car floor, it is inferred that there is actually a visible car floor;
the congestion status monitoring model determines that the congestion status is "available seats": neither all conditions judged to be "moderate" nor all conditions judged to be "crowded" are met.
After the current in-vehicle image is input into the congestion state monitoring model, according to the sequence of firstly judging "congested", then judging "moderate" and finally judging "available seats", if the judgment is correct, the judgment is not continued.
In the invention, a data set is first calibrated, then the in-bus congestion state monitoring model CSMM is designed, then the model is trained, and finally the model is tested.
1. Calibrate the data set: annotate the collected data according to the formulated annotation criterion, and divide the annotated data into a training data set TrainDataSet and a test data set TestDataSet at a ratio of 8:2.
2. And designing a congestion state monitoring model CSMM in the bus based on the convolutional neural network.
3. And (3) carrying out CSMM training on a congestion state monitoring model:
3a) to simulate more real-world working conditions with the existing training data, the training data set TrainDataSet can be transformed by image processing to obtain an expanded training data set TrainDataSetPlus;
3b) use the expanded training data set TrainDataSetPlus as the input of the in-bus congestion state monitoring model CSMM and optimize the model parameters by stochastic gradient descent, yielding the trained congestion state monitor CSMT (Crowd State Monitor Tool), the model used to monitor the congestion state of the bus;
4. Test the model: use the test data set TestDataSet as the input of the congestion state monitor CSMT, and obtain the model test result from the monitor's judgments and the label data of the test data set.
The first implementation mode comprises the following steps:
the present embodiment discloses a congestion status monitoring model.
The congestion status monitoring model is represented in the form: CSMM = [ I, C, R, DW1, DW2, DW3, DW4, DW5, DW6, AP, FC, L ].
Wherein the symbol "[ ]" indicates that the congestion state monitoring model is built from the computation layers listed in brackets, in order: I is the data input layer, C a standard convolutional layer, and R an activation function layer; DW1-DW6 are channel-separation convolutional layers and AP an average pooling layer; FC is the fully connected layer and L the loss function layer, from which the state judgment result is obtained. Preferably, the standard convolutional layer consists of 32 convolution kernels of size 3 x 3 and includes a batch normalization layer. The parameters of the fully connected layer are 512 x 3.
The channel separation convolutional layer is represented by: DW = [ DC, BN, R, PC, BN, R ]; the channel separation convolution layer is formed by sequentially combining a space convolution layer DC, a batch normalization layer BN, an activation function layer R, a fusion convolution layer PC, the batch normalization layer BN and the activation function layer R, the space convolution layer DC is formed by convolution kernels with the size of 3 x 3, and the fusion convolution layer PC is formed by convolution kernels with the size of 1 x 1.
The congestion state monitoring model must be trained after it is designed. The training process is: a) collect N pictures, N > 10, and annotate their congestion states according to the criteria for judging the congestion state; b) divide all N annotated pictures into a training data set and a test data set at a ratio of 8:2; c) expand the training data set and train the congestion state monitoring model on the expanded set.
The method for expanding the training data set: (c11) randomly mirror pictures horizontally to simulate differing camera mounting positions; (c12) randomly jitter colors to simulate illumination changes inside the vehicle; (c13) vary the image compression rate to simulate differing camera models in the vehicle; (c14) apply an illumination simulation function to simulate conditions at all times of day; (c15) adjust the color channels numerically to simulate the imaging of various cameras, again covering differing in-vehicle cameras; (c16) randomly convert input images to grayscale to reduce the impact of camera photosensitivity on hard cases; (c17) normalize input images to reduce the impact of imaging noise; (c18) apply Gaussian filtering to reduce the impact of image noise on the monitoring model.
The method for training the congestion state monitoring model: (c21) feed the expanded training data set into the model CSMM and read its judgment result from the output of the fully connected layer FC; (c22) feed the output of step (c21) and the label of the corresponding input image into the loss function layer L to compute a loss value; (c23) update the parameters of CSMM by stochastic gradient descent using the loss value from step (c22); (c24) check whether the current iteration count has reached a preset learning-rate adjustment point; if so, adjust the learning rate and return to step (c21) to continue training; (c25) stop when the iteration count reaches the preset maximum, yielding the congestion state monitor CSMT.
The congestion state monitoring model must be tested after training; the test method is: (1) input the annotated test data set into the congestion state monitor CSMT and obtain the corresponding judgment results from the fully connected layer FC; (2) compare the judgment results with the label files of the test data to compute the accuracy of the current congestion state monitor.
The second embodiment:
on the basis of the first embodiment, the present embodiment discloses the following:
1. The calibration criteria for training data and test data are formulated, as shown in fig. 7, as follows:
Step one, judge whether to label as Crowd; label as Crowd when all of the following conditions hold:
a) after offsetting against visible empty seats, at least 10 passengers are standing in the vehicle;
b) no car floor is visible because standing passengers block it; a floor that is invisible only because of the perspective effect, but can be inferred to be visible, does not count;
Step two, judge whether to label as Normal; label as Normal when all of the following conditions hold:
a) after offsetting against visible empty seats, at most 5 passengers are standing in the vehicle;
b) some car floor is visible because standing passengers do not fully block it; or, although no floor is visible because of the perspective effect, a visible floor can be inferred to exist.
Step three, judge whether to label as Moderate; label as Moderate when:
a) neither all of the Normal conditions nor all of the Crowd conditions are met.
2. Annotate the data and divide it into a training data set and a test data set:
a) following the data calibration criterion, judge in the order Crowd first, then Normal, and finally Moderate, stopping as soon as one judgment succeeds;
b) divide all annotated data into a training data set TrainDataSet and a test data set TestDataSet at a ratio of 8:2;
3. training data set Train DataSet expansion:
Because data annotation is time-consuming, the training set is expanded by image processing to enrich the coverage of working conditions. The specific expansion methods are:
a) randomly mirror pictures horizontally, simulating differing camera mounting positions;
b) randomly jitter colors, simulating illumination changes inside the vehicle;
c) vary the image compression rate, simulating differing camera models in the vehicle;
d) apply an illumination simulation function, simulating conditions at all times of day;
e) adjust the color channels numerically to simulate the imaging of various cameras, again covering differing in-vehicle cameras;
f) randomly convert input images to grayscale, reducing the impact of camera photosensitivity on hard cases;
g) normalize input images, reducing the impact of imaging noise on the monitoring model;
h) apply Gaussian filtering, reducing the impact of image noise on the monitoring model;
4. stage of model design
The congestion state monitoring model CSMM is formed by combining a feature extraction module and a state judgment module, wherein:
a) The feature extraction module combines one standard convolutional layer with six channel-separation convolutional layers. The standard convolutional layer is followed by a batch normalization layer and an activation function layer. Each channel-separation convolutional layer consists of a spatial convolution and a fusion convolution, each immediately followed by a batch normalization layer and an activation function layer.
The operation of the standard and channel-separation convolutional layers is shown in figs. 2, 3, 4 and 5. A channel-separation convolutional layer has two parts, a per-channel spatial convolution and a channel-fusion convolution: the spatial convolution convolves each channel of the input feature map separately, and the fusion convolution then merges the per-channel feature maps. Compared with a standard convolutional layer, this reduces the model's parameter count and strengthens its nonlinear expressive power.
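The parameter saving claimed here is easy to verify arithmetically: a standard k x k convolution needs k*k*Cin*Cout weights, while the channel-separation form needs only k*k*Cin (spatial) plus Cin*Cout (fusion). The channel counts below are illustrative, not taken from the patent.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Weights in a channel-separation layer: a per-channel k x k spatial
    convolution (DC) plus a 1 x 1 channel-fusion convolution (PC)."""
    return k * k * c_in + c_in * c_out

# Illustrative example: 3x3 kernels, 32 input channels, 64 output channels.
normal = conv_params(3, 32, 64)             # 3*3*32*64 = 18432
separable = dw_separable_params(3, 32, 64)  # 3*3*32 + 32*64 = 2336
```

For these sizes the separated form uses roughly an eighth of the weights, and the gap widens as the channel counts grow.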
The working flow of the feature extraction layer is shown in fig. 6. First, the parameters of the feature extraction module are initialized randomly; then the determination result for the current input image and the annotation of that image are fed into the loss function, the corresponding loss value is calculated, and the parameters of the feature extraction module are adjusted according to the back-propagation principle. When the loss value has become small and exhibits only small-amplitude fluctuation, the iterative training of the model can be judged complete.
b) The classification determination module consists of an average pooling layer, a fully connected layer and a loss function layer.
c) The congestion status monitoring model CSMM may be expressed as follows:
CSMM = [I, C, R, DW1, DW2, DW3, DW4, DW5, DW6, AP, FC, L]
DW = [DC, BN, R, PC, BN, R]
wherein the symbol "[ ]" indicates that the model is constructed according to the sequence of computation layers in the brackets; I denotes the data input layer; C denotes an ordinary convolutional layer, composed of 32 convolution kernels of size 3 × 3 and followed by a batch normalization layer; R denotes an activation function layer. DW1-DW6 denote channel-separation convolutional layers, each formed by sequentially combining a spatial convolutional layer DC, a batch normalization layer BN, an activation function layer R, a fusion convolutional layer PC, a batch normalization layer BN and an activation function layer R, where DC is composed of convolution kernels of size 3 × 3 and PC of convolution kernels of size 1 × 1. AP denotes the average pooling layer; FC denotes the fully connected layer, with parameter size 512 × 3; L denotes the loss function layer, which yields the state determination result;
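A minimal PyTorch sketch of the CSMM structure, following the patent's layer order [I, C, R, DW1-DW6, AP, FC, L]. The strides, intermediate channel widths and input resolution are assumptions, since the patent fixes only the 32-kernel first layer and the 512 × 3 fully connected layer; the loss layer L (e.g. cross-entropy) would be applied to the FC output during training.

```python
import torch
import torch.nn as nn

def dw_block(c_in, c_out, stride=1):
    """Channel-separation convolutional layer DW = [DC, BN, R, PC, BN, R]."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1, groups=c_in, bias=False),  # DC
        nn.BatchNorm2d(c_in), nn.ReLU(inplace=True),
        nn.Conv2d(c_in, c_out, 1, bias=False),  # PC (1x1 fusion)
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class CSMM(nn.Module):
    """Sketch only: channel widths and strides beyond the first layer are assumed."""
    def __init__(self, num_states=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),  # C: 32 3x3 kernels
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),             # BN + R
            dw_block(32, 64, 2),   dw_block(64, 128, 2),           # DW1, DW2
            dw_block(128, 256, 2), dw_block(256, 512, 2),          # DW3, DW4
            dw_block(512, 512),    dw_block(512, 512),             # DW5, DW6
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # AP
        self.fc = nn.Linear(512, num_states)  # FC: 512 x 3

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.fc(x)  # logits; loss layer L is applied to these in training

model = CSMM()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```

The `groups=c_in` argument is what makes DC a per-channel (channel-separation) convolution; removing it would turn the block back into an ordinary convolution.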
5. Model training stage
Step one, the expanded training data set Train DataSet Plus is used as the input of the congestion state monitoring model CSMM, and the determination result output of the CSMM is obtained from the output of the FC layer;
step two, the result output from step one and the label corresponding to the input image are fed into the loss function L to calculate a loss value delta;
step three, the parameters of the CSMM are updated iteratively by stochastic gradient descent using the delta from step two;
step four, it is checked whether the current iteration count has reached a preset learning-rate adjustment point; when it has, the learning rate is adjusted and the procedure returns to step one to continue training;
and step five, when the iteration count reaches the preset maximum, iteration stops, yielding the congestion state monitor CSMT.
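The five steps above reduce to a standard loop: forward pass, loss, stochastic-gradient update, and learning-rate adjustment at preset points until a maximum iteration count. The toy sketch below mirrors that control flow with a one-parameter quadratic loss standing in for the CSMM; the milestone values echo those in the experiment section, while everything else is illustrative.

```python
# Toy mirror of training steps one-five: SGD with learning-rate drops at
# preset adjustment points and a fixed maximum iteration count.
def train(max_iters=800, milestones=(100, 300, 600), lr0=0.1, decay=0.1):
    w, lr = 5.0, lr0                  # random-ish initial parameter, initial rate
    for it in range(1, max_iters + 1):
        grad = 2 * w                  # steps two/three: loss -> gradient -> update
        w -= lr * grad
        if it in milestones:          # step four: preset adjustment point reached
            lr *= decay
    return w                          # step five: trained parameter (the "CSMT")

print(abs(train()) < 1e-6)  # True: the parameter has converged to the minimum
```

With a real model the same skeleton holds; only the gradient computation (back-propagation through the network) and the parameter set change.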
6. Model testing stage
a) The annotated test data is input to the congestion state monitor CSMT, and the corresponding determination result output is obtained from the fully connected layer FC;
b) the output is compared against the label file corresponding to the test data to calculate the accuracy of the current congestion state monitor.
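Step b) amounts to an element-wise comparison of predictions against labels. A minimal sketch follows; the state names match the patent, but the sample data is invented for illustration.

```python
# Accuracy of the monitor: fraction of test images whose predicted
# congestion state matches the annotated label.
def accuracy(outputs, labels):
    assert len(outputs) == len(labels) and labels, "need matched, non-empty lists"
    correct = sum(o == l for o, l in zip(outputs, labels))
    return correct / len(labels)

preds  = ["crowded", "normal",   "available seats", "crowded"]
labels = ["crowded", "moderate", "available seats", "crowded"]
print(accuracy(preds, labels))  # 0.75
```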
The third embodiment is as follows:
First, experimental conditions:
a) Experimental software and hardware platform:
software platform: Python 3.6, PyTorch 1.1;
hardware platform: Intel i7-8700, NVIDIA GTX 1070 Ti;
b) Experimental data and key parameter settings:
data set:
training set sizes: Normal (4000 images), Moderate (6000 images), Crowded (4000 images);
test set sizes: Normal (2000 images), Moderate (500 images), Crowded (500 images);
learning rate adjustment settings:
initial learning rate 0.1, adjustment points at iterations (100, 300, 600);
training optimization parameters:
weight decay coefficient: 0.0005; number of iteration rounds: 800; batch size: 32.
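Under these settings the learning rate at any iteration follows from the adjustment points. The sketch below assumes a 10x decay at each point, which the embodiment does not state explicitly.

```python
# Piecewise-constant learning-rate schedule: initial rate 0.1, dropped at the
# preset adjustment points. The 0.1 decay factor per point is an assumption.
def learning_rate(iteration, lr0=0.1, milestones=(100, 300, 600), decay=0.1):
    passed = sum(1 for m in milestones if iteration >= m)
    return lr0 * decay ** passed

print(learning_rate(50), learning_rate(200), learning_rate(700))
```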
Second, analysis of experimental results:
By reasonably designing the data calibration criteria, building the monitoring model on the strong representational capability of the convolutional neural network, and expanding the training data with image processing algorithms, this embodiment constructs a relatively complete training data set, so that the congestion monitoring model adapts well to the varied working conditions on a bus. The method of the invention requires no additional camera installation; monitoring is completed with the existing in-vehicle camera, which reduces the installation cost of the product.
Compared with the bus congestion monitoring products currently on the market, the method has the advantages of speed, low cost and convenient secondary upgrading; it is an effective bus congestion monitoring method with good practical value and application prospects.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (8)

1. A method for monitoring the congestion state in a bus based on a convolutional neural network is characterized by comprising the following steps:
step one, collecting an in-vehicle image in real time using the existing camera in the vehicle;
step two, transmitting the in-vehicle image collected in real time in step one to a monitoring system,
inputting the current in-vehicle image into a congestion state monitoring model of the monitoring system, the congestion state monitoring model being designed and trained based on a convolutional neural network and outputting a determination result of the congestion state;
the criteria of the congestion state monitoring model are as follows:
the congestion state monitoring model determines the congestion state to be "congested" when all of the following conditions are met: a) after visible vacant seats are offset, the number of standing passengers in the whole vehicle is greater than or equal to 10; b) the standing passengers occlude the floor so that no carriage floor is visible anywhere in the vehicle; the case where, owing to the perspective effect, no carriage floor is visible yet an actually visible carriage floor can be inferred does not fall into this category;
the congestion state monitoring model determines the congestion state to be "normal" when all of the following conditions are met: a) after visible vacant seats are offset, the number of standing passengers in the whole vehicle is less than or equal to 5; b) the standing passengers do not fully occlude the floor, so the carriage floor is visible in the vehicle; or, owing to the perspective effect, no carriage floor is visible yet an actually visible carriage floor can be inferred;
the congestion state monitoring model determines the congestion state to be "available seats" when neither all of the conditions for "normal" nor all of the conditions for "congested" are met.
2. The method as claimed in claim 1, wherein after the current in-vehicle image is input to the congestion state monitoring model, the states are determined in the order "congested" first, then "moderate", and finally "available seats"; once one state is determined to hold, the remaining determinations are not continued.
3. The method of claim 1, wherein the congestion status monitoring model is represented in the form of:
CSMM = [I, C, R, DW1, DW2, DW3, DW4, DW5, DW6, AP, FC, L]
wherein the symbol "[ ]" indicates that the congestion state monitoring model is constructed according to the sequence of computation layers in the brackets; I denotes a data input layer, C an ordinary convolutional layer, and R an activation function layer; DW1-DW6 denote channel-separation convolutional layers; AP denotes an average pooling layer; FC denotes a fully connected layer; and L denotes a loss function layer, from which the state determination result is obtained.
4. The method of claim 3, wherein the channel separation convolutional layer is represented in the form of: DW = [ DC, BN, R, PC, BN, R ];
the channel separation convolution layer is formed by sequentially combining a space convolution layer DC, a batch normalization layer BN, an activation function layer R, a fusion convolution layer PC, the batch normalization layer BN and the activation function layer R, the space convolution layer DC is formed by convolution kernels with the size of 3 x 3, and the fusion convolution layer PC is formed by convolution kernels with the size of 1 x 1.
5. The method for monitoring the congestion state in the bus according to claim 3 or 4, wherein the congestion state monitoring model needs to be trained after being designed, the training process comprising the following specific steps:
a) collecting N pictures, N being greater than 10, and annotating the congestion state of each of the N pictures according to the criteria for determining the congestion state;
b) dividing all N annotated pictures into a training data set and a test data set in a ratio of 8:2;
c) expanding the training data set, and training the congestion state monitoring model with the expanded training data set.
6. The method of claim 5, wherein the method of expanding the training data set comprises:
(c11) flipping the picture via random horizontal mirroring to simulate inconsistent camera installation positions;
(c12) processing pictures with random color dithering to simulate illumination changes inside the vehicle;
(c13) adjusting the image compression rate to simulate inconsistent in-vehicle camera types;
(c14) processing the image through an illumination simulation function to simulate lighting conditions at various times of day;
(c15) adjusting the values of the color channels to simulate the imaging effects of different camera models, thereby simulating the condition that the in-vehicle cameras are inconsistent;
(c16) randomly converting the input image to grayscale, reducing the recognition difficulty caused by the photosensitive characteristics of the camera;
(c17) normalizing the input image to reduce the influence of imaging noise on the monitoring model;
(c18) applying Gaussian filtering to the image to reduce the influence of image noise on the monitoring model.
7. The method of claim 5, wherein the method of training the congestion status monitoring model comprises:
(c21) using the expanded training data set as the input of the congestion state monitoring model CSMM, and obtaining the CSMM determination result from the output of the fully connected layer FC;
(c22) feeding the output result from step (c21) and the label corresponding to the input image into the loss function layer L to calculate a loss value;
(c23) updating the parameters of the CSMM iteratively by stochastic gradient descent using the loss value from step (c22);
(c24) judging whether the current iteration count has reached a preset learning-rate adjustment point, and when it has, adjusting the learning rate and returning to step (c21) to continue training;
(c25) when the iteration count reaches the preset maximum, stopping iteration to obtain the congestion state monitor CSMT.
8. The method for monitoring the congestion status in the bus according to claim 5, wherein the congestion status monitoring model is tested after training, and the method for testing the congestion status monitoring model comprises:
(1) inputting the annotated test data set into the congestion state monitor CSMT, and obtaining the corresponding determination result output from the fully connected layer FC;
(2) comparing the determination result output with the label file corresponding to the test data to calculate the accuracy of the current congestion state monitor.
CN202010565599.9A 2020-06-19 2020-06-19 Congestion state monitoring method in bus based on convolutional neural network Pending CN111723739A (en)


Publications (1)

Publication Number Publication Date
CN111723739A true CN111723739A (en) 2020-09-29

Family

ID=72567719


Country Status (1)

Country Link
CN (1) CN111723739A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255480A (en) * 2021-05-11 2021-08-13 中国联合网络通信集团有限公司 Method, system, computer device and medium for identifying degree of congestion in bus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN109190507A (en) * 2018-08-13 2019-01-11 湖南信达通信息技术有限公司 A kind of passenger flow crowding calculation method and device based on rail transit train
WO2020040411A1 (en) * 2018-08-21 2020-02-27 한국과학기술정보연구원 Device for predicting traffic state information, method for predicting traffic state information, and storage medium for storing program for predicting traffic state information
CN111199220A (en) * 2020-01-21 2020-05-26 北方民族大学 Lightweight deep neural network method for people detection and people counting in elevator


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
(AU) Rajkumar Buyya et al.: "Fog and Edge Computing: Principles and Paradigms", China Machine Press, pages 230-231 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination