CN115081714A - One-key switching station self-checking optimization method for urban rail transit - Google Patents
- Publication number
- CN115081714A CN115081714A CN202210722098.6A CN202210722098A CN115081714A CN 115081714 A CN115081714 A CN 115081714A CN 202210722098 A CN202210722098 A CN 202210722098A CN 115081714 A CN115081714 A CN 115081714A
- Authority
- CN
- China
- Prior art keywords
- layer
- self
- checking
- data set
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/40—Business processes related to the transportation industry
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a self-checking optimization method for urban rail transit one-key switching (station opening/closing) stations, comprising the following steps: S1, constructing a time-series data set; S2, preprocessing the time-series data set, performing feature engineering on the data to convert it into a supervised learning data set; S3, building a causal convolutional neural network model, loading the training set data into it, and training the model so that it captures the temporal features of long time series; and S4, inputting the verification set into the causal convolutional neural network model for verification, then inputting the feature-engineered statistics for the current day and the preceding week into the trained neural network model to predict the current day's self-checking operation records. The invention can provide the day's inspection plan with the alarm information of all station equipment and suggested handling modes in a single pass, saving operation time and improving the scientific soundness and efficiency of the one-key station switching function.
Description
Technical Field
The invention belongs to the technical field of one-key switching station self-checking, and particularly relates to a self-checking optimization method for urban rail transit one-key switching stations.
Background
China's rapidly developing subway construction has brought convenience to people's lives, but it has also posed huge challenges to subway station operation management. To reduce the pressure of operation management and guarantee the level of operation service, cities across China have successively launched the construction of smart subways.
At present, research on smart stations has shifted from the function verification stage to the value output stage, and among smart station capabilities, the one-key switching station function is one of the most representative. The conventional one-key switching station function automatically links, on the basis of a traditional integrated monitoring system, the systems and equipment involved in the station opening and closing process, converting traditional equipment monitoring into scenario linkage so that the station can be opened and closed with one key through remote operation. The remote operation process includes sending and confirming self-checking commands for each piece of equipment or system; each operation step displays prompt information, and most steps require manual confirmation, which greatly increases operation time and staff workload and makes the one-key switching station relatively cumbersome to operate. This patent application therefore designs a self-checking optimization method for urban rail transit one-key switching stations.
Disclosure of Invention
In view of the above, the present invention aims to provide an urban rail transit one-key switching station self-checking optimization method, so as to solve the problems that operating the existing one-key switching station is relatively cumbersome, that subway operators must repeatedly click through the alarm-ignore prompts and confirmation prompts of each self-checking device, and that operation time and operator workload are thereby greatly increased.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a self-checking optimization method for an urban rail transit one-key switching station comprises the following steps:
s1, constructing a time sequence data set; performing one-hot encoding on the historical self-checking operation records, converting the self-checking operation type from categorical information into a numerical representation to form a time-series data set;
s2, preprocessing the time sequence data set; performing feature engineering on the data in the time-series data set, converting it into a supervised learning data set, and dividing the supervised learning data set into a training set, a test set and a verification set;
s3, building a causal convolutional neural network model, loading the training set data containing time-series features into the causal convolutional neural network model, and training the model so that it can capture the temporal features of long time series;
and S4, inputting the verification set data into the trained causal convolutional neural network model for model performance verification, inputting the feature-engineered statistics for the week preceding the current day into the trained neural network model, and predicting the current day's self-checking operation records.
Further, in step S1, the self-checking operation categories include: alarm information not displayed; alarm information displayed and "confirm" selected; and alarm information displayed and "ignore" selected;
the one-hot encoding converts the original categorical variable in the historical self-checking operation records into a multi-dimensional variable, using 0/1 to quantify yes/no (presence/absence) on each dimension;
"alarm information not displayed" is converted into [0,0];
"alarm information displayed and confirm selected" is converted into [1,0];
"alarm information displayed and ignore selected" is converted into [1,1].
Further, in step S1, the historical self-checking records, taken day by day, are arranged in time order to form a time-series data set, which is expressed as:
S = {Y_1, Y_2, Y_3, ..., Y_n};
where S is the time-series data set and Y_i (i = 1, 2, ..., n) is the self-checking operation record on day i.
Further, in step S2, data statistics are performed on all data in the time-series data set from step S1. The feature engineering comprises the number of times "ignore" was selected in the week before the current day, X^(1); the number of confirmations in the week before the current day, X^(2); the number of times no alarm occurred in the week before the current day, X^(3); and the statistic of each device in the week of the current day, X^(4), where the statistic is the most frequent operation type in the week of the current day;
X^(1), X^(2), X^(3) and X^(4) constitute the input features X_i for a day and form a supervised learning data set, which is expressed as:
D = {(X_1, Y_1), (X_2, Y_2), ..., (X_m, Y_m)};
where D is the supervised learning data set and (X_m, Y_m) is a labeled sample.
Further, the causal convolutional neural network model comprises a residual block, an attention mechanism module and a full connection layer which are connected in sequence;
the number of the residual blocks is multiple, each residual block comprises a causal convolution layer, a normalization layer, an activation function and a Dropout layer, and meanwhile, a jump connection is introduced into the residual block, and the residual block formula is as follows:
x i =x i-1 +f(x i-1 );
wherein x i-1 For the output of the last residual block, f (x) i-1 ) As a function of the operation level of the current residual block, x i Is the result of the current residual block.
Further, the method for constructing the residual block in the step 3 comprises the following steps that the residual block comprises two residual modules with the same structure, the two residual modules are sequentially connected, and each residual module comprises:
(1) the first layer is a causal convolution layer divided into two sub-layers, with dilation factors of 2 and 4 respectively and a convolution kernel size of 3;
(2) the second layer is a normalization layer whose input is connected to the output of the causal convolution layer; its formula is:
y = γ · (x − μ_B) / √(σ_B² + ε) + β;
where μ_B and σ_B² are the mean and variance of a batch of data, ε is a very small term greater than 0, and γ and β are network learning parameters;
(3) and the third ReLU activation function layer, wherein the input of the ReLU activation function layer is connected with the output of the normalization layer, and the activation function formula of the layer is as follows:
g(x)=max(0,x);
wherein x is the output value of the previous layer, and g (x) is the ReLU activation function;
(4) the fourth layer is a Dropout layer, which randomly severs half of the neuron connections; the input of the Dropout layer is connected to the output of the ReLU activation function layer, and its output is connected to the causal convolution layer of the next residual module.
Further, the skip connection adds a channel between the start and the end of the residual block; this channel passes through a 1×1 convolution layer and its output is added to the result of the main channel.
Further, the self-attention mechanism comprises three channels, wherein the first channel is formulated as follows:
F_A(RB) = σ(f_{1×1}([MaxPooling(RB); AvgPooling(RB)]));
where RB is the result passed from the last residual block, MaxPooling and AvgPooling are the maximum and average pooling layers respectively, f_{1×1} is a convolution layer of size 1×1, σ is the sigmoid activation function, and ";" denotes a concatenation operation along the channel dimension;
the second channel passes through a 1×1 convolution layer and a normalization layer; after being multiplied by the first channel's result, it passes through a Softmax layer, which can be represented as:
C = f_1(x)^T · F_A(RB);
where F_A(RB) is the output of the first channel, f_1(x) is the processing function of the 1×1 convolution layer and the normalization layer, C is the product of the two channels, and S is the output of C after the Softmax layer;
the third channel passes only through a 1×1 convolution layer, and its output is multiplied by the combined result of the other two channels.
Further, the self-checking operation records are the alarm information operation types of each self-checking device.
Compared with the prior art, the urban rail transit one-key switch station self-checking optimization method has the following beneficial effects:
(1) the urban rail transit one-key switching station self-checking optimization method applies One-Hot encoding and sliding-window feature engineering to the station's historical self-checking operation information, adapting the data to supervised learning and extracting more features; it thereby realizes the preprocessing and feature engineering of the collected historical self-checking operation records of the station's one-key switching function and completes the prediction of the subway one-key switching station's self-checking information;
meanwhile, residual block connections and a self-attention mechanism module are added to the conventional causal convolutional neural network to further improve prediction accuracy;
(2) the urban rail transit one-key switching station self-checking optimization method further optimizes the subway one-key switching station self-checking process, reduces operator workload, shortens the one-key switching time, and improves operation management efficiency, shortening operation time by about 90 seconds compared with the conventional self-checking mode.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flow chart of a self-checking optimization method for an urban rail transit one-key switching station according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a causal convolutional neural network model according to an embodiment of the present invention;
FIG. 3 is a diagram of a residual block according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a main attention mechanism module according to an embodiment of the invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
It should be noted that the one-key switching station self-checking operation records used in this embodiment come from 10 stations along Tianjin Metro Line 6, where the self-checking devices and systems include roller shutter doors, escalators, elevators, waste water pumps, drainage pumps, the PSD system, the AFC system, the PIS system and the PA system. The roller shutter door alarm types include the entrance roller shutter door switch signal remote/local state, the entrance roller shutter door switch signal, and one-step-descent fireproof roller shutter descent feedback; the escalator alarm types include thermal overload relay protection alarm, handrail belt inlet fault alarm, step sinking alarm, step anti-jump alarm, apron board fault alarm, handrail belt breakage fault alarm, comb plate fault alarm, emergency stop button trigger alarm, middle emergency stop button trigger alarm, overhaul cover plate fault alarm and water level switch trigger alarm; the elevator has a safety loop alarm; the waste water pump and drainage pump have ultra-high/low water level alarms; the PSD system has a gap detection isolation mode alarm, a door opening timeout alarm and an obstacle alarm; the AFC system has a ticket checker self-checking abnormality alarm; the PIS system and the PA system have communication abnormality alarms. Meanwhile, a causal convolutional neural network model is built on the TensorFlow framework, and the final model is obtained after training for 10 epochs. The cross-entropy function is used as the training loss, and the average accuracy and mean-square error on the test set are used as the evaluation criteria of the network.
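As a hedged illustration of the cross-entropy training loss mentioned above, the following numpy sketch computes the per-sample average cross-entropy between one-hot labels and predicted probabilities (the patent builds its model in TensorFlow, where a built-in categorical cross-entropy loss would play this role; this stand-alone function is an assumption for illustration only):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over a batch.

    y_true: one-hot label matrix, shape (batch, classes)
    y_pred: predicted probability matrix, same shape
    eps clips predictions away from 0 so log() stays finite.
    """
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred)) / len(y_true)
```

For a perfect prediction the loss is 0, and for a uniform prediction over k classes it is log(k), which gives a quick sanity check on an implementation.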
Referring to fig. 1, the present embodiment provides a self-checking optimization method for an urban rail transit one-key switch station, including the following steps:
s1, constructing a time sequence data set; performing one-hot encoding on the historical self-checking operation records, converting the self-checking operation type from categorical information into a numerical representation, and forming a time-series data set;
the self-checking operation categories include: alarm information not displayed; alarm information displayed and "confirm" selected; and alarm information displayed and "ignore" selected;
the one-hot encoding converts the original categorical variable in the historical self-checking operation records into a multi-dimensional variable, using 0/1 to quantify yes/no (presence/absence) on each dimension;
"alarm information not displayed" is converted into [0,0];
"alarm information displayed and confirm selected" is converted into [1,0];
"alarm information displayed and ignore selected" is converted into [1,1].
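The category-to-vector mapping of step S1 can be sketched in a few lines of Python. The string labels and function names below are hypothetical (the patent does not specify a record format); only the three two-bit codes come from the description:

```python
# Map the three self-checking operation categories to the two-bit codes
# given in step S1. Category labels are illustrative stand-ins.
CATEGORY_CODES = {
    "no_alarm_shown": [0, 0],    # alarm information not displayed
    "alarm_confirmed": [1, 0],   # alarm displayed, operator chose "confirm"
    "alarm_ignored": [1, 1],     # alarm displayed, operator chose "ignore"
}

def encode_record(category):
    """Convert one historical self-checking operation category to its numeric code."""
    return CATEGORY_CODES[category]
```

This turns the flat categorical log entries into numeric vectors that can be stacked day by day into the time-series data set S.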
In step S1, the time-series data set consists of the historical self-checking operation records arranged in time order by day, and is expressed as: S = {Y_1, Y_2, Y_3, ..., Y_n};
where S is the time-series data set and Y_i (i = 1, 2, ..., n) is the self-checking operation record on day i.
S2, preprocessing the time sequence data set; performing feature engineering on data in the time sequence data set to convert the time sequence data set into a supervised learning data set, wherein the supervised learning data set is divided into a training set, a testing set and a verification set;
Data statistics are performed on all data in the time-series data set from step S1. The feature engineering comprises the number of times "ignore" was selected in the week before the current day, X^(1); the number of confirmations in the week before the current day, X^(2); the number of times no alarm occurred in the week before the current day, X^(3); and the statistic of each device in the week of the current day, X^(4), where the statistic is the most frequent operation type in the week of the current day;
X^(1), X^(2), X^(3) and X^(4) constitute the input features X_i for a day and form a supervised learning data set;
the supervised learning data set may be denoted as D = {(X_1, Y_1), (X_2, Y_2), ..., (X_m, Y_m)}, where D is the supervised learning data set and (X_m, Y_m) is a labeled sample;
the supervised learning data set is respectively composed of a training set, a verification set and a test set, and the training set is used for training the network model; the verification set is used for verifying the reliability of the model and adjusting network parameters; the test set is used to test the trained model.
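The sliding-window feature engineering of step S2 can be sketched as follows — a minimal illustration assuming each day's record for one device is a single category string (the label strings and function name are hypothetical; the four features mirror X^(1)–X^(4) from the description):

```python
from collections import Counter

def weekly_features(daily_categories):
    """Build one day's input features from the surrounding week's records.

    daily_categories: a list of 7 category strings for one device.
    Returns (ignore count, confirm count, no-alarm count, modal category),
    matching the features X^(1)..X^(4) described in step S2.
    """
    counts = Counter(daily_categories)
    x1 = counts["alarm_ignored"]          # X^(1): times "ignore" was selected
    x2 = counts["alarm_confirmed"]        # X^(2): times "confirm" was selected
    x3 = counts["no_alarm_shown"]         # X^(3): times no alarm occurred
    x4 = counts.most_common(1)[0][0]      # X^(4): most frequent operation type
    return x1, x2, x3, x4
```

Pairing each day's feature tuple with that day's recorded operation Y_i yields the labeled samples (X_i, Y_i) of the supervised learning data set D.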
S3, building a causal convolutional neural network model, loading training set data containing time sequence characteristics in the causal convolutional neural network model, and carrying out model training on the causal convolutional neural network model to enable the causal convolutional neural network model to capture the time sequence characteristics of a long time sequence;
as shown in fig. 2, the causal convolutional neural network model includes a residual block, a self-attention mechanism module, and a full connection layer, which are connected in sequence;
in this embodiment, 3 residual blocks are taken as an example, each residual block includes a causal convolution layer, a normalization layer, an activation function, and a Dropout layer, and a skip connection is introduced into the residual block, where the formula of the residual block is as follows:
x_i = x_{i-1} + f(x_{i-1});
where x_{i-1} is the output of the previous residual block, f(x_{i-1}) is the function applied by the operation layers of the current residual block, and x_i is the result of the current residual block.
The method for constructing the residual block in the step 3 comprises the following steps that the residual block comprises two residual modules with the same structure, the two residual modules are sequentially connected, and each residual module comprises:
(1) the first causal convolutional layer is divided into two layers, the expansion coefficients are 2 and 4 respectively, and the sizes of convolution kernels are 3;
(2) the second layer is a normalization layer whose input is connected to the output of the causal convolution layer; its formula is:
y = γ · (x − μ_B) / √(σ_B² + ε) + β;
where μ_B and σ_B² are the mean and variance of a batch of data, ε is a very small term greater than 0, and γ and β are network learning parameters;
(3) and the third ReLU activation function layer, wherein the input of the ReLU activation function layer is connected with the output of the normalization layer, and the activation function formula of the layer is as follows:
g(x)=max(0,x);
wherein x is the output value of the previous layer, and g (x) is the ReLU activation function;
(4) the fourth layer is a Dropout layer, which randomly severs half of the neuron connections; the input of the Dropout layer is connected to the output of the ReLU activation function layer, and its output is connected to the causal convolution layer of the next residual module. The Dropout layer reduces the size of the model and helps accelerate training.
The skip connection adds a channel between the start and the end of the residual block; this channel passes through a 1×1 convolution layer and its output is added to the result of the main channel.
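The two defining properties of the residual block — causal dilated convolution and the additive skip connection x_i = x_{i-1} + f(x_{i-1}) — can be sketched in numpy for a single 1-D channel (a minimal illustration, not the patent's TensorFlow implementation; function names and the fixed kernel are assumptions):

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation):
    """Causal dilated 1-D convolution: the output at step t depends only on
    x[t], x[t-d], x[t-2d], ... (left zero-padding), so no future time step
    leaks into the prediction — the key property of a causal convolution."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def residual_block(x, kernel, dilation):
    """x_i = x_{i-1} + f(x_{i-1}): the convolution result is added back onto
    the block's input through the skip connection."""
    return np.asarray(x, dtype=float) + causal_dilated_conv1d(x, kernel, dilation)
```

With the identity kernel [1, 0, 0] the convolution returns its input, so the residual block simply doubles the signal — a quick way to verify that the skip path and the main path are added as the formula states.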
The self-attention mechanism includes three channels, wherein the first channel is formulated as follows:
F_A(RB) = σ(f_{1×1}([MaxPooling(RB); AvgPooling(RB)]));
where RB is the result passed from the last residual block, MaxPooling and AvgPooling are the maximum and average pooling layers respectively, f_{1×1} is a convolution layer of size 1×1, σ is the sigmoid activation function, and ";" denotes a concatenation operation along the channel dimension;
the second channel passes through a 1×1 convolution layer and a normalization layer; after being multiplied by the first channel's result, it passes through a Softmax layer, which can be represented as:
C = f_1(x)^T · F_A(RB);
where F_A(RB) is the output of the first channel, f_1(x) is the processing function of the 1×1 convolution layer and the normalization layer, C is the product of the two channels, and S is the output of C after the Softmax layer;
the third channel passes only through a 1×1 convolution layer, and its output is multiplied by the combined result of the other two channels.
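The first attention channel, F_A(RB) = σ(f_{1×1}([MaxPooling(RB); AvgPooling(RB)])), can be sketched in numpy. Here the learned 1×1 convolution is reduced to a two-weight linear map over the pooled values (w_max and w_avg are illustrative stand-ins for learned parameters, not values from the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_first_channel(rb, w_max=0.5, w_avg=0.5):
    """First channel of the self-attention module.

    rb: residual block output, shape (time_steps, channels).
    Max- and average-pooling over time are concatenated, passed through a
    simplified 1x1 "convolution" (a weighted sum here), then squashed by a
    sigmoid so each channel gets an attention weight in (0, 1).
    """
    max_pooled = rb.max(axis=0)    # MaxPooling(RB), per channel
    avg_pooled = rb.mean(axis=0)   # AvgPooling(RB), per channel
    return sigmoid(w_max * max_pooled + w_avg * avg_pooled)
```

The sigmoid guarantees the result lies strictly between 0 and 1, which is what lets it act as a per-channel gating weight on the other two channels.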
And S4, inputting the verification set data into the trained causal convolutional neural network model for model performance verification, inputting the feature-engineered statistics for the week preceding the current day into the trained neural network model, and predicting the current day's self-checking operation records.
The self-checking operation records cover all alarm information operation types of each self-checking device and are provided to the relevant subway personnel in a single pass.
Those skilled in the art will appreciate that the methods of the above-described embodiments may be implemented in whole or in part by instructing the associated system with a software program that may be stored in a computer memory.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (9)
1. A one-key switching station self-checking optimization method for urban rail transit is characterized by comprising the following steps:
s1, constructing a time sequence data set; performing one-hot encoding on the historical self-checking operation records, converting the self-checking operation type from categorical information into a numerical representation, and forming a time-series data set;
s2, preprocessing the time sequence data set; performing feature engineering on the data in the time-series data set, converting it into a supervised learning data set, and dividing the supervised learning data set into a training set, a test set and a verification set;
s3, building a causal convolutional neural network model, loading the training set data containing time-series features into the causal convolutional neural network model, and training the model so that it can capture the temporal features of long time series;
and S4, inputting the verification set data into the trained causal convolutional neural network model for model performance verification, inputting the feature-engineered statistics for the week preceding the current day into the trained neural network model, and predicting the current day's self-checking operation records.
2. The urban rail transit one-key switching station self-checking optimization method according to claim 1, characterized in that: in step S1, the self-checking operation categories comprise: no alarm information displayed; alarm information displayed and confirmation selected; and alarm information displayed and ignore selected;
the one-hot coding converts the original feature variable in the historical self-checking operation records into a multi-dimensional variable over the original feature categories, quantifying each dimension as yes/no or present/absent with 0/1;
"no alarm information displayed" is converted into [0, 0];
"alarm information displayed and confirmation selected" is converted into [1, 0];
"alarm information displayed and ignore selected" is converted into [1, 1].
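The two-bit encoding above can be sketched directly; the mapping is taken from the claim text, while the category key names are hypothetical labels chosen for illustration.

```python
# Two-bit encoding of the three self-checking operation categories, as
# specified in claim 2. The first bit answers "was alarm information
# displayed?"; the second answers "was the alarm ignored?".
ENCODING = {
    "no_alarm_displayed":        [0, 0],
    "alarm_displayed_confirmed": [1, 0],
    "alarm_displayed_ignored":   [1, 1],
}

def encode(category):
    """Map an operation category to its two-dimensional 0/1 vector."""
    return ENCODING[category]
```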
3. The urban rail transit one-key switching station self-checking optimization method according to claim 2, characterized in that: in step S1, the encoded historical self-checking records are arranged in chronological order to form the time-series data set, which is expressed as:
S = {Y_1, Y_2, Y_3, ..., Y_n};
where S is the time-series data set and Y_i (i = 1, 2, ..., n) is the encoded self-checking operation record at time step i.
4. The urban rail transit one-key switching station self-checking optimization method according to claim 3, characterized in that: in step S2, data statistics are computed over all data in the time-series data set of step S1, and the feature engineering comprises the number of ignores X^(1) in the week before the current day, the number of confirmations X^(2) in the week before the current day, the number of no-alarm occurrences X^(3) in the week before the current day, and the statistic X^(4) of each device in the week of the current day, where the statistic is the most frequent operation type in that week;
X^(1), X^(2), X^(3) and X^(4) constitute the input features X_i of a day and, together with the labels, form the supervised learning data set, which is expressed as:
D = {(X_1, Y_1), (X_2, Y_2), ..., (X_m, Y_m)};
where D is the supervised learning data set and (X_m, Y_m) is a labeled sample.
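The weekly feature engineering of claim 4 can be sketched as below. The record format, field names, and category labels are assumptions for illustration; only the four statistics themselves come from the claim.

```python
# Hypothetical sketch of claim 4's feature engineering: count the ignore /
# confirm / no-alarm operations in the seven days before the current day,
# and take the most frequent operation type as the per-device statistic.
from collections import Counter

IGNORED, CONFIRMED, NO_ALARM = "ignored", "confirmed", "no_alarm"

def day_features(history):
    """history: list of operation-type strings from the week before the
    current day. Returns (X1, X2, X3, X4) as described in the claim."""
    counts = Counter(history)
    x1 = counts[IGNORED]    # X(1): times ignored in the past week
    x2 = counts[CONFIRMED]  # X(2): times confirmed in the past week
    x3 = counts[NO_ALARM]   # X(3): times with no alarm in the past week
    x4 = counts.most_common(1)[0][0] if history else NO_ALARM  # X(4): modal type
    return x1, x2, x3, x4

week = [IGNORED, CONFIRMED, IGNORED, NO_ALARM, IGNORED, CONFIRMED, NO_ALARM]
```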
5. The urban rail transit one-key switching station self-checking optimization method according to claim 1, characterized in that: the causal convolutional neural network model comprises residual blocks, a self-attention mechanism module and a fully connected layer, connected in sequence;
there are multiple residual blocks, each comprising a causal convolution layer, a normalization layer, an activation function and a Dropout layer; a skip connection is also introduced into each residual block, and the residual block formula is:
x_i = x_{i-1} + f(x_{i-1});
where x_{i-1} is the output of the previous residual block, f(x_{i-1}) is the function computed by the operation layers of the current residual block, and x_i is the output of the current residual block.
6. The urban rail transit one-key switching station self-checking optimization method according to claim 5, wherein in step S3 the residual block is built as follows: the residual block comprises two residual modules with the same structure, connected in sequence, and each residual module comprises:
(1) a first layer, a causal convolution layer split into two sub-layers with dilation factors 2 and 4 respectively, each with convolution kernel size 3;
(2) a second layer, a normalization layer whose input is connected to the output of the causal convolution layer, with the formula:
y = γ(x − μ_B)/√(σ_B² + ε) + β;
where μ_B and σ_B² are the mean and variance of a batch of data, ε is a very small term greater than 0, and γ and β are network learning parameters;
(3) a third layer, a ReLU activation function layer whose input is connected to the output of the normalization layer, with the activation function:
g(x) = max(0, x);
where x is the output value of the previous layer and g(x) is the ReLU activation function;
(4) a fourth layer, a Dropout layer that severs half of the neuron connections; its input is connected to the output of the ReLU activation function layer, and its output is connected to the causal convolution layer of the next residual module.
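The individual operations named in claim 6 can be illustrated in plain NumPy. This is a minimal sketch of the building blocks (dilated causal convolution, batch normalization, ReLU), not the patented architecture; the kernel, dilation, and input values are arbitrary examples.

```python
# Minimal NumPy sketch of claim 6's building blocks.
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation):
    """y[t] = sum_k kernel[k] * x[t - k*dilation]; left-pads with zeros so
    the output at time t never depends on inputs after t (causality)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(kernel[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """y = gamma * (x - mu_B) / sqrt(var_B + eps) + beta, per claim 6."""
    mu, var = x.mean(), x.var()
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def relu(x):
    """g(x) = max(0, x)."""
    return np.maximum(0.0, x)

x = np.arange(6, dtype=float)                          # toy input sequence
y = causal_dilated_conv1d(x, [1.0, 1.0], dilation=2)   # kernel size 2, d=2
```

With dilation 2 each output mixes the current step with the step two positions earlier, which is how stacked dilated causal convolutions widen the receptive field over long sequences.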
7. The urban rail transit one-key switching station self-checking optimization method according to claim 5, characterized in that: the skip connection adds a side channel between the beginning and end of the residual block, passing the input through a 1×1 convolution layer and adding the result to the main channel.
8. The urban rail transit one-key switching station self-checking optimization method according to claim 5, characterized in that: the self-attention mechanism comprises three channels, wherein the first channel is formulated as:
F_A(RB) = σ(f_{1×1}[MaxPooling(RB); AvgPooling(RB)]);
where RB is the output of the last residual block, MaxPooling and AvgPooling are the maximum and average pooling layers respectively, f_{1×1} is a convolution layer of size 1×1, σ is the sigmoid activation function, and ";" denotes concatenation along the channel dimension;
the second channel passes through a 1×1 convolution layer and a normalization layer, is multiplied with the first channel's result, and then passes through a Softmax layer, which can be expressed as:
C = f_1(x)^T · F_A(RB);
where F_A(RB) is the output of the first channel, f_1(x) is the processing function of the 1×1 convolution layer and the normalization layer, C is the product of the two channels, and S is the output of C through the Softmax layer;
the third channel passes only through a 1×1 convolution layer, and its result is multiplied with the output of the other two channels.
9. The urban rail transit one-key switching station self-checking optimization method according to claim 1, characterized in that: the self-checking operation records are the alarm-information operation types of the respective devices.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210722098.6A CN115081714A (en) | 2022-06-24 | 2022-06-24 | One-key switching station self-checking optimization method for urban rail transit |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115081714A (en) | 2022-09-20
Family
ID=83255089
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210722098.6A Pending CN115081714A (en) | 2022-06-24 | 2022-06-24 | One-key switching station self-checking optimization method for urban rail transit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115081714A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117218762A (en) * | 2023-11-09 | 2023-12-12 | 深圳友朋智能商业科技有限公司 | Intelligent container interaction control method, device and system based on machine vision |
CN117218762B (en) * | 2023-11-09 | 2024-02-09 | 深圳友朋智能商业科技有限公司 | Intelligent container interaction control method, device and system based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110263172B (en) | Power grid monitoring alarm information evenized autonomous identification method | |
CN110929898B (en) | Hydropower station start-stop equipment operation and maintenance and fault monitoring online evaluation system and method | |
CN111178598A (en) | Passenger flow prediction method and system for railway passenger station, electronic device and storage medium | |
CN110336375B (en) | Processing method and system for power grid monitoring alarm information | |
CN107016507A (en) | Electric network fault method for tracing based on data mining technology | |
CN115081714A (en) | One-key switching station self-checking optimization method for urban rail transit | |
CN110428109B (en) | Subway shield door fault interval time prediction model establishing and predicting method | |
CN104991549A (en) | Track circuit red-light strip default diagnosis method based on FTA and multilevel fuzzy-neural sub-networks | |
CN116700193A (en) | Factory workshop intelligent monitoring management system and method thereof | |
CN109583794A (en) | A kind of method of determining elevator failure time | |
CN113484749A (en) | Generator fault diagnosis and prediction method | |
CN114498934A (en) | Transformer substation monitoring system | |
CN115442212A (en) | Intelligent monitoring analysis method and system based on cloud computing | |
CN113205223A (en) | Electric quantity prediction system and prediction method thereof | |
CN117354171B (en) | Platform health condition early warning method and system based on Internet of things platform | |
CN111160537A (en) | Crossing traffic police force resource scheduling system based on ANN | |
CN117689214A (en) | Dynamic safety assessment method for energy router of flexible direct-current traction power supply system | |
CN113554298A (en) | Comprehensive evaluation and intelligent operation and maintenance method for deep underground subway station | |
CN115330012A (en) | Production accident prediction based on digital twin intelligent algorithm for safety production | |
CN116070129A (en) | Intelligent diagnosis system for hydropower centralized control accident | |
CN114584585A (en) | Industrial equipment self-diagnosis system and method based on Internet of things | |
CN115913891A (en) | Big data analysis-based advanced operation and maintenance system and operation and maintenance method | |
CN114398947A (en) | Expert system-based power grid fault automatic classification method and system | |
CN114167837A (en) | Intelligent fault diagnosis method and system for railway signal system | |
CN116167901B (en) | Fire emergency drilling method and system based on computer technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||