CN115062402A - Data-driven train level acceleration extraction method - Google Patents
- Publication number: CN115062402A
- Application number: CN202210536133.5A
- Authority: CN (China)
- Prior art keywords: learning, train, sampling, acceleration, data
- Prior art date: 2022-05-17
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F30/00—Computer-aided design [CAD] > G06F30/10—Geometric CAD > G06F30/15—Vehicle, aircraft or watercraft design
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F30/00—Computer-aided design [CAD] > G06F30/20—Design optimisation, verification or simulation > G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F2119/00—Details relating to the type or aim of the analysis or the optimisation > G06F2119/10—Noise analysis or noise optimisation
Abstract
The invention relates to a data-driven train level acceleration extraction method and belongs to the technical field of computers and information science. The method first optimizes the learning order of the samples by exploiting the different loss-change patterns of noisy and normal samples during training. Self-sampling learning is then introduced into AdaBoost and into gradient boosting learning to obtain the robust learning methods SSAdaBoost and SSGB; these are used to establish a performance estimation model for the target train, mine train-specific performance knowledge from noisy operation data, and fit the mapping function between acceleration and features such as speed, level sequence (an implicit delay feature) and gradient. Finally, 'query samples' are used to control the influence of features such as level sequence and delay, a performance table is established, and the quantitative relation between target features and labels is extracted. This addresses the problems of existing methods, in which the recommended speed is hard to track and frequent switching of the control level causes extra energy consumption. The extracted performance is close to the actual performance and can be used to establish performance constraints matched to the controlled train, improving the recommended-speed optimization effect.
Description
Technical Field
The invention relates to a data-driven train level acceleration extraction method, and belongs to the technical field of computers and information science.
Background
The development of rail transit is an effective way to relieve the travel problems of residents in modern large cities and an important means of building green, intelligent cities. Conventional rail transit includes traditional railways (national, intercity and urban railways), subways, light rail and trams; novel rail transit includes magnetic levitation systems, monorail systems (straddle-type and suspended), automated people movers and the like. With the diversified development of train and railway technologies, rail transit appears in more and more forms, serving not only long-distance land transportation but also medium- and short-distance urban public transportation. Extracting train performance is therefore of great significance for real-time monitoring of train running states and for guaranteeing safe operation.
Train operation data mainly comprise the speed, recommended speed and level acquired by the on-board system, together with information such as position, gradient, curve and target point position fed back by the CBTC system, stored cycle by cycle as formatted logs. In the motion-state estimation problem, the acceleration is obtained by differencing the speeds of adjacent sampling points and is used as the supervised-learning label to mine the pattern of train motion-state change under the influence of the current speed, gradient and level.
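As a concrete illustration of this label-extraction step, the sketch below differences the speeds of adjacent sampling points to obtain per-cycle acceleration labels; the field layout (timestamps plus speeds per log cycle) is an assumption for illustration.

```python
import numpy as np

def acceleration_labels(t, v):
    """Label extraction as described above: difference the speeds of
    adjacent sampling points to obtain per-cycle accelerations.

    t : cycle timestamps in seconds
    v : sampled speeds in m/s
    Returns one acceleration (m/s^2) per adjacent pair of samples.
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.diff(v) / np.diff(t)

# Example: 200 ms control cycles; measurement fluctuation in v
# propagates into the labels, which is exactly the tag noise the
# robust learning methods below must handle.
t = [0.0, 0.2, 0.4, 0.6]
v = [10.00, 10.12, 10.27, 10.35]      # m/s
print(acceleration_labels(t, v))      # [0.6, 0.75, 0.4] m/s^2
```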
During operation the train is affected by factors such as sensor errors, communication packet loss and processing delay, so the sampled data are prone to fluctuation, jump points and gaps; this introduces errors into the acceleration calculation, i.e. label noise. The recommended speed is an ideal running speed generated by optimizing indexes such as running efficiency, energy consumption and comfort, subject to constraints such as speed limits and the timetable. Existing recommended-speed optimization methods are generally based on continuous or discrete assumptions and focus on the energy-consumption term of the control objective. For the level-based speed control system commonly used on trains, the discrete assumption can account for train delay and acceleration discreteness to some extent, but existing methods cannot analyze and extract the train's performance: the optimization process remains independent of the controlled object, the recommended acceleration may fall outside the train's feasible acceleration region, levels are switched frequently during operation, and unexpected energy consumption results.
Disclosure of Invention
The invention aims to solve the problems that existing methods do not consider matching the recommended acceleration to the train's level accelerations, making the recommended speed hard to track, causing frequent switching of the control level and generating extra energy consumption. To this end, a data-driven train level acceleration extraction method is provided.
The design principle of the invention is as follows. First, the reliability and learning difficulty of each sample are estimated from the different loss-change patterns of noisy and normal samples during training, and the learning order of the samples is optimized. Self-sampling learning is then introduced into AdaBoost and gradient boosting learning respectively, yielding the robust learning methods SSAdaBoost and SSGB; a performance estimation model is established for the target train, train-specific performance knowledge is mined from noisy operation data, and the mapping function between acceleration and features such as speed, level sequence (an implicit delay feature) and gradient is fitted. Finally, an association-relation extraction method based on feature control is proposed: 'query samples' control the influence of features such as level sequence, delay and gradient, the acceleration value corresponding to each level is queried in turn for different speed intervals, a train-specific performance table is established, and the quantitative relation between target features and labels is extracted.
The technical scheme of the invention is realized by the following steps:
Step 1.1, estimate the reliability and learning difficulty of samples from the patterns in which the loss function values of noisy and normal samples change during training of the learning model, including their absolute values, change directions and change speeds.
Step 1.2, adjust the training sample subset in each iteration, optimizing the sample learning order while eliminating noise points.
Step 2, introduce self-sampling learning into AdaBoost and gradient boosting learning, establishing the SSAdaBoost and SSGB methods.
Step 2.1, establish a performance estimation model for the target train, mine train-specific performance knowledge from noisy operation data, and fit the mapping function between acceleration and features such as speed, level sequence (an implicit delay feature) and gradient.
Step 3, use 'query samples' to control the influence of features such as level sequence, delay and gradient, and query in turn the acceleration value corresponding to each level in different speed intervals.
Step 3.1, establish a train-specific performance table to extract the quantitative relation between target features and labels.
Advantageous effects
Compared with the original AdaBoost sample weight update, SSAdaBoost changes the weights to $v_i w_i$, which effectively limits weight growth. By screening the training set, the learner trains on more stable, low-noise samples; compared with SPLBoost, self-sampling learning avoids misjudging and discarding hard but non-noisy samples and requires no prior knowledge of the sample learning order. Compared with the original AdaBoost, the method adds two sampling-control hyper-parameters, and its computational cost $M \times O(3kn + 3n)$ is close to that of the original method, $M \times O(3kn + 2n)$.
As the running speed of the train increases, the actual acceleration of the same level may change. The basic rules are: 1) the actual acceleration produced by a traction level decreases as speed rises, and the decrease is especially pronounced at high speed; 2) the actual deceleration produced by the coasting level increases as speed rises, driven mainly by speed-dependent resistance; 3) the actual deceleration produced by a brake level increases with speed, with a relatively small variation. Before trains of the same type go into service, a first trial run can be performed with a representative vehicle: the actual acceleration of each level in each speed interval is tested on a controlled test line and a complete performance table is drawn up.
Drawings
FIG. 1 is a schematic diagram of the data-driven train level acceleration extraction method of the invention.
FIG. 2 shows model error-rate statistics of self-sampling boosting learning on the UCI tasks.
FIG. 3 shows experimental results of self-sampling boosting learning on the large-scale real tasks.
FIG. 4 compares simulation effects based on the knowledge-extraction performance table.
Detailed Description
To better illustrate the objects and advantages of the invention, embodiments of the method are described in further detail below with reference to examples.
The experiments mainly verify the robustness and modeling performance of the SSAdaBoost and SSGB self-sampling boosting algorithms, and the application effect of the proposed model interpretation method, which performs feature control through 'query samples', in the train performance extraction task.
(1) The self-sampling boosting learning performance of SSAdaBoost and SSGB is evaluated on 70 UCI public data sets with different noise levels and 3 large-scale real task data sets. The comparison algorithms comprise 6 high-performance boosting algorithms, AdaBoost, LogitBoost, GentleBoost, RBoost, CB-AdaBoost and GradientBoost, covering existing advanced robust boosting methods. The robustness of the algorithm directly affects how well the model recognizes train performance knowledge in noisy operation data, and is the premise of accurate performance extraction and performance-adaptive recommended-speed optimization.
In the experiment, noise levels range from 0% to 30% in 5% steps, and 7 sub-data sets are established for each task, so the UCI data modeling experiment comprises 70 data sets in total. Noise is added only to the training and validation sets, not to the test sets, ensuring the reliability of the test results. The 560 groups of experimental results are divided by task, and curves of modeling error rate against increasing noise are drawn for the different algorithms to ease observation.
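The patent does not state the noise model used; a minimal sketch of one common choice for binary labels, uniform label flipping at the configured rate, applied only to the training/validation splits as described above:

```python
import numpy as np

def inject_label_noise(y, rate, seed=0):
    """Flip the labels of a `rate` fraction of samples (y in {-1, +1}).

    Used here to build the 7 sub-data sets per task at noise levels
    0%, 5%, ..., 30%; an illustrative noise model, not necessarily
    the patent's exact procedure.
    """
    rng = np.random.default_rng(seed)
    y_noisy = np.asarray(y).copy()
    n_flip = int(round(rate * len(y_noisy)))
    idx = rng.choice(len(y_noisy), size=n_flip, replace=False)
    y_noisy[idx] = -y_noisy[idx]
    return y_noisy

# Seven noise levels per task, applied to train/validation labels only.
levels = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
```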
The experiment takes supervised binary classification tasks as the scenario. Because some task data suffer from label imbalance, the error rate, accuracy, F1 score, and the Nemenyi test of the significance of performance differences among algorithms are adopted as evaluation indexes; their calculation is given in formulas (1)-(4).
In the confusion matrix, TP (true positive) denotes the number of positive examples predicted as positive; FN (false negative) the number of positive examples predicted as negative; FP (false positive) the number of negative examples predicted as positive; and TN (true negative) the number of negative examples predicted as negative. With these definitions, the evaluation indexes of the experiment are calculated as:

$$\mathrm{ErrorRate} = \frac{FP + FN}{TP + FP + TN + FN} \qquad (1)$$

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN} \qquad (2)$$

$$\mathrm{F1} = \frac{2\,TP}{2\,TP + FP + FN} \qquad (3)$$
the Nemenyi inspection method belongs to a Post-Hoc Test (Post-Hoc Test) method and is used for performing pairwise inspection on significance of algorithm performance difference, the inspection index is an error rate, if the significance is provided, the two algorithms have obvious performance difference, and if the significance is not provided, the performance is similar. Wherein, the Critical Difference value (CD) is determined by the number K of algorithms and the number N of experimental groups of a single algorithm:
wherein the critical value q α Based on student Range statistics divided byAnd obtaining, and referring to the experimental process for a detailed parameter setting method.
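A minimal sketch of the critical-difference computation; the $q_\alpha$ values in the table are the standard Nemenyi critical values at $\alpha = 0.05$ (Studentized range statistic divided by $\sqrt{2}$), and the example values of $K$ and $N$ are illustrative.

```python
import math

# Nemenyi critical values q_alpha at alpha = 0.05, indexed by the
# number of compared algorithms K (standard table values).
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728,
              6: 2.850, 7: 2.949, 8: 3.031}

def critical_difference(k, n, q_table=Q_ALPHA_05):
    """CD = q_alpha * sqrt(K(K+1) / (6N)), formula (4) above."""
    return q_table[k] * math.sqrt(k * (k + 1) / (6.0 * n))

# Illustrative example: the two proposed methods plus the 6 baselines
# give K = 8 algorithms, compared over N = 70 UCI data sets.
print(critical_difference(8, 70))  # ~1.26
```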
In the experiments, the depth of the weak-learner trees is set to 2 and the number of weak learners to 50; for the hyper-parameters of SSAdaBoost, the decay coefficient $\delta$ is tuned within $[0.9, 1]$ and the sampling proportion $\mu$ within $[0, 0.4]$.
(2) The application effect of the proposed model interpretation method, which performs feature control through 'query samples', is verified in the train performance extraction task. Because the actual acceleration of different levels in each speed interval of a train is not easy to measure, a true performance table cannot be obtained for direct comparison with the extracted one, so the experiment uses a conversion method: based on a physical simulation model, the extracted performance table is used as the train's level acceleration parameters and the simulated operation effect is analyzed; the closer the simulation result is to the actual operation result, the closer the extracted performance is to the actual performance. A simulation model built from the train's delivery performance table is also used to compare and analyze the accuracy of the extracted table. The experimental data are operation data of trains on the Xi'an Metro Airport Line and Hefei Metro Line 3.
In the experiment, the 'conversion method' is adopted to analyze the simulation precision of the physical model based on the extracted performance table, which equivalently evaluates the performance-extraction effect. This effect depends on the train performance estimation model, so the acceleration estimation model established by SSGB must itself be evaluated. The model output is a continuous variable, and the mean squared error (MSE), a typical regression index with high sensitivity, is adopted for evaluation:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i\big)^2$$

where $y_i$ denotes the true label value of the $i$-th test sample, $\hat{y}_i$ the estimated label value output by the model, and $n$ the number of test samples. MSE measures the estimation error of the model on the test set, and the result is representative of similar indexes such as mean absolute error and root mean squared error.
The performance-extraction effect is analyzed by comparing the similarity between the simulation curve obtained from the extracted performance table and the real curve, adopting indexes such as the curve error area $AUC_{err}$, the end-point velocity error $v_{err}$ and the end-point position error $s_{err}$.
During the experiment, curves are drawn for qualitative similarity analysis; the compared objects include the real speed-position curve, the simulation curve based on the extracted performance table, and the simulation curve based on the delivery performance table.
The experiments are carried out on an MSI Prestige desktop computer with an Intel Core i7-10700K eight-core sixteen-thread CPU at a 3.8 GHz base frequency, 32 GB of physical memory at 2400 MHz, a GeForce RTX 2080 SUPER graphics card with 8 GB of dedicated memory, and 64-bit Windows 10 as the operating system.
The specific process of the experiment is as follows:
Step 1.1, estimate the reliability and learning difficulty of samples from the patterns in which the loss function values of noisy and normal samples change during training of the learning model, including their absolute values, change directions and change speeds.
Specifically, taking the binary classification task as an example, assume the training set contains $n$ samples $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i \in \mathbb{R}^d$ is the feature vector of the $i$-th sample and $y_i \in \{-1, 1\}$ is its label. A generic supervised learning process builds a classification model on this training set by optimizing

$$\min_{\Theta}\; \sum_{i=1}^{n} L\big(y_i, F(x_i; \Theta)\big)$$
where $L\big(y_i, F(x_i; \Theta)\big)$ denotes the loss function of the algorithm. The strong learner in boosting learning is a weighted combination of weak learners, $F_m(x_i; \Theta) = \sum_{j=1}^{m} c_j f_j(x_i)$; for simplicity, $F_m(x_i; \Theta)$ is written as $F_m$ below. Self-sampling learning attends to both the training effect and the training speed of each sample by adding a self-sampling regularizer over binary sample weights $v \in \{0,1\}^n$ to this objective, wherein the self-sampling coefficient $\lambda$ determines the scale of the next iteration's training sample set and the balance coefficient $\alpha$ controls the strength of the two regularization terms.
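A minimal sketch of the sample-selection idea under stated assumptions: learning effect is characterized by the current loss value and learning speed by the loss change across iterations, and the score combination and thresholding rule below are illustrative rather than the patent's exact regularizer.

```python
import numpy as np

def self_sampling_weights(loss_now, loss_prev, lam, alpha):
    """Binary self-sampling weights v in {0,1}.

    loss_now  : per-sample loss at the current iteration (learning effect)
    loss_prev : per-sample loss at the previous iteration
    lam       : self-sampling coefficient -> scale of the next subset
    alpha     : balance coefficient between the two criteria

    A sample is kept when its combined score (current loss balanced
    against rising loss, i.e. slow or unstable learning) falls below
    the threshold lam. One plausible instantiation, shown for
    illustration only.
    """
    loss_now = np.asarray(loss_now, dtype=float)
    loss_prev = np.asarray(loss_prev, dtype=float)
    speed = np.maximum(loss_now - loss_prev, 0.0)   # rising loss: noise-like
    score = alpha * loss_now + (1.0 - alpha) * speed
    return (score <= lam).astype(int)

v = self_sampling_weights([0.2, 1.5, 0.4], [0.3, 1.2, 0.9],
                          lam=0.5, alpha=0.5)
print(v)  # sample 2 (high, rising loss) is dropped -> [1 0 1]
```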
Step 1.2, adjust the training sample subset in each iteration, optimizing the sample learning order while eliminating noise points.
Step 2, introduce self-sampling learning into AdaBoost and gradient boosting learning, establishing the SSAdaBoost and SSGB methods.
Self-sampling learning is introduced into AdaBoost to establish the SSAdaBoost method. AdaBoost builds an incremental logistic regression model based on the exponential loss, constructing a new weak learner $f(x)$ in each iteration by optimizing

$$\min_{c,\,f}\; \sum_{i=1}^{n} \exp\!\big(-y_i\,[F(x_i) + c\,f(x_i)]\big) \qquad (10)$$
adding a self-sampling regular into an SSAdaBoost training target, and constraining a weak learner training sample set:
where $\alpha$ is the balance coefficient, $\lambda$ the sampling-rate coefficient and $R(v;\,\alpha,\lambda)$ the self-sampling regularizer; the problem can be solved by alternately updating $F(x)$ and the self-sampling weights $v$. The method takes the strong learner as the optimization object: updating $F(x)$ means training a new weak learner $f(x)$ and combining it into the strong learner with a weight.
In each iteration $v$ is first fixed, so the regularizer term of the objective becomes a constant and the optimization problem (11) reduces to a weighted exponential-loss minimization:

$$\min_{c,\,f}\; \sum_{i=1}^{n} v_i \exp\!\big(-y_i\,[F(x_i) + c\,f(x_i)]\big) \qquad (12)$$
and 2.1, establishing a target train performance estimation model, mining special train performance knowledge in noisy operation data, and fitting a mapping function between acceleration and characteristics such as speed, a rank sequence (implicit delay characteristic) and gradient.
This optimization problem is similar to the AdaBoost primal objective (10) and can be solved with the same method: an incremental logistic regression model minimizing $E\big(v\,e^{-yF(x)}\big)$ is established through a quasi-Newton step. Expanding to second order, and using $y, f(x) \in \{-1,1\}$ so that $y^2 f(x)^2 = 1$:

$$E\big(v\,e^{-y[F(x) + c f(x)]}\big) \approx E\Big(v\,e^{-yF(x)}\,\big[1 - y\,c\,f(x) + c^2/2\big]\Big)$$
Minimizing the above pointwise with respect to $f(x) \in \{-1,1\}$ and $c > 0$ yields a weak learner trained under the sample weights $v_i w_i$, where $w_i = e^{-y_i F(x_i)}$ is the original AdaBoost sample weight, and the second-order approximate minimizer is still taken over $f(x_i) \in \{-1,1\}$. The new weak learner $f$ can thus be trained from these sample weights while keeping the calculation scheme of the original algorithm. Since the objective $L(F + c\,f)$ is convex in $c$, the weak-learner weight $c$ is obtained by setting its derivative to zero:

$$c = \frac{1}{2}\ln\frac{1 - \mathrm{err}}{\mathrm{err}}, \qquad \mathrm{err} = \frac{\sum_{i=1}^{n} v_i\,w_i\,\mathbb{I}\big[y_i \neq f(x_i)\big]}{\sum_{i=1}^{n} v_i\,w_i}$$
Subsequently, $c$ and $f$ are fixed and the self-sampling weights $v$ in problem (11) are solved. As the form of the solution shows, the proposed method estimates the reliability of each sample from its learning effect, characterized by the current loss value, and its learning speed, characterized by the loss change rate across iterations, and selects samples accordingly, improving the quality of the training set.
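A compact sketch of the alternating scheme under stated assumptions: depth-2 decision trees as weak learners, the standard AdaBoost update formulas shown above, and a simple rule that keeps the $(1-\mu)$ fraction of samples with the smallest exponential loss standing in for the regularizer-driven update of $v$:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ssadaboost_fit(X, y, m_rounds=50, mu=0.2, depth=2):
    """Sketch of SSAdaBoost: AdaBoost trained on a self-sampled subset.

    y must be in {-1, +1}. Each round fits the weak learner with the
    weights v_i * w_i derived above, then updates v by keeping the
    (1 - mu) fraction of samples with the smallest exponential loss
    (an illustrative stand-in for the regularized v-step).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = len(y)
    F = np.zeros(n)                        # strong-learner margin scores
    v = np.ones(n)                         # self-sampling weights in {0,1}
    learners, coefs = [], []
    for _ in range(m_rounds):
        w = np.exp(-y * F)                 # original AdaBoost weights w_i
        f = DecisionTreeClassifier(max_depth=depth)
        f.fit(X, y, sample_weight=v * w)   # weighted exponential-loss step
        pred = f.predict(X)
        sw = v * w
        err = np.clip(sw[pred != y].sum() / sw.sum(), 1e-10, 1 - 1e-10)
        c = 0.5 * np.log((1 - err) / err)  # weak-learner weight
        F += c * pred
        loss = np.exp(-y * F)              # v-step: threshold the loss
        v = (loss <= np.quantile(loss, 1.0 - mu)).astype(float)
        learners.append(f)
        coefs.append(c)
    return learners, coefs
```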
Self-sampling learning is also introduced into gradient boosting learning, giving the SSGB method. Gradient boosting minimizes the target loss in a gradient-descent-like manner; the weak learner is trained on the negative gradients and its weight obtained by line search:

$$f_m = \arg\min_{f} \sum_{i=1}^{n} \big(r_{im} - f(x_i)\big)^2, \qquad r_{im} = -\left[\frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)}\right]_{F = F_{m-1}}, \qquad c_m = \arg\min_{c} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + c\,f_m(x_i)\big)$$
the derivation and optimization process of SSGB is very similar to that of ssagaboost, and its hyper-parameters can also be determined using the same strategy.
Step 3, use 'query samples' to control the influence of features such as level sequence, delay and gradient, and query in turn the acceleration value corresponding to each level in different speed intervals.
Assume the model fits a functional mapping between features $(x_a, x_b, x_c)$ and label $y$, and it is known that when $x_b = c_b$ and $x_c = c_c$ these features have no effect on the label $y$ (for example, $x_b$ represents the correction gradient and $y$ acceleration: the specific relation between a non-zero correction gradient and acceleration is unknown, but $x_b = 0$ has no influence on the other features or on the acceleration, so $c_b = 0$). Then $(q_a, c_b, c_c)$ is defined as a query sample, which can be used to extract the label value $y$ corresponding to $x_a = q_a$.
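A sketch of the feature-controlled extraction under stated assumptions: `model` stands for a fitted regressor (such as the SSGB acceleration estimator) mapping (speed, level, correction gradient) to acceleration, and the neutral value and grids are illustrative.

```python
import numpy as np

def extract_performance_table(model, speeds, levels, neutral_gradient=0.0):
    """Build the train-specific performance table by querying the model.

    For each (speed, level) pair a 'query sample' is formed with the
    remaining feature pinned to a value known to have no effect on the
    label (here: correction gradient = 0), so the returned value
    isolates the quantitative speed/level -> acceleration relation.
    """
    table = np.zeros((len(speeds), len(levels)))
    for i, v in enumerate(speeds):
        for j, lvl in enumerate(levels):
            q = np.array([[v, lvl, neutral_gradient]])  # query sample
            table[i, j] = model.predict(q)[0]
    return table

# Illustrative grids: speed intervals in km/h, and levels from full
# braking (-4) through coasting (0) to full traction (+4).
speeds = np.arange(0, 85, 5)
levels = np.arange(-4, 5)
# table = extract_performance_table(ssgb_model, speeds, levels)
```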
Step 3.1, establish a train-specific performance table to extract the quantitative relation between target features and labels.
Test results: based on real data of the Xi'an Metro Airport Line, accurate motion simulation was performed by combining the automatic driving system with the proposed self-sampling boosting learning, and the performance-adaptive recommended-speed optimization effect was tested. The results show that, compared with existing methods, SSAdaBoost significantly improves model robustness and models the delayed, noisy operation data accurately, and the train performance extracted by the method is close to the actual performance.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (6)
1. The data-driven train level acceleration extraction method is characterized by comprising the following steps of:
step 1, for noisy tabular data, a self-sampling learning method (SSL) is proposed: the reliability and learning difficulty of samples are estimated from the patterns in which the loss function values of noisy and normal samples change during training of the learning model, including their absolute values, change directions and change speeds; the training sample subset is adjusted in each iteration, optimizing the sample learning order while eliminating noise points;
step 2, self-sampling learning is introduced into AdaBoost and gradient boosting learning, establishing the SSAdaBoost and SSGB methods; a performance estimation model is established for the target train, train-specific performance knowledge is mined from noisy operation data, and the mapping function between acceleration and features such as speed, level sequence (an implicit delay feature) and gradient is fitted;
step 3, 'query samples' are used to control the influence of features such as level sequence, delay and gradient, the acceleration value corresponding to each level is queried in turn for different speed intervals, and a train-specific performance table is established, extracting the quantitative relation between target features and labels.
2. The data-driven train-level acceleration extraction method of claim 1, characterized in that: in step 1, the present invention is directed to noisy tabular data.
3. The data-driven train-level acceleration extraction method of claim 1, characterized in that: in step 2, the SSAdaBoost and SSGB methods are established by introducing self-sampling learning into AdaBoost and gradient boosting learning respectively; self-sampling learning attends to the training effect and the training speed of each sample through a self-sampling regularizer over binary sample weights, wherein the self-sampling coefficient $\lambda$ determines the scale of the next iteration's training sample set and the balance coefficient $\alpha$ controls the strength of the two regularization terms.
4. The data-driven train-level acceleration extraction method of claim 1, characterized in that: in step 2, a self-sampling regularizer is added to the SSAdaBoost training objective to constrain the weak learner's training sample set:

$$\min_{c,\,f,\,v \in \{0,1\}^n}\; \sum_{i=1}^{n} v_i \exp\!\big(-y_i\,[F(x_i) + c\,f(x_i)]\big) + R(v;\,\alpha,\lambda)$$

wherein $\alpha$ is the balance coefficient, $\lambda$ the sampling-rate coefficient and $R(v;\,\alpha,\lambda)$ the self-sampling regularizer; the problem is solved by alternately updating $F(x)$ and the self-sampling weights $v$, taking the strong learner as the optimization object, where updating $F(x)$ means training a new weak learner $f(x)$ and combining it into the strong learner with a weight; the inputs of the method are the training sample set $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, the number of training iterations $M$, the sampling proportion $\mu$, the balance coefficient $\alpha = 0.5$ and the decay coefficient $\delta$, and the output is the strong classifier $F_M(x)$.
5. The data-driven train-level acceleration extraction method of claim 1, characterized in that: in step 3, self-sampling learning is introduced into gradient boosting learning, giving the SSGB method; gradient boosting minimizes the target loss in a gradient-descent-like manner, with the weak learner trained on the negative gradients and its weight obtained by line search:

$$f_m = \arg\min_{f} \sum_{i=1}^{n} \big(r_{im} - f(x_i)\big)^2, \qquad r_{im} = -\left[\frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)}\right]_{F = F_{m-1}}, \qquad c_m = \arg\min_{c} \sum_{i=1}^{n} L\big(y_i, F_{m-1}(x_i) + c\,f_m(x_i)\big)$$

the inputs of the method are the training sample set $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, the number of training iterations $M$, the sampling proportion $\mu$, the balance coefficient $\alpha = 0.5$ and the decay coefficient $\delta$, and the output is the strong classifier $F_M(x)$.
6. The data-driven train-level acceleration extraction method of claim 1, characterized in that: in step 3, the model is assumed to fit a functional mapping between features $(x_a, x_b, x_c)$ and label $y$, and it is known that when $x_b = c_b$ and $x_c = c_c$ these features have no effect on the label $y$ (for example, $x_b$ represents the correction gradient and $y$ acceleration: the specific relation between a non-zero correction gradient and acceleration is unknown, but $x_b = 0$ has no influence on the other features or on the acceleration, so $c_b = 0$); then $(q_a, c_b, c_c)$ is defined as a query sample, which can be used to extract the label value $y$ corresponding to $x_a = q_a$.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210536133.5A | 2022-05-17 | 2022-05-17 | Data-driven train level acceleration extraction method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115062402A | 2022-09-16 |
Cited By (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN115600044A | 2022-11-28 | 2023-01-13 | Hunan University | River section flow calculation method, device, equipment and storage medium |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |