EP4066219A1 - Method for maintenance of a fleet of a plurality of vehicles - Google Patents

Method for maintenance of a fleet of a plurality of vehicles

Info

Publication number
EP4066219A1
Authority
EP
European Patent Office
Prior art keywords
fleet
vehicle
vehicles
global model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19861267.3A
Other languages
German (de)
French (fr)
Inventor
Aurélien MAYOUE
Peter Hauser
Henri-Nicolas Olivier
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commissariat à l'Energie Atomique et aux Energies Alternatives CEA
Carfit Corp
Original Assignee
Commissariat à l'Energie Atomique CEA
Commissariat à l'Energie Atomique et aux Energies Alternatives CEA
Carfit Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commissariat à l'Energie Atomique CEA, Commissariat à l'Energie Atomique et aux Energies Alternatives CEA and Carfit Corp
Publication of EP4066219A1

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/006 Indicating maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/008 Registering or indicating the working of vehicles communicating information to a remotely located station
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0808 Diagnosing performance data
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0816 Indicating performance data, e.g. occurrence of a malfunction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Definitions

  • the invention relates to the field of monitoring the wear of vehicles of a fleet. More particularly, the present invention is based on MEMS sensors (containing at least a 3-axis accelerometer, a processor and memory units) embedded in each vehicle and a machine learning setting to provide maintenance alerts.
  • the solution of the proposed invention aims to provide each user with maintenance alerts concerning mechanical issues affecting the undercarriage parts of a vehicle such as tires, wheels, shock absorbers and brakes. More precisely, the solution is based on MEMS sensors and analyses vehicle vibrations to track essential information about tires, wheels, shock absorbers and brakes, and provides the user with maintenance alerts about mechanical issues (wheel imbalance, tire wear, brake pad wear...). Speed knowledge is needed to track mechanical issues, and the analysis of the car's vibrations also makes it possible to estimate the speed of the vehicle in real time. Therefore the solution also provides the user with information about the speed profile and distance traveled for each drive. These pieces of information can be gathered to provide an ID card of a vehicle.
  • mechanical issues detection and speed profile estimation are based on statistical models provided by a machine learning setting.
  • the present invention aims to provide a solution based on a decentralized optimization procedure which makes it possible to train a global model without the need to transfer data from the sensor to the central server.
  • each individual endpoint uses its local data to contribute to the training of a global model with the possibility of improving its specific model.
  • the solution of the present invention leverages data variability from all user devices to improve the robustness of the global model on the one hand, and data specificity from each user device to improve the accuracy of each model at the edge on the other hand. This process can run continually, self-adapting to new vehicles.
  • the subject of the invention is a method, implemented by computer, for maintenance of a fleet of a plurality of vehicles, each vehicle of the fleet comprising a MEMS sensor, a processor and a memory, wherein the method comprises a first phase of machine learning of a global model and a second phase of using the global model on each vehicle of the fleet, the first phase of machine learning comprising the following steps:
  • the step of personalization of the global model of at least one among the plurality of vehicles of the fleet is performed during the first phase of machine learning of the global model.
  • the step of personalization of the global model of the at least one among the plurality of vehicles of the fleet may be performed based on a distribution of the data locally collected, local performances or an iteration of the learning process.
  • the updated local parameters of the personalized global model of the at least one among the plurality of vehicles of the fleet may be excluded from the step of transferring the updated local parameters to the central server.
  • the data locally collected by the MEMS sensor of each vehicle is the instantaneous acceleration of said vehicle.
  • the invention also relates to a device for maintenance of a fleet of a plurality of vehicles, the device comprising means for implementing the steps of the method of the invention.
  • the invention also relates to a computer program product, said computer program comprising code instructions making it possible to perform the steps of the method of the invention, when said program is run on a computer.
  • FIG. 1 schematically represents a block diagram with the steps of a method for maintenance of a fleet of a plurality of vehicles according to the invention
  • FIG. 2 schematically represents the steps of the first phase of machine learning of a global model of the method for maintenance of a fleet of a plurality of vehicles according to the invention
  • FIG. 3 schematically represents the required materials for a vehicle of the fleet during the first phase of machine learning and during the second phase of using the model on the vehicle according to the invention
  • FIG. 5 represents the local accuracy of the model for the vehicles of the fleet and the performance gain or loss for each vehicle based on the first phase of the method of the invention
  • FIG. 7 represents the relative errors of the estimation by the shared model of the invention, without and with the step of personalization, of the distance travelled by the fleet of vehicles;
  • FIG. 9 represents the performances about estimation of distance travelled by the fleet of vehicles using regression models, with the shared model and with the personalized model according to the invention.
  • FIG. 10 represents the duration of the acquisitions for various vehicles of the fleet, a first part of the acquisition without any mechanical issues and a second part with an imbalance trouble;
  • FIG. 13 represents a general diagram of an artificial neural network.
  • FIG. 1 schematically represents a block diagram with the steps of a method for maintenance of a fleet of a plurality of vehicles according to the invention.
  • the method for maintenance of a fleet of a plurality of vehicles is implemented by computer.
  • Each vehicle of the fleet comprises a MEMS sensor, a processor and a memory.
  • the method comprises a first phase 10 of machine learning of a global model and a second phase 20 of using the global model on each vehicle of the fleet.
  • the first phase 10 of machine learning comprises the following steps. First of all, the first phase 10 comprises a step 100 of computing the global model based on initial data on a central server, thus providing initial global parameters of the global model.
  • the first phase 10 comprises a step 102 of uploading the global model with the initial global parameters on the memory of each vehicle of the fleet.
  • the first phase 10 comprises a step 104 of updating by the processor of said vehicle the initial global parameters based on data locally collected by the MEMS sensor and stored in the memory of said vehicle, thus providing updated local parameters of said vehicle.
  • the data locally collected by the MEMS sensor 40 of each vehicle is the instantaneous acceleration of said vehicle.
  • the first phase 10 further comprises, for at least one among the plurality of vehicles of the fleet, a step 106 of transferring the updated local parameters to the central server.
  • the first phase 10 comprises a step 108 of updating the global model based on the transferred updated local parameters by an aggregation algorithm on the central server.
  • the step 104 of updating and then 106 of transferring the updated local parameters to the central server may be performed for each vehicle of the fleet or only for part of the fleet.
  • the steps 102, 104, 106, 108 of the first phase 10 are performed until a predetermined convergence criterion is reached.
  • the method of the invention further comprises a step 110 of personalization of the global model of at least one among the plurality of vehicles of the fleet, the personalization of the global model being based on the data locally collected by the MEMS sensor and stored in the memory of said vehicle.
  • the method of the invention comprises, for each vehicle of the fleet, a step 112 of sending an alarm signal to a user of said vehicle, if a deviation between a feature obtained from the global model and a predefined feature is determined.
  • the step 110 of personalization of the global model of at least one among the plurality of vehicles of the fleet is performed during the first phase 10 of machine learning of the global model.
  • the updated local parameters of the personalized global model of the at least one among the plurality of vehicles of the fleet may be excluded from the step 106 of transferring the updated local parameters to the central server.
  • the updated local parameters of the personalized global model may be only adapted to a certain class of vehicles and would decrease the performance of the global model if they were taken into account during the step 108 of updating the global model based on the transferred updated local parameters by an aggregation algorithm on the central server.
  • the choice of personalizing the model for a particular vehicle may be based on its data distribution, local performances or the iteration of the learning process.
  • the step 110 of personalization of the global model of the at least one among the plurality of vehicles of the fleet is performed based on a distribution of the data locally collected, local performances or an iteration of the learning process.
  • the personalization of the global model will be described based on examples in more details below.
  • figure 13 represents a general diagram of an artificial neural network, for example a convolutional neural network.
  • a neural network is conventionally composed of several layers Ce, Ci, Ci+1, Cs of interconnected neurons.
  • the network comprises at least one input layer Ce, one output layer Cs and at least one intermediate layer Ci, Ci+1.
  • the neurons Ni,e of the input layer Ce each receive an input data item.
  • the input data can be of different natures depending on the intended application. In the context of the invention, the input data may be acceleration or speed of a vehicle.
  • a neural network has the general function of learning how to solve a given problem, which may be a classification problem, but not only.
  • a neural network is, for example, used in the field of regression to estimate speed.
  • Each neuron of a layer is connected, by its input and/or its output, to all the neurons of the previous or next layer. More generally, a neuron may only be connected to a portion of the neurons of another layer, particularly in the case of a convolutional network.
  • the connections between two neurons Ni,e, Nj of two successive layers are made through artificial synapses S1, S2, S3 which can be realized, in particular, by digital memories or by memristive devices.
  • the coefficients of the synapses can be optimized thanks to a learning mechanism of the neural network. The aim of the learning mechanism is to train the neural network to solve a defined problem.
  • This mechanism has two distinct phases, a first data propagation phase from the input layer to the output layer, and a second error retro-propagation phase from the output layer to the input layer with, for each layer, an update of the synapse weights.
  • the errors calculated by the output neurons at the end of the data propagation phase are related to the problem to be solved. In general, it is a question of determining an error between the value of an output neuron and an expected value or a target value depending on the problem to be solved.
  • In the first phase of data propagation, learning data, for example data about speed or acceleration, are provided at the input of the neurons of the input layer and propagated in the network.
  • Each neuron implements, during this first phase, a function of integration of the received data which consists, in the case of a convolutional network, in calculating a sum of the received data weighted by the coefficients of the synapses.
  • each neuron performs a convolution operation with a filter whose size corresponds to the size of the submatrix of neurons of the previous layer to which it is connected.
  • Each neuron then propagates the result of the convolution to the neurons of the next layer.
  • depending on the neuron model chosen, the integration function it implements can vary.
  • the neurons Ni,s of the output layer Cs perform additional processing in that they calculate an error between the output value of the neuron Ni,s and an expected value or a target value, which corresponds to the final state of the output layer neuron that is desired in connection with the learning input data and the problem to be solved by the network. For example, if the network has to solve a classification problem, the expected final state of a neuron is the class it is supposed to identify in the input data.
  • the neurons of the output layer Cs transmit the calculated errors to the neurons of the previous layer Ci+1, which calculate a local error from the retro-propagated error and in turn transmit this local error to the previous layer.
  • each neuron calculates, from the local error, an update value of the weights of the synapses to which it is connected and updates those synapses accordingly. The process continues for each layer of neurons up to the penultimate layer, which is responsible for updating the weights of the synapses that connect it to the input layer Ce.
  • the invention may be based on the use of different types of artificial neural networks to solve a problem of vibration analysis.
  • the general principles introduced above serve to introduce the basic concepts used to implement the invention, as known by a person skilled in the art of neural networks.
  • the learning process involved in the invention can be considered with any type of models for which some notion of updates can be defined, and not only a neural network.
  • Figure 2 schematically represents the steps of the first phase 10 of machine learning of a global model of the method for maintenance of a fleet 30 of a plurality of vehicles 31 , 32, 33 according to the invention.
  • the first phase 10 of the invention is based on devices equipping each vehicle 31 , 32, 33 of the fleet 30 and a Machine Learning setting.
  • the invention aims at monitoring the wear of each vehicle from the fleet 30. Therefore each user is provided with information about speed profile and distance traveled for each drive, and maintenance alerts concerning mechanical issues about the undercarriage parts of a vehicle such as tires, wheels, shock absorbers and brakes.
  • the device contains at least a 3-axis accelerometer (MEMS sensor 40) to leverage the vehicle's vibrations, and also storage (memory 42) and calculation units (at least one processor 41) to store and process local data. It is set inside each vehicle 31, 32, 33 of the fleet 30. It is preferably attached to the steering wheel, but experiments showed that a device mounted on the mirror also provides useful information to estimate speed and then detect mechanical issues.
  • the Machine Learning approach uses features extracted from raw data, or directly the raw data, as inputs to train mechanical issue or speed profile models. But, contrary to traditional Machine Learning approaches which require collecting data and performing the training process on a central server, the proposed model is built in a decentralized way which makes it possible to train a shared model from local data on the device of each vehicle, without the need to ever upload data to a central server 50. This decentralized procedure is based on the following main steps of the first phase 10 according to the invention.
  • the step 100 is done on a central server 50 (also called a Cloud) and consists in initializing the shared model (or global model). In practice, this can be done randomly or using publicly available data to pretrain the model.
  • the computing of the global model based on initial data on the central server 50 provides initial global parameters of the global model.
  • the global model, also called the shared model, is designed to be deployed on each vehicle 31, 32, 33 of the fleet 30 in such a way that each device receives a copy of the current model parameters θ.
  • θ can be a vector or a matrix depending on the kind of machine-learned model.
  • the step 104 consists in updating, for each vehicle 31, 32, 33 of the fleet 30 (or a subset of vehicles, i.e. at least one among the plurality of the vehicles), by the processor 41 of said vehicle 31 , 32, 33 the initial global parameters based on data locally collected by the device, i.e. the MEMS sensor 40 and stored in the memory 42 of said vehicle 31 , 32, 33, thus providing updated local parameters of said vehicle 31 , 32, 33.
  • This step is also called the model retraining.
  • a subset of K vehicles to participate in the training round is selected.
  • Each selected device of the corresponding selected vehicle(s) performs one or several training steps (with the calculation unit) on their local data (stored in the storage unit) based on the minimization of their local objective function:
  • Fk(θ) = (1/nk) Σi∈Pk ℓ(xi, yi; θ), where xi are the nk local samples and yi are the associated labels. The update of the k-th device is denoted by Hk = θ − θk, where θk is the updated local model.
  • the data may advantageously be deleted from the storage unit to guarantee that data used for a new training round have never been taken into account before.
  • There is a transfer of the updated local parameters to the central server 50 (step 106), and an update of the global model based on the transferred updated local parameters by an aggregation algorithm on the central server 50 (step 108). This is also called model aggregating.
  • each selected device transmits its updates to the central server 50 where an algorithm aggregates them (by weighted averaging) in order to build an improved global model.
  • This step can be formalized by: θ ← θ − η Σk (nk/N) Hk, where N = Σk nk is the total number of samples used by the K selected devices of the selected vehicles of the fleet for retraining at the edge and η is the global learning rate. This step can be considered as a new iteration of the whole training process, improving the robustness of the shared model.
  • the steps 102 to 108 are thus repeated until a stopping criterion, a so-called convergence criterion, is reached.
  • the stopping criterion is set by the service provider. Generally, it is based on a validation set on which the performances of the current model are evaluated, and the training is stopped if there has been no improvement for a certain number of iterations, i.e. if the accuracy of the retrained model does not evolve significantly anymore.
  • a smart approach to do that consists in using the same local data (and their labels) as those used for training. First, the local data are used to compute the validation accuracy of the current model and then, they are used to calculate a model update for the next iteration of training. In this way, both local updates and evaluation reports will be sent to the server 50.
  • the service provider has of course the possibility of evaluating the current model without training it again. In this case, only the evaluation reports will be transmitted to the server by the selected devices.
  • the learning process can be considered with any type of models for which some notion of updates can be defined.
  • models based on gradient descent (linear regression, logistic regression, neural networks) are examples of such models.
  • K-means is also well adapted to edge retraining, as the sequential k-means algorithm updates the parameters of the clustering online.
  • the speed model is based on regression or multi-class classification (considering ranges of speed) approaches, whereas the tracking of each mechanical issue is based on specific one-class classification models. Indeed, the normal case is modeled (without any mechanical issues) and then it is detected when the vibrations move away from the normal situation.
  • Such models are speed-dependent, hence the necessity of having previously estimated the speed with the speed model.
  • the devices communicate with the central server 50 (using a low-power wide-area network LPWAN) to download/upload the parameters of the shared model (respectively report evaluation performances).
  • the devices selected for the local training are those connected to a personal smartphone 51 (e.g. by Bluetooth).
  • the smartphone 51 provides labels necessary to train (respectively evaluate) both speed model (using e.g. GPS) and mechanical issues model (using e.g. an application provided by the service provider inviting the user to mention when his car has been reviewed or checked in a garage to be sure the vehicle has no mechanical issues).
  • the smartphone 51 is not necessary during the second phase 20, also called utilization phase or inference step, as depicted in Figure 3.
  • Figure 3 schematically represents the required materials for a vehicle 31 of the fleet 30 during the first phase 10 (part (a) of Figure 3) of machine learning and during the second phase 20 (part (b) of Figure 3) of using the model on the vehicle 31 according to the invention.
  • the invention aims at addressing also the problem of non-IID (independent and identically distributed) data.
  • the IID sampling of the training data is a key point to train a Machine Learning model. It ensures that the stochastic gradient is an unbiased estimate of the full gradient. But, in a decentralized learning process, it is unrealistic to assume that the local data on each edge device are always IID. Non-IIDness can be due to a poor representation of the classes locally or to a difference between the data distribution of the client and the population distribution. Indeed, the distribution of vibration data can be different depending on the brand of the vehicle or the kind of engine (thermal or electric engine), and the representation of the speed classes depends on the driver's habits (driving in town vs. highway driving).
  • the first one is dedicated to the estimation of speed and distance based on both multiclass classification and regression.
  • the second one is dedicated to the detection of wheel imbalance mechanical issue using a one-class classification model.
  • the first application of the invention is presented through a vibrations analysis in a decentralized way to estimate speed and distance travelled by a fleet of cars.
  • a fleet of 42 cars was equipped with a MEMS sensor 40 (more precisely a 3-axis accelerometer) sampled at 200 Hz, and an Arm Cortex-M4 for storage and calculation.
  • the sensor 40 was set on the top of the steering wheel.
  • the sensor 40 can directly share information with the central server 50 over an LTE-M network and can also connect to a smartphone 51 by Bluetooth.
  • the fleet of vehicles is made up of 8 different brands (Peugeot, Renault, Nissan, BMW, Audi...) for a total of 20 models (208, Clio, A1...).
  • Each driver took a trip lasting between 20 minutes and more than 2 hours. The total duration of the acquisitions is about 8 hours. Only 30 cars were selected to train the shared model. For these vehicles, the first part of their trip was used for training while the second part was kept for evaluation. The remaining 12 vehicles participated only in evaluating the model.
  • the model inputs may be spectral representations (FFT, filter-bank outputs) or spectral features (e.g. Linear Frequency Cepstral Coefficients, Band Energy Ratio).
  • Spectral representations/features are calculated over segments (windows) whose duration is between 2 seconds and 10 seconds.
  • the kind of model could be either a regression one (linear regression, logistic regression...) to estimate a speed value or a multi-class one (neural network, linear support vector machines...) to classify inputs into speed bins. Both cases are illustrated in this part.
  • the distance is computed as d = s × ts, where s is the estimated speed and ts is the duration of the segment.
  • our multi-class and regression models are both based on a fully-connected neural network (with 2 hidden layers of 500 neurons each) taking as inputs LFCC (Linear Frequency Cepstral Coefficients) feature vectors and outputting an estimation about speed (considering 3 m/s bins in the case of classification).
  • the segmentation of the raw data was done considering 5 s windows (with 50% overlap between successive windows).
  • LFCC are calculated for the three axes of the sensor and then concatenated to build the input feature vector (a minimal sketch of this feature extraction pipeline is given after this list). It is reminded that labels about speed are provided by the GPS of the smartphone.
  • Figure 4 represents the evolution of the accuracy of the shared model of the invention depending on the number of learning rounds.
  • the parameter K defines the number of devices participating in each training round.
  • Figure 5 represents the local accuracy of the model for the vehicles of the fleet and the performance gain or loss for each vehicle based on the first phase of the method of the invention.
  • the shared model is evaluated with the test set of each device (see Figure 5). It appears that there is a significant difference in performance between the vehicles of the fleet (the local accuracy is between 0.24 and 0.87, with a mean value of 0.70). This is due to the fact that the distribution of the data can be very different depending on the vehicle's brand. As an illustration, the worst performances are obtained for a 'PEUGEOT ION' car, which is the only electric vehicle of the fleet.
  • Figure 6 represents the performance improvement of the model accuracy thanks to the step of personalization of the invention.
  • a way to improve local performances, whatever the distribution of data, consists in personalizing the models at the edge (step 110).
  • the shared model is trained until it has sufficient quality.
  • each device which participates in the training process now has the possibility to fine-tune its model locally while continuing to train the shared model.
  • Figure 6 shows that personalization is relevant to improve local accuracy (and thus mean accuracy) in the case of non-IID data (the mean accuracy increases by 0.03 to reach 0.73 with personalization).
  • the improvement of performances can of course be better if the starting round is not the same for all devices but is instead the result of a function taking as inputs the distance between the local data distribution and the population distribution and/or the gap between the local accuracy and the mean accuracy of the population.
  • Figure 8 represents the regression performance with the shared model and personalized model according to the invention.
  • The mean MAE is 1.5 m/s using the shared model and decreases to 1.2 m/s with personalization, which is quite acceptable.
  • the speed estimation will be used as input of the mechanical issue model.
  • Figure 9 represents the performances about estimation of distance travelled by the fleet of vehicles using regression models, with the shared model (part (a) of Figure 9) and with the personalized model (part (b) of Figure 9) according to the invention. It shows the performance for the distance estimation based on the speed provided by the regression models. Results for distance estimation are still very satisfactory as relative error is about 0.05 considering speed outputted by the shared model and only 0.03 after personalization of the models.
  • the second application of the invention is presented through a vibrations analysis in a decentralized way to estimate wheel imbalance mechanical issue for a fleet of cars.
  • a fleet of 7 cars was used. Before the start of the acquisitions, each vehicle was taken to the shop to be sure it had no mechanical issue. Then, each driver took a first trip to produce data corresponding to the normal class. A second trip was taken after a weight was added to a front or rear wheel of the car. Data acquired during this trip were labeled as 'imbalance'. The imbalance issue introduced by the weight could be more or less hard to detect depending on whether the weight was light, medium or heavy. 50% of the first trip was kept for training. The remaining part of this trip and the second trip were used for evaluation.
  • Figure 10 represents the duration of the acquisitions for various vehicles of the fleet, a first part of the acquisition without any mechanical issues and a second part with an imbalance trouble.
  • the models considered to track the mechanical issue are speed-dependent one-class models. It means that the normal class is modeled for each range of speed and then it is detected when the vibrations move away from the normal case.
  • Spectral features are characteristic of mechanical issues.
  • the features are calculated over 5 s segments (with 50% overlap between successive windows) on the longitudinal or transverse axis of the vehicle (the spherical coordinates with roll and pitch can also be used).
  • the choice for the best axis depends on the mechanical issue.
  • the longitudinal axis is the best choice to detect an imbalance problem, while features are calculated on the transverse axis to detect a wheel geometry/alignment problem.
  • the model consists in estimating the mean m and standard deviation s of the spectral features; m and s are updated online during the training process.
  • the goal of mechanical issue detection consists in limiting the number of False Positives (FP) while keeping a good number of True Positives (TP). To do that, an alert will be sent to the driver only when the value of the spectral features is beyond m ± 3s.
  • the training process of the one-class model is done until the number of FP remains stable.
  • Figure 11 represents the performance obtained with the shared model of the invention for the imbalance mechanical issue. The results reported were obtained considering the spectral flatness from the longitudinal axis as input feature. The number of devices used to update the model parameters (m and s) at each round of training was 5. Performance evaluation was based on the FP and TP ratios. FP is the ratio between the number of alerts and the total number of segments of the testing part of the first (normal) trip, whereas TP is the ratio between the number of alerts and the total number of segments of the second (imbalance) trip.
  • part (a) of Figure 11 shows the evolution of the FP during the learning process. It proves that the decentralized learning process according to the invention makes it possible to model the normal class well, as the number of FP decreases to 0.6% at the end of the training process.
  • part (b) of Figure 11 shows that the TP is between 1.7% (Mitsubishi with a light imbalance) and 27.6% (Ford with a heavy imbalance). It appears that it is very hard to detect the light imbalance, because the TP does not exceed 3.5% in this case. For the other cases (medium and heavy), a recurrent alert should inform the driver that his car has a mechanical issue.
  • TP are calculated considering all the ranges of speed, but the alerts are often activated only at high speeds. Thus, the more time the driver spends at high speeds, the more recurrent the alert.
  • Figure 12 represents the performance improvement of the model accuracy for the imbalance mechanical issue thanks to the step of personalization of the invention.
  • the invention enables the maintenance of a fleet of vehicles based on the initialization of a global model, the parameters of which are transferred to the vehicles of the fleet where they are locally updated thanks to data locally collected.
  • the updated parameters may be taken into account to upgrade the global model at the central server, and the parameters of the upgraded global model may be transferred to the vehicles.
  • the invention gives the possibility to personalize the global model used by said vehicle.
  • the described invention offers a solution to maintain a fleet of vehicles based on the analysis of signals from one sensor, advantageously a 3-axis accelerometer, embedded in each vehicle of the fleet.
  • the service provider has no need to collect data for the training and validation of the Machine Learning models that will be deployed inside each vehicle of the fleet to estimate speed or to detect mechanical issues.
  • the invention proposes to train a shared model for all vehicles of the fleet but also to personalize it, powering personalized experiences and thus leading to better accuracy of the model for a large variety of vehicles.
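
As an illustration of the feature pipeline and distance computation described in the bullets above, the Python sketch below segments a raw acceleration trace into 5 s windows with 50% overlap, computes crude linear-frequency cepstral coefficients per axis, concatenates them into a feature vector, and integrates per-window speed estimates into a travelled distance. It is a minimal sketch under stated assumptions: the function names are illustrative, the cepstral computation is a rough stand-in for a production LFCC implementation, and the classifier itself (the fully-connected network with two hidden layers of 500 neurons mentioned above) is not shown.

```python
import numpy as np

FS = 200          # accelerometer sampling rate (Hz), as in the example fleet
WIN_S = 5.0       # 5 s analysis window
HOP_S = 2.5       # 50% overlap between successive windows
N_COEFFS = 20     # illustrative number of cepstral coefficients per axis

def segment(signal, fs=FS, win_s=WIN_S, hop_s=HOP_S):
    """Split a 1-D acceleration trace into overlapping windows."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, hop)])

def cepstral_features(window, n_coeffs=N_COEFFS):
    """Crude linear-frequency cepstral coefficients for one window
    (log magnitude spectrum followed by a DCT-II projection)."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    log_spec = np.log(spectrum + 1e-12)
    n = len(log_spec)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), np.arange(n) + 0.5) / n)
    return basis @ log_spec

def feature_vector(acc_xyz):
    """Concatenate the coefficients of the three accelerometer axes."""
    return np.concatenate([cepstral_features(axis) for axis in acc_xyz])

def estimate_distance(speeds_mps, hop_s=HOP_S):
    """Integrate per-window speed estimates (m/s) into a travelled distance (m);
    with 50% overlap, each new window advances the trip by hop_s seconds."""
    return float(np.sum(speeds_mps) * hop_s)
```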

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention concerns a method, implemented by computer, for maintenance of a fleet of a plurality of vehicles, each vehicle of the fleet comprising a MEMS sensor, a processor and a memory, the method comprising a first phase (10) of machine learning of a global model and a second phase (20) of using the global model on each vehicle of the fleet. According to the invention, the method further comprises a step (110) of personalization of the global model of at least one among the plurality of vehicles of the fleet, the personalization of the global model being based on the data locally collected by the MEMS sensor and stored in the memory of said vehicle, and for each vehicle of the fleet, a step (112) of sending an alarm signal to a user of said vehicle, if a deviation between a feature obtained from the global model and a predefined feature is determined.

Description

DESCRIPTION
Title of the invention: Method for maintenance of a fleet of a plurality of vehicles
[0001] The invention relates to the field of monitoring the wear of vehicles of a fleet. More particularly, the present invention is based on MEMS sensors (containing at least a 3-axis accelerometer, a processor and memory units) embedded in each vehicle and a machine learning setting to provide maintenance alerts.
[0002] Studies show that 40% of the car maintenance budget is spent on replacing wearing parts such as tires, wheels and brakes. The state of these parts is not monitored, or only poorly monitored, by sensors, even on new cars.
[0003] There exist solutions applying motion sensor data to wheel imbalance detection, tire pressure monitoring and/or tread depth measurement. These solutions track mechanical issues for a particular vehicle but not for a fleet of vehicles. Before deploying such a solution, the service provider needs to collect data to train the Machine Learning model. Finally, such accelerometer-based solutions require at least one other sensor at the inference step to work properly.
[0004] The solution of the proposed invention aims to provide each user with maintenance alerts concerning mechanical issues affecting the undercarriage parts of a vehicle such as tires, wheels, shock absorbers and brakes. More precisely, the solution is based on MEMS sensors and analyses vehicle vibrations to track essential information about tires, wheels, shock absorbers and brakes, and provides the user with maintenance alerts about mechanical issues (wheel imbalance, tire wear, brake pad wear...). Speed knowledge is needed to track mechanical issues, and the analysis of the car's vibrations also makes it possible to estimate the speed of the vehicle in real time. Therefore the solution also provides the user with information about the speed profile and distance traveled for each drive. These pieces of information can be gathered to provide an ID card of a vehicle.
[0005] According to the invention, mechanical issues detection and speed profile estimation are based on statistical models provided by a machine learning setting.
[0006] There is a need for maintenance alerts concerning mechanical issues about the undercarriage parts of a vehicle. A solution based on a machine learning model seems to be adapted since the model should be suitable for a wide variety of vehicles. Nevertheless, the machine learning model should offer enough efficiency.
[0007] Traditional machine learning approaches require collecting data and performing the training process on a central server. These approaches have the disadvantage of requiring the transfer of personal data between the devices embedded in the vehicles and the central server. Depending on the connection between the vehicles and the central server, these transmissions may be slow.
[0008] The present invention aims to provide a solution based on a decentralized optimization procedure which makes it possible to train a global model without the need to transfer data from the sensor to the central server. In this way, each individual endpoint uses its local data to contribute to the training of a global model, with the possibility of improving its specific model. In other words, the solution of the present invention leverages data variability from all user devices to improve the robustness of the global model on the one hand, and data specificity from each user device to improve the accuracy of each model at the edge on the other hand. This process can run continually, self-adapting to new vehicles.
[0009] To this end, the subject of the invention is a method, implemented by computer, for maintenance of a fleet of a plurality of vehicles, each vehicle of the fleet comprising a MEMS sensor, a processor and a memory, wherein the method comprises a first phase of machine learning of a global model and a second phase of using the global model on each vehicle of the fleet, the first phase of machine learning comprising the following steps:
• computing the global model based on initial data on a central server, thus providing initial global parameters of the global model,
• uploading the global model with the initial global parameters on the memory of each vehicle of the fleet,
• for at least one among the plurality of vehicles of the fleet, updating by the processor of said vehicle the initial global parameters based on data locally collected by the MEMS sensor and stored in the memory of said vehicle, thus providing updated local parameters of said vehicle,
• for at least one among the plurality of vehicles of the fleet, transferring the updated local parameters to the central server,
• updating the global model based on the transferred updated local parameters by an aggregation algorithm on the central server, the steps of the first phase being performed until a predetermined convergence criterion is reached, the method further comprising a step of personalization of the global model of at least one among the plurality of vehicles of the fleet, the personalization of the global model being based on the data locally collected by the MEMS sensor and stored in the memory of said vehicle, and the method comprising, for each vehicle of the fleet, a step of sending an alarm signal to a user of said vehicle, if a deviation between a feature obtained from the global model and a predefined feature is determined.
[0010] According to an aspect of the invention, the step of personalization of the global model of at least one among the plurality of vehicles of the fleet is performed during the first phase of machine learning of the global model.
[0011] The step of personalization of the global model of the at least one among the plurality of vehicles of the fleet may be performed based on a distribution of the data locally collected, local performances or an iteration of the learning process.
[0012] According to another aspect of the invention, the updated local parameters of the personalized global model of the at least one among the plurality of vehicles of the fleet may be excluded from the step of transferring the updated local parameters to the central server.
[0013] Advantageously, the data locally collected by the MEMS sensor of each vehicle is the instantaneous acceleration of said vehicle.
[0014] The invention also relates to a device for maintenance of a fleet of a plurality of vehicles, the device comprising means for implementing the steps of the method of the invention.
[0015] The invention also relates to a computer program product, said computer program comprising code instructions making it possible to perform the steps of the method of the invention, when said program is run on a computer.
[0016] The accompanying drawings illustrate various non-limiting, exemplary, innovative aspects in accordance with the present description:
- Figure 1 schematically represents a block diagram with the steps of a method for maintenance of a fleet of a plurality of vehicles according to the invention;
- Figure 2 schematically represents the steps of the first phase of machine learning of a global model of the method for maintenance of a fleet of a plurality of vehicles according to the invention;
- Figure 3 schematically represents the required materials for a vehicle of the fleet during the first phase of machine learning and during the second phase of using the model on the vehicle according to the invention;
- Figure 4 represents the evolution of the accuracy of the shared model of the invention depending on the number of learning rounds;
- Figure 5 represents the local accuracy of the model for the vehicles of the fleet and the performance gain or loss for each vehicle based on the first phase of the method of the invention;
- Figure 6 represents the performance improvement of the model accuracy thanks to the step of personalization of the invention;
- Figure 7 represents the relative errors of the estimation by the shared model of the invention, without and with the step of personalization, of the distance travelled by the fleet of vehicles;
- Figure 8 represents the regression performance with the shared model and personalized model according to the invention;
- Figure 9 represents the performances about estimation of distance travelled by the fleet of vehicles using regression models, with the shared model and with the personalized model according to the invention;
- Figure 10 represents the duration of the acquisitions for various vehicles of the fleet, a first part of the acquisition without any mechanical issues and a second part with an imbalance trouble;
- Figure 11 represents the performance obtained with the shared model of the invention for the imbalance mechanical issue;
- Figure 12 represents the performance improvement of the model accuracy for the imbalance mechanical issue thanks to the step of personalization of the invention;
- Figure 13 represents a general diagram of an artificial neural network.
[0017] For the sake of clarity, the same elements have the same references in the various figures.
[0018] Figure 1 schematically represents a block diagram with the steps of a method for maintenance of a fleet of a plurality of vehicles according to the invention. The method for maintenance of a fleet of a plurality of vehicles is implemented by computer. Each vehicle of the fleet comprises a MEMS sensor, a processor and a memory. According to the invention the method comprises a first phase 10 of machine learning of a global model and a second phase 20 of using the global model on each vehicle of the fleet. The first phase 10 of machine learning comprises the following steps. First of all, the first phase 10 comprises a step 100 of computing the global model based on initial data on a central server, thus providing initial global parameters of the global model. The first phase 10 comprises a step 102 of uploading the global model with the initial global parameters on the memory of each vehicle of the fleet. For at least one among the plurality of vehicle of the fleet, the first phase 10 comprises a step 104 of updating by the processor of said vehicle the initial global parameters based on data locally collected by the MEMS sensor and stored in the memory of said vehicle, thus providing updated local parameters of said vehicle. The data locally collected by the MEMS sensor 40 of each vehicle is the instantaneous acceleration of said vehicle.
[0019] The first phase 10 further comprises, for at least one among the plurality of vehicles of the fleet, a step 106 of transferring the updated local parameters to the central server. Finally, the first phase 10 comprises a step 108 of updating the global model based on the transferred updated local parameters by an aggregation algorithm on the central server.
[0020] The step 104 of updating and then 106 of transferring the updated local parameters to the central server may be performed for each vehicle of the fleet or only for part of the fleet.
[0021] The steps 102, 104, 106, 108 of the first phase 10 are performed until a predetermined convergence criterion is reached.
[0022] The method of the invention further comprises a step 110 of personalization of the global model of at least one among the plurality of vehicles of the fleet, the personalization of the global model being based on the data locally collected by the MEMS sensor and stored in the memory of said vehicle.
[0023] Furthermore the method of the invention comprises, for each vehicle of the fleet, a step 112 of sending an alarm signal to a user of said vehicle, if a deviation between a feature obtained from the global model and a predefined feature is determined.
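To make the alarm decision of step 112 concrete, the following sketch applies the convention used in the worked example later in the text, where the "normal" behaviour of a spectral feature is summarised by a mean m and standard deviation s learned online, and an alarm is warranted when an observed value leaves the m ± 3s band. The class name and the Welford-style online update are illustrative choices, not a prescribed implementation.
```python
import math

class FeatureMonitor:
    """Tracks the 'normal' value of one spectral feature and flags deviations (step 112)."""

    def __init__(self, n_sigma: float = 3.0):
        self.n_sigma = n_sigma                 # 3-sigma band, as in the imbalance example
        self.count, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, value: float) -> None:
        """Online update of m and s during the learning phase (vehicle assumed healthy)."""
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)

    def std(self) -> float:
        return math.sqrt(self.m2 / (self.count - 1)) if self.count > 1 else 0.0

    def is_deviation(self, value: float) -> bool:
        """True when the feature leaves the m +/- n_sigma * s band, i.e. an alarm is warranted."""
        return abs(value - self.mean) > self.n_sigma * self.std()
```
During the second phase 20 the monitor is no longer updated; each new feature value is only checked with is_deviation(), and an alarm signal is sent to the user when deviations recur.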
[0024] The step 110 of personalization of the global model of at least one among the plurality of vehicles of the fleet is performed during the first phase 10 of machine learning of the global model. The updated local parameters of the personalized global model of the at least one among the plurality of vehicles of the fleet may be excluded from the step 106 of transferring the updated local parameters to the central server. The updated local parameters of the personalized global model may be only adapted to a certain class of vehicles and would decrease the performance of the global model if they were taken into account during the step 108 of updating the global model based on the transferred updated local parameters by an aggregation algorithm on the central server. The choice of personalizing the model for a particular vehicle may be based on their data distribution, local performances or the iteration of the learning process. In other words, the step 110 of personalization of the global model of the at least one among the plurality of vehicles of the fleet is performed based on a distribution of the data locally collected, local performances or an iteration of the learning process. The personalization of the global model will be described based on examples in more details below.
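A minimal sketch of what the personalization of step 110 can amount to for a gradient-based model: the vehicle fine-tunes a private copy of the shared parameters on its own data, while the shared parameters themselves are left untouched (and, as stated above, the personalized parameters can be excluded from the transfer of step 106). The function names, the plain SGD loop and the gradient callback are illustrative assumptions.
```python
import numpy as np

def personalize(global_params: np.ndarray,
                local_x: np.ndarray, local_y: np.ndarray,
                grad_fn, lr: float = 0.01, steps: int = 100) -> np.ndarray:
    """Step 110: fine-tune a private copy of the shared parameters on local data.

    grad_fn(params, x, y) must return the gradient of the local loss; for a linear
    model with squared loss it could be: lambda p, x, y: 2 * x.T @ (x @ p - y) / len(y).
    """
    personal = global_params.copy()      # the shared (global) model is not modified
    for _ in range(steps):
        personal -= lr * grad_fn(personal, local_x, local_y)
    return personal                      # kept on the device, used instead of the shared model
```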
[0025] As an example of Machine Learning, figure 13 represents a general diagram of an artificial neural network, for example a convolutional neural network. A neural network is conventionally composed of several layers Ce, Ci, Ci+1, Cs of interconnected neurons. The network comprises at least one input layer Ce, one output layer Cs and at least one intermediate layer Ci, Ci+1. The neurons Ni,e of the input layer Ce each receive an input data item. The input data can be of different natures depending on the intended application. In the context of the invention, the input data may be the acceleration or the speed of a vehicle. A neural network has the general function of learning how to solve a given problem, which may be, but is not limited to, a classification problem. A neural network is, for example, used in the field of regression to estimate speed. Each neuron of a layer is connected, by its input and/or its output, to all the neurons of the previous or next layer. More generally, a neuron may only be connected to a portion of the neurons of another layer, particularly in the case of a convolutional network. The connections between two neurons Ni,e, Nj of two successive layers are made through artificial synapses S1, S2, S3 which can be realized, in particular, by digital memories or by memristive devices. The coefficients of the synapses can be optimized thanks to a learning mechanism of the neural network. The aim of the learning mechanism is to train the neural network to solve a defined problem. This mechanism has two distinct phases, a first data propagation phase from the input layer to the output layer, and a second error retro-propagation phase from the output layer to the input layer with, for each layer, an update of the synapse weights. The errors calculated by the output neurons at the end of the data propagation phase are related to the problem to be solved. In general, it is a question of determining an error between the value of an output neuron and an expected value or a target value depending on the problem to be solved.
[0026] In the first phase of data propagation, learning data, for example data about speed or acceleration, are provided at the input of the neurons of the input layer and propagated in the network. Each neuron implements, during this first phase, a function of integration of the received data which consists, in the case of a convolutional network, in calculating a sum of the received data weighted by the coefficients of the synapses. In other words, each neuron performs a convolution operation with a filter whose size corresponds to the size of the submatrix of neurons of the previous layer to which it is connected. Each neuron then propagates the result of the convolution to the neurons of the next layer. Depending on the neuron model chosen, the integration function it implements can vary.
[0027] The neurons Ni,s of the output layer Cs perform additional processing in that they calculate an error between the output value of the neuron Ni,s and an expected value or a target value, which corresponds to the final state of the output layer neuron that is desired in connection with the learning input data and the problem to be solved by the network. For example, if the network has to solve a classification problem, the expected final state of a neuron is the class it is supposed to identify in the input data.
[0028] During the second phase of error back propagation, the neurons of the output layer Cs transmit the calculated errors to the neurons of the previous layer Ci+1, which calculate a local error from the retro-propagated error and in turn transmit this local error to the previous layer. In parallel, each neuron calculates, from the local error, an update value of the weights of the synapses to which it is connected and updates those synapses accordingly. The process continues for each layer of neurons up to the penultimate layer, which is responsible for updating the weights of the synapses that connect it to the input layer Ce.
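For readers less familiar with the two phases just described, the following compact NumPy sketch runs one forward propagation and one error back-propagation step for a one-hidden-layer, fully-connected network. It is a didactic illustration only, not the network actually deployed by the invention, and the sizes and data are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2):
    """First phase: propagate the input (e.g. acceleration features) layer by layer."""
    h = np.maximum(0.0, x @ w1)        # hidden layer with a ReLU integration function
    return h, h @ w2                   # output layer (here a single regression output)

def train_step(x, target, w1, w2, lr=1e-3):
    """Second phase: back-propagate the output error and update the synapse weights."""
    h, out = forward(x, w1, w2)
    err_out = out - target                      # error computed by the output neurons
    grad_w2 = h.T @ err_out                     # update for the last layer of synapses
    err_hidden = (err_out @ w2.T) * (h > 0)     # error retro-propagated to the hidden layer
    grad_w1 = x.T @ err_hidden                  # update for the synapses of the input layer
    return w1 - lr * grad_w1, w2 - lr * grad_w2

# Toy usage: 10 samples with 6 input features, scalar output (e.g. a speed value).
x = rng.normal(size=(10, 6)); y = rng.normal(size=(10, 1))
w1 = rng.normal(scale=0.1, size=(6, 8)); w2 = rng.normal(scale=0.1, size=(8, 1))
w1, w2 = train_step(x, y, w1, w2)
```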
[0029] The invention may be based on the use of different types of artificial neural networks to solve a problem of vibration analysis. The general principles introduced above serve to introduce the basic concepts used to implement the invention, as known by a person skilled in the art of neural networks. It should be noted that the learning process involved in the invention can be considered with any type of models for which some notion of updates can be defined, and not only a neural network.
[0030] Figure 2 schematically represents the steps of the first phase 10 of machine learning of a global model of the method for maintenance of a fleet 30 of a plurality of vehicles 31 , 32, 33 according to the invention. The first phase 10 of the invention is based on devices equipping each vehicle 31 , 32, 33 of the fleet 30 and a Machine Learning setting. The invention aims at monitoring the wear of each vehicle from the fleet 30. Therefore each user is provided with information about speed profile and distance traveled for each drive, and maintenance alerts concerning mechanical issues about the undercarriage parts of a vehicle such as tires, wheels, shock absorbers and brakes.
[0031] The device contains at least a 3-axis accelerometer (MEMS sensor 40) to leverage the vehicle's vibrations, and also storage (memory 42) and calculation units (at least one processor 41) to store and process local data. It is set inside each vehicle 31, 32, 33 of the fleet 30. It is preferably attached to the steering wheel, but experiments showed that a device mounted on the mirror also provides useful information to estimate speed and then detect mechanical issues.
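To fix ideas, the in-vehicle device described above can be pictured as the following minimal data structure; the field names are illustrative, and the 200 Hz default only mirrors the sampling rate used in the fleet experiment reported later.
```python
from dataclasses import dataclass, field
from typing import Any, List, Optional, Tuple

@dataclass
class EdgeDevice:
    """Illustrative view of the device set inside each vehicle 31, 32, 33."""
    vehicle_id: str
    sampling_rate_hz: int = 200                          # 3-axis MEMS accelerometer (sensor 40)
    acceleration_log: List[Tuple[float, float, float]] = field(default_factory=list)  # memory 42
    model_params: Optional[Any] = None                   # local copy of the shared model parameters

    def record(self, ax: float, ay: float, az: float) -> None:
        # Raw samples stay on the device; only model updates leave it (steps 104/106).
        self.acceleration_log.append((ax, ay, az))
```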
[0032] The Machine Learning approach uses features extracted from the raw data, or the raw data directly, as inputs to train the mechanical-issue and speed-profile models. However, contrary to traditional Machine Learning approaches, which require collecting data and performing the training process on a central server, the proposed model is built in a decentralized way, which makes it possible to train a shared model from local data on the device of each vehicle without the need to ever upload data to a central server 50. This decentralized procedure is based on the following main steps of the first phase 10 according to the invention.
[0033] The step 100 is performed on a central server 50 (also called a Cloud) and consists in initializing the shared model (or global model). In practice, this can be done randomly or by using publicly available data to pretrain the model. The computing of the global model based on initial data on the central server 50 (step 100) provides the initial global parameters of the global model. [0034] In the step 102 the global model, also called the shared model, is deployed on each vehicle 31, 32, 33 of the fleet 30 in such a way that each device receives a copy of the current model parameters θ. θ can be a vector or a matrix depending on the kind of machine-learned model.
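As a purely illustrative sketch of steps 100 and 102 (assuming, for simplicity, that the global parameters θ form a plain vector, e.g. for a linear speed model), initialization and deployment could look as follows; the function names and the pre-training shortcut are assumptions, not the patent's implementation.

import numpy as np

rng = np.random.default_rng(42)

def init_global_model(n_features, public_data=None):
    # Step 100: random initialization, or a crude pre-training pass on public data.
    theta = rng.normal(scale=0.01, size=n_features)
    if public_data is not None:
        X, y = public_data
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def deploy(theta, n_vehicles):
    # Step 102: every device of the fleet receives a copy of the current parameters.
    return [theta.copy() for _ in range(n_vehicles)]

theta = init_global_model(n_features=16)
local_copies = deploy(theta, n_vehicles=42)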
[0035] The step 104 consists in updating, for each vehicle 31, 32, 33 of the fleet 30 (or a subset of vehicles, i.e. at least one among the plurality of vehicles), by the processor 41 of said vehicle 31, 32, 33, the initial global parameters based on data locally collected by the device, i.e. the MEMS sensor 40, and stored in the memory 42 of said vehicle 31, 32, 33, thus providing updated local parameters of said vehicle 31, 32, 33. This step is also called the model retraining. A subset of K vehicles is selected to participate in the training round. Each selected device of the corresponding selected vehicle(s) performs one or several training steps (with the calculation unit) on its local data (stored in the storage unit) based on the minimization of its local objective function: F_k(θ) = (1/n_k) Σ_{i∈P_k} ℓ(x_i, y_i; θ), where the x_i are the n_k local samples and the y_i are the associated labels. The update of the k-th device is denoted by H_k = θ − θ_k, where θ_k is the updated local model. This step produces an update that helps to train the shared model. At the end of the local training, the data may advantageously be deleted from the storage unit to guarantee that data used for a new training round have never been taken into account before.
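A minimal, hypothetical sketch of step 104 for one device k, assuming a linear regression speed model trained by a few gradient steps on the local objective F_k(θ); the function names, learning rate and number of steps are illustrative assumptions.

import numpy as np

def local_update(theta, X_local, y_local, lr=0.1, n_steps=5):
    # Step 104: the device retrains a copy of the global parameters on its local data.
    theta_k = theta.copy()
    n_k = len(X_local)
    for _ in range(n_steps):
        residual = X_local @ theta_k - y_local       # squared-error loss assumed
        grad = X_local.T @ residual / n_k            # gradient of F_k(theta_k)
        theta_k -= lr * grad
    H_k = theta - theta_k                            # update of the k-th device
    # After local training, the raw data may be deleted from the storage unit.
    return H_k, n_k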
[0036] Finally, the steps 106 and 108 take place. There is a transfer of the updated local parameters to the central server 50 (step 106), and an update of the global model based on the transferred updated local parameters by an aggregation algorithm on the central server 50 (step 108). This is also called model aggregating.
In other words, each selected device transmits its update to the central server 50 where an algorithm aggregates them (by weighted averaging) in order to build an improved global model. This step can be formalized by: θ ← θ − η Σ_{k=1}^{K} (n_k/N) H_k, where N = Σ_k n_k is the total number of samples used by the K selected devices of the selected vehicles of the fleet for retraining at the edge and η is the global learning rate. This step can be considered as a new iteration of the whole training process, improving the robustness of the shared model.
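The aggregation of steps 106 and 108 can be sketched as follows (illustrative only; the weighting by n_k/N follows the weighted averaging described above, and eta denotes the global learning rate η).

import numpy as np

def aggregate(theta, updates, eta=1.0):
    # updates: list of (H_k, n_k) pairs received from the selected devices (step 106)
    N = sum(n_k for _, n_k in updates)                    # total number of samples
    weighted = sum((n_k / N) * H_k for H_k, n_k in updates)
    return theta - eta * weighted                         # step 108: improved global model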
[0037] The steps 102 to 108 are thus repeated until a stopping criterion, a so-called convergence criterion, is reached. The stopping criterion is set by the service provider. Generally, it is based on a validation set on which the performances of the current model are evaluated, and the training is stopped if there has been no improvement for a certain number of iterations, i.e. if the accuracy of the retrained model no longer evolves significantly. A smart approach consists in using the same local data (and their labels) as those used for training. First, the local data are used to compute the validation accuracy of the current model and then they are used to calculate a model update for the next iteration of training. In this way, both local updates and evaluation reports will be sent to the server 50. The service provider of course also has the possibility of evaluating the current model without training it again. In this case, only the evaluation reports will be transmitted to the server by the selected devices.
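A possible (non-normative) sketch of such a stopping criterion, based on the history of mean validation accuracies reported at each round; the patience and tolerance values are arbitrary assumptions.

def should_stop(accuracy_history, patience=10, min_delta=1e-3):
    # Stop when the best accuracy of the last `patience` rounds no longer
    # improves significantly over the best accuracy observed before.
    if len(accuracy_history) <= patience:
        return False
    best_before = max(accuracy_history[:-patience])
    recent_best = max(accuracy_history[-patience:])
    return recent_best - best_before < min_delta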
[0038] It is to be noted that no raw data or features are transmitted to the central server 50 during the learning process (only parameters of the updates and possibly evaluation reports), which reduces communication costs. As the data are stored locally and never transmitted to the service provider, this approach also improves user privacy. Finally, the solution of the invention can be deployed without having previously collected data, as the model can be initialized randomly and then self-improved during the decentralized learning process.
[0039] The learning process can be considered with any type of model for which some notion of update can be defined. For example, models based on gradient descent (linear regression, logistic regression, neural networks...) can naturally be optimized with this method considering partial derivatives. K-means is also well adapted to edge retraining, as the sequential k-means algorithm updates the parameters of the clustering online. Within the solution of the invention, the speed model is based on regression or multi-class classification (considering ranges of speed) approaches, whereas the tracking of each mechanical issue is based on specific one-class classification models. Indeed, the normal case (without any mechanical issue) is modeled and a deviation is detected when the vibrations move away from this normal situation. Such models are speed-dependent, hence the necessity of having previously estimated the speed with the speed model.
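For example, the sequential (online) k-means update mentioned above could be sketched as follows; only the centroids and their counts would then need to be maintained and exchanged, never the raw data (illustrative code, not the patent's implementation).

import numpy as np

def sequential_kmeans_step(centroids, counts, x):
    # Each new local sample moves its nearest centroid by a running-mean update.
    j = np.argmin(np.linalg.norm(centroids - x, axis=1))   # nearest centroid
    counts[j] += 1
    centroids[j] += (x - centroids[j]) / counts[j]
    return centroids, counts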
[0040] During the learning (respectively evaluation) process, the devices communicate with the central server 50 (using a low-power wide-area network, LPWAN) to download/upload the parameters of the shared model (respectively to report evaluation performances). Furthermore, the devices selected for the local training (respectively evaluation) are those connected to a personal smartphone 51 (e.g. by Bluetooth). Indeed, the smartphone 51 provides the labels necessary to train (respectively evaluate) both the speed model (using e.g. GPS) and the mechanical-issue model (using e.g. an application provided by the service provider inviting the user to indicate when his car has been serviced or checked in a garage, to be sure the vehicle has no mechanical issues). It is noted that the smartphone 51 is not necessary during the second phase 20, also called utilization phase or inference step, as depicted in Figure 3.
[0041] Figure 3 schematically represents the required materials for a vehicle 31 of the fleet 30 during the first phase 10 (part (a) of Figure 3) of machine learning and during the second phase 20 (part (b) of Figure 3) of using the model on the vehicle 31 according to the invention.
[0042] The invention also aims at addressing the problem of non-IID (independent and identically distributed) data. The IID sampling of the training data is a key point to train a Machine Learning model: it ensures that the stochastic gradient is an unbiased estimate of the full gradient. But, in a decentralized learning process, it is unrealistic to assume that the local data on each edge device are always IID. Non-IIDness could be due to a poor representation of the classes locally or to a difference between the data distribution of the client and the population distribution. Indeed, the distribution of vibration data can be different depending on the brand of the vehicle or the kind of engine (thermal or electrical engine), and the representation of the speed classes depends on the driver's habits (driving in town vs. highway driving). It is known that non-IIDness significantly reduces the accuracy of a model trained in a decentralized way. A way to improve global performance whatever the local distribution of data consists in personalizing the models at the edge. Personalization works as follows: during the first rounds of learning, i.e. during the first phase 10, all the devices train the shared model, which improves over time. From round S, each device then has the possibility to fine-tune its model locally while keeping training the shared model:
a. At iteration S: θ_local ← θ
b. Minimization of the local objective function F(θ_local) = (1/n) Σ_i ℓ(x_i, y_i; θ_local), where the x_i are the n local samples and the y_i are the associated labels. This step produces the local update h.
c. Update of the local parameters: θ_local ← θ_local − η_local h
[0043] The steps b and c are repeated until the convergence of the local learning process. The choice of S is not trivial, as it should be different for each device. The decision to personalize a model could be based on the distance between the local data distribution and the population distribution, or on the gap between the local accuracy and the mean accuracy of the population. Experiments have shown that personalization improves the performance of the whole solution.
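A minimal sketch of the personalization steps a to c, again assuming an illustrative linear model; theta_shared is the current shared model and lr_local stands for the local learning rate η_local (names and values are assumptions).

import numpy as np

def personalize(theta_shared, X_local, y_local, lr_local=0.05, n_rounds=50):
    theta_local = theta_shared.copy()            # step a: theta_local <- theta
    n = len(X_local)
    for _ in range(n_rounds):                    # steps b and c, repeated until convergence
        residual = X_local @ theta_local - y_local
        h = X_local.T @ residual / n             # step b: local update h
        theta_local -= lr_local * h              # step c: theta_local <- theta_local - lr_local * h
    return theta_local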
[0044] In the following the invention is illustrated with two concrete applications. The first one is dedicated to the estimation of speed and distance based on both multiclass classification and regression. The second one is dedicated to the detection of wheel imbalance mechanical issue using a one-class classification model. In this way, an overview of the invention through various implementations and applications is proposed.
[0045] The first application of the invention is presented through a vibration analysis performed in a decentralized way to estimate the speed and distance travelled by a fleet of cars. For this experiment, a fleet of 42 cars was equipped with a MEMS sensor 40 (more precisely a 3-axis accelerometer) sampled at 200 Hz (and an Arm Cortex-M4 for storage and calculation). The sensor 40 was set on the top of the steering wheel. The sensor 40 can share information directly with the central server 50 using an LTE-M network and can possibly connect to a smartphone 51 by Bluetooth. The fleet of vehicles is made up of 8 different brands (Peugeot, Renault, Nissan, BMW, Audi...) for a total of 20 models (208, Clio, A1...). Each driver took a trip whose duration ranged from 20 minutes to more than 2 hours. The total duration of the acquisitions is about 8 hours. Only 30 cars were selected to train the shared model. For these vehicles, the first part of their trip was used for training while the second part was kept for evaluation. The remaining 12 vehicles participated only in evaluating the model.
[0046] In a preliminary study, a spectral analysis confirmed that vibration signals contain information about speed, so that spectral representations (FFT, filter-bank outputs...) or spectral features (e.g. Linear Frequency Cepstral Coefficients, Band Energy Ratio...) can be contemplated as inputs of the speed model. Spectral representations/features are calculated considering segments (windows) whose duration is comprised between 2 seconds and 10 seconds. The model could be either a regression model (linear regression, logistic regression...) to estimate a speed value or a multi-class model (neural network, linear support vector machines...) to classify inputs into speed bins. Both cases are illustrated in this part. For each speed estimation/classification s outputted by the model, the distance d is estimated with the following formula: d = s × t_s, where t_s is the duration of the segment. Without any kind of restriction, our multi-class and regression models are both based on a fully-connected neural network (with 2 hidden layers of 500 neurons each) taking as inputs LFCC (Linear Frequency Cepstral Coefficients) feature vectors and outputting an estimation of speed (considering 3 m/s bins in the case of classification). The segmentation of the raw data was done considering 5 s windows (with 50% overlap between successive windows). For each segment, the LFCC are calculated for the three axes of the sensor and then concatenated to build the input feature vector. Recall that the labels about speed are provided by the GPS of the smartphone.
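For illustration, the input pipeline described above could be sketched as follows; a simple log-spectrum per axis is used here as a stand-in for the LFCC computation, which is not reproduced. The window length, overlap and bin width follow the values given in the text, the rest is assumed.

import numpy as np

FS = 200           # sampling rate (Hz)
WIN = 5 * FS       # 5 s window
HOP = WIN // 2     # 50% overlap between successive windows

def segments(signal_3axis):
    # signal_3axis: array of shape (n_samples, 3)
    for start in range(0, len(signal_3axis) - WIN + 1, HOP):
        yield signal_3axis[start:start + WIN]

def feature_vector(segment, n_coeffs=20):
    feats = []
    for axis in range(3):
        spectrum = np.abs(np.fft.rfft(segment[:, axis]))
        feats.append(np.log1p(spectrum[:n_coeffs]))   # crude spectral feature per axis
    return np.concatenate(feats)                      # concatenation of the 3 axes

def speed_bin(speed_mps, bin_width=3.0):
    return int(speed_mps // bin_width)                # label derived from the smartphone GPS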
[0047] Figure 4 represents the evolution of the accuracy of the shared model of the invention depending on the number of learning rounds. First, the shared model was evaluated at each round of learning considering several values for K (which defines the number of devices participating in each training round). Part (a) of Figure 4 shows the evolution of the mean accuracy, i.e. the accuracy calculated with the test set of all the devices, depending on the number of rounds. It shows that the decentralized learning process self-improves over time and that a value of K comprised between 3 and 5 (i.e. about 10% of the total number of vehicles of the fleet) seems to be a relevant choice to obtain good computational efficiency and a good convergence rate (K=5 is chosen for the next experiments). The local accuracy is reported for two vehicles of the fleet, one which participated in the training process during the first part of its trip (part (b) of Figure 4) and the other which never retrained the shared model at the edge (part (c) of Figure 4). In both cases, the local accuracy improves over time, which means that a device does not need to participate in the collaborative learning process to take advantage of it. [0048] Figure 5 represents the local accuracy of the model for the vehicles of the fleet and the performance gain or loss for each vehicle based on the first phase of the method of the invention.
[0049] At the end of the learning process (i.e. after 100 rounds in this experiment), the shared model is evaluated with the test set of each device (see Figure 5). It appears that there is a significant difference in performance between the vehicles of the fleet (the local accuracy is comprised between 0.24 and 0.87 for a mean value of 0.70). This is due to the fact that the distribution of the data can be very different depending on the vehicle's brand. As an illustration, the worst performances are obtained for a 'PEUGEOT ION' car, which is the only electric vehicle of the fleet.
[0050] Figure 6 represents the performance improvement of the model accuracy thanks to the step of personalization of the invention.
[0051] A way to improve local performances whatever the distribution of data consists in personalizing the models at the edge (step 110). At the beginning, only the shared model is trained, until it has sufficient quality. Then, each device which participates in the training process has the possibility to fine-tune its model locally while keeping training the shared model. Figure 6 shows that personalization is relevant to improve the local accuracy (and thus the mean accuracy) in the case of non-IID data (the mean accuracy increases by 0.03 to reach 0.73 with personalization). For this experiment, the starting round, from which the process of personalization began, was S=25. The improvement in performance can of course be greater if the starting round is not the same for all devices but the result of a function taking as inputs the distance between the local data distribution and the population distribution and/or the gap between the local accuracy and the mean accuracy of the population.
[0052] It is also possible to estimate the distance d of each trip with the following formula: d = s × t_s, where s is the output of our multi-class speed model and t_s is the duration of the segment of analysis. The distance is evaluated on the test set of each vehicle with the calculation of the relative error: err = |d − d_true| / d_true, where d is the estimated distance and d_true is the real distance, i.e. the ground truth provided by the GPS of the smartphone. [0053] Figure 7 represents the relative errors of the estimation by the shared model of the invention, without (part (a) of Figure 7) and with (part (b) of Figure 7) the step of personalization, of the distance travelled by the fleet of vehicles. Results are very satisfactory, as the relative error is 0.04 considering the speed bins outputted by the shared model and only 0.03 after personalization of the models.
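The distance estimation and its relative error can be sketched as follows (illustrative only; overlap between successive windows is ignored here).

import numpy as np

def travelled_distance(speeds_mps, t_s=5.0):
    # d = sum over segments of s * t_s
    return float(np.sum(np.asarray(speeds_mps) * t_s))

def relative_error(d_estimated, d_true):
    # err = |d - d_true| / d_true, with d_true provided by the smartphone GPS
    return abs(d_estimated - d_true) / d_true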
[0054] The first application of the invention is now presented through a regression model to estimate the speed and distance travelled by a fleet of cars. With regression models, the same kinds of results as those produced with classification models to estimate speed are obtained. In this case, the accuracy is replaced with the MAE (mean absolute error) to evaluate the performance of the speed models: MAE = (1/n) Σ_i |ŷ_i − y_i|, where ŷ_i is the estimated value of speed and y_i is the ground truth provided by the GPS.
[0055] Figure 8 represents the regression performance with the shared model and the personalized model according to the invention. The mean MAE is 1.5 m/s using the shared model and decreases to 1.2 m/s with personalization, which is quite acceptable. The speed estimation will be used as input of the mechanical-issue model.
[0056] Figure 9 represents the performances about estimation of distance travelled by the fleet of vehicles using regression models, with the shared model (part (a) of Figure 9) and with the personalized model (part (b) of Figure 9) according to the invention. It shows the performance for the distance estimation based on the speed provided by the regression models. Results for distance estimation are still very satisfactory as relative error is about 0.05 considering speed outputted by the shared model and only 0.03 after personalization of the models.
[0057] The second application of the invention is presented through a vibration analysis performed in a decentralized way to detect the wheel-imbalance mechanical issue for a fleet of cars. For this experiment, a fleet of 7 cars was used. Before the start of the acquisitions, each vehicle was taken to the shop to be sure it had no mechanical issue. Then, each driver took a first trip to produce data corresponding to the normal class. A second trip was taken after a weight was added to a front or rear wheel of the car. Data acquired during this trip were labeled as 'imbalance'. The imbalance issue introduced by the weight could be more or less hard to detect depending on whether the weight was light, medium or heavy. 50% of the first trip was kept for training. The remaining part of this trip and the second trip were used for evaluation.
[0058] Figure 10 represents the duration of the acquisitions for various vehicles of the fleet, with a first part of the acquisition without any mechanical issue and a second part with an imbalance issue.
[0059] The models considered to track the mechanical issue are speed-dependent one-class models. This means that the normal class is modeled for each range of speed and a deviation is detected when the vibrations move away from the normal case.
Spectral features (e.g. flatness, entropy, bandwidth, band energy ratio...) are characteristic of mechanical issues. The features are calculated considering 5 s segments (with 50% overlap between successive windows) on the longitudinal or transverse axis of the vehicle (the spherical coordinates with roll and pitch can also be used). The choice of the best axis depends on the mechanical issue. For example, the longitudinal axis is the best choice to detect the imbalance problem, while the features are calculated on the transverse axis to detect a wheel geometry/alignment problem. The model consists in estimating the mean μ and standard deviation σ of the spectral features; μ and σ are updated online during the training process. The goal of mechanical-issue detection consists in limiting the number of False Positives (FP) while keeping a good number of True Positives (TP). To do that, an alert will be sent to the driver only when the value of the spectral feature is beyond μ ± 3σ. The training process of the one-class model is carried out until the number of FP remains stable.
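A minimal sketch of such a speed-dependent one-class model, using an online mean/variance update (Welford's algorithm, an assumption: the patent only states that μ and σ are updated online) and the μ ± 3σ alert rule.

import numpy as np

class OneClassSpeedModel:
    def __init__(self, n_speed_bins):
        self.n = np.zeros(n_speed_bins)
        self.mu = np.zeros(n_speed_bins)
        self.m2 = np.zeros(n_speed_bins)          # running sum of squared deviations

    def update(self, speed_bin, feature):
        # Training on "normal" data only: online update of mu and sigma per speed range.
        self.n[speed_bin] += 1
        delta = feature - self.mu[speed_bin]
        self.mu[speed_bin] += delta / self.n[speed_bin]
        self.m2[speed_bin] += delta * (feature - self.mu[speed_bin])

    def is_alert(self, speed_bin, feature):
        # Inference: alert only when the feature lies outside mu +/- 3*sigma.
        if self.n[speed_bin] < 2:
            return False
        sigma = np.sqrt(self.m2[speed_bin] / (self.n[speed_bin] - 1))
        return abs(feature - self.mu[speed_bin]) > 3 * sigma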
[0060] Figure 11 represents the performance obtained with the shared model of the invention for the imbalance mechanical issue. The results reported were obtained considering the spectral flatness from the longitudinal axis as input feature. The number of devices used to update the model parameters (μ and σ) at each round of training was 5. The performance evaluation was based on the FP and TP ratios. FP is the ratio between the number of alerts and the total number of segments of the testing part of the first (normal) trip, whereas TP is the ratio between the number of alerts and the total number of segments of the second (imbalance) trip.
[0061] The capacity of a shared model to detect the imbalance issue is evaluated. On the one hand, part (a) of Figure 11 shows the evolution of the FP during the learning process. It shows that the decentralized learning process according to the invention makes it possible to model the normal class well, as the number of FP decreases to 0.6% at the end of the training process. On the other hand, part (b) of Figure 11 shows that the TP is comprised between 1.7% (Mitsubishi with a light imbalance) and 27.6% (Ford with a heavy imbalance). It appears that it is very hard to detect the light imbalance because the TP does not exceed 3.5% in this case. For the other cases (medium and heavy), a recurrent alert should inform the driver that his car has a mechanical issue.
It is noted that the TP are calculated considering all the ranges of speed, but the alerts are often activated only at high speeds. Thus, the more time the driver spends at high speeds, the more recurrent the alert.
[0062] Figure 12 represents the performance improvement of the model accuracy for the imbalance mechanical issue thanks to the step of personalization of the invention.
[0063] The capacity of personalized models to detect the imbalance issue is evaluated. The starting round from which the process of personalization began is set to S=12. Figure 12 shows that the step 110 of personalization makes it possible to improve both the FP (0.4%) and the TP (comprised between 4.0% and 32.6%), even for the cars with a light imbalance.
[0064] The invention enables the maintenance of a fleet of vehicles based on the initialization of a global model, the parameters of which are transferred to the vehicles of the fleet where they are locally updated thanks to data locally collected. The updated parameters may be taken into account to upgrade the global model at the central server, and the parameters of the upgraded global model may be transferred to the vehicles. In order to ensure a certain level of accuracy for each vehicle, the invention gives the possibility to personalize the global model used by said vehicle.
[0065] The described invention offers a solution to maintain a fleet of vehicles based on the analysis of the signals from one sensor, advantageously a 3-axis accelerometer, embedded in each vehicle of the fleet. The service provider has no need to collect data for the training and validation of the Machine Learning models that will be deployed inside each vehicle of the fleet to estimate speed or to detect mechanical issues. The invention proposes to train a shared model for all vehicles of the fleet but also to personalize it, powering personalized experiences and thus leading to a better accuracy of the model for a large variety of vehicles.

Claims

1. A method, implemented by computer, for maintenance of a fleet (30) of a plurality of vehicles (31, 32, 33), each vehicle (31, 32, 33) of the fleet (30) comprising a MEMS sensor (40), a processor (41) and a memory (42), characterized in that the method comprises a first phase (10) of machine learning of a global model and a second phase (20) of using the global model on each vehicle (31, 32, 33) of the fleet (30), the first phase (10) of machine learning comprising the following steps:
• computing the global model based on initial data on a central server (50) (step 100), thus providing initial global parameters of the global model,
• uploading the global model with the initial global parameters on the memory (42) of each vehicle (31, 32, 33) of the fleet (30) (step 102),
• for at least one among the plurality of vehicles (31, 32, 33) of the fleet (30), updating (step 104) by the processor (41) of said vehicle (31, 32, 33) the initial global parameters based on data locally collected by the MEMS sensor (40) and stored in the memory (42) of said vehicle (31, 32, 33), thus providing updated local parameters of said vehicle (31, 32, 33),
• for at least one among the plurality of vehicles (31, 32, 33) of the fleet (30), transferring the updated local parameters to the central server (50) (step 106),
• updating the global model based on the transferred updated local parameters by an aggregation algorithm on the central server (50) (step 108), the steps of the first phase (10) being performed until a predetermined convergence criterion is reached, in that the method further comprises a step (110) of personalization of the global model of at least one among the plurality of vehicles (31, 32, 33) of the fleet (30), the personalization of the global model being based on the data locally collected by the MEMS sensor (40) and stored in the memory (42) of said vehicle, and in that the method comprises, for each vehicle (31, 32, 33) of the fleet (30), a step (112) of sending an alarm signal to a user of said vehicle, if a deviation between a feature obtained from the global model and a predefined feature is determined.
2. The method according to claim 1, wherein the step (110) of personalization of the global model of at least one among the plurality of vehicles of the fleet is performed during the first phase (10) of machine learning of the global model.
3. The method according to claim 2, wherein the step (110) of personalization of the global model of the at least one among the plurality of vehicles of the fleet is performed based on a distribution of the data locally collected, local performances or an iteration of the learning process.
4. The method according to claim 2 or 3, wherein the updated local parameters of the personalized global model of the at least one among the plurality of vehicles of the fleet are excluded from the step (106) of transferring the updated local parameters to the central server.
5. The method according to any one of claims 1 to 4, wherein the data locally collected by the MEMS sensor of each vehicle is the instantaneous acceleration of said vehicle.
6. A device for maintenance of a fleet of a plurality of vehicles, the device comprising means for implementing the steps of the method as claimed in any one of claims 1 to 5.
7. A computer program product, said computer program comprising code instructions making it possible to perform the steps of the method as claimed in any one of claims 1 to 5, when said program is run on a computer.
EP19861267.3A 2019-11-29 2019-11-29 Method for maintenance of a fleet of a plurality of vehicles Pending EP4066219A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2019/001413 WO2021105736A1 (en) 2019-11-29 2019-11-29 Method for maintenance of a fleet of a plurality of vehicles

Publications (1)

Publication Number Publication Date
EP4066219A1 true EP4066219A1 (en) 2022-10-05

Family

ID=69844858

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19861267.3A Pending EP4066219A1 (en) 2019-11-29 2019-11-29 Method for maintenance of a fleet of a plurality of vehicles

Country Status (2)

Country Link
EP (1) EP4066219A1 (en)
WO (1) WO2021105736A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709106B (en) * 2021-07-22 2023-11-07 一汽解放汽车有限公司 Data analysis system and method suitable for commercial vehicle internet of vehicles data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9639999B2 (en) * 2015-01-14 2017-05-02 Tata Consultancy Services Limited System and method for estimating speed of a vehicle
US20180315260A1 (en) * 2017-05-01 2018-11-01 PiMios, LLC Automotive diagnostics using supervised learning models
US11328210B2 (en) * 2017-12-29 2022-05-10 Micron Technology, Inc. Self-learning in distributed architecture for enhancing artificial neural network
US11373115B2 (en) * 2018-04-09 2022-06-28 Here Global B.V. Asynchronous parameter aggregation for machine learning

Also Published As

Publication number Publication date
WO2021105736A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
JP7329068B2 (en) Systems and methods for vehicle tire performance modeling and feedback
CN110850854A (en) Autonomous driver agent and policy server for providing policies to autonomous driver agents
Xu et al. An automated learning-based procedure for large-scale vehicle dynamics modeling on baidu apollo platform
Hallac et al. Drive2vec: Multiscale state-space embedding of vehicular sensor data
CN113544597A (en) Method for optimizing control signals for operating a vehicle
JP7053213B2 (en) Operation data analysis device
CN114556248A (en) Method for determining a sensor configuration
CN113658423B (en) Vehicle track abnormality detection method based on circulation gating unit
Kuefler et al. Burn-in demonstrations for multi-modal imitation learning
EP4066219A1 (en) Method for maintenance of a fleet of a plurality of vehicles
JP7415471B2 (en) Driving evaluation device, driving evaluation system, in-vehicle device, external evaluation device, and driving evaluation program
Bajic et al. Road roughness estimation using machine learning
Yiğit et al. Estimation of road surface type from brake pressure pulses of ABS
CN115943396A (en) Detecting vehicle faults and network attacks using machine learning
Singh et al. Application of machine learning & deep learning techniques in the context of use cases relevant for the tire industry
CN117376920A (en) Intelligent network connection automobile network attack detection, safety state estimation and control method
CN114269632A (en) Method and device for estimating a mechanically fed steering wheel torque on a steering wheel of a steering system of a motor vehicle
WO2022165602A1 (en) Method, system and computer readable medium for probabilistic spatiotemporal forecasting
Giuliacci et al. Recurrent Neural Network Model for On-Board Estimation of the Side-Slip Angle in a Four-Wheel Drive and Steering Vehicle
Xiao et al. DDK: A deep koopman approach for longitudinal and lateral control of autonomous ground vehicles
US10916074B2 (en) Vehicle wheel impact detection
Vasantharaj et al. A low-cost in-tire-pressure monitoring SoC using integer/floating-point type convolutional neural network inference engine
US20230055012A1 (en) Dynamic vehicle operation
Mao Road Surface Estimation Using Machine Learning
Mizrachi et al. Road surface characterization using crowdsourcing vehicles

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220602

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)