CN112183899A - Method, device, equipment and storage medium for determining safety degree prediction model

Method, device, equipment and storage medium for determining safety degree prediction model

Info

Publication number: CN112183899A
Application number: CN202011218220.3A
Authority: CN (China)
Prior art keywords: drivers, driver, safety, target, driving information
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 宋洪正, 刘亚书
Current Assignee: Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee: Beijing Didi Infinity Technology and Development Co Ltd

Classifications

    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N 20/00 Machine learning
    • G06Q 10/0635 Risk analysis of enterprise or organisation activities
    • G06Q 10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis


Abstract

The present disclosure relates to a method, apparatus, device, and storage medium for determining a safety degree prediction model. The method described herein comprises: acquiring historical driving information of a plurality of drivers; dividing the plurality of drivers into a plurality of driver subsets based on the historical driving information; training a plurality of sub-safety degree prediction models respectively corresponding to the plurality of driver subsets based on the historical driving information respectively corresponding to the plurality of driver subsets; and determining a safety degree prediction model by combining the plurality of sub-safety degree prediction models, the safety degree prediction model being capable of determining a future driving safety degree of a target driver based on target historical driving information of the target driver. In embodiments of the present disclosure, by determining a safety degree prediction model and determining a driving safety degree using the model, the probability that a driver will experience a risk event can be effectively predicted.

Description

Method, device, equipment and storage medium for determining safety degree prediction model
Technical Field
The present disclosure relates generally to the field of Artificial Intelligence (AI), and more particularly to methods, apparatus, devices, and computer-readable storage media for determining a safety prediction model.
Background
With the development of technology and networks, more and more people choose travel services provided by service providers. The safety of passengers is particularly important for a service manager managing travel services. However, there is currently no mechanism for evaluating a driver's driving safety degree, so measures are often taken only after a risk event involving the driver has already occurred. This makes it impossible to effectively ensure passenger safety, and it also harms the reputation, business, and the like of the service manager.
Disclosure of Invention
According to some embodiments of the present disclosure, a method, an apparatus, a device, and a computer-readable storage medium for determining a safety degree prediction model are provided.
In a first aspect of the disclosure, a method of determining a safety degree prediction model is provided. The method comprises the following steps: obtaining historical driving information of a plurality of drivers, the historical driving information at least indicating whether the plurality of drivers have experienced a risk event within a historical period of time and whether the plurality of drivers have experienced a risk event within a predetermined number of trips; dividing the plurality of drivers into a plurality of driver subsets based on the historical driving information; training a plurality of sub-safety degree prediction models respectively corresponding to the plurality of driver subsets based on the historical driving information respectively corresponding to the plurality of driver subsets; and determining a safety degree prediction model by combining the plurality of sub-safety degree prediction models, the safety degree prediction model being capable of determining a future driving safety degree of a target driver based on target historical driving information of the target driver.
In a second aspect of the disclosure, an apparatus for determining a safety degree prediction model is provided. The apparatus includes: a first historical driving information acquisition module configured to acquire historical driving information of a plurality of drivers, the historical driving information indicating at least whether the plurality of drivers have experienced a risk event within a historical period of time and whether the plurality of drivers have experienced a risk event within a predetermined number of trips; a first driver subset partitioning module configured to partition the plurality of drivers into a plurality of driver subsets based on the historical driving information; a first sub-safety degree prediction model training module configured to train a plurality of sub-safety degree prediction models respectively corresponding to the plurality of driver subsets based on the historical driving information respectively corresponding to the plurality of driver subsets; and a safety degree prediction model determination module configured to determine, by combining the plurality of sub-safety degree prediction models, a safety degree prediction model capable of determining a future driving safety degree of a target driver based on target historical driving information of the target driver.
In a third aspect of the present disclosure, there is provided an electronic device comprising a memory and a processor, wherein the memory is for storing computer-executable instructions that are executed by the processor to implement a method according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided having computer-executable instructions stored thereon, wherein the computer-executable instructions are executed by a processor to implement a method according to the first aspect of the present disclosure.
There is also provided, in accordance with some embodiments of the present disclosure, a method, apparatus, device and computer-readable storage medium for determining driving safety.
In a fifth aspect of the present disclosure, a method of determining a driving safety degree is provided. The method comprises the following steps: acquiring target historical driving information of a target driver, the target historical driving information at least indicating whether the target driver has experienced a risk event within a historical period of time and whether the target driver has experienced a risk event within a predetermined number of trips; and determining a future driving safety degree of the target driver based on the target historical driving information using a safety degree prediction model, wherein the safety degree prediction model is determined by combining a plurality of sub-safety degree prediction models, and the plurality of sub-safety degree prediction models are trained based on historical driving information respectively corresponding to a plurality of driver subsets of a plurality of drivers.
In a sixth aspect of the present disclosure, an apparatus for determining a degree of driving safety is provided. The device includes: a first target historical driving information acquisition module configured to acquire target historical driving information of a target driver, the target historical driving information indicating at least whether a risk event occurred to the target driver within a historical period of time and whether a risk event occurred to the target driver within a predetermined number of trips; and a future driving safety determination module configured to determine a future driving safety of the target driver based on the target historical driving information using a safety prediction model, wherein the safety prediction model is determined by combining a plurality of sub-safety prediction models, and the plurality of sub-safety prediction models are trained based on historical driving information corresponding to a plurality of subsets of drivers of the plurality of drivers, respectively.
In a seventh aspect of the present disclosure, there is provided an electronic device comprising a memory and a processor, wherein the memory is for storing computer-executable instructions that are executed by the processor to implement a method according to the fifth aspect of the present disclosure.
In an eighth aspect of the present disclosure, a computer-readable storage medium is provided having computer-executable instructions stored thereon, wherein the computer-executable instructions are executed by a processor to implement a method according to the fifth aspect of the present disclosure.
In embodiments of the disclosure, by determining the safety degree prediction model and determining the driving safety degree using the model, the probability that a risk event will occur for a driver can be effectively predicted, so that the safety of passengers is improved by means of pre-intervention.
Drawings
Features, advantages, and other aspects of various implementations of the disclosure will become more apparent with reference to the following detailed description when taken in conjunction with the accompanying drawings. Several implementations of the present disclosure are illustrated herein by way of example, and not by way of limitation, in the figures of the accompanying drawings:
FIG. 1 illustrates an example environment in accordance with some embodiments of the present disclosure;
FIG. 2 illustrates a flow diagram of an example method of determining a safety prediction model in accordance with some embodiments of the present disclosure;
FIG. 3 illustrates a flow chart of an example method of determining driving safety in accordance with some embodiments of the present disclosure;
FIG. 4 illustrates a block diagram of an apparatus to determine a safety prediction model in accordance with some embodiments of the present disclosure;
FIG. 5 illustrates a block diagram of an apparatus to determine driving safety in accordance with some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of a computing device in which one or more embodiments of the present disclosure may be implemented.
Detailed Description
Preferred implementations of the present disclosure will be described in more detail below with reference to the accompanying drawings. While a preferred implementation of the present disclosure is shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the implementations set forth herein. Rather, these implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "embodiment" and "some embodiments" mean "at least some embodiments". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As used herein, a "model" may learn associations between respective inputs and outputs from training data, such that a corresponding output may be generated for a given input after training is complete. The generation of the model may be based on machine learning techniques. The "model" may also be referred to herein as a "learning model" or a "learning network," these terms being used interchangeably herein.
In general, machine learning can include at least two phases: a training phase and an application phase (also referred to as an inference phase). In the training phase, a given model may be trained using a large amount of training data, iterating until the model can consistently derive, from the training data, inferences similar to those that human judgment would make. Through training, the model may be considered able to learn the association between inputs and outputs (also referred to as an input-to-output mapping) from the training data, and the parameter values of the trained model are determined. In the application phase, the model may process actual inputs and determine the corresponding outputs using the trained parameter values.
As mentioned above, with the development of technologies and networks, more and more people choose to provide travel services as drivers. The safety of passengers is particularly important for a service manager managing travel services. It is desirable to provide a mechanism for evaluating the degree of driving safety of a driver.
Embodiments of the present disclosure provide methods of determining a safety degree prediction model and of determining a driving safety degree. In embodiments of the disclosure, by determining the safety degree prediction model and determining the driving safety degree using the model, the probability that a risk event will occur for a driver can be effectively predicted, so that the safety of passengers is improved by means of pre-intervention.
Fig. 1 illustrates an example environment 100 in accordance with some embodiments of the present disclosure. As shown in fig. 1, the example environment 100 includes a computing device 110 configured to determine a safety degree prediction model 140. Specifically, the computing device 110 may obtain historical driving information 125 for a plurality of drivers from the database 120. Based on the historical driving information 125, the computing device 110 may divide the plurality of drivers into a plurality of driver subsets. The computing device 110 trains a plurality of sub-safety degree prediction models 130-1, 130-2, …, 130-N (collectively or individually referred to herein as sub-safety degree prediction models 130 for ease of discussion, where N is an integer greater than 1) based on the historical driving information corresponding to each of the plurality of driver subsets. The computing device 110 determines a safety degree prediction model 140 by combining the plurality of sub-safety degree prediction models 130.
The determined safety prediction model 140 may be provided to one or more devices for safety prediction of the driver. As shown in fig. 1, the safety prediction model 140 is provided to a computing device 150. The computing device 150 may obtain target historical driving information 160 for the target driver and then determine a future driving safety 170 for the target driver by utilizing the trained safety prediction model 140.
Computing device 110 and/or computing device 150 may be any electronic device with computing capabilities, including a mobile device, a stationary device, or a portable device. Examples of computing device 110 and/or computing device 150 include, but are not limited to, servers, mainframes, edge computing nodes, end devices (such as mobile handsets, personal computers, etc.), and so forth. In some embodiments, computing device 110 may be a device with more powerful computing capabilities, such as a computing device in a cloud environment.
It should be understood that the example environment 100 for determining and using a safety prediction model is described for exemplary purposes only and does not imply any limitation as to the scope of the present disclosure. For example, embodiments of the present disclosure may also be applied to environments other than the example environment 100. For example, computing device 110 and computing device 150 may be the same device. It should also be understood that the specific numbers of devices or models described above are given for illustrative purposes only and do not imply any limitations on the scope of the disclosure. For example, embodiments of the present disclosure may also train more or fewer sub-safety prediction models 130.
The determination and application of the safety prediction model will be discussed in detail below with reference to the figures.
FIG. 2 illustrates a flow diagram of an example method 200 of determining a safety prediction model in accordance with an embodiment of the present disclosure. For example, the method 200 may be performed by the computing device 110 as shown in FIG. 1. It should be understood that method 200 may also be performed by other devices, and the scope of the present disclosure is not limited in this respect. It should also be understood that method 200 may also include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At 210, the computing device 110 obtains historical driving information 125 for the plurality of drivers, the historical driving information 125 indicating at least whether the plurality of drivers have experienced a risk event within a historical period of time and whether the plurality of drivers have experienced a risk event within a predetermined number of trips. As will be discussed in detail below, the historical driving information 125 is used as training data for training the safety prediction model 140.
In some embodiments, the computing device 110 may obtain the following historical driving information 125 for the plurality of drivers: the points in time at which risk events occurred for the plurality of drivers and the points in time of all completed trips. In some embodiments, a risk event may be, for example, a verbal and/or physical conflict with a passenger, an intentional detour, an unreasonable fee charged to a passenger, etc. From this historical driving information 125, the computing device 110 may determine whether a driver has experienced a risk event within a given historical period of time and whether a risk event has occurred within a predetermined number of trips. The historical time period may be, for example, 30 days, 60 days, 90 days, etc. The predetermined number of trips may be, for example, 100 trips, 300 trips, 1000 trips, etc.
Whether a risk event has occurred is determined with respect to both a historical period of time and a predetermined number of trips mainly for the following reason: the activity level of different drivers varies. For example, some drivers provide travel services as their main occupation and complete a certain number of trips almost every day, while other drivers provide travel services only in their spare time or occasionally and may complete only a single trip over a long period. Obviously, a driver with a higher activity level has a correspondingly higher probability of a risk event occurring within a given time period. If only whether a risk event has occurred within a given historical period of time were considered, the resulting model would be biased toward more active drivers. Therefore, to eliminate this effect and obtain a more accurate safety degree prediction model, both whether a risk event has occurred within a historical period of time and whether one has occurred within a predetermined number of trips are considered.
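As a minimal, non-authoritative sketch of how these two labels might be derived from the time points described above (the function names, data shapes, and window defaults are assumptions for illustration, not part of this disclosure):

```python
from datetime import datetime, timedelta
from typing import List, Set

def risk_within_days(risk_event_times: List[datetime],
                     now: datetime, days: int = 30) -> bool:
    """True if at least one risk event falls within the last `days` days."""
    window_start = now - timedelta(days=days)
    return any(t >= window_start for t in risk_event_times)

def risk_within_trips(risk_event_trip_ids: Set[str],
                      trip_ids_newest_first: List[str], n_trips: int = 100) -> bool:
    """True if a risk event occurred in any of the driver's last `n_trips` trips."""
    return any(trip_id in risk_event_trip_ids
               for trip_id in trip_ids_newest_first[:n_trips])

# Example: a driver whose only risk event happened 10 days ago but 150 trips ago
# is positive for the 30-day label yet negative for the 100-trip label.
```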
In some embodiments, in addition to information related to risk events, the historical driving information 125 may include personal information such as the driver's age, driving age, gender, and the like. Additionally or alternatively, the historical driving information 125 may also include evaluated information of the driver, for example the positive rating rate, the negative rating rate, the complaint rate, and the variance of the rating values received by the driver. In some embodiments, the historical driving information 125 may additionally or alternatively include abnormal behavior information of the driver, for example the number of times the driver has changed phone numbers, the number of times the driver has changed vehicles, whether the driver has been prohibited from providing travel services, and whether personal information and/or evaluated information has been tampered with. In some embodiments, the historical driving information 125 may also include information about the leasing company to which the driver belongs, mainly because drivers of some leasing companies have a higher overall safety degree while drivers of other leasing companies have a lower one.
It should be understood that the length of the historical time period, the number of predetermined trips, and the information included in the historical driving information 125 are shown for exemplary purposes only and do not imply any limitations on the scope of the present disclosure. For example, the computing device 110 may also obtain whether risk events occurred during other time periods and/or other numbers of trips. For example, the historical driving information 125 may also include other information. The present disclosure is not limited in this respect.
At 220, the computing device 110 divides the plurality of drivers into a plurality of driver subsets based on the historical driving information 125. As mentioned above, in order to avoid the final safety degree prediction model 140 being biased toward drivers with high activity levels, it is desirable to construct different training sample sets from the historical driving information in order to train different sub-safety degree prediction models 130. For model training, each training sample set may include training positive samples and training negative samples. In embodiments of the present disclosure, the training sample sets are constructed by dividing the drivers into subsets.
In some embodiments, drivers who have experienced a risk event may be used as a training positive sample, and drivers who have not experienced a risk event may be used as a training negative sample. It is easily understood that the training sample set determined according to whether a risk event has occurred within the historical time period may be different from the training sample set determined according to whether a risk event has occurred within a predetermined number of trips, and thus the trained sub-safety degree prediction model 130 may also be different. Further, the training sample sets corresponding to different historical time periods and different numbers of trips may also be different. The sub-safety prediction model 130 trained in this way takes into account the influence of different information on the model training, so that the finally obtained safety prediction model 140 is more comprehensive and accurate.
In some embodiments, the computing device 110 may select at least one driver having experienced a risk event within a historical period of time from the plurality of drivers as a first group of drivers, select at least one driver having experienced a risk event within a predetermined number of trips from the plurality of drivers as a second group of drivers, and partition the first and second groups of drivers into different subsets of drivers. The two groups of drivers respectively serve as training positive samples to train different sub-safety degree prediction models.
For example, in some embodiments, the computing device 110 may partition at least one driver who experienced a risk event within 30 days into one subset of drivers and at least one driver who experienced a risk event within 100 trips into another subset of drivers.
Additionally or alternatively, in some embodiments, the computing device 110 may also train different sub-safety prediction models by dividing the drivers who have experienced the risk event over different historical time periods into different subsets of drivers. For example, the computing device 110 may classify at least one driver who has experienced a risk event within 30 days into one subset of drivers and at least one driver who has experienced a risk event within 60 days into another subset of drivers. In this way, the performance of the driver at different times can be taken into account.
Additionally or alternatively, in some embodiments, the computing device 110 may also train different sub-safety prediction models by dividing the drivers who have experienced the risk event over different numbers of trips into different subsets of drivers. For example, the computing device 110 may partition at least one driver who experienced a risk event within 100 trips into one subset of drivers and at least one driver who experienced a risk event within 300 trips into another subset of drivers.
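A sketch of this grouping step is shown below, assuming one pandas DataFrame row per driver and precomputed boolean label columns; the column names are illustrative, not taken from this disclosure:

```python
import pandas as pd

def positive_groups(drivers: pd.DataFrame) -> dict:
    """Select the training-positive groups of drivers, one group per criterion."""
    criteria = ["risk_30d", "risk_60d", "risk_100_trips", "risk_300_trips"]
    return {name: drivers[drivers[name]] for name in criteria}

# The groups may overlap: a driver with a risk event both within 30 days and
# within the last 100 trips appears as a positive sample in both groups.
```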
Note that there may be intersections between the driver subsets. In other words, the same driver may belong to different subsets of drivers. For example, if a driver meets the requirements of having a risk event occurring within 30 days and a risk event occurring within 100 trips at the same time, the driver may be a training positive sample of a different subset of drivers.
As mentioned above, determining the first and second groups of drivers as training positive samples according to the historical time period and the predetermined number of trips, respectively, and dividing the two groups of drivers into different subsets of drivers as training sample sets to train the sub-safety degree prediction models can make the resulting safety degree prediction models more comprehensive and accurate.
In a real scenario, there are a large number of drivers with low activity levels, and the probability that these drivers experience a risk event is naturally low, so the number of negative samples in the database 120 is much larger than the number of positive samples. To balance the positive and negative samples, the computing device 110 may select, as training negative samples, a number of drivers who did not experience a risk event such that the difference between the numbers of positive and negative samples is within a predetermined threshold.
In some embodiments, the computing device 110 may determine that the number of drivers in the first set of drivers is a first number. The computing device 110 may then select, as the third group of drivers, a second number of drivers that have not experienced a risk event within the historical period of time from the plurality of drivers, the first number differing from the second number by less than a predetermined threshold. Next, the computing device 110 merges the first and third sets of drivers into a first subset of drivers, labeled as training positive and training negative examples, respectively.
In some embodiments, the computing device 110 may select different drivers who have not experienced a risk event as training negative examples to combine with the same training positive example drivers to form a subset of drivers. For example, for a first set of N drivers, the computing device 110 may select N non-risky drivers to form a subset of drivers to train a sub-safety prediction model and select N additional non-risky drivers to form another subset of drivers to train another sub-safety prediction model.
Similarly, for drivers associated with a predetermined number of trips, the computing device 110 may determine that the number of drivers in the second set of drivers is a third number, select a fourth number of drivers from the plurality of drivers that have not experienced a risk event within the predetermined number of trips as a fourth set of drivers, the third number differing from the fourth number by less than a predetermined threshold, and merge the second set of drivers and the fourth set of drivers into a second subset of drivers, the second set of drivers and the fourth set of drivers labeled as a training positive sample and a training negative sample, respectively.
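The following sketch illustrates one possible way of pairing a positive group with a roughly equal number of non-risk drivers; the random sampling strategy and names are assumptions, not a procedure prescribed by this disclosure:

```python
import pandas as pd

def build_driver_subset(positives: pd.DataFrame,
                        non_risk_pool: pd.DataFrame,
                        seed: int = 0) -> pd.DataFrame:
    """Merge one positive group with an equally sized sample of non-risk drivers."""
    negatives = non_risk_pool.sample(n=len(positives), random_state=seed)
    positives = positives.assign(label=1)   # training positive samples
    negatives = negatives.assign(label=0)   # training negative samples
    return pd.concat([positives, negatives], ignore_index=True)

# A different seed per subset draws different negative drivers, so the same
# positives can be paired with distinct negatives across driver subsets.
```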
By selecting positive and negative training samples whose number difference is less than a predetermined threshold, the number difference between the training samples can be balanced. In addition, different negative sample drivers are included in different driver subsets, so that the safety degree prediction model can learn more complete information.
Based on the plurality of driver subset partitions, at 230, the computing device 110 trains a plurality of sub-safety degree prediction models 130 corresponding to the plurality of driver subsets, respectively, according to the historical driving information 125 corresponding to the plurality of driver subsets, respectively.
Each of the sub-safety degree prediction models 130 may have the same type or different types of network structures. In some embodiments, the sub-safety degree prediction model 130 may be any type of machine learning model or deep learning model. Some examples of the sub-safety degree prediction model 130 include, but are not limited to, Support Vector Machine (SVM) models, Bayesian models, Random Forest (RF) models, or various deep learning/neural network models, such as Recurrent Neural Networks (RNNs), etc.
In some embodiments, for any given subset of drivers from the plurality of subsets of drivers, the computing device 110 may determine a feature vector and a sample category indicator for each driver in the given subset of drivers based on the historical driving information 125 for the given subset of drivers.
The feature vector for each driver may include a plurality of dimensions, and the value of each dimension may characterize the corresponding feature of the driver. For example, the computing device 110 may extract a feature vector for the driver from the positive rating rate, the complaint rate, and whether the driver has been prohibited from providing travel services, as recorded in the driver's historical driving information 125. For example, if driver A's historical driving information 125 indicates that driver A has a positive rating rate of 0.8 and a complaint rate of 0.1, and has not been prohibited from providing travel services, then the computing device 110 may determine that driver A's feature vector is [0.8, 0.1, 0]. In some embodiments, the computing device 110 may also perform some additional processing on the feature vectors. For example, the computing device 110 may clip (peak-truncate) outliers in the feature vector and fill in missing values. This also facilitates subsequent model training.
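A hypothetical feature-extraction sketch consistent with the three features in the example above might look as follows; the column names, clipping quantile, and median imputation are illustrative choices rather than requirements of this disclosure:

```python
import numpy as np
import pandas as pd

FEATURES = ["good_rate", "complaint_rate", "banned"]   # assumed column names

def extract_features(drivers: pd.DataFrame,
                     clip_quantile: float = 0.99) -> np.ndarray:
    """Build the feature matrix: clip extreme values, then fill in missing values."""
    x = drivers[FEATURES].astype(float)
    x = x.clip(upper=x.quantile(clip_quantile), axis=1)   # peak-truncate outliers
    x = x.fillna(x.median())                              # complement missing values
    return x.to_numpy()

# For driver A above (positive rating rate 0.8, complaint rate 0.1, not banned),
# the corresponding row of the feature matrix is [0.8, 0.1, 0.0].
```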
The sample category indicator corresponding to each driver indicates whether the corresponding driver is a training positive sample or a training negative sample. For a training positive sample, it is desirable for the sub-safety degree prediction model 130 to be trained such that, for the input feature vector corresponding to that positive sample, the predicted probability of occurrence of a risk event is 1. For a training negative sample, it is desirable for the model to be trained such that, for the input feature vector corresponding to that negative sample, the predicted probability of occurrence of a risk event is 0. The computing device 110 then trains the sub-safety degree prediction model 130 corresponding to the given driver subset based on the feature vectors and the category indicators.
For example, if the computing device 110 has determined from the historical driving information 125 of driver A that driver A's feature vector is [0.8, 0.1, 0], the computing device 110 may take this feature vector as an input to the sub-safety degree prediction model 130. Further, if the historical driving information 125 for driver A indicates that driver A experienced a risk event within 30 days, the computing device 110 may determine that the sample category indicator for driver A is 1, indicating that driver A is a training positive sample. In this case, the expected output of the sub-safety degree prediction model 130 is 1.
Similarly, the computing device 110 may train the corresponding sub-safety prediction models 130 with the feature vectors and sample class indicators of other drivers in the given subset of drivers as inputs and outputs of the sub-safety prediction models 130, respectively.
In model training, the feature vectors of all drivers in the i-th driver subset are used as the inputs to the model (where i ranges over [1, N]), and the expected output of model training is determined from the sample category indicator of each driver. From the feature vectors and the sample category indicators, the computing device 110 may determine the i-th sub-safety degree prediction model 130 (denoted as f_i). f_i indicates the following mapping: b = f_i(r), where r is the feature vector of a driver and b is the probability that the driver will experience a risk event, which indicates the driver's future driving safety degree. The higher the probability, the lower the future driving safety degree; conversely, the lower the probability, the higher the future driving safety degree. Similarly, for the plurality of driver subsets, the computing device 110 may determine the plurality of sub-safety degree prediction models 130, respectively.
The computing device 110 may train the sub-safety prediction model 130 using a stochastic gradient descent method (SGD), or any of a variety of model training methods currently known or to be developed in the future. In some embodiments, computing device 110 may also utilize a Boosting algorithm to improve the training efficiency of sub-safety prediction model 130. Examples of boosting algorithms may include, for example, extreme gradient boosting (XGBoost) algorithms.
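As a concrete but non-authoritative example, one sub-safety degree prediction model could be trained per driver subset with the XGBoost scikit-learn wrapper mentioned above; the hyperparameters are placeholders:

```python
from xgboost import XGBClassifier

def train_sub_model(features, labels) -> XGBClassifier:
    """Train one sub-safety degree prediction model f_i on one driver subset."""
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(features, labels)   # labels: 1 = risk event occurred, 0 = no risk event
    return model

# model.predict_proba(features)[:, 1] then yields b = f_i(r), the predicted
# probability that a driver will experience a risk event.
```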
After determining the plurality of sub-safety prediction models 130, respectively, the computing device 110 determines the safety prediction model 140 by combining the plurality of sub-safety prediction models 130 at 240. The safety degree prediction model 140 may be provided for predicting the future driving safety degree of the target driver in practical applications.
In some embodiments, for a given driver, the computing device 110 may derive, from the plurality of sub-safety degree prediction models 130, a plurality of probabilities X_1, X_2, …, X_N that the given driver will experience a risk event, and then perform multi-model voting on the obtained probabilities to obtain a final probability. In some embodiments, the computing device 110 may average the plurality of probabilities to arrive at the final probability. In some embodiments, the computing device 110 may use a model fusion method to train the safety degree prediction model 140 again, using the plurality of probabilities obtained from the plurality of sub-safety degree prediction models 130 as inputs and whether a risk event occurred for the driver as the output.
In some embodiments, the probability of occurrence of a risk event derived from the safety prediction model 140 may be represented by the following equation (1):
f_final = W_1*X_1 + W_2*X_2 + … + W_N*X_N (1)
where X_i represents the probability of occurrence of a risk event derived from the i-th sub-safety degree prediction model 130, W_i represents the weight of that sub-safety degree prediction model 130, and f_final represents the probability of occurrence of a risk event derived from the safety degree prediction model 140.
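A minimal sketch of this weighted combination, defaulting to a plain average when no weights are supplied (one of the voting options mentioned above), is given below as an illustration rather than the required fusion method:

```python
import numpy as np

def combine_probabilities(sub_probs, weights=None) -> float:
    """Combine per-model risk probabilities X_1..X_N into f_final per equation (1)."""
    sub_probs = np.asarray(sub_probs, dtype=float)
    if weights is None:
        weights = np.full(len(sub_probs), 1.0 / len(sub_probs))  # equal weights
    return float(np.dot(np.asarray(weights, dtype=float), sub_probs))

# combine_probabilities([0.2, 0.4, 0.3]) -> 0.3 (simple averaging as a special case)
```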
In some embodiments, by equation (2) below, the computing device 110 may convert the probability f_final, whose value range is [0, 1], into a future driving safety degree whose value range is [0, 100]:
Y=a*log(P)+b (2)
where P is the probability determined from the safety degree prediction model 140 (i.e., f_final). The value ranges of a and b can be selected according to the probability distribution. In this way, the probability of the occurrence of a risk event may be presented to the user in a more intuitive manner. The higher the probability determined by the safety degree prediction model 140, the lower the predicted future driving safety degree of the driver; conversely, the lower the probability, the higher the future driving safety degree.
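The following sketch shows one way equation (2) might be applied; the values of a and b are placeholders (the disclosure leaves their selection to the probability distribution), and the clamp to [0, 100] is an added assumption:

```python
import math

def safety_score(p_final: float, a: float = -20.0, b: float = 20.0) -> float:
    """Convert a risk probability in (0, 1] to a safety degree via Y = a*log(P) + b."""
    y = a * math.log(p_final) + b
    return max(0.0, min(100.0, y))   # keep the score within [0, 100]

# With a < 0, a lower risk probability yields a higher safety degree,
# e.g. safety_score(0.01) > safety_score(0.5).
```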
In some embodiments, the computing device 110 may also train, using a subset of drivers with high activity levels, an active safety degree prediction model corresponding to those drivers, and then, for a given driver, compare the future driving safety degree derived from the active safety degree prediction model with the future driving safety degree derived from the safety degree prediction model 140 to derive an association between the safety degrees of the two models. In this way, if the future driving safety degree 170 of a driver is determined using the active safety degree prediction model, the driver's future driving safety degree 170 corresponding to the safety degree prediction model 140 can also be determined at the same time, so that the future driving safety degrees 170 obtained using these two models can be normalized.
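If such an association between the two models' scores is needed, one simple possibility, given purely as an illustration and not prescribed by this disclosure, is to fit a linear relation over drivers scored by both models:

```python
import numpy as np

def score_alignment(active_scores, combined_scores):
    """Fit a linear map from active-model scores to combined-model scores."""
    slope, intercept = np.polyfit(np.asarray(active_scores, dtype=float),
                                  np.asarray(combined_scores, dtype=float), deg=1)
    return lambda s: slope * s + intercept   # translates an active-model score
```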
In the exemplary embodiments described above, a plurality of drivers are divided into a plurality of driver subsets according to the historical driving information, the sub-safety degree prediction models are trained separately, and the safety degree prediction model is determined by combining them; in this way, a safety degree prediction model with higher accuracy can be obtained.
A safety degree prediction model 140 determined according to some example embodiments of the present disclosure may be provided for predicting a future driving safety degree 170 of a target driver in a real application. FIG. 3 shows a flowchart of an example method 300 of determining driving safety in accordance with an embodiment of the present disclosure. For example, the method 300 may be performed by the computing device 150 as shown in FIG. 1. It should be understood that method 300 may also be performed by other devices, and the scope of the present disclosure is not limited in this respect. It should also be understood that method 300 may also include additional acts not shown and/or may omit acts shown, as the scope of the disclosure is not limited in this respect.
At 310, the computing device 150 obtains target historical driving information 160 for the target driver. In some embodiments, the computing device 150 may obtain the target historical driving information 160 from the database 120. In some embodiments, the computing device 150 may retrieve the target historical driving information 160 from a storage device other than the database 120. The target driver may be any driver to be evaluated. The target historical driving information 160 is determined by counting the historical trips of the target driver.
In some embodiments, the target historical driving information 160 may include personal information such as the age, driving age, and gender of the target driver and/or evaluated information and/or abnormal behavior information. The target historical driving information 160 of the target driver is similar to the historical driving information 125 of the drivers discussed above with respect to 210 and will not be described in detail here. Unlike the historical driving information 125, however, the target historical driving information 160 does not include information as to whether the target driver has experienced a risk event; instead, the probability that the target driver will experience a risk event in the future is determined using the safety degree prediction model 140 and may be indicated by the future driving safety degree 170.
At 320, the computing device 150 determines a future driving safety 170 for the target driver based on the target historical driving information 160 using the safety prediction model 140.
In some embodiments, the computing device 150 may extract a feature vector corresponding to the target driver from the target historical driving information 160 and then, by processing the feature vector with the safety degree prediction model 140, determine an output of the model that indicates the future driving safety degree 170 of the target driver. The extraction of the feature vector corresponding to the target driver is similar to the feature vector extraction discussed above with respect to 230 and is not repeated here.
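Putting the application-phase steps together, a hypothetical end-to-end scoring function for a target driver could look like the sketch below; the sub-models are assumed to be scikit-learn-style classifiers, and the equal weights and the values of a and b are illustrative assumptions:

```python
import math
import numpy as np

def predict_future_safety(target_features, sub_models,
                          a: float = -20.0, b: float = 20.0) -> float:
    """Score one target driver with the combined safety degree prediction model."""
    x = np.asarray(target_features, dtype=float).reshape(1, -1)
    probs = [m.predict_proba(x)[0, 1] for m in sub_models]   # X_1 ... X_N
    p_final = float(np.mean(probs))   # equal-weight combination, cf. equation (1)
    y = a * math.log(p_final) + b     # probability -> safety degree, cf. equation (2)
    return max(0.0, min(100.0, y))
```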
Because the safety degree prediction model 140 is trained from the historical driving information 125 corresponding to a plurality of driver subsets, it can accurately predict the future driving safety degree 170 of the target driver.
In the above-described exemplary embodiments, it is possible to predict the future driving safety degree of the driver by using the safety degree prediction model derived by the multi-model training. In addition, according to the future driving safety degree of the driver, the service manager of the travel service can effectively supervise the safety degree of the driver, so that the probability of occurrence of risk events can be reduced in a pre-intervention mode, and the safety of passengers is improved.
For example, the service manager may supervise a driver based on the driver's future driving safety degree. After a risk event occurs, the driver's future driving safety degree can also be used as one piece of evidence. In addition, the service manager can supervise only the drivers with a lower driving safety degree, which saves resources and improves supervision efficiency.
Examples of the method according to the present disclosure have been described in detail above with reference to fig. 1 to 3, in the following the implementation of the respective apparatus and device will be described.
According to an exemplary implementation of the present disclosure, an apparatus for determining a safety degree prediction model is provided. Fig. 4 illustrates a block diagram of an apparatus 400 for determining a safety degree prediction model according to some embodiments of the present disclosure. The apparatus 400 may be embodied as or included in the computing device 110 of fig. 1.
As shown in fig. 4, the apparatus 400 includes: a first historical driving information acquisition module 410 configured to acquire historical driving information of a plurality of drivers, the historical driving information indicating at least whether the plurality of drivers have experienced a risk event within a historical period of time and whether the plurality of drivers have experienced a risk event within a predetermined number of trips; a first driver subset partitioning module 420 configured to partition the plurality of drivers into a plurality of driver subsets based on the historical driving information; a first sub-safety degree prediction model training module 430 configured to train a plurality of sub-safety degree prediction models respectively corresponding to the plurality of driver subsets based on the historical driving information respectively corresponding to the plurality of driver subsets; and a safety degree prediction model determination module 440 configured to determine, by combining the plurality of sub-safety degree prediction models, a safety degree prediction model capable of determining a future driving safety degree of a target driver based on target historical driving information of the target driver.
In some embodiments, wherein the first driver subset partitioning module 420 comprises: a first set of driver selection modules configured to select, from a plurality of drivers, at least one driver having experienced a risk event within a historical period of time as a first set of drivers; a second group driver selection module configured to select, as a second group of drivers, at least one driver having a risk event occurring within a predetermined number of trips from the plurality of drivers; and a second driver subset partitioning module configured to partition the first group of drivers and the second group of drivers into different driver subsets.
In some embodiments, wherein the first driver subset partitioning module 420 further comprises: a first number of drivers determination module configured to determine a number of drivers in the first group of drivers as a first number; a third group driver selection module configured to select, as a third group of drivers, a second number of drivers that have not experienced a risk event within the historical period of time, the first number differing from the second number by less than a predetermined threshold; and a first driver subset determination module configured to merge a first and a third set of drivers into a first driver subset, the first and the third set of drivers labeled as training positive and training negative examples, respectively.
In some embodiments, wherein the first driver subset partitioning module 420 further comprises: a second driver number determination module configured to determine a number of drivers in the second group of drivers as a third number; a fourth set of driver selection modules configured to select, as a fourth set of drivers, a fourth number of drivers who have not experienced a risk event within a predetermined number of trips, the third number differing from the fourth number by less than a predetermined threshold; and a second driver subset determination module configured to combine a second and a fourth set of drivers into a second driver subset, the second and fourth set of drivers labeled as training positive and training negative examples, respectively.
In some embodiments, wherein the first sub-safety prediction model training module 430 comprises, for a given driver subset of the plurality of driver subsets: a feature vector and category indicator determination module configured to determine, based on historical driving information corresponding to a given subset of drivers, a feature vector and a sample category indicator corresponding to each of the drivers in the given subset of drivers, each sample category indicator indicating whether the driver is a training positive sample or a training negative sample; and a second sub-safety prediction model training module configured to train a sub-safety prediction model corresponding to the given driver subset based on the feature vector and the category indicator.
In some embodiments, the first historical driving information obtaining module 410 further includes: a second historical driving information acquisition module configured to acquire at least one of: the age, driving age, sex, evaluated information, and abnormal behavior information of the driver.
According to an exemplary implementation manner of the disclosure, a device for determining the driving safety degree is further provided. Fig. 5 illustrates a block diagram of an apparatus 500 for determining driving safety according to some embodiments of the present disclosure. The apparatus 500 comprises: a first target historical driving information acquisition module 510 configured to acquire target historical driving information of a target driver, the target historical driving information indicating at least whether a risk event occurred to the target driver within a historical period of time and whether a risk event occurred to the target driver within a predetermined number of trips; and a future driving safety determination module 520 configured to determine a future driving safety of the target driver based on the target historical driving information using a safety prediction model, wherein the safety prediction model is determined by combining a plurality of sub-safety prediction models, and the plurality of sub-safety prediction models are trained based on historical driving information corresponding to a plurality of subsets of the plurality of drivers, respectively.
In some embodiments, the future driving safety determination module 520 further comprises: a feature vector extraction module configured to extract a feature vector corresponding to the target driver based on the target historical driving information; and a model output determination module configured to determine an output of the safety degree prediction model by processing the feature vector with the plurality of sub-safety degree prediction models, respectively, the output indicating the future driving safety degree of the target driver.
In some embodiments, the first target historical driving information acquisition module 510 includes: a second target historical driving information acquisition module configured to acquire at least one of: the age, driving age, sex, evaluated information, and abnormal behavior information of the target driver.
Accordingly, embodiments of the present disclosure propose methods of determining a safety degree prediction model and of determining a driving safety degree. In embodiments of the disclosure, by determining the safety degree prediction model and determining the driving safety degree using the model, the probability that a risk event will occur for a driver can be effectively predicted, so that the safety of passengers is improved by means of pre-intervention.
Fig. 6 illustrates a block diagram of a computing device/server 600 in which one or more embodiments of the disclosure may be implemented. It should be understood that the computing device/server 600 illustrated in fig. 6 is merely exemplary, and should not be construed as limiting in any way the functionality and scope of the embodiments described herein. Computing device/server 600 may be implemented as or included in computing device 110 and/or computing device 150 of fig. 1.
As shown in fig. 6, computing device/server 600 is in the form of a general purpose computing device. Components of computing device/server 600 may include, but are not limited to, one or more processors or processing units 610, memory 620, storage 630, one or more communication units 640, one or more input devices 650, and one or more output devices 660. The processing unit 610 may be a real or virtual processor and can perform various processes according to programs stored in the memory 620. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of computing device/server 600.
Computing device/server 600 typically includes a number of computer storage media. Such media may be any available media that is accessible by computing device/server 600 and includes, but is not limited to, volatile and non-volatile media, removable and non-removable media. Memory 620 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage 630 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that may be capable of being used to store information and/or data (e.g., training data for training) and that may be accessed within computing device/server 600.
Computing device/server 600 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 620 may include a computer program product 625 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 640 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of computing device/server 600 may be implemented in a single computing cluster or multiple computing machines capable of communicating over a communications connection. Thus, computing device/server 600 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 650 may be one or more input devices such as a mouse, keyboard, trackball, or the like. Output device 660 may be one or more output devices such as a display, speakers, printer, or the like. Computing device/server 600 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., as desired, through communication unit 640, with one or more devices that enable a user to interact with computing device/server 600, or with any device (e.g., network card, modem, etc.) that enables computing device/server 600 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions is provided, wherein the computer-executable instructions are executed by a processor to implement the above-described method. According to an exemplary implementation of the present disclosure, there is also provided a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions, which are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer-readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing describes implementations of the present disclosure. The description above is illustrative rather than exhaustive and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, their practical application, or technical improvements over technologies in the marketplace, and to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (22)

1. A method of determining a safety prediction model, comprising:
obtaining historical driving information of a plurality of drivers, the historical driving information indicating at least whether the plurality of drivers have experienced a risk event within a historical period of time and whether the plurality of drivers have experienced a risk event within a predetermined number of trips;
dividing the plurality of drivers into a plurality of driver subsets based on the historical driving information;
training a plurality of sub-safety degree prediction models respectively corresponding to the plurality of driver subsets on the basis of historical driving information respectively corresponding to the plurality of driver subsets; and
determining a safety degree prediction model by combining the plurality of sub-safety degree prediction models, the safety degree prediction model being capable of determining a future driving safety degree of a target driver based on target historical driving information of the target driver.
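Purely as an illustration (not part of the claims), the training flow of claim 1 might be sketched in Python as follows; the helper names partition_fn and feature_label_fn and the use of scikit-learn's GradientBoostingClassifier are assumptions, since the claim prescribes neither a partitioning rule nor a model family.

```python
# Minimal sketch of the claim 1 training flow, under the assumptions stated above.
from sklearn.ensemble import GradientBoostingClassifier

def train_safety_prediction_model(drivers, partition_fn, feature_label_fn):
    """drivers: records holding historical driving information.
    partition_fn: divides the drivers into a plurality of driver subsets.
    feature_label_fn: maps one subset to (feature matrix, positive/negative labels)."""
    sub_models = []
    for subset in partition_fn(drivers):
        X, y = feature_label_fn(subset)          # features and sample category indicators
        sub_model = GradientBoostingClassifier()
        sub_model.fit(X, y)                      # one sub-safety degree prediction model per subset
        sub_models.append(sub_model)
    return sub_models                            # combined at prediction time into a single model
```

The sub-models returned here are combined only at prediction time (see the inference sketch after claim 8), which is one simple way to read "determining a safety degree prediction model by combining the plurality of sub-safety degree prediction models".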
2. The method of claim 1, wherein dividing the plurality of drivers into the plurality of driver subsets comprises:
selecting at least one driver having experienced a risk event within the historical period of time from the plurality of drivers as a first set of drivers;
selecting at least one driver having experienced a risk event within the predetermined number of trips as a second set of drivers from the plurality of drivers; and
dividing the first set of drivers and the second set of drivers into different driver subsets.
3. The method of claim 2, wherein dividing the plurality of drivers into the plurality of driver subsets further comprises:
determining a number of drivers in the first set of drivers as a first number;
selecting, as a third group of drivers, a second number of drivers from the plurality of drivers for which no risk event has occurred within the historical period of time, the first number differing from the second number by less than a predetermined threshold; and
merging the first and third sets of drivers into a first subset of drivers, the first and third sets of drivers labeled as training positive samples and training negative samples, respectively.
4. The method of claim 2, wherein dividing the plurality of drivers into the plurality of driver subsets further comprises:
determining a number of drivers in the second set of drivers as a third number;
selecting a fourth number of drivers who have not experienced a risk event within the predetermined number of trips from the plurality of drivers as a fourth set of drivers, the third number differing from the fourth number by less than a predetermined threshold; and
combining the second and fourth sets of drivers into a second subset of drivers, the second and fourth sets of drivers labeled as training positive samples and training negative samples, respectively.
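A minimal sketch of the subset construction recited in claims 2 to 4, assuming each driver record carries hypothetical boolean fields risk_in_period and risk_in_trips; capping the negative group at the size of the positive group plus a threshold is only one way to keep the two group sizes within the predetermined threshold of each other.

```python
# Hypothetical field names and balancing rule; illustrative only.
import random

def build_balanced_subset(drivers, had_risk_key, threshold=0):
    """Drivers with a risk event become training positive samples; a similarly
    sized group of drivers without a risk event becomes training negative samples."""
    positives = [d for d in drivers if d[had_risk_key]]
    negative_pool = [d for d in drivers if not d[had_risk_key]]
    n_negatives = min(len(negative_pool), len(positives) + threshold)
    negatives = random.sample(negative_pool, n_negatives)
    return [(d, 1) for d in positives] + [(d, 0) for d in negatives]

def partition_drivers(drivers):
    # First driver subset: risk event within the historical time period.
    # Second driver subset: risk event within the predetermined number of trips.
    return [
        build_balanced_subset(drivers, "risk_in_period"),
        build_balanced_subset(drivers, "risk_in_trips"),
    ]
```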
5. The method of claim 1, wherein training the plurality of sub-safety degree prediction models comprises, for a given driver subset of the plurality of driver subsets:
determining feature vectors and sample category indicators corresponding to the drivers in the given driver subset based on historical driving information corresponding to the given driver subset, wherein each sample category indicator indicates whether the driver is a training positive sample or a training negative sample; and
training a sub-safety degree prediction model corresponding to the given driver subset based on the feature vectors and the sample category indicators.
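For claim 5, one possible (again purely illustrative) way to train a single sub-model from a labelled driver subset; logistic regression stands in for whatever model family an implementation would actually choose.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_sub_model(labelled_subset, feature_fn):
    """labelled_subset: (driver_record, label) pairs, where label 1 marks a training
    positive sample and 0 a training negative sample.
    feature_fn: turns one driver record into a numeric feature vector."""
    X = np.array([feature_fn(record) for record, _ in labelled_subset])
    y = np.array([label for _, label in labelled_subset])
    sub_model = LogisticRegression(max_iter=1000)  # model family is an assumption
    sub_model.fit(X, y)
    return sub_model
```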
6. The method of claim 1, wherein obtaining the historical driving information further comprises:
obtaining at least one of: the age, driving age, sex, evaluated information, and abnormal behavior information of the driver.
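Claim 6 only enumerates optional attributes; a hypothetical encoding of those attributes into a feature vector could look like the following, where the dictionary keys and the numeric encodings are assumptions rather than anything taken from the disclosure.

```python
def build_feature_vector(record):
    """Encode the attributes listed in claim 6 into a flat numeric vector.
    Real data would need proper normalization and categorical handling."""
    return [
        float(record.get("age", 0)),                      # driver age in years
        float(record.get("driving_age", 0)),              # years of driving experience
        1.0 if record.get("sex") == "male" else 0.0,      # naive binary encoding of sex
        float(record.get("avg_rating", 0.0)),             # evaluated (rating) information
        float(record.get("abnormal_behavior_count", 0)),  # e.g. harsh braking or speeding alerts
    ]
```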
7. A method of determining driving safety, comprising:
acquiring target historical driving information of a target driver, wherein the target historical driving information at least indicates whether the target driver has experienced a risk event within a historical time period and whether the target driver has experienced a risk event within a predetermined number of trips; and
determining a future driving safety of the target driver based on the target historical driving information using a safety prediction model, wherein the safety prediction model is determined by combining a plurality of sub-safety prediction models, and the plurality of sub-safety prediction models are trained based on historical driving information corresponding to a plurality of driver subsets of a plurality of drivers, respectively.
8. The method of claim 7, wherein determining the degree of future driving safety of the target driver further comprises:
extracting a feature vector corresponding to the target driver based on the target historical driving information; and
determining an output of the safety prediction model indicating the future driving safety of the target driver by processing the feature vector with the plurality of sub-safety prediction models, respectively.
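For claims 7 and 8, an illustrative way to combine the sub-models' outputs into a single future driving safety score; averaging the per-model risk probabilities is an assumption, as the claims only require that the target driver's feature vector be processed by each sub-model and the outputs combined.

```python
def predict_future_driving_safety(sub_models, target_record, feature_fn):
    """Score the target driver with every sub-safety prediction model and combine
    the outputs; simple averaging is one possible combination rule."""
    x = feature_fn(target_record)
    risk_scores = [m.predict_proba([x])[0][1] for m in sub_models]
    return 1.0 - sum(risk_scores) / len(risk_scores)  # higher value = safer driver
```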
9. The method of claim 7, wherein obtaining the target historical driving information of the target driver comprises:
obtaining at least one of: the age, driving age, sex, evaluated information, and abnormal behavior information of the target driver.
10. An apparatus for determining a safety prediction model, comprising:
a first historical driving information acquisition module configured to acquire historical driving information of a plurality of drivers, the historical driving information indicating at least whether a risk event occurred within a historical period of time for the plurality of drivers and whether a risk event occurred within a predetermined number of trips for the plurality of drivers;
a first driver subset partitioning module configured to partition the plurality of drivers into a plurality of driver subsets based on the historical driving information;
a first sub-safety degree prediction model training module configured to train a plurality of sub-safety degree prediction models respectively corresponding to the plurality of subsets of drivers based on historical driving information respectively corresponding to the plurality of subsets of drivers; and
a safety degree prediction model determination module configured to determine a safety degree prediction model by combining the plurality of sub-safety degree prediction models, the safety degree prediction model being capable of determining a future driving safety degree of a target driver based on target historical driving information of the target driver.
11. The apparatus of claim 10, wherein the first driver subset partitioning module comprises:
a first group driver selection module configured to select, as a first set of drivers, at least one driver from the plurality of drivers who has experienced a risk event within the historical period of time;
a second group driver selection module configured to select, as a second group of drivers, at least one driver from the plurality of drivers who has experienced a risk event within the predetermined number of trips; and
a second driver subset partitioning module configured to partition the first group of drivers and the second group of drivers into different driver subsets.
12. The apparatus of claim 11, wherein the first driver subset partitioning module further comprises:
a first number of drivers determination module configured to determine a number of drivers in the first set of drivers as a first number;
a third group driver selection module configured to select, as a third group of drivers, a second number of drivers from the plurality of drivers for which no risk event has occurred within the historical period of time, the first number differing from the second number by less than a predetermined threshold; and
a first driver subset determination module configured to merge the first and third sets of drivers into a first driver subset, the first and third sets of drivers labeled as training positive samples and training negative samples, respectively.
13. The apparatus of claim 11, wherein the first driver subset partitioning module further comprises:
a second number of drivers determination module configured to determine the number of drivers in the second set of drivers as a third number;
a fourth group driver selection module configured to select, as a fourth set of drivers, a fourth number of drivers from the plurality of drivers who have not experienced a risk event within the predetermined number of trips, the third number differing from the fourth number by less than a predetermined threshold; and
a second driver subset determination module configured to merge the second and fourth sets of drivers into a second driver subset, the second and fourth sets of drivers labeled as training positive samples and training negative samples, respectively.
14. The apparatus of claim 10, wherein the first sub-safety prediction model training module comprises, for a given driver subset of the plurality of driver subsets:
a feature vector and category indicator determination module configured to determine, based on historical driving information corresponding to the given subset of drivers, a feature vector and a sample category indicator corresponding to each of the drivers in the given subset of drivers, each sample category indicator indicating whether the driver is a training positive sample or a training negative sample; and
a second sub-safety prediction model training module configured to train a sub-safety prediction model corresponding to the given subset of drivers based on the feature vector and the category indicator.
15. The apparatus according to claim 10, wherein the first historical driving information acquisition module further includes:
a second historical driving information acquisition module configured to acquire at least one of: the age, driving age, sex, evaluated information, and abnormal behavior information of the driver.
16. An apparatus for determining driving safety, comprising:
a first target historical driving information acquisition module configured to acquire target historical driving information of a target driver, the target historical driving information indicating at least whether a risk event occurred to the target driver within a historical period of time and whether a risk event occurred to the target driver within a predetermined number of trips; and
a future driving safety determination module configured to determine a future driving safety of the target driver based on the target historical driving information using a safety prediction model, wherein the safety prediction model is determined by combining a plurality of sub-safety prediction models, and the plurality of sub-safety prediction models are trained based on historical driving information corresponding to a plurality of subsets of drivers of a plurality of drivers, respectively.
17. The apparatus of claim 16, wherein the future driving safety determination module further comprises:
a feature vector extraction module configured to extract a feature vector corresponding to the target driver based on the target historical driving information; and
a model output determination module configured to determine an output of the safety prediction model indicating the future driving safety of the target driver by processing the feature vector with the plurality of sub-safety prediction models, respectively.
18. The apparatus according to claim 16, wherein the first target historical driving information acquisition module includes:
a second target historical driving information acquisition module configured to acquire at least one of: the age, driving age, sex, evaluated information, and abnormal behavior information of the target driver.
19. An electronic device, comprising:
a processor; and
a memory storing computer-executable instructions that, when executed by the processor, implement the method of any one of claims 1 to 6.
20. An electronic device, comprising:
a processor; and
a memory storing computer-executable instructions that, when executed by the processor, implement the method of any one of claims 7 to 9.
21. A computer-readable storage medium having computer-executable instructions stored thereon, wherein the computer-executable instructions are executed by a processor to implement the method of any one of claims 1 to 6.
22. A computer-readable storage medium having computer-executable instructions stored thereon, wherein the computer-executable instructions are executed by a processor to implement the method of any of claims 7 to 9.
CN202011218220.3A 2020-11-04 2020-11-04 Method, device, equipment and storage medium for determining safety degree prediction model Pending CN112183899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011218220.3A CN112183899A (en) 2020-11-04 2020-11-04 Method, device, equipment and storage medium for determining safety degree prediction model

Publications (1)

Publication Number Publication Date
CN112183899A (en) 2021-01-05

Family

ID=73917040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011218220.3A Pending CN112183899A (en) 2020-11-04 2020-11-04 Method, device, equipment and storage medium for determining safety degree prediction model

Country Status (1)

Country Link
CN (1) CN112183899A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450497A (en) * 2014-07-31 2016-03-30 国际商业机器公司 Method and device for generating clustering model and carrying out clustering based on clustering model
CN105930943A (en) * 2016-07-11 2016-09-07 上海安吉星信息服务有限公司 Method and device for predicting driving risk
CN107679557A (en) * 2017-09-19 2018-02-09 平安科技(深圳)有限公司 Driving model training method, driver's recognition methods, device, equipment and medium
CN109739216A (en) * 2019-01-25 2019-05-10 深圳普思英察科技有限公司 The test method and system of the practical drive test of automated driving system
US20190143994A1 (en) * 2017-04-18 2019-05-16 Beijing Didi Infinity Technology And Development Co., Ltd. System and method for determining safety score of driver
CN110060538A (en) * 2019-04-08 2019-07-26 上海云之驾科技股份有限公司 Personalized artificial based on historical data modeling intelligently drives training and practices system and method
US10392022B1 (en) * 2018-02-28 2019-08-27 Calamp Corp. Systems and methods for driver scoring with machine learning
CN110458649A (en) * 2019-07-11 2019-11-15 北京三快在线科技有限公司 Information recommendation method, device, electronic equipment and readable storage medium storing program for executing
CN110544373A (en) * 2019-08-21 2019-12-06 北京交通大学 truck early warning information extraction and risk identification method based on Beidou Internet of vehicles
CN110837979A (en) * 2019-11-18 2020-02-25 吉旗物联科技(上海)有限公司 Safe driving risk prediction method and device based on random forest
CN111105110A (en) * 2018-10-25 2020-05-05 北京嘀嘀无限科技发展有限公司 Driving risk determination method, device, medium and computing equipment
CN111444657A (en) * 2020-03-10 2020-07-24 五邑大学 Method and device for constructing fatigue driving prediction model and storage medium
CN111862585A (en) * 2019-07-23 2020-10-30 北京嘀嘀无限科技发展有限公司 System and method for traffic prediction

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113178071A (en) * 2021-04-22 2021-07-27 深圳壹账通智能科技有限公司 Driving risk level identification method and device, electronic equipment and readable storage medium
CN113177980A (en) * 2021-04-29 2021-07-27 北京百度网讯科技有限公司 Target object speed determination method and device for automatic driving and electronic equipment
CN113177980B (en) * 2021-04-29 2023-12-26 北京百度网讯科技有限公司 Target object speed determining method and device for automatic driving and electronic equipment
CN113466713A (en) * 2021-07-15 2021-10-01 北京工业大学 Lithium battery safety degree estimation method and device based on random forest
CN113466713B (en) * 2021-07-15 2024-04-12 北京工业大学 Lithium battery safety degree estimation method and device based on random forest

Similar Documents

Publication Publication Date Title
WO2022037337A1 (en) Distributed training method and apparatus for machine learning model, and computer device
CN112183899A (en) Method, device, equipment and storage medium for determining safety degree prediction model
CN103370722B (en) The system and method that actual volatility is predicted by small echo and nonlinear kinetics
CN112418341A (en) Model fusion method, prediction method, device, equipment and storage medium
JP2023520970A (en) Lithium battery SOC estimation method, apparatus, and computer-readable storage medium
Brando et al. Modelling heterogeneous distributions with an uncountable mixture of asymmetric laplacians
US20220122000A1 (en) Ensemble machine learning model
CN115545103A (en) Abnormal data identification method, label identification method and abnormal data identification device
CN113268403A (en) Time series analysis and prediction method, device, equipment and storage medium
CN111159481A (en) Edge prediction method and device of graph data and terminal equipment
Wang et al. A two‐stage method for bus passenger load prediction using automatic passenger counting data
Nalepa et al. Adaptive guided ejection search for pickup and delivery with time windows
Brunet et al. Implications of model indeterminacy for explanations of automated decisions
US10313457B2 (en) Collaborative filtering in directed graph
Tembine Mean field stochastic games: Convergence, Q/H-learning and optimality
CN111325255B (en) Specific crowd delineating method and device, electronic equipment and storage medium
CN112767032A (en) Information processing method and device, electronic equipment and storage medium
CN111783883A (en) Abnormal data detection method and device
CN116737373A (en) Load balancing method, device, computer equipment and storage medium
CN108037979B (en) Virtual machine performance degradation evaluation method based on Bayesian network containing hidden variables
CN116739649A (en) User response potential evaluation method and device
CN116245630A (en) Anti-fraud detection method and device, electronic equipment and medium
CN113656586B (en) Emotion classification method, emotion classification device, electronic equipment and readable storage medium
CN111339468B (en) Information pushing method, device, electronic equipment and storage medium
CN113918345A (en) Capacity calculation method and device for configuration hardware, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination