US11061911B2 - Driving data analyzer - Google Patents

Driving data analyzer

Info

Publication number
US11061911B2
Authority
US
United States
Prior art keywords
data
driving data
driving
vehicle
sequences
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/158,865
Other versions
US20190114345A1 (en)
Inventor
Hideaki MISAWA
Kazuhito TAKENAKA
Tadahiro Taniguchi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Denso Corp
Ritsumeikan Trust
Original Assignee
Denso Corp
Ritsumeikan Trust
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Denso Corp, Ritsumeikan Trust filed Critical Denso Corp
Assigned to DENSO CORPORATION and THE RITSUMEIKAN TRUST. Assignors: MISAWA, HIDEAKI; TAKENAKA, KAZUHITO; TANIGUCHI, TADAHIRO
Publication of US20190114345A1
Application granted
Publication of US11061911B2
Active legal-status Current
Adjusted expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3082Vector coding
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09Driving style or behaviour
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/06Improving the dynamic response of the control system, e.g. improving the speed of regulation or avoiding hunting or overshoot
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3068Precoding preceding compression, e.g. Burrows-Wheeler transformation
    • H03M7/3079Context modeling
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/30Driving style
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction

Definitions

  • The present disclosure relates to technologies for analyzing driving data of a vehicle.
  • A driving assist system aims to assist a driver's driving of a vehicle according to various driving situations.
  • To achieve this aim, the driving assist system suitably analyzes the driving situations of an own vehicle.
  • Japanese Patent Application Publication No. 2013-250663 discloses a technology that collects a large amount of various items of data associated with driver's driving operations and behaviors of an own vehicle as a driving data group. Then, this technology analyzes the driving data group to correspondingly extract driving scenes each representing similar driving situations, thus performing driver's driving assist in accordance with each of the driving scenes.
  • Drivers may have different driving habits, so they may not necessarily perform the same driving operations, i.e. driving actions, in the same driving scene. For example, when driving an own vehicle to go around a curve, one driver sufficiently decelerates the own vehicle immediately before the curve and thereafter turns the steering wheel, while another driver may turn the steering wheel at the same time as decelerating the own vehicle.
  • Additionally, various types of vehicles have individually different travelling characteristics for each type. These driving habits and travelling characteristics are reflected in the driving data group.
  • As a result, a first driving scene in the extracted driving scenes may be different from a second driving scene in the extracted driving scenes although the first and second driving scenes represent the same or similar driving situations. For example, if different drivers drive the same type of vehicle on the same road, the driving scenes for the different drivers may be extracted as different driving scenes. In addition, if the same driver drives different types of vehicles on the same road, the driving scenes for the different types of vehicles may be extracted as different driving scenes.
  • In view of these circumstances, one aspect of the present disclosure seeks to provide technologies, each of which is capable of analyzing a driving data group independently of such external factors.
  • The driving data analyzer includes a data collector that collects, from at least one vehicle, driving data sequences, each of which is correlated with identification data.
  • Each driving data sequence includes sequential driving data items, and each driving data item represents at least one of a driver's operation of the at least one vehicle and a behavior of the at least one vehicle based on that operation.
  • The identification data represents a type of at least one external factor that contributes to variations in the driving data items.
  • The driving data analyzer also includes a feature extractor that applies a data compression network model to the driving data sequences to thereby extract, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor.
  • This configuration of the driving data analyzer extracts, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor. This therefore enables a robust data analysis based on the at least one latent feature to be carried out.
  • FIG. 1 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the first embodiment of the present disclosure
  • FIG. 2 is a block diagram schematically illustrating an example of the internal structure of a feature model according to the first embodiment
  • FIG. 3 is a block diagram schematically illustrating an example of how each of an encoder and a decoder of the individual network illustrated in FIG. 2 is operated;
  • FIG. 4 is a graph schematically illustrating an example of relationships among a target driving data sequence of a driver, a predicted data sequence of another driver, and a predicted data sequence of the former driver thereamong;
  • FIG. 5 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the second embodiment of the present disclosure
  • FIG. 6 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the third embodiment of the present disclosure
  • FIG. 7 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the fourth embodiment of the present disclosure.
  • FIG. 8 is a block diagram schematically illustrating multiple LSTM layers of an encoder comprised of the multiple LSTM layers.
  • FIG. 9 is a graph schematically illustrating an example of a training data sequence.
  • The following describes a driving data analyzer 1 according to the first embodiment of the present disclosure with reference to FIGS. 1 to 4.
  • The driving data analyzer 1 includes in-vehicle units 10 respectively installed in a plurality of vehicles V1, . . . , Vn, and a server 20 communicable by radio with the in-vehicle units 10.
  • The in-vehicle units 10 serve as mobile terminals at least partly provided in the respective vehicles V1 to Vn.
  • Each of the in-vehicle units 10 is configured mainly as at least one known microcomputer including a CPU, i.e. a processor, 10a, a memory device 10b, and an input unit 10c.
  • The memory device 10b includes, for example, at least one of semiconductor memories, such as a RAM, a ROM, and a flash memory. These semiconductor memories include at least one non-transitory computer-readable storage medium.
  • Each in-vehicle unit 10 can run one or more programs, i.e. sets of program instructions, stored in the memory device 10b, thus implementing various functions of the in-vehicle unit 10 as software operations.
  • That is, the CPU 10a can run programs stored in the memory device 10b, thus performing one or more methods in accordance with the corresponding one or more programs.
  • At least one of the various functions of at least one in-vehicle unit 10 can be implemented as a hardware electronic circuit.
  • Alternatively, the various functions of at least one in-vehicle unit 10 can be implemented by a combination of electronic circuits including digital circuits, which include many logic gates, analog circuits, digital/analog hybrid circuits, or hardware/software hybrid circuits.
  • The input unit 10c of each in-vehicle unit 10 enables a driver of the corresponding vehicle to enter various commands and/or various data items to the CPU 10a of the corresponding in-vehicle unit 10.
  • The server 20 is configured mainly as at least one known microcomputer including a CPU, i.e. a processor, 20a and a memory device 20b.
  • The memory device 20b includes, for example, at least one of semiconductor memories, such as a RAM, a ROM, and a flash memory. These semiconductor memories include at least one non-transitory computer-readable storage medium.
  • The CPU 20a of the server 20 can run one or more programs, i.e. program instructions, stored in the memory device 20b, thus implementing various functions of the server 20 as software operations.
  • That is, the CPU 20a can run programs stored in the memory device 20b, thus performing one or more routines in accordance with the corresponding one or more programs.
  • At least one of the various functions of the server 20 can be implemented as a hardware electronic circuit.
  • Alternatively, the various functions of the server 20 can be implemented by a combination of electronic circuits including digital circuits, which include many logic gates, analog circuits, digital/analog hybrid circuits, or hardware/software hybrid circuits.
  • The in-vehicle unit 10 installed in each of the vehicles V1 to Vn includes a data obtainer 11 and a data transmitter 12.
  • For example, the CPU 10a of the in-vehicle unit 10 runs the corresponding one or more programs stored in the memory device 10b, thus implementing the functional modules 11 and 12.
  • The data obtainer 11 is communicably connected to sensors, i.e. in-vehicle sensors, SS installed in the corresponding vehicle. Note that the data obtainer 11 can include the in-vehicle sensors SS.
  • The in-vehicle sensors SS include a first type of sensors each repeatedly measuring a driving data item including at least one of
  • Operation data item D1 indicative of driver's operations of at least one of driver-operable devices installed in the corresponding vehicle
  • The operation data items D1 include
  • The behavioral data items D2 include
  • The in-vehicle sensors SS can include a second type of sensors, such as a radar sensor, an image sensor, and a weather sensor, each repeatedly measuring a situation data item D3 that is useful in specifying a driving situation of the corresponding vehicle, which includes at least one of
  • An image data item including an image of a region, such as a front region, located around the corresponding vehicle captured by the image sensor, which is one of the second type of sensors, installed in the corresponding vehicle
  • A road attribute item indicative of the attribute of a road on which the corresponding vehicle is travelling, such as a straight road, a curved road, an expressway, and/or an ordinary road, which can be obtained from map data and the image data item
  • Weather information indicative of a weather condition, such as a sunny, i.e. fine, condition, a cloudy condition, or a rainy condition, around the corresponding vehicle
  • The data obtainer 11 obtains, from the first and second types of sensors, a driving data group including the operation data items D1, behavioral data items D2, and situation data items D3 in a predetermined measurement cycle. Then, the data obtainer 11 sequentially outputs, to the data transmitter 12, the driving data groups measured in the measurement cycle as target driving data sequences; a minimal sketch of this per-cycle collection follows.
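  • The following Python sketch illustrates that collection loop; the three sensor-reading callables are hypothetical placeholders, since the patent does not specify the in-vehicle sensor interfaces:

```python
import time

MEASUREMENT_CYCLE_S = 1.0  # one measurement cycle T (1 s in the first embodiment)

def collect_target_driving_data_sequence(num_cycles,
                                         read_operation_data,   # hypothetical D1 source
                                         read_behavior_data,    # hypothetical D2 source
                                         read_situation_data):  # hypothetical D3 source
    """Collects one driving data group (D1, D2, D3) per measurement cycle."""
    sequence = []
    for _ in range(num_cycles):
        driving_data_group = {
            "D1": read_operation_data(),   # driver's operations (steering, pedals, ...)
            "D2": read_behavior_data(),    # vehicle behavior (speed, acceleration, ...)
            "D3": read_situation_data(),   # driving situation (road attribute, weather, ...)
        }
        sequence.append(driving_data_group)
        time.sleep(MEASUREMENT_CYCLE_S)
    return sequence
```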
  • The data obtainer 11 also obtains an identification data item indicative of the type of at least one external factor that contributes to variations in the driving data items.
  • The first embodiment uses, as the at least one external factor, the identity of a driver of the corresponding vehicle. That is, the type of the at least one external factor for each of the vehicles V1 to Vn represents a corresponding driver.
  • Unique identification data (ID) items D4 are for example assigned to respective drivers that can use at least one of the vehicles V1 to Vn according to the first embodiment.
  • For example, when a driver uses one of the vehicles V1 to Vn, the driver operates the input unit 10c of the corresponding one of the in-vehicle units 10 to thereby enter the driver's ID data item D4 to the CPU 10a, and the CPU 10a stores the entered driver's ID data item D4 in the memory device 10b.
  • Alternatively, unique ID data items can be assigned to respective keys prepared for each of the vehicles V1 to Vn, and when a driver inserts an assigned key for one of the vehicles V1 to Vn into an ignition lock of the vehicle, the CPU 10a, which is communicable with the ignition lock, reads the ID data item assigned to the inserted key.
  • As another alternative, ID data items of drivers who have the authority to use a selected vehicle in the vehicles V1 to Vn are recorded beforehand in the memory device 10b of the in-vehicle unit 10 of the selected vehicle. Then, when a driver who has the authority to use the selected vehicle enters information about him or her, the CPU 10a can receive the information and extract, from the recorded ID data items, the one that matches the entered information.
  • The data obtainer 11 outputs, to the data transmitter 12, the ID data item of a current driver of the corresponding vehicle.
  • The data transmitter 12 is configured to
  • The data transmitter 12 can be configured to indirectly transmit the target driving data sequences and the ID data item to the server 20 via infrastructures provided on, for example, roadsides.
  • The data transmitter 12 can be comprised of a mobile communicator, such as a cellular phone, which can communicate with the server 20 via radio communication networks.
  • The server 20 serves as a fixed station directly communicable, or indirectly communicable via infrastructures or the radio communication networks, with the vehicles V1 to Vn, i.e. their in-vehicle devices 10.
  • The server 20 includes a data collector 21, a driving data database (DB) 22, a model generator 23, a data extractor 24, an analytical data DB 25, and an analyzing unit 26.
  • For example, the CPU 20a of the server 20 executes one or more corresponding programs stored in the memory device 20b, thus implementing the functional modules 21, 23, 24, and 26.
  • The memory device 20b can include predetermined storage areas allocated to serve as the respective driving data DB 22 and analytical data DB 25, or include at least one mass storage, such as a semiconductor memory or a magnetic memory, such as a hard disk drive, that serves as the driving data DB 22 and analytical data DB 25.
  • The data collector 21 collects the target driving data sequences and the ID data item output from each of the in-vehicle devices 10, and stores, in the driving data DB 22, the collected target driving data sequences and ID data item for each of the in-vehicle devices 10 such that the collected target driving data sequences for each of the in-vehicle devices 10 are correlated with the ID data item for the corresponding in-vehicle device 10.
  • The target driving data sequences include sequential sets of the operational data items D1, the behavior data items D2, and the situation data items D3. That is, at time (t−1), a set of the operational data items D1, the behavior data items D2, and the situation data items D3, which are obtained by the data obtainer 11, is collected by the data collector 21. After the lapse of one measurement cycle since the time (t−1), a next set of the operational data items D1, the behavior data items D2, and the situation data items D3, which are obtained by the data obtainer 11, is collected by the data collector 21 at time t.
  • The target vector sequences based on these sets are stored in the driving data DB 22.
  • The data collector 21 is also configured to transform the ID data item for each of the vehicles V1 to Vn, i.e. each of the drivers, into a multidimensional target vector whose elements each have a value of 0 or 1; the number of dimensions of the multidimensional target vector corresponds to the number of drivers of the respective vehicles V1 to Vn. For example, if the number of vehicles V1 to Vn is four, so that the number of drivers is four, the respective ID data items are expressed as an identification vector b1 (1, 0, 0, 0), an identification vector b2 (0, 1, 0, 0), an identification vector b3 (0, 0, 1, 0), and an identification vector b4 (0, 0, 0, 1). That is, any one of the drivers can be expressed as an identification vector bd in which d identifies the driver, as illustrated by the sketch below.
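  • A minimal Python illustration of this one-hot encoding (the helper name identification_vector is ours, not the patent's):

```python
import numpy as np

def identification_vector(driver_index: int, num_drivers: int) -> np.ndarray:
    """Returns the one-hot identification vector bd for the given driver."""
    bd = np.zeros(num_drivers)
    bd[driver_index] = 1.0
    return bd

# Four drivers: b1 = (1, 0, 0, 0), b2 = (0, 1, 0, 0), and so on.
b2 = identification_vector(1, 4)   # array([0., 1., 0., 0.])
```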
  • The model generator 23 is configured to, for example, perform, at regular or irregular intervals, a training task that
  • trains a feature model 30, such as a data compression model or a dimensionality reduction model, in accordance with the target vector sequences to thereby generate a trained feature model 30
  • The feature model 30 includes an encoder 31, an intermediate layer, i.e. a driver translation layer, 32, and a decoder 33. That is, the feature model 30 is designed as a neural network comprised of a common encoder-decoder model to which the intermediate layer (driver translation layer) 32 has been added.
  • The encoder 31 is configured to extract latent features, i.e. essential features, from actual input data, and the decoder 33 is configured to use the extracted latent features to thereby reconstruct or predict input data.
  • Each of the encoder 31 and the decoder 33 includes, for example, a long short-term memory (LSTM) layer; the LSTM is a type of recurrent neural network (RNN).
  • The LSTM layer has a memory cell, and the state of the memory cell, which will be referred to as a cell state, and an output in the LSTM layer of the encoder 31 at time t will be respectively expressed as CE_t and hE_t.
  • Similarly, the cell state and output in the LSTM layer of the decoder 33 at time t will be respectively expressed as CD_t and hD_t.
  • Each of the cell state CE_t and output hE_t shows latent features of the input target vector at time t.
  • The encoder 31 includes a single LSTM layer comprised of sequentially connected LSTM nodes 31a1 to 31aM. Note that, in order to easily understand the operations of the encoder 31, the single LSTM layer is illustrated as the temporally developed LSTM nodes 31a1 to 31aM.
  • When M target vectors x_t+1 to x_t+M, i.e. measured driving data items, obtained within a specified input period from time (t+1) to time (t+M) inclusive are sequentially input to the encoder 31, the first LSTM node 31a1 generates the cell state CE_t+1 and output hE_t+1 in accordance with the target vector x_t+1 when the target vector x_t+1 is input thereto, and supplies the cell state CE_t+1 and output hE_t+1 to the next LSTM node 31a2.
  • The interval between the time (t+k) and the time (t+k+1) adjacent thereto is set to one measurement cycle; k is any positive integer including zero.
  • The second LSTM node 31a2 generates the cell state CE_t+2 and output hE_t+2 in accordance with the cell state CE_t+1, output hE_t+1, and target vector x_t+2 when the target vector x_t+2 is input thereto, and supplies the cell state CE_t+2 and output hE_t+2 to the next LSTM node 31a3.
  • Similarly, when the target vector x_t+M is input to the M-th LSTM node 31aM, the M-th LSTM node 31aM generates the cell state CE_t+M and output hE_t+M in accordance with the cell state CE_t+M−1, output hE_t+M−1, and target vector x_t+M, and supplies the cell state CE_t+M and output hE_t+M to the intermediate layer 32.
  • That is, the encoder 31 encodes the M target vectors x_t+1 to x_t+M to thereby output, as an encoded, i.e. compressed, data item, the cell state CE_t+M and output hE_t+M.
  • The number of dimensions of each of the cell state CE_t+M and output hE_t+M is smaller than the total number of dimensions of the multidimensional target vectors x_t+1 to x_t+M, so that each of the cell state CE_t+M and output hE_t+M represents the latent features, i.e. common important features, included in the multidimensional target vectors x_t+1 to x_t+M.
  • Note that the encoder 31 can be comprised of multiple LSTM layers.
  • FIG. 8 schematically illustrates an example of an encoder 31X having such multiple LSTM layers. Specifically, FIG. 8 illustrates a first layer of LSTM nodes 31a11 to 31aM1 that are sequentially connected and to which the M target vectors x_t+1 to x_t+M are respectively input. FIG. 8 also illustrates a second layer of LSTM nodes 31a12 to 31aM2 that are sequentially connected, and are respectively connected to the LSTM nodes 31a11 to 31aM1.
  • The last LSTM node 31aM2 supplies, based on the cell state and output sent from the LSTM node 31a(M−1)2 and the cell state and output sent from the LSTM node 31aM1, the cell state CE_t+M and output hE_t+M to the intermediate layer 32.
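  • For illustration, a minimal PyTorch sketch of such an encoder (single-layer for brevity; the class name and dimensions are ours, not the patent's):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Single-layer LSTM encoder: compresses M target vectors into (CE, hE)."""
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):                  # x: (batch, M, input_dim)
        _, (h, c) = self.lstm(x)           # final hidden state and cell state
        return c.squeeze(0), h.squeeze(0)  # CE_t+M, hE_t+M
```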
  • The intermediate layer 32 is configured as, for example, a neural network to which the cell state CE_t+M and output hE_t+M sent from the encoder 31, and the identification vector bd, are input.
  • The intermediate layer 32 is comprised of two partial networks, that is, a common network 321 and an individual network 322.
  • The common network 321 is configured to receive the cell state CE_t+M, and output, to the decoder 33, the cell state CE_t+M, i.e. the latent features included in the target vectors x_t+1 to x_t+M, without change as an initial cell state CD_t+M for the decoder 33.
  • The individual network 322 includes a single dense layer, i.e. a single fully connected layer, or multiple dense layers, i.e. multiple fully connected layers.
  • Parameters of the feature model 30, which include, for example, connection weights and/or connection biases between nodes of the different layers, and parameters of the LSTM layer, have been trained and can be repeatedly trained to increase the analysis accuracy of the driving data analyzer 1. How to train the parameters included in the feature model 30 will be described later.
  • The individual network 322 merges the output hE_t+M, i.e. the latent features included in the target vectors x_t+1 to x_t+M, with the identification vector bd to thereby output a merged result indicative of the latent features included in the target vectors x_t+1 to x_t+M, which are correlated with the identification vector bd.
  • Part of the merged result is input to the decoder 33 as the initial output hD_t+M for the decoder 33, and the remaining part is output as a latent feature vector Bd of the driver specified by the identification vector bd.
  • Note that a merge network configured to merge the output hE_t+M with the identification vector bd in accordance with a predetermined rule can be used as the individual network 322.
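  • A minimal sketch of this driver translation layer, assuming a single dense merge layer, which is only one of the options the text allows (the class and variable names are ours):

```python
import torch
import torch.nn as nn

class DriverTranslationLayer(nn.Module):
    """Intermediate layer 32: the common network 321 passes the cell state CE
    through unchanged, while the individual network 322 merges the output hE
    with the one-hot identification vector bd via a dense layer."""
    def __init__(self, hidden_dim: int, num_drivers: int):
        super().__init__()
        self.individual = nn.Linear(hidden_dim + num_drivers, hidden_dim)

    def forward(self, ce, he, bd):         # bd: (batch, num_drivers) one-hot
        cd0 = ce                           # common network: identity mapping
        hd0 = torch.tanh(self.individual(torch.cat([he, bd], dim=-1)))
        return cd0, hd0                    # initial decoder state (CD, hD)
```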
  • The decoder 33 for example includes a single LSTM layer comprised of sequentially connected LSTM nodes 33a1 to 33aN.
  • As with the encoder 31, the single LSTM layer is illustrated as the temporally developed LSTM nodes 33a1 to 33aN.
  • The cell state CD_t+M and the output hD_t+M are provided as initial values.
  • In addition, an initial value of 0 is input to the first LSTM node 33a1 as its input.
  • When the cell state CD_t+M, the output hD_t+M, and the initial value of 0 are input to the first LSTM node 33a1, the decoder 33 performs a decoding operation N times to thereby sequentially output predicted sequential vectors y_t+M+1 to y_t+M+N within a specified prediction period from the time (t+M+1) to the time (t+M+N), which sequentially follows the specified input period from the time (t+1) to the time (t+M).
  • Based on the cell state CD_t+M, output hD_t+M, and the input of the initial value of 0, the first LSTM node 33a1 generates, at the first decoding operation,
  • The first LSTM node 33a1 supplies the cell state CD_t+M+1 and the output hD_t+M+1 to the next LSTM node 33a2, and also supplies the predicted sequential vector y_t+M+1 to the next LSTM node 33a2 as its input.
  • Based on the cell state CD_t+M+1, output hD_t+M+1, and the input of the predicted sequential vector y_t+M+1, the second LSTM node 33a2 generates, at the second decoding operation,
  • The second LSTM node 33a2 supplies the cell state CD_t+M+2 and the output hD_t+M+2 to the next LSTM node 33a3, and also supplies the predicted sequential vector y_t+M+2 to the next LSTM node 33a3 as its input.
  • The third LSTM node 33a3 to the (N−1)-th LSTM node 33a(N−1) sequentially perform their decoding operations in the same manner as the first and second LSTM nodes 33a1 and 33a2.
  • The N-th LSTM node 33aN generates, at the N-th decoding operation,
  • Then, the N-th LSTM node 33aN outputs the predicted sequential vector y_t+M+N.
  • That is, the decoder 33 is configured to decode the cell state CD_t+M, output hD_t+M, and the input of the initial value of 0 into the predicted sequential vectors y_t+M+1 to y_t+M+N in response to the input data sequential vectors x_t+1 to x_t+M.
  • The features of the predicted sequential vector y_t+M+1 output from the first LSTM node 33a1 are embedded in the latent features CD_t+M+2 and hD_t+M+2 of the second LSTM node 33a2,
  • and the features of the predicted sequential vector y_t+M+2 output from the second LSTM node 33a2 are embedded in the latent features CD_t+M+3 and hD_t+M+3 of the third LSTM node 33a3, and so on.
  • Note that the decoder 33 can also be comprised of multiple LSTM layers.
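  • A minimal PyTorch sketch of this prediction-feedback decoding (single-layer; the linear read-out from the hidden state to the predicted vector is our assumption, since the patent does not spell out how y is produced from the LSTM output):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """LSTM decoder: starting from the state (CD, hD) and an initial input of 0,
    emits N predicted vectors, feeding each prediction back as the next input."""
    def __init__(self, output_dim: int, hidden_dim: int):
        super().__init__()
        self.cell = nn.LSTMCell(output_dim, hidden_dim)
        self.project = nn.Linear(hidden_dim, output_dim)  # assumed read-out layer

    def forward(self, cd, hd, n_steps: int):
        y = cd.new_zeros(cd.size(0), self.project.out_features)  # initial input 0
        outputs = []
        for _ in range(n_steps):
            hd, cd = self.cell(y, (hd, cd))  # one decoding operation
            y = self.project(hd)             # predicted vector y_t+M+k
            outputs.append(y)
        return torch.stack(outputs, dim=1)   # (batch, N, output_dim)
```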
  • The model generator 23 trains the feature model 30 if the feature model 30 has not been trained yet or if it is determined that the feature model 30 should be trained.
  • Specifically, the model generator 23 generates, based on the target driving data sequences, i.e. target vector sequences, stored in the driving data DB 22, training data items for the individual drivers, i.e. for the individual in-vehicle devices 10, and supervised data items that are respectively paired to the training data items.
  • The model generator 23 reads, from the driving data DB 22, the target vectors x_t+1 to x_t+M obtained within the specified period from the time (t+1) to the time (t+M) inclusive for each driver; these are used as the training data items.
  • The model generator 23 also reads, from the driving data DB 22, the target vectors x_t+M+1 to x_t+M+N obtained within the specified period from the time (t+M+1) to the time (t+M+N) inclusive for each driver; these are used as the supervised data items.
  • Although FIG. 9 merely illustrates how the vehicle's speed, as an element of the target vectors, varies over time, each of the other driving data items similarly varies as a corresponding element of the target vectors.
  • The model generator 23 trains the feature model 30 using the training data items and the supervised data items that are respectively paired to the training data items.
  • Specifically, the model generator 23 performs a training task of
  • The model generator 23 repeats the training task while changing values of the parameters of the feature model 30 until the square error calculated for the current training task has reached a predetermined minimum value. This enables the parameters of the feature model 30 to be trained.
  • In other words, the model generator 23 repeats the training task for the single neural network comprised of the encoder 31, the intermediate layer 32, and the decoder 33 of the feature model 30 while changing values of all the parameters of the single neural network until the square error calculated for the current training task has reached a predetermined minimum value. This enables all the parameters of the feature model 30 to be collectively trained.
  • The training algorithm for training the feature model 30 schematically described above is what is called a backpropagation through time (BPTT) algorithm, but the training algorithm is not limited to the BPTT algorithm.
  • Note that the model generator 23 can be configured to perform a known sliding window algorithm for the target driving data sequences, i.e. target vector sequences stored in the driving data DB 22, to thereby extract training data items and supervised data items that are respectively paired to the training data items. Then, the model generator 23 can be configured to use the pairs of the training data items and supervised data items as a training data set.
  • The above training enables the feature model 30 to predict, based on the target vectors obtained within the period (M×T), target vectors within the period (N×T) that follows the period (M×T), where T represents the measurement cycle; the measurement cycle T in the first embodiment is hereinafter set to, for example, 1 second. A minimal sketch of one training step is given below.
  • This training of the feature model 30 enables
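  • The following is a rough PyTorch sketch of one such training step over a single (training, supervised) pair, reusing the Encoder, DriverTranslationLayer, and Decoder sketches above; the optimizer choice and mean-squared-error reduction are our assumptions:

```python
import torch.nn.functional as F

def train_step(encoder, translator, decoder, optimizer, x, bd, y_true):
    """One BPTT step: predict N future target vectors from M past ones and the
    driver's identification vector, then minimize the squared prediction error."""
    optimizer.zero_grad()
    ce, he = encoder(x)                    # x: (batch, M, input_dim)
    cd0, hd0 = translator(ce, he, bd)      # initial decoder state
    y_pred = decoder(cd0, hd0, n_steps=y_true.size(1))
    loss = F.mse_loss(y_pred, y_true)      # square error against supervised items
    loss.backward()                        # backpropagation through time
    optimizer.step()
    return loss.item()
```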
  • The data extractor 24, which serves as, for example, a feature extractor, is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 generated, i.e. trained, by the model generator 23; the analytical data is used for the analyzing unit 26 to analyze various phenomena or events associated with driving. Then, the data extractor 24 is configured to store the analytical data in the analytical data DB 25.
  • The following describes how the data extractor 24 extracts, from the collected target driving data sequences, the analytical data using, for example, the following approaches.
  • In a first approach, the data extractor 24 extracts, as the analytical data, first common analytical data including the cell state CD_t+M, which is output from the common network 321 of the intermediate layer 32 as a result of the input of the target vectors x_t+1 to x_t+M and the specified identification vector bd to the feature model 30.
  • The extracted first common analytical data represents the latent features of the collected target driving data sequences; the latent features are independent of drivers.
  • In a second approach, the data extractor 24 extracts, as the analytical data, second common analytical data including the predicted sequential vectors y_t+M+1 to y_t+M+N within the specified prediction period from the time (t+M+1) to the time (t+M+N), which are output from the decoder 33 as a result of the input of the target vectors x_t+1 to x_t+M and the specified identification vector bd to the feature model 30. That is, the extracted second common analytical data represents the latent features of the collected target driving data sequences; the latent features are independent of drivers.
  • Note that setting all elements of the identification vector bd to 0, or all elements to 1, enables any driver to be specified. That is, because setting every element of the identification vector bd to 0 represents that no driver is specified, and setting every element to 1 represents that all drivers are specified, either setting means that a particular driver is not specified, in other words, that any driver is specified.
  • Because the analytical data extracted by the second approach shows specific driving data items, i.e. data items dependent on specific driving behaviors themselves, it is easy for a user of the driving data analyzer 1 to intuitively understand what the analytical data means.
  • In a third approach, the data extractor 24 extracts, as the analytical data, the predicted sequential vectors y_t+M+1 to y_t+M+N within the specified prediction period from the time (t+M+1) to the time (t+M+N), which are output from the decoder 33 as a result of the input of the target vectors x_t+1 to x_t+M and the identification vector bd to the feature model 30.
  • If the identification vector bd represents a driver correlated with the target driving data items, i.e. target vectors, x_t+1 to x_t+M, who will be referred to as a target driver (driver A in FIG. 4),
  • the data extractor 24 enables the extracted predicted sequential vectors y_t+M+1 to y_t+M+N, which show a predicted future behavior of the target driver A and/or a predicted future behavior of the vehicle corresponding to the target driver A within the future prediction period (N×T), to be obtained (see the dashed curve in FIG. 4).
  • In contrast, if the identification vector bd represents a driver who is not correlated with the target driving data items, i.e. target vectors, x_t+1 to x_t+M, who will be referred to as a non-target driver (driver B in FIG. 4),
  • the data extractor 24 enables the extracted predicted sequential vectors y_t+M+1 to y_t+M+N, which show a predicted future behavior of the non-target driver B and/or a predicted future behavior of the vehicle corresponding to the non-target driver B within the future prediction period (N×T), to be obtained (see the dot-and-dash curve in FIG. 4).
  • That is, the data extractor 24 is capable of obtaining predicted driving data items for a specified driver even when the observed target data items belong to another driver; a minimal sketch of this driver swap is given below.
  • In FIG. 4, the solid curve represents the target data items of the target driver.
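  • Reusing the sketches above, swapping the identification vector while keeping the observed vectors fixed might look as follows (the dimensions and one-hot vectors are illustrative placeholders, not values from the patent):

```python
import torch

N = 10                                      # illustrative prediction length
x = torch.randn(1, 20, 8)                   # placeholder observed vectors (M=20, dim=8)
bd_driver_a = torch.eye(4)[0].unsqueeze(0)  # one-hot for target driver A (4 drivers)
bd_driver_b = torch.eye(4)[1].unsqueeze(0)  # one-hot for non-target driver B

with torch.no_grad():
    ce, he = encoder(x)                     # encoder/translator/decoder: sketches above
    for bd in (bd_driver_a, bd_driver_b):
        cd0, hd0 = translator(ce, he, bd)
        y_pred = decoder(cd0, hd0, n_steps=N)  # that driver's predicted future vectors
```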
  • The analyzing unit 26, which serves as, for example, a data analyzer, is configured to execute, based on the analytical data stored in the analytical data DB 25, various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
  • The analyzing unit 26 can also be configured to execute other common statistical tasks for the target driving data items, i.e. target vectors, in accordance with the analytical data stored in the analytical data DB 25.
  • For example, the analyzing unit 26 can be configured to analyze, based on the first or second common analytical data, the target driving data items, i.e. target vectors, to correspondingly extract driving scenes each representing similar driving situations, thus performing driver's driving assist in accordance with each of the driving scenes.
  • This driving scene extracting task is disclosed in Japanese Patent Application Publication No. 2013-250663 or U.S. Pat. No. 9,527,384. The disclosure of each of U.S. Pat. No. 9,527,384 and JP 2013-250663 is incorporated entirely herein by reference.
  • This driving scene extracting task therefore reduces cases where, although different drivers drive the same type of vehicle on the same road, the driving scenes for the different drivers are extracted as different driving scenes due to, for example, the difference between their driving habits. This results in an improvement of the accuracy of analyzing the driving scenes.
  • Additionally, the analyzing unit 26 can be configured to extract, as the analytical data, target driving data items for all the drivers in a specified driving situation, and execute, for example, a common statistical task that analyzes the driving behavior of each driver in the same specified driving situation.
  • The driving data analyzer 1 according to the first embodiment described in detail above obtains the following technical benefits.
  • The driving data analyzer 1 is configured to extract, from the collected target driving data items of the drivers, common feature data that represents at least one latent feature included in the collected target driving data items using the feature model 30; the at least one latent feature is at least one common feature being latent in the collected target driving data items independently of the drivers. Then, the driving data analyzer 1 is configured to analyze, based on the common feature data, at least information included in the collected target driving data items. This therefore enables a robust analyzed result, which is independent of the drivers, to be obtained.
  • The driving data analyzer 1 is configured to use, as the feature model 30, a neural network comprised of the encoder 31, the intermediate layer 32 linked to the encoder 31, and the decoder 33 linked to the intermediate layer 32; in other words, the neural network is comprised of a common encoder-decoder model to which the intermediate layer 32 has been added.
  • The intermediate layer 32 is comprised of the common network 321 and the individual network 322.
  • The common network 321 is configured to
  • The individual network 322 is configured to
  • Training of the neural network based on a known training approach therefore enables the latent features independent of the drivers to be stored in the common network 321, and the latent features for each driver to be stored in the individual network 322.
  • The driving data analyzer 1 makes it possible to predict, from the collected driving data items for a specified driver indicated by the identification vector bd, a future behavior of the specified driver using the feature model 30.
  • In addition, the driving data analyzer 1 makes it possible to predict a future behavior of a non-target driver using the feature model 30. This therefore enables the target driving data items for all the drivers to be mutually compared with each other to thereby analyze the compared results.
  • A driving data analyzer 1A according to the second embodiment differs from the driving data analyzer 1 in the following points. So, the following mainly describes the different points of the driving data analyzer 1A according to the second embodiment, and omits or simplifies descriptions of like parts between the first and second embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
  • The first embodiment is configured such that the server 20 generates the analytical data, whereas the second embodiment is configured such that each in-vehicle device 10A generates the analytical data.
  • The driving data analyzer 1A includes the in-vehicle units 10A respectively installed in the vehicles V1, . . . , Vn, and a server 20A communicable by radio with the in-vehicle units 10A.
  • The in-vehicle unit 10A installed in each of the vehicles V1 to Vn includes the data obtainer 11, a data transmitter 12a, a model receiver 13, and a data extractor 14.
  • The server 20A includes a data collector 21a, the driving data DB 22, the model generator 23, the analytical data DB 25, the analyzing unit 26, and a model transmitter 27.
  • The model transmitter 27 is configured to cyclically or periodically transmit, i.e. broadcast, to each in-vehicle device 10A, the feature model 30 each time the training of the feature model 30 is completed.
  • The model receiver 13 of each in-vehicle device 10A is configured to receive the feature model 30 each time the feature model 30 is transmitted from the server 20A.
  • The data extractor 14 is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 received by the model receiver 13; the analytical data is used for the analyzing unit 26 of the server 20A to analyze various phenomena or events associated with driving.
  • The extraction task performed by the data extractor 14 is substantially identical to the extraction task performed by the data extractor 24 according to the first embodiment.
  • The data transmitter 12a is configured to
  • The data collector 21a of the server 20A is configured to
  • The driving data analyzer 1A according to the second embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first embodiment.
  • Specifically, the driving data analyzer 1A is configured such that each in-vehicle device 10A extracts, from the collected target driving data sequences, the analytical data, making it possible to reduce the processing load of the server 20A.
  • A driving data analyzer 1B according to the third embodiment differs from the driving data analyzer 1 in the following points. So, the following mainly describes the different points of the driving data analyzer 1B according to the third embodiment, and omits or simplifies descriptions of like parts between the first and third embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
  • The first embodiment is configured such that the analyzing unit 26 of the server 20 executes, based on the analytical data stored in the analytical data DB 25, various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
  • In contrast, the third embodiment is configured such that
  • A server 20B transmits the analytical data to each in-vehicle device 10B
  • Each in-vehicle device 10B performs various cruise assist tasks, i.e. various driving assist tasks, in accordance with the analytical data
  • The driving data analyzer 1B includes the in-vehicle units 10B respectively installed in the vehicles V1, . . . , Vn, and the server 20B communicable by radio with the in-vehicle units 10B.
  • The in-vehicle unit 10B installed in each of the vehicles V1 to Vn includes the data obtainer 11, an analytical data receiver 15, and a driving assist executor 16.
  • The server 20B includes the data collector 21, the driving data DB 22, the model generator 23, the data extractor 24, the analytical data DB 25, and an analytical data transmitter 28 in place of the analyzing unit 26.
  • The analytical data transmitter 28 is configured to transmit, i.e. broadcast, to each in-vehicle device 10B, the analytical data stored in the analytical data DB 25.
  • The analytical data receiver 15 of each in-vehicle device 10B is configured to receive the analytical data.
  • The driving assist executor 16 of each in-vehicle device 10B is configured to
  • Execution of this at least one of the various driving assist tasks enables the corresponding vehicle to travel safely.
  • The driving data analyzer 1B according to the third embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first embodiment.
  • Specifically, the driving data analyzer 1B is configured such that each in-vehicle device 10B executes, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks. This results in an improvement of the driving safety of each vehicle.
  • A driving data analyzer 1C according to the fourth embodiment differs from each of the driving data analyzers 1 to 1B in the following points. So, the following mainly describes the different points of the driving data analyzer 1C according to the fourth embodiment, and omits or simplifies descriptions of like parts between the fourth embodiment and each of the first to third embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
  • The first embodiment is configured such that the server 20 generates the analytical data, and executes, based on the analytical data stored in the analytical data DB 25, various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
  • In contrast, in the fourth embodiment, each in-vehicle device 10C generates the analytical data, and performs various cruise assist tasks, i.e. various driving assist tasks, in accordance with the analytical data.
  • The driving data analyzer 1C includes the in-vehicle units 10C respectively installed in the vehicles V1, . . . , Vn, and a server 20C communicable by radio with the in-vehicle units 10C.
  • The in-vehicle unit 10C installed in each of the vehicles V1 to Vn includes the data obtainer 11, the data transmitter 12, the model receiver 13, the data extractor 14, and the driving assist executor 16.
  • The server 20C includes the data collector 21, the driving data DB 22, the model generator 23, and the model transmitter 27.
  • The model transmitter 27 is configured to cyclically or periodically transmit, i.e. broadcast, to each in-vehicle device 10C, the feature model 30 each time the training of the feature model 30 is completed.
  • The model receiver 13 of each in-vehicle device 10C is configured to receive the feature model 30 each time the feature model 30 is transmitted from the server 20C.
  • The data extractor 14 is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 received by the model receiver 13; the analytical data is used for the driving assist executor 16 to execute various driving assist tasks.
  • The extraction task performed by the data extractor 14 is substantially identical to the extraction task performed by the data extractor 24 according to the first embodiment.
  • The driving assist executor 16 of each in-vehicle device 10C is configured to
  • Execution of this at least one of the various driving assist tasks enables the corresponding vehicle to travel safely.
  • The driving data analyzer 1C according to the fourth embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first and third embodiments.
  • Specifically, the driving data analyzer 1C is configured such that each in-vehicle device 10C extracts, from the collected target driving data sequences, the analytical data, and executes, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks. This enables each in-vehicle device 10C to merely perform communications required to receive the feature model 30, making it possible to obtain
  • Each of the first and second embodiments is configured such that the encoder 31 supplies the cell state CE_t+M to the common network 321 as its input, and supplies the output hE_t+M to the individual network 322 as its input.
  • The present disclosure is not, however, limited to this configuration.
  • Specifically, the encoder 31 can supply the cell state CE_t+M to the individual network 322 as its input, and supply the output hE_t+M to the common network 321 as its input. In addition, the encoder 31 can supply any one of the cell state CE_t+M and the output hE_t+M to each of the common network 321 and the individual network 322.
  • Each of the encoder 31 and the decoder 33 is comprised of a single LSTM layer or multiple LSTM layers, but can instead be comprised of, for example, a common RNN or a gated recurrent unit (GRU). Because the GRU does not have a cell state, the output of the GRU shows the latent features of an input target vector. For this reason, the encoder 31 using the GRU can be configured to send a predetermined number of elements of the output of the GRU to the common network 321, and send the remaining elements to the individual network 322.
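  • A minimal sketch of such a GRU-based encoder variant, splitting its final output between the two partial networks, might look as follows (the split point common_dim and the class name are our assumptions):

```python
import torch.nn as nn

class GRUEncoder(nn.Module):
    """GRU encoder variant: a GRU has no cell state, so its final output is
    split, with some elements going to the common network 321 and the
    remaining elements to the individual network 322."""
    def __init__(self, input_dim: int, hidden_dim: int, common_dim: int):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.common_dim = common_dim       # how many elements go to the common network

    def forward(self, x):                  # x: (batch, M, input_dim)
        _, h = self.gru(x)                 # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        return h[:, :self.common_dim], h[:, self.common_dim:]
```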
  • Each of the first to fourth embodiments is configured to use, as the at least one external factor, the identity of a driver of each vehicle, i.e. the driver's driving habits, but the present disclosure is not limited thereto.
  • Specifically, each in-vehicle device 10 can obtain the identification data item indicative of another external factor that causes the driving data items to vary, such as
  • The data obtainer 11 of each in-vehicle device 10 can also obtain the identification data item indicative of a combination of the above external factors.
  • In this modification, the model generator 23 can be configured to
  • The functions of one element in each of the first to fourth embodiments can be distributed as plural elements, and the functions that plural elements have can be combined into one element. At least part of the structure of each of the first to fourth embodiments can be replaced with a known structure having the same function as the at least part of the structure of the corresponding embodiment. A part of the structure of each of the first to fourth embodiments can be eliminated. At least part of the structure of each of the first to fourth embodiments can be added to or replaced with the structures of the other embodiments. All aspects included in the technological ideas specified by the language employed by the claims constitute embodiments of the present invention.
  • The present disclosure can be implemented by various embodiments in addition to the driving data analyzer; the various embodiments include driving data analyzing systems each including in-vehicle devices and a server, programs for serving a computer as each of the in-vehicle devices and the server, storage media storing the programs, and driving data analyzing methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Traffic Control Systems (AREA)

Abstract

In a driving data analyzer, a data collector collects, from at least one vehicle, driving data sequences while each of the driving data sequences is correlated with identification data. Each driving data sequence includes sequential driving data items, and each driving data item represents at least one of a driver's operation of at least one vehicle and a behavior of the at least one vehicle based on that operation. The identification data represents a type of at least one external factor that contributes to variations in the driving data items. A feature extractor applies a data compression network model to the driving data sequences to thereby extract, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor.

Description

CROSS REFERENCE TO RELATED APPLICATION
This application is based on and claims the benefit of priority from Japanese Patent Application No. 2017-199379 filed on Oct. 13, 2017, the disclosure of which is incorporated in its entirety herein by reference.
TECHNICAL FIELD
The present disclosure relates to technologies for analyzing driving data of a vehicle.
BACKGROUND
A driving assist system aims to assist driver's driving of a vehicle according to various driving situations. For achieving such an object, the driving assist system suitably analyzes the driving situations of an own vehicle.
Japanese Patent Application Publication No. 2013-250663 discloses a technology that collects a large amount of various items of data associated with driver's driving operations and behaviors of an own vehicle as a driving data group. Then, this technology analyzes the driving data group to correspondingly extract driving scenes each representing similar driving situations, thus performing driver's driving assist in accordance with each of the driving scenes.
SUMMARY
The inventors of the present application have studied in detail the published technology, and correspondingly have found the following issue.
Specifically, because drivers may have different driving habits, all drivers may not necessarily perform the same driving operations, i.e. driving actions, in the same driving scene. For example, when driving an own vehicle to go around a curve, one driver sufficiently decelerates the own vehicle immediately before the curve and thereafter turns the steering wheel, while another driver may turn the steering wheel at the same time as decelerating the own vehicle. Additionally, various types of vehicles have individually different travelling characteristics for each type. These driving habits and travelling characteristics are reflected in the driving data group.
If the published technology extracts driving scenes in accordance with a driving data group in which the driving habits and the travelling characteristics are thus reflected, a first driving scene in the extracted driving scenes may be different from a second driving scene in the extracted driving scenes although the first and second driving scenes represent the same or similar driving situations. For example, if different drivers drive the same type of vehicle on the same road, the driving scenes for the different drivers may be extracted as different driving scenes. In addition, if the same driver drives different types of vehicles on the same road, the driving scenes for the different types of vehicles may be extracted as different driving scenes.
This may make it difficult to extract, from the driving data group, driving scenes, each of which is
(1) Independent of external factors including driver's habits and different types of vehicles
(2) Common to all drivers and all types of vehicles
In view of the circumstances set forth above, one aspect of the present disclosure seeks to provide technologies, each of which is capable of analyzing a driving data group independently of such external factors.
According to an exemplary aspect of the present disclosure, there is provided a driving data analyzer. The driving data analyzer includes a data collector configured to collect, from at least one vehicle, driving data sequences while each of the driving data sequences is correlated with identification data. Each driving data sequence includes sequential driving data items, and each driving data item represents at least one of a driver's operation of the at least one vehicle and a behavior of the at least one vehicle based on the driver's operation. The identification data represents a type of at least one external factor that contributes to variations in the driving data items. The driving data analyzer further includes a feature extractor configured to apply a data compression network model to the driving data sequences to thereby extract, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor.
This configuration of the driving data analyzer extracts, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor. This therefore enables a robust data analysis based on the at least one latent feature to be carried out.
BRIEF DESCRIPTION OF THE DRAWINGS
Other aspects of the present disclosure will become apparent from the following description of embodiments with reference to the accompanying drawings in which:
FIG. 1 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the first embodiment of the present disclosure;
FIG. 2 is a block diagram schematically illustrating an example of the internal structure of a feature model according to the first embodiment;
FIG. 3 is a block diagram schematically illustrating an example of how each of an encoder and a decoder of the individual network illustrated in FIG. 2 is operated;
FIG. 4 is a graph schematically illustrating an example of relationships among a target driving data sequence of a driver, a predicted data sequence of another driver, and a predicted data sequence of the former driver;
FIG. 5 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the second embodiment of the present disclosure;
FIG. 6 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the third embodiment of the present disclosure;
FIG. 7 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the fourth embodiment of the present disclosure;
FIG. 8 is a block diagram schematically illustrating multiple LSTM layers of an encoder comprised of the multiple LSTM layers; and
FIG. 9 is a graph schematically illustrating an example of a training data sequence.
DETAILED DESCRIPTION OF EMBODIMENTS
The following describes embodiments of the present disclosure with reference to the accompanying drawings. In the embodiments, descriptions of like parts between the embodiments, to which like reference characters are assigned, are omitted or simplified to avoid redundancy.
First Embodiment
The following describes a driving data analyzer 1 according to the first embodiment of the present disclosure with reference to FIGS. 1 to 4.
Referring to FIG. 1, the driving data analyzer 1 includes in-vehicle units 10 respectively installed in a plurality of vehicles V1, . . . , Vn, and a server 20 communicable by radio with the in-vehicle units 10.
The in-vehicle units 10 serve as mobile terminals at least partly provided in the respective vehicles V1 to Vn. Each of the in-vehicle units 10 is configured mainly as at least one known microcomputer including a CPU, i.e. a processor, 10a, a memory device 10b, and an input unit 10c. The memory device 10b includes, for example, at least one of semiconductor memories, such as a RAM, a ROM, and a flash memory. These semiconductor memories include at least one non-transitory computer-readable storage medium.
For example, the CPU 10a of each in-vehicle unit 10 can run one or more programs, i.e. sets of program instructions, stored in the memory device 10b, thus implementing various functions of the in-vehicle unit 10 as software operations. In other words, the CPU 10a can run programs stored in the memory device 10b, thus performing one or more methods in accordance with the corresponding one or more programs. At least one of the various functions of at least one in-vehicle unit 10 can be implemented as a hardware electronic circuit. For example, the various functions of at least one in-vehicle unit 10 can be implemented by a combination of electronic circuits including digital circuits, which include many logic gates, analog circuits, digital/analog hybrid circuits, or hardware/software hybrid circuits. The input unit 10c of each in-vehicle unit 10 enables a driver of the corresponding vehicle to enter various commands and/or various data items to the CPU 10a of the corresponding in-vehicle unit 10.
Similarly, the server 20 is configured mainly as at least one known microcomputer including a CPU, i.e. a processor, 20a and a memory device 20b. The memory device 20b includes, for example, at least one of semiconductor memories, such as a RAM, a ROM, and a flash memory. These semiconductor memories include at least one non-transitory computer-readable storage medium.
For example, the CPU 20a of the server 20 can run one or more programs, i.e. program instructions, stored in the memory device 20b, thus implementing various functions of the server 20 as software operations. In other words, the CPU 20a can run programs stored in the memory device 20b, thus performing one or more routines in accordance with the corresponding one or more programs. At least one of the various functions of the server 20 can be implemented as a hardware electronic circuit. For example, the various functions of the server 20 can be implemented by a combination of electronic circuits including digital circuits, which include many logic gates, analog circuits, digital/analog hybrid circuits, or hardware/software hybrid circuits.
Referring to FIG. 1, the in-vehicle unit 10 installed in each of the vehicles V1 to Vn includes a data obtainer 11 and a data transmitter 12.
As described above, the CPU 10 a of the in-vehicle unit 10 runs corresponding one or more programs stored in the memory device 10 b, thus implementing the functional modules 11 and 12.
The data obtainer 11 is communicably connected to sensors, i.e. in-vehicle sensors, SS installed in the corresponding vehicle. Note that the data obtainer 11 can include the in-vehicle sensors SS.
The in-vehicle sensors SS include a first type of sensors each repeatedly measuring a driving data item including at least one of
1. Operation data item D1 indicative of driver's operations of at least one of driver-operable devices installed in the corresponding vehicle
2. Behavioral data items D2 indicative of behaviors of the corresponding vehicle
For example, the operation data items D1 include
1. The quantity or state of a driver's operation of a driver-operable accelerator pedal of the corresponding vehicle linked to a throttle valve
2. The quantity or state of a driver's operation of a driver-operable brake pedal of the corresponding vehicle linked to a hydraulic brake system (not shown) in the corresponding vehicle
3. The steering angle of the corresponding vehicle
For example, the behavioral data items D2 include
1. The speed of the corresponding vehicle
2. The acceleration of the corresponding vehicle
3. The yaw rate of the corresponding vehicle
The in-vehicle sensors SS can include a second type of sensors, such as a radar sensor, an image sensor, and a weather sensor, each repeatedly measuring a situation data item D3 that is useful in specifying a driving situation of the corresponding vehicle, which includes at least one of
1. An image data item including an image of a region, such as a front region, located around the corresponding vehicle captured by the image sensor, which is one of the second type of sensors, installed to the corresponding vehicle
2. A road attribute item indicative of the attribute of a road on which the corresponding vehicle is travelling, such as a straight road, a curved road, an expressway, and/or an ordinary road, which can be derived from map data and the image data item
3. A current location data item indicative of the current location of the corresponding vehicle
4. Weather information indicative of a weather condition around the corresponding vehicle, such as a fine, i.e. sunny, condition, a cloudy condition, or a rainy condition
The data obtainer 11 obtains, from the first and second types of sensors, a driving data group including the operation data items D1, behavioral data items D2, and situation data items D3 in a predetermined measurement cycle. Then, the data obtainer 11 sequentially outputs, to the data transmitter 12, the driving data groups measured in the measurement cycle as target driving data sequences.
The data obtainer 11 also obtains an identification data item indicative of the type of at least one external factor that contributes to variations in the driving data items. The first embodiment uses, as the at least one external factor, the identity of a driver of the corresponding vehicle. That is, the type of the at least one external factor for each of the vehicles V1 to Vn represents a corresponding driver. Unique identification data (ID) items D4 are for example assigned to respective drivers that can use at least one of the vehicles V1 to Vn according to the first embodiment.
For example, when a driver uses one of the vehicles V1 to Vn, the driver operates the input unit 10c of the corresponding one of the in-vehicle units 10 to thereby enter the driver's ID data item D4 to the CPU 10a, and the CPU 10a stores the entered driver's ID data item D4 in the memory device 10b.
As another example, unique ID data items can be assigned to respective keys prepared for each of the vehicles V1 to Vn, and when a driver inserts an assigned key for one of the vehicles V1 to Vn into an ignition lock of that vehicle, the CPU 10a, which is communicable with the ignition lock, reads the ID data item assigned to the inserted key.
As a further example, ID data items of drivers who have the authority to use a selected vehicle in the vehicles V1 to Vn are recorded beforehand in the memory device 10b of the in-vehicle unit 10 of the selected vehicle. Then, when a driver who has the authority to use the selected vehicle enters information about himself or herself, the CPU 10a can receive the information and extract the recorded ID data item that matches the entered information.
The data obtainer 11 outputs, to the data transmitter 12, the ID data item of a current driver of the corresponding vehicle.
The data transmitter 12 is configured to
(1) Receive the target driving data sequences and the ID data item output from the data obtainer 11
(2) Transmit the target driving data sequences and the ID data item to the server 20.
The data transmitter 12 can be configured to indirectly transmit the target driving data sequences and the ID data item to the server 20 via infrastructures provided on, for example, roadsides. The data transmitter 12 can be comprised of a mobile communicator, such as a cellular phone, which can communicate with the server via radio communication networks.
The server 20 serves as a fixed station directly communicable or indirectly communicable via infrastructures or the radio communication networks with the vehicles V1 to Vn, i.e. their in-vehicle devices 10.
Referring to FIG. 1, the server 20 includes a data collector 21, a driving data database (DB) 22, a model generator 23, a data extractor 24, an analytical data DB 25, and an analyzing unit 26. As described above, the CPU 20a of the server 20 executes one or more corresponding programs stored in the memory device 20b, thus implementing the functional modules 21, 23, 24, and 26. The memory device 20b can include predetermined storage areas allocated to serve as the respective driving data DB 22 and analytical data DB 25, or include at least one mass storage, such as a semiconductor memory or a magnetic memory, such as a hard disk drive, that serves as the driving data DB 22 and analytical data DB 25.
The data collector 21 collects the target driving data sequences and the ID data item output from each of the in-vehicle devices 10, and stores, in the driving data DB 22, the collected target driving data sequences and ID data item for each of the in-vehicle devices 10 such that the collected target driving data sequences for each of the in-vehicle devices 10 are correlated with the ID data item for the corresponding in-vehicle device 10.
The target driving data sequences include sequential sets of the operation data items D1, the behavioral data items D2, and the situation data items D3. That is, at time (t−1), a set of the operation data items D1, the behavioral data items D2, and the situation data items D3 obtained by the data obtainer 11 is collected by the data collector 21. After the lapse of one measurement cycle since the time (t−1), a next set of these data items obtained by the data obtainer 11 is collected by the data collector 21 at time t, and after the lapse of another measurement cycle, a further set is collected by the data collector 21 at time (t+1).
The data collector 21 is configured to transform the data items D1 to D3 obtained at time t into a multidimensional target vector xt in which these data items are arranged as its elements in a predetermined order. That is, if data items X1, X2, . . . , XK are obtained as the data items D1 to D3 at the time t, the multidimensional target vector xt is expressed as xt=(X1, X2, . . . , XK). The target vectors xt (t=1, 2, . . . ) accordingly constitute target vector sequences.
These target vector sequences are stored in the driving data DB 22.
The data collector 21 is also configured to transform the ID data item for each of the vehicles V1 to Vn, i.e. for each of the drivers, into a multidimensional identification vector whose elements each have a value of 0 or 1; the number of dimensions of the identification vector corresponds to the number of drivers of the respective vehicles V1 to Vn. For example, if the number of vehicles V1 to Vn is four, so that the number of drivers is four, the respective ID data items are expressed as an identification vector b1=(1, 0, 0, 0), an identification vector b2=(0, 1, 0, 0), an identification vector b3=(0, 0, 1, 0), and an identification vector b4=(0, 0, 0, 1). That is, any one of the drivers can be expressed as an identification vector bd in which d identifies the driver.
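As a minimal sketch of the vector construction described above, assuming Python with NumPy and hypothetical element names (the element order and driver count are illustrative only, not taken from the disclosure), the data collector's transformations could be written as follows.

import numpy as np

# Pack one measurement cycle's data items D1 to D3 into a target vector xt,
# arranging the elements in a fixed, predetermined order (hypothetical order).
def make_target_vector(accel_pos, brake_pos, steer_angle, speed, accel, yaw_rate):
    return np.array([accel_pos, brake_pos, steer_angle, speed, accel, yaw_rate],
                    dtype=np.float32)

# Transform a driver index into a one-hot identification vector bd whose
# dimensionality equals the number of drivers.
def make_id_vector(driver_index, num_drivers):
    b = np.zeros(num_drivers, dtype=np.float32)
    b[driver_index] = 1.0
    return b

x_t = make_target_vector(0.3, 0.0, 5.2, 42.0, 0.1, 0.02)  # one target vector
b_2 = make_id_vector(1, 4)  # second of four drivers -> (0, 1, 0, 0)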
The model generator 23 is configured to perform, for example at regular or irregular intervals, a training task that
(1) Captures the target vector sequences for all the in-vehicle devices 10
(2) Trains a feature model 30, such as a data compression model or a dimensionality reduction model, in accordance with the target vector sequences to thereby generate a trained feature model 30
Referring to FIG. 2, the feature model 30 includes an encoder 31, an intermediate layer, i.e. a driver translation layer, 32, and a decoder 33. That is, the feature model 30 is designed as a neural network comprised of a common encoder-decoder model to which the intermediate layer (driver translation layer) 32 has been added.
In the encoder-decoder model, the encoder 31 is configured to extract latent features, i.e. essential features, from actual input data, and the decoder 33 is configured to use the extracted latent features to thereby reconstruct or predict input data. As each of the encoder 31 and the decoder 33, a long short-term memory (LSTM) layer, which is one type of recurrent neural network (RNN), is used in the first embodiment. Because the LSTM layer is a known technology, its detailed descriptions are omitted.
Note that the LSTM layer has a memory cell, and the state of the memory cell, which will be referred to as a cell state, and an output in the LSTM layer of the encoder 31 at time t will be respectively expressed as CEt and hEt. Similarly, the cell state and output in the LSTM layer of the decoder 33 at time t will be respectively expressed as CDt and hDt. Note that each of the cell state CEt and output hEt shows latent features of the input target vector at time t.
As illustrated in FIG. 3, the encoder 31 includes a single LSTM layer comprised of sequentially connected LSTM nodes 31a1 to 31aM. Note that, for ease of understanding the operations of the encoder 31, FIG. 3 depicts the single LSTM layer as the temporally unrolled LSTM nodes 31a1 to 31aM.
When M target vectors xt+1 to xt+M, i.e. measured driving data items, obtained within a specified input period from time (t+1) to time (t+M) inclusive are sequentially input to the encoder 31, the first LSTM node 31a1 generates the cell state CEt+1 and output hEt+1 in accordance with the target vector xt+1 when the target vector xt+1 is input thereto, and supplies the cell state CEt+1 and output hEt+1 to the next LSTM node 31a2. For example, as described above, the interval between the time (t+k) and the adjacent time (t+k+1) is set to one measurement cycle; k is any non-negative integer.
Similarly, the second LSTM node 31a2 generates the cell state CEt+2 and output hEt+2 in accordance with the cell state CEt+1, output hEt+1, and target vector xt+2 when the target vector xt+2 is input thereto, and supplies the cell state CEt+2 and output hEt+2 to the next LSTM node 31a3.
Finally, when the target vector xt+M is input to the M-th LSTM node 31aM, the M-th LSTM node 31aM generates the cell state CEt+M and output hEt+M in accordance with the cell state CEt+M−1, output hEt+M−1, and target vector xt+M, and supplies the cell state CEt+M and output hEt+M to the intermediate layer 32.
In other words, when the M target vectors xt+1 to xt+M are sequentially input to the encoder 31, the encoder 31 encodes the M target vectors xt+1 to xt+M to thereby output, as an encoded, i.e. compressed, data item, the cell state CEt+M and output hEt+M. The number of dimensions of each of the cell state CEt+M and output hEt+M is smaller than the total number of elements of the multidimensional target vectors xt+1 to xt+M, so that each of the cell state CEt+M and output hEt+M represents the latent features, i.e. common important features, included in the multidimensional target vectors xt+1 to xt+M.
In addition, the encoder 31 can be comprised of multiple LSTM layers. For example, FIG. 8 schematically illustrates an example of an encoder 31X having such multiple LSTM layers. Specifically, FIG. 8 illustrates a first layer of sequentially connected LSTM nodes 31a11 to 31aM1, to which the M target vectors xt+1 to xt+M are respectively input. FIG. 8 also illustrates a second layer of sequentially connected LSTM nodes 31a12 to 31aM2, which are respectively connected to the LSTM nodes 31a11 to 31aM1. That is, the LSTM node 31aM2 supplies, based on the cell state and output sent from the LSTM node 31a(M−1)2 and the cell state and output sent from the LSTM node 31aM1, the cell state CEt+M and output hEt+M to the intermediate layer 32.
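A minimal sketch of such an encoder, assuming PyTorch and a single LSTM layer (class and variable names are illustrative, not from the disclosure), could look like the following; the final hidden and cell states play the roles of hEt+M and CEt+M.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encodes M sequential target vectors into a compressed representation."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):                    # x: (batch, M, input_dim)
        _, (h, c) = self.lstm(x)             # final states after the M-th input
        return h.squeeze(0), c.squeeze(0)    # roles of hEt+M and CEt+M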
Referring to FIG. 2, the intermediate layer 32 is configured as, for example, a neural network to which the cell state CEt+M and output hEt+M sent from the encoder 31, and the identification vector bd, are input.
Specifically, the intermediate layer 32 is comprised of two partial networks, that is, a common network 321 and an individual network 322.
The common network 321 is configured to receive the cell state CEt+M, and output, to the decoder 33, the cell state CEt+M, i.e. the latent features included in the target vectors xt+1 to xt+M, without change as an initial cell state CDt+M for the decoder 33.
The individual network 322 includes a single dense layer, i.e. a single fully connected layer, or multiple dense layers, i.e. multiple fully connected layers.
Note that, in the feature model 30, parameters, which include, for example, connection weights and/or connection biases, between nodes of the different layers, and parameters of the LSTM layer, have been trained and can be repeatedly trained to increase the analysis accuracy of the driving data analyzer 1. How to train the parameters included in the feature model 30 will be described later.
Specifically, the individual network 322 merges the output hEt+M, i.e. the latent features included in the target vectors xt+1 to xt+M, with the identification vector bd to thereby output a merged result indicative of the latent features included in the target vectors xt+1 to xt+M, which are correlated with the identification vector bd.
The merged result or a part of it is input to the decoder 33 as the initial output hDt+M for the decoder 33, and the merged result or its remaining part is output as a latent feature vector Bd of the driver specified by the identification vector bd.
Although the single or multiple dense layers are used as the individual network 322, a merge network configured to merge the output hEt+M with the identification vector bd in accordance with a predetermined rule can be used as the individual network 322.
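Under the same illustrative assumptions, the intermediate layer could be sketched as follows, with the common network as a pass-through of the cell state and the individual network as a dense layer merging the encoder output with the identification vector; for simplicity, the whole merged result here serves as the decoder's initial output, although the disclosure also allows part of it to be output as the latent feature vector Bd.

import torch
import torch.nn as nn

class IntermediateLayer(nn.Module):
    """Driver translation layer: common pass-through plus individual merge."""
    def __init__(self, hidden_dim, num_drivers):
        super().__init__()
        # Individual network: a single dense layer over [hEt+M, bd].
        self.individual = nn.Sequential(
            nn.Linear(hidden_dim + num_drivers, hidden_dim),
            nn.Tanh(),
        )

    def forward(self, h_enc, c_enc, b_d):
        c_dec = c_enc  # common network: forward CEt+M unchanged as CDt+M
        h_dec = self.individual(torch.cat([h_enc, b_d], dim=-1))  # hDt+M
        return h_dec, c_dec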
As illustrated in FIG. 3, the decoder 33 includes, for example, a single LSTM layer comprised of sequentially connected LSTM nodes 33a1 to 33aN. Note that, for ease of understanding the operations of the decoder 33, FIG. 3 depicts the single LSTM layer as the temporally unrolled LSTM nodes 33a1 to 33aN. To the first LSTM node 33a1, the cell state CDt+M and the output hDt+M are provided as initial values. In addition, an initial value of 0 is input to the first LSTM node 33a1 as its input.
When the cell state CDt+M, the output hDt+M, and the initial value of 0 are input to the first LSTM node 33a1, the decoder 33 performs a decoding operation N times to thereby sequentially output predicted sequential vectors yt+M+1 to yt+M+N within a specified prediction period from the time (t+M+1) to the time (t+M+N), which sequentially follows the specified input period from the time (t+1) to the time (t+M).
Specifically, based on the cell state CDt+M, output hDt+M, and the input of the initial value of 0, the first LSTM node 33a1 generates, at the first decoding operation,
(1) The cell state CDt+M+1 and the output hDt+M+1
(2) The predicted sequential vector yt+M+1
Then, the first LSTM node 33a1 supplies the cell state CDt+M+1 and the output hDt+M+1 to the next LSTM node 33a2, and also supplies the predicted sequential vector yt+M+1 to the next LSTM node 33a2 as its input.
Similarly, based on the cell state CDt+M+1, output hDt+M+1, and the input of the predicted sequential vector yt+M+1, the second LSTM node 33a2 generates, at the second decoding operation,
(1) The cell state CDt+M+2 and the output hDt+M+2
(2) A predicted sequential vector yt+M+2
Then, the second LSTM node 33a2 supplies the cell state CDt+M+2 and the output hDt+M+2 to the next LSTM node 33a3, and also supplies the predicted sequential vector yt+M+2 to the next LSTM node 33a3 as its input.
The third LSTM node 33a3 to the (N−1)-th LSTM node 33a(N−1) sequentially perform their decoding operations in the same manner as the first and second LSTM nodes 33a1 and 33a2.
Then, based on the cell state CDt+M+N−1, output hDt+M+N−1, and the input of the predicted sequential vector yt+M+N−1, the N-th LSTM node 33aN generates, at the N-th decoding operation,
(1) The cell state CDt+M+N and the output hDt+M+N
(2) A predicted sequential vector yt+M+N
The N-th LSTM node 33aN outputs the predicted sequential vector yt+M+N. In other words, when the cell state CDt+M, output hDt+M, and the input of the initial value of 0 are input to the decoder 33, the decoder 33 is configured to decode them into the predicted sequential vectors yt+M+1 to yt+M+N in response to the input data sequential vectors xt+1 to xt+M.
Note that the features of the predicted sequential vector yt+M+1 output from the first LSTM node 33a1 are embedded in the latent features CDt+M+2 and hDt+M+2 of the second LSTM node 33a2, the features of the predicted sequential vector yt+M+2 output from the second LSTM node 33a2 are embedded in the latent features CDt+M+3 and hDt+M+3 of the third LSTM node 33a3, . . . , and the features of the predicted sequential vector yt+M+N−1 output from the (N−1)-th LSTM node 33a(N−1) are embedded in the latent features CDt+M+N and hDt+M+N of the N-th LSTM node 33aN.
Like the encoder 31, the decoder 33 can be comprised of multi LSTM layers.
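A sketch of the decoder's autoregressive loop under the same illustrative assumptions follows; starting from the initial states supplied by the intermediate layer and an all-zero input, each decoding operation feeds its prediction back in as the next input.

import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Decodes the initial states into N predicted sequential vectors."""
    def __init__(self, output_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(output_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, h0, c0, n_steps):
        h, c = h0, c0
        y = torch.zeros(h0.size(0), self.out.out_features)  # initial input of 0
        preds = []
        for _ in range(n_steps):           # N decoding operations
            h, c = self.cell(y, (h, c))    # update cell state and output
            y = self.out(h)                # predicted sequential vector
            preds.append(y)
        return torch.stack(preds, dim=1)   # (batch, N, output_dim)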
Next, the following describes how the model generator 23 trains the feature model 30 when the feature model 30 has not yet been trained or when it is determined that the feature model 30 should be retrained.
First, the model generator 23 generates, based on the target driving data sequences, i.e. target vector sequences, stored in the driving data DB 22, training data items for the individual drivers, i.e. for the individual in-vehicle devices 10, and supervised data items that are respectively paired with the training data items.
Specifically, as illustrated in FIG. 9, the model generator 23 reads, from the driving data DB 22, the target vectors xt+1 to xt+M obtained within the specified period from the time (t+1) to the time (t+M) inclusive for each driver, and uses them as the training data items. Next, the model generator 23 reads, from the driving data DB 22, the target vectors xt+M+1 to xt+M+N obtained within the specified period from the time (t+M+1) to the time (t+M+N) inclusive for each driver, and uses them as the supervised data items.
Note that the number N is, for example, set to be equal to the number M, but can be set to be different from the number M. FIG. 9 merely illustrates how the vehicle's speed, as one element of the target vectors, varies over time; each of the other driving data items similarly varies as a corresponding element of the target vectors.
Then, the model generator 23 trains the feature model 30 using the training data items and the supervised data items that are respectively paired with the training data items.
Specifically, the model generator 23 performs a training task of
(1) Inputting the training data items xt+1 to xt+M to the encoder 31 of the feature model 30, so that the decoder 33 of the feature model 30 outputs the predicted sequential vectors yt+M+1 to yt+M+N
(2) Calculating the square of the difference between each element of the supervised data items xt+M+1 to xt+M+N and the corresponding element of the predicted sequential vectors yt+M+1 to yt+M+N
(3) Calculating the sum of the calculated squares of the respective differences as a square error
Thereafter, the model generator 23 repeats the training task while changing values of the parameters of the feature model 30 until the square error calculated for the current training task has decreased to a predetermined minimum value. This enables the parameters of the feature model 30 to be trained.
Specifically, the model generator 23 repeats the training task for the single neural network comprised of the encoder 31, the intermediate layer 32, and the decoder 33 of the feature model 30 while changing values of all the parameters of the single neural network until the square error calculated for the current training task has decreased to the predetermined minimum value. This enables all the parameters of the feature model 30 to be collectively trained.
The training algorithm schematically described above is what is called a backpropagation through time (BPTT) algorithm, but the training algorithm for training the feature model 30 is not limited to the BPTT algorithm.
The model generator 23 can be configured to perform a known sliding window algorithm on the target driving data sequences, i.e. the target vector sequences stored in the driving data DB 22, to thereby extract training data items and supervised data items that are respectively paired with the training data items. Then, the model generator 23 can use the pairs of the training data items and supervised data items as a training data set.
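The training task could be sketched as follows under the same illustrative assumptions; the sliding window and the square-error criterion follow the description above, while the optimizer is an arbitrary choice not specified in the disclosure.

import torch

def make_windows(seq, M, N):
    """Slide a window over a (T, K) tensor of target vectors, pairing M
    training vectors with the N following supervised vectors."""
    xs, ys = [], []
    for t in range(seq.size(0) - M - N + 1):
        xs.append(seq[t:t + M])
        ys.append(seq[t + M:t + M + N])
    return torch.stack(xs), torch.stack(ys)

def train_step(encoder, mid, decoder, optimizer, x, b_d, y_true):
    h, c = encoder(x)                       # encode the M training vectors
    h0, c0 = mid(h, c, b_d)                 # condition on the driver's ID vector
    y_pred = decoder(h0, c0, y_true.size(1))
    loss = ((y_pred - y_true) ** 2).sum()   # sum of squared element differences
    optimizer.zero_grad()
    loss.backward()                         # BPTT through the unrolled LSTMs
    optimizer.step()
    return loss.item()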
The above training enables the feature model 30 to predict, based on the target vectors obtained within the period (M×T), target vectors within the period (N×T) that follows the period (M×T), where T represents the measurement cycle; the measurement cycle T in the first embodiment is, for example, set to 1 second.
This training of the feature model 30 enables
(1) The cell state CDt+M, which represents the latent features, independent of the drivers, included in the target vectors xt+1 to xt+M, to be stored in the common network 321
(2) The output hEt+M, which represents the latent features for each driver included in the target vectors xt+1 to xt+M, to be stored in the individual network 322
The data extractor 24, which serves as, for example, a feature extractor, is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 generated, i.e. trained, by the model generator 23; the analytical data is used for the analyzing unit 26 to analyze various phenomena or events associated with driving. Then, the data extractor 24 is configured to store the analytical data in the analytical data DB 25.
The following describes how the data extractor 24 extracts, from the collected target driving data sequences, the analytical data using, for example, the following approaches.
I. First Approach
When the target driving data items, i.e. target vectors, xt+1 to xt+M within a specified period (M×T) and a specified identification vector bd are input to the feature model 30, the data extractor 24 extracts, as the analytical data, first common analytical data including the cell state CDt+M, which is output from the common network 321 of the intermediate layer 32 as a result of the input of the target vectors xt+1 to xt+M and the specified identification vector bd to the feature model 30.
That is, the extracted first common analytical data represents the latent features of the collected target driving data sequences; the latent features are independent of the drivers.
II. Second Approach
When the target driving data items, i.e. target vectors, xt+1 to xt+M within a specified period (M×T) and an identification vector bd indicative of any driver are input to the feature model 30, the data extractor 24 extracts, as the analytical data, second common analytical data including the predicted sequential vectors yt+M+1 to yt+M+N within the specified prediction period from the time (t+M+1) to the time (t+M+N), which are output from the decoder 33 as a result of the input of the target vectors xt+1 to xt+M and the specified identification vector bd to the feature model 30. That is, the extracted second common analytical data represents the latent features of the collected target driving data sequences; the latent features are independent of the drivers.
For example, setting every element of the identification vector bd to 0 or to 1 enables any driver to be specified. That is, setting every element of the identification vector bd to 0 represents that no driver is specified, and setting every element to 1 represents that all drivers are specified; in either case, a particular driver is not specified, in other words, any driver is specified.
Note that, because the analytical data extracted by the first approach shows the latent features themselves, it may be difficult for a user of the driving data analyzer 1 to intuitively understand what the analytical data means.
In contrast, because the analytical data extracted by the second approach shows specific driving data items, i.e. data items dependent on specific driving behaviors themselves, it is easy for a user of the driving data analyzer 1 to intuitively understand what the analytical data mean.
III. Third Approach
When the target driving data items, i.e. target vectors, xt+1 to xt+M within a specified input period (M×T) and an identification vector bd indicative of a particular driver are input to the feature model 30, the data extractor 24 extracts, as the analytical data, the predicted sequential vectors yt+M+1 to yt+M+N within the specified prediction period from the time (t+M+1) to the time (t+M+N), which are output from the decoder 33 as a result of the input of the target vectors xt+1 to xt+M and the identification vector bd to the feature model 30.
If the identification vector bd represents a driver correlated with the target driving data items, i.e. target vectors, xt+1 to xt+M, which will be referred to as a target driver (driver A in FIG. 4), the data extractor 24 enables the extracted predicted sequential vectors yt+M+1 to yt+M+N, which show a predicted future behavior of the target driver A and/or a predicted future behavior of the vehicle corresponding to the target driver A within the future prediction period (N×T), to be obtained (see dashed curve in FIG. 4).
In contrast, if the identification vector bd represents a driver who is not correlated with the target driving data items, i.e. target vectors, xt+1 to xt+M, which will be referred to as a non-target driver (driver B in FIG. 4), the data extractor 24 enables the extracted predicted sequential vectors yt+M+1 to yt+M+N, which show a predicted future behavior of the non-target driver B and/or a predicted future behavior of the vehicle corresponding to the non-target driver B within the future prediction period (N×T), to be obtained (see dot-and-dash curve in FIG. 4).
That is, even if there are no target driving data items, i.e. target vectors, of a specified driver on whom the data extractor 24 focuses in a driving situation, the data extractor 24 is capable of obtaining predicted driving data items for the specified driver provided that there are target data items for another driver in that situation. Note that, in FIG. 4, the solid curve represents the target data items of the target driver.
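Putting the pieces together, the third approach could be exercised as in the sketch below (same illustrative assumptions as the earlier sketches); extracting for driver B merely swaps the identification vector while keeping driver A's observed window.

import torch

@torch.no_grad()
def predict_for_driver(encoder, mid, decoder, x, b_d, n_steps):
    h, c = encoder(x)                # latent features of the observed window
    h0, c0 = mid(h, c, b_d)          # condition on the specified driver
    return decoder(h0, c0, n_steps)  # predicted vectors yt+M+1 to yt+M+N

# First approach: the driver-independent latent feature is c (the cell state
# the common network forwards unchanged). Second approach: an "any driver"
# query sets every element of b_d to 0 (or 1) instead of using a one-hot vector.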
The analyzing unit 26, which serves as, for example, a data analyzer, is configured to execute, based on the analytical data stored in the analytical data DB 25, various analytical tasks that analyze information included in the target driving data items, i.e. target vectors. The analyzing unit 26 can also be configured to perform other common statistical tasks on the target driving data items, i.e. target vectors, in accordance with the analytical data stored in the analytical data DB 25.
For example, the analyzing unit 26 can be configured to analyze, based on the first or second common analytical data, the target driving data items, i.e. target vectors, to correspondingly extract driving scenes each representing similar driving situations, thus performing driver's driving assist in accordance with each of the driving scenes. This driving scene extracting task is disclosed in Japanese Patent Application Publication No. 2013-250663 or US patent Publication U.S. Pat. No. 9,527,384. The disclosure of each of patent Publications U.S. Pat. No. 9,527,384 and JP 2013-250663 is incorporated entirely herein by reference.
This driving scene extracting task therefore reduces cases where, although different drivers drive the same type of vehicle on the same road, the driving scenes for the different drivers are extracted as different driving scenes due to, for example, the difference between their driving habits. This results in an improvement of the accuracy of analyzing the driving scenes.
As another example, the analyzing unit 26 can be configured to extract, as the analytical data, target driving data items for all the drivers in a specified driving situation, and execute, for example, a common statistical task that analyzes the driving behavior of each driver in the same specified driving situation.
The driving data analyzer 1 according to the first embodiment described in detail above obtains the following technical benefits.
Specifically, the driving data analyzer 1 is configured to extract, from the collected target driving data items of the drivers, common feature data that represents at least one latent feature included in the collected target driving data items using the feature model 30; the at least one latent feature is at least one common feature that is latent in the collected target driving data items independently of the drivers. Then, the driving data analyzer 1 is configured to analyze, based on the common feature data, at least information included in the collected target driving data items. This therefore enables a robust analyzed result, which is independent of the drivers, to be obtained.
Additionally, the driving data analyzer 1 is configured to use, as the feature model 30, a neural network comprised of the encoder 31, the intermediate layer 32 linked to the encoder 31, and the decoder 33 linked to the intermediate layer 32, in other words, comprised of a common encoder-decoder model to which the intermediate layer 32 has been added. The intermediate layer 32 is comprised of the common network 321 and the individual network 322.
The common network 321 is configured to
(1) Receive the cell state CEt+M, which is part of the output information from the encoder 31 and represents the latent features included in the target driving data items
(2) Output, to the decoder 33, the cell state CEt+M without change.
The individual network 322 is configured to
(1) Merge the output hEt+M, which represents the latent features included in the target driving data items and is another part of the output information from the encoder 31, with the identification vector bd
(2) Supply, to the decoder 33, the merged result
Training of the neural network based on a known training approach therefore enables the latent features independent of the drivers to be stored in the common network 321, and the latent features for each driver to be stored in the individual network 322.
The driving data analyzer 1 makes it possible to predict, from the collected driving data items for a specified driver indicated by the identification vector bd, a future behavior of the specified driver using the feature model 30.
For example, even if the identification vector bd represents a driver who is not correlated with the target driving data items, referred to as a non-target driver, the driving data analyzer 1 makes it possible to predict a future behavior of the non-target driver using the feature model 30. This therefore enables the target driving data items for all the drivers to be mutually compared with each other to thereby analyze the compared results.
Second Embodiment
The following describes the second embodiment of the present disclosure with reference to FIG. 5. A driving data analyzer 1A according to the second embodiment differs from the driving data analyzer 1 in the following points. So, the following mainly describes the different points of the driving data analyzer 1A according to the second embodiment, and omits or simplifies descriptions of like parts between the first and second embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
The first embodiment is configured such that the server 20 generates the analytical data, but the second embodiment is configured such that each in-vehicle device 10A generates the analytical data.
Referring to FIG. 5, the driving data analyzer 1A includes the in-vehicle units 10A respectively installed in the vehicles V1, . . . , Vn, and a server 20A communicable by radio with the in-vehicle units 10A.
The in-vehicle unit 10A installed in each of the vehicles V1 to Vn includes the data obtainer 11, a data transmitter 12 a, a model receiver 13, and a data extractor 14.
The server 20A includes a data collector 21 a, the driving data DB 22, the model generator 23, the analytical data DB 25, the analyzing unit 26, and a model transmitter 27.
The model transmitter 27 is configured to cyclically or periodically transmit, i.e. broadcast, to each in-vehicle device 10A, the feature model 30 each time the training of the feature model 30 is completed.
The model receiver 13 of each in-vehicle device 10A is configured to receive the feature model 30 each time the feature model 30 is transmitted from the server 20A.
The data extractor 14 is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 received by the model receiver 13; the analytical data is used for the analyzing unit 26 of the server 20A to analyze various phenomena or events associated with driving. The extraction task performed by the data extractor 14 is substantially identical to the extraction task performed by the data extractor 24 according to the first embodiment.
The data transmitter 12 a is configured to
(1) Receive the target driving data sequences and the ID data item output from the data obtainer 11
(2) Transmit the target driving data sequences and the ID data item to the server 20A
(3) Receive the analytical data extracted by the data extractor 14
(4) Transmit the analytical data to the server 20A
The data collector 21 a of the server 20A is configured to
(1) Collect the target driving data sequences, the ID data item, and the analytical data output from each of the in-vehicle devices 10A
(2) Store, in the driving data DB 22, the collected target driving data sequences and ID data item for each of the in-vehicle devices 10A such that the collected target driving data sequences for each of the in-vehicle devices 10A are correlated with the ID data item for the corresponding in-vehicle device 10A
(3) Store the analytical data in the analytical data DB 25
As described above, the driving data analyzer 1A according to the second embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first embodiment.
Specifically, the driving data analyzer 1A is configured such that each in-vehicle device 10A extracts, from the collected target driving data sequences, the analytical data, making it possible to reduce the processing load of the server 20A.
Third Embodiment
The following describes the third embodiment of the present disclosure with reference to FIG. 6. A driving data analyzer 1B according to the third embodiment differs from the driving data analyzer 1 in the following points. So, the following mainly describes the different points of the driving data analyzer 1B according to the third embodiment, and omits or simplifies descriptions of like parts between the first and third embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
The first embodiment is configured such that the analyzing unit 26 of the server 20 executes, based on the analytical data stored in the analytical data DB 25, various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
In contrast, the third embodiment is configured such that
(1) A server 20B transmits the analytical data to each in-vehicle device 10B
(2) Each in-vehicle device 10B performs various cruise assist tasks, i.e. various driving assist tasks, in accordance with the analytical data
Referring to FIG. 6, the driving data analyzer 1B includes the in-vehicle units 10B respectively installed in the vehicles V1, . . . , Vn, and the server 20B communicable by radio with the in-vehicle units 10B.
The in-vehicle unit 10B installed in each of the vehicles V1 to Vn includes the data obtainer 11, an analytical data receiver 15, and a driving assist executor 16.
The server 20B includes the data collector 21, the driving data DB 22, the model generator 23, the data extractor 24, the analytical data DB 25, and an analytical data transmitter 28 in place of the analyzing unit 26.
The analytical data transmitter 28 is configured to transmit, i.e. broadcast, to each in-vehicle device 10B, the analytical data stored in the analytical data DB 25.
The analytical data receiver 15 of each in-vehicle device 10B is configured to receive the analytical data.
The driving assist executor 16 of each in-vehicle device 10B is configured to
(1) Predict a future behavior of the driver and/or a future behavior of the corresponding vehicle
(2) Execute, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks to control at least one of target actuators TA installed in the corresponding vehicle, such as a steering motor, a driving motor if the corresponding vehicle is a hybrid vehicle or an electrical vehicle, a throttle valve, a brake actuator for each wheel, and/or a warning device
This at least one of the various driving assist tasks enables the corresponding vehicle to travel safely.
As described above, the driving data analyzer 1B according to the third embodiment obtains the following technical benefit in addition to the technical benefits obtained by the first embodiment.
Specifically, the driving data analyzer 1B is configured such that each in-vehicle device 10B executes, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks. This results in an improvement of the driving safety of each vehicle.
Fourth Embodiment
The following describes the fourth embodiment of the present disclosure with reference to FIG. 7. A driving data analyzer 1C according to the fourth embodiment differs from each of the driving data analyzers 1 to 1B in the following points. So, the following mainly describes the different points of the driving data analyzer 1C according to the fourth embodiment, and omits or simplifies descriptions of like parts between the fourth and each of the first to third embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
The first embodiment is configured such that the server 20 generates the analytical data, and executes, based on the analytical data stored in the analytical data DB 25, various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
In contrast, the fourth embodiment is configured such that each in-vehicle device 10C generates the analytical data, and performs various cruise assist tasks, i.e. various driving assist tasks, in accordance with the analytical data.
Referring to FIG. 7, the driving data analyzer 1C includes the in-vehicle units 10C respectively installed in the vehicles V1, . . . , Vn, and a server 20C communicable by radio with the in-vehicle units 10C.
The in-vehicle unit 10C installed in each of the vehicles V1 to Vn includes the data obtainer 11, the data transmitter 12, the model receiver 13, a data extractor 14, and the driving assist executor 16.
The server 20C includes the data collector 21, the driving data DB 22, the model generator 23, and the model transmitter 27.
The model transmitter 27 is configured to cyclically or periodically transmit, i.e. broadcast, to each in-vehicle device 10C, the feature model 30 each time the training of the feature model 30 is completed.
The model receiver 13 of each in-vehicle device 10C is configured to receive the feature model 30 each time the feature model 30 is transmitted from the server 20C.
The data extractor 14 is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 received by the model receiver 13; the analytical data is used for the driving assist executor 16 to execute various driving assist tasks. The extraction task performed by the data extractor 14 is substantially identical to the extraction task performed by the data extractor 24 according to the first embodiment.
The driving assist executor 16 of each in-vehicle device 10C is configured to
(1) Predict a future behavior of the driver and/or a future behavior of the corresponding vehicle
(2) Execute, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks to control at least one of the target actuators TA installed in the corresponding vehicle, such as a steering motor, a driving motor if the corresponding vehicle is a hybrid vehicle or an electrical vehicle, a throttle valve, a brake actuator for each wheel, and/or a warning device
This at least one of the various driving assist tasks enables the corresponding vehicle to travel safely.
As described above, the driving data analyzer 1C according to the fourth embodiment obtains the following technical benefit in addition to the technical benefits obtained by the first and third embodiments.
Specifically, the driving data analyzer 1C is configured such that each in-vehicle device 10C extracts, from the collected target driving data sequences, the analytical data, and executes, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks. This enables each in-vehicle device 10C to merely perform communications required to receive the feature model 30, making it possible to obtain
(1) Reduction in communication loads of the server 20C
(2) Continuous driving assist of the vehicles V1 to Vn even if a communication problem between the server 20C and each in-vehicle device 10C arises.
Modifications
The present disclosure is not limited to the descriptions of each of the first to fourth embodiments, and the descriptions of each of the first to fourth embodiments can be widely modified within the scope of the present disclosure.
Each of the first and second embodiments is configured such that the encoder 31 supplies the cell state CEt+M to the common network 321 as its input, and supplies the output hEt+M to the individual network 322 as its input. However, the present disclosure is not limited to this configuration.
Specifically, the encoder 31 can supply the cell state CEt+M to the individual network 322 as its input, and supply the output hEt+M to the common network 321 as its input. In addition, the encoder 31 can supply any one of the cell state CEt+M and the output hEt+M to each of the common network 321 and the individual network 322.
Each of the encoder 31 and the decoder 33 is comprised of a single LSTM layer or multiple LSTM layers, but can be comprised of, for example, a common RNN or a gated recurrent unit (GRU). Because the GRU does not have a cell state, the output of the GRU shows the latent features of an input target vector. For this reason, the encoder 31 using the GRU can be configured to send a predetermined number of elements of the output of the GRU to the common network 321, and send the remaining elements to the individual network 322.
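A sketch of this GRU variant, under the same illustrative assumptions, could split the single hidden vector element-wise, routing one slice to the common network and the rest to the individual network; the split point chosen here is arbitrary.

import torch.nn as nn

class GRUEncoder(nn.Module):
    """GRU encoder: no cell state, so the hidden vector is split instead."""
    def __init__(self, input_dim, hidden_dim, common_dim):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.common_dim = common_dim

    def forward(self, x):              # x: (batch, M, input_dim)
        _, h = self.gru(x)             # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        # First slice -> common network; remainder -> individual network.
        return h[:, :self.common_dim], h[:, self.common_dim:]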
Each of the first to fourth embodiments is configured to use, as the at least one external factor, the identity of the driver, i.e. the driver's own driving habits, of each vehicle, but the present disclosure is not limited thereto.
Specifically, the data obtainer 11 of each in-vehicle device 10 can obtain the identification data item indicative of another external factor that causes the driving data items to vary, such as
(1) The type of the corresponding vehicle
(2) The current weather in a region in which the corresponding vehicle is travelling
(3) The age of the driver of the corresponding vehicle
(4) The sex of the driver of the corresponding vehicle
The data obtainer 11 of each in-vehicle device 10 can obtain the identification data item indicative of a combination of the above external factors.
Each of the first to fourth embodiments is configured to
(1) Prepare target vector sequences within a predetermined period as training data items and future vector sequences based on the target vector sequences as supervised data items paired to the respective training data items
(2) Train the feature model 30 based on the target vector sequences and respectively paired supervised data items, to thereby cause the feature model 30 to output predicted sequential vectors
(3) Compare the supervised data items with the respective predicted sequential vectors
The present disclosure is however not limited to the above embodiments.
Specifically, the feature generator 23 can be configured to
(1) Prepare target vector sequences within a predetermined period as training data items and the same target vector sequences as supervised data items paired to the respective training data items
(2) Train the feature model 30 based on the target vector sequences and respectively paired supervised data items, to thereby cause the feature model 30 to output calculated target vector sequences
(3) Compare the supervised data items with the respective calculated target vector sequences.
The functions of one element in each of the first to fourth embodiments can be distributed as plural elements, and the functions that plural elements have can be combined into one element. At least part of the structure of each of the first to fourth embodiments can be replaced with a known structure having the same function as the at least part of the structure of the corresponding embodiment. A part of the structure of each of the first to fourth embodiments can be eliminated. At least part of the structure of each of the first to fourth embodiments can be added to or replaced with the structures of the other embodiments. All aspects included in the technological ideas specified by the language employed by the claims constitute embodiments of the present invention.
The present disclosure can be implemented by various embodiments in addition to the driving data analyzer; the various embodiments include driving data analyzing systems each including in-vehicle devices and a server, programs for serving a computer as each of the in-vehicle devices and the server, storage media storing the programs, and driving data analyzing methods.
While the illustrative embodiments of the present disclosure have been described herein, the present disclosure is not limited to the embodiments described herein, but includes any and all embodiments having modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alternations as would be appreciated by those having ordinary skill in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive.

Claims (12)

What is claimed is:
1. A driving data analyzer comprising:
a data collector configured to collect, from at least one vehicle, driving data sequences while each of the driving data sequences is correlated with identification data, each of the driving data sequences including sequential driving data items, each of the driving data items representing at least one of a driver's operation of the at least one vehicle and a behavior of the at least one vehicle based on the at least one of a driver's operation, the identification data representing a type of at least one external factor that contributes to variations in the driving data items; and
a feature extractor configured to apply a data compression network model to the driving data sequences to thereby extract, from the driving data sequences, at least one latent feature independently from the type of the at least one external factor, wherein the data compression network model comprises:
an encoder configured to encode the driving data sequences to output at least one encoded data item;
an intermediate layer configured such that the at least one encoded data item of the encoder and the identification data are input to the intermediate layer; and
a decoder configured to decode an output of the intermediate layer to thereby output decoded driving data sequences,
wherein the intermediate layer comprises:
a common network configured to:
receive at least a first part of the at least one encoded data item; and
directly output the at least first part of the at least one encoded data item to the decoder; and
an individual network configured to:
receive at least a second part of the at least one encoded data item; and
supply, to the decoder, an output including the at least second part of the at least one encoded data item to which the identification data is added.
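The claim language is dense, so the following editorial sketch may help: the model of claim 1 behaves like a conditional autoencoder whose bottleneck is split into a common branch, passed straight to the decoder, and an individual branch, concatenated with the identification data before decoding. PyTorch and every name and dimension below are assumptions for illustration only, not the patented implementation.

```python
# Illustrative sketch of the claimed data compression network model
# (PyTorch assumed; all names and sizes are hypothetical).
import torch
import torch.nn as nn

class DataCompressionNetwork(nn.Module):
    def __init__(self, data_dim=16, enc_dim=32, id_dim=4):
        super().__init__()
        self.encoder = nn.Linear(data_dim, enc_dim)
        # Common network: passes its share of the encoded item straight through.
        self.common = nn.Identity()
        # Individual network: receives its share plus the identification data.
        self.individual = nn.Linear(enc_dim // 2 + id_dim, enc_dim // 2)
        self.decoder = nn.Linear(enc_dim, data_dim)

    def forward(self, driving_data, identification):
        encoded = self.encoder(driving_data)
        first, second = encoded.chunk(2, dim=-1)          # split the encoded item
        latent = self.common(first)                       # output directly to decoder
        conditioned = self.individual(
            torch.cat([second, identification], dim=-1))  # identification data added
        decoded = self.decoder(torch.cat([latent, conditioned], dim=-1))
        return decoded, latent

model = DataCompressionNetwork()
x = torch.randn(8, 16)                            # toy driving data items
ident = torch.eye(4)[torch.randint(0, 4, (8,))]   # one-hot external-factor type
decoded, latent_feature = model(x, ident)
```

In this rendering, the returned `latent_feature` corresponds to the common network's output, which claim 3 designates as the extracted latent feature independent of the external-factor type.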
2. The driving data analyzer according to claim 1, further comprising:
a data analyzer configured to analyze, based on at least the at least one latent feature, the driving data sequences.
3. The driving data analyzer according to claim 1, wherein:
the feature extractor is configured to extract, from the driving data sequences, an output of the common network as the at least one latent feature.
4. The driving data analyzer according to claim 1, wherein:
the type of the at least one external factor includes at least first and second types of the at least one external factor; and
the feature extractor is configured to:
apply the data compression network model to the driving data sequences while the identification data that specifies one of the first and second types of the at least one external factor is input to the individual network to thereby extract, from the driving data sequences, an output of the individual network as at least one individual feature dependent on a specified one of the first and second types of the at least one external factor.
5. The driving data analyzer according to claim 1, wherein:
the type of the at least one external factor includes at least first and second types of the at least one external factor; and
the feature extractor is configured to:
apply the data compression network model to the driving data sequences while the identification data that specifies one of the first and second types of the at least one external factor is input to the individual network to thereby extract, from the driving data sequences, at least one individual feature dependent on the other of the first and second types of the at least one external factor.
6. The driving data analyzer according to claim 1, wherein:
the type of the at least one external factor includes at least first and second types of the at least one external factor; and
the feature extractor is configured to apply the data compression network model to the driving data sequences to thereby extract, from the driving data sequences, an output of the decoder as the at least one latent feature when the identification data specifies all the at least first and second types of the at least one external factor or specifies no types of the at least one external factor.
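Claims 4 to 6 differ only in which identification data is fed to the trained model. Continuing the hypothetical sketch shown after claim 1, and reusing its `model` and `x`, the distinction is a single argument:

```python
# Usage example reusing `model` and `x` from the sketch after claim 1
# (the one-hot type encodings remain editorial assumptions): the same
# trained network yields different features depending on the
# identification data supplied.
import torch

first_type = torch.tensor([[1., 0., 0., 0.]]).repeat(8, 1)
decoded_first, _ = model(x, first_type)  # claims 4 and 5: specify one type, so the
                                         # individual network's output carries the
                                         # type-dependent individual feature
no_type = torch.zeros(8, 4)
decoded_none, _ = model(x, no_type)      # claim 6: specify no type, so the decoder
                                         # output itself serves as the latent feature
```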
7. The driving data analyzer according to claim 1, wherein:
the feature extractor is configured to input, to the decoder, an initial value of zero to thereby start input of the driving data sequences to the encoder.
8. The driving data analyzer according to claim 1, further comprising:
a model trainer configured to train the data compression network model using training driving data sequences within a predetermined period until reconstructed driving data sequences output from the decoder are matched with the training driving data sequences.
9. The driving data analyzer according to claim 1, further comprising:
a model trainer configured to train the data compression network model using training driving data sequences within a predetermined first period, and next driving data sequences following the training driving data sequences within a predetermined second period as supervised driving data sequences that are paired to the respective training driving data sequences, until decoded predicted driving data sequences output from the decoder are respectively matched with the supervised driving data sequences.
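Claim 9 replaces claim 8's reconstruction pairing with a prediction pairing: inputs come from a first period, and the supervised targets are the sequences that follow within a second period. A hedged sketch of that windowing, with an assumed NumPy helper and hypothetical window lengths:

```python
# Sketch of claim 9's pairing (NumPy assumed; window lengths are hypothetical):
# sequences within a first period are inputs, and the sequences that follow
# them within a second period are the supervised targets.
import numpy as np

def make_prediction_pairs(driving_data, first_len=50, second_len=10):
    """Slice a long driving-data stream into (training, supervised) pairs."""
    inputs, targets = [], []
    step = first_len + second_len
    for start in range(0, len(driving_data) - step + 1, step):
        inputs.append(driving_data[start:start + first_len])
        targets.append(driving_data[start + first_len:start + step])
    return np.stack(inputs), np.stack(targets)

stream = np.random.randn(600, 16)        # toy driving data items over time
x_train, y_supervised = make_prediction_pairs(stream)
# Training would then minimize the mismatch between the decoder's predicted
# sequences and y_supervised, mirroring claim 8's matching criterion.
```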
10. The driving data analyzer according to claim 9, further comprising:
a data obtainer configured to obtain the driving data sequences and the identification data of a target vehicle that is the at least one vehicle in real time,
the feature extractor being configured to:
apply the data compression network model to the driving data sequences of the target vehicle to thereby extract, from the driving data sequences of the target vehicle, predicted driving data sequences; and
a vehicle controller configured to execute travelling control of the target vehicle in accordance with the extracted predicted driving data sequences.
11. The driving data analyzer according to claim 10, further comprising:
a server incorporating therein the data collector, the model trainer, and the feature extractor; and
at least one in-vehicle device installed in the at least one vehicle and incorporating therein the data obtainer and the vehicle controller,
wherein:
the server is configured by the data collector to collect, from the at least one in-vehicle device, the driving data sequences obtained by the data obtainer; and
the at least one in-vehicle device is configured to obtain, from the server, the predicted driving data sequences for the at least one vehicle extracted by the feature extractor.
12. The driving data analyzer according to claim 10, further comprising:
a server incorporating therein the data collector and the model trainer; and
at least one in-vehicle device installed in the at least one vehicle and incorporating therein the data obtainer, the vehicle controller, and the feature extractor,
wherein:
the server is configured by the data collector to collect, from the at least one in-vehicle device, the driving data sequences obtained by the data obtainer; and
the at least one in-vehicle device is configured to obtain, from the server, the data compression network model trained by the model trainer.
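Claims 11 and 12 describe two ways to split the same components between a server and an in-vehicle device: in claim 11 the server also runs the feature extractor and ships predicted sequences to the vehicle, whereas in claim 12 the server only trains and ships the trained model, and the vehicle extracts features locally. The toy sketch below illustrates the claim 12 split; all class names and the message flow are editorial assumptions, not an API from the patent.

```python
# Hypothetical sketch of the deployment split in claims 11 and 12.
class Server:
    """Hosts the data collector and the model trainer (claims 11 and 12)."""
    def __init__(self):
        self.collected = []

    def collect(self, sequences):
        self.collected.append(sequences)        # data collector role

    def train(self):
        # Stand-in for training the data compression network model.
        return "trained data compression network model"


class InVehicleDevice:
    """Hosts the data obtainer and the vehicle controller."""
    def __init__(self, server):
        self.server = server
        self.local_model = None

    def upload(self):
        self.server.collect([[0.1, 0.2], [0.3, 0.4]])   # toy driving data items

    def fetch_model(self):
        # Claim 12: obtain the trained model and extract features locally;
        # under claim 11 the device would instead fetch predicted sequences.
        self.local_model = self.server.train()


server = Server()
device = InVehicleDevice(server)
device.upload()
device.fetch_model()
```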
US16/158,865 2017-10-13 2018-10-12 Driving data analyzer Active 2039-03-30 US11061911B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2017-199379 2017-10-13
JP2017199379A JP7053213B2 (en) 2017-10-13 2017-10-13 Operation data analysis device

Publications (2)

Publication Number Publication Date
US20190114345A1 (en) 2019-04-18
US11061911B2 (en) 2021-07-13

Family

ID=66097483

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/158,865 Active 2039-03-30 US11061911B2 (en) 2017-10-13 2018-10-12 Driving data analyzer

Country Status (2)

Country Link
US (1) US11061911B2 (en)
JP (1) JP7053213B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6937658B2 (en) * 2017-10-17 2021-09-22 日立Astemo株式会社 Predictive controller and method
US10832140B2 (en) * 2019-01-30 2020-11-10 StradVision, Inc. Method and device for providing information for evaluating driving habits of driver by detecting driving scenarios occurring during driving
KR20210026112A (en) * 2019-08-29 2021-03-10 주식회사 선택인터내셔날 Detecting method for using unsupervised learning and apparatus and method for detecting vehicle theft using the same
US11498575B2 (en) * 2019-08-29 2022-11-15 Suntech International Ltd. Unsupervised learning-based detection method and driver profile- based vehicle theft detection device and method using same
CN112677983B (en) * 2021-01-07 2022-04-12 浙江大学 System for recognizing driving style of driver
CN117422169B (en) * 2023-10-18 2024-06-25 酷哇科技有限公司 Vehicle insurance user driving behavior analysis and prediction method and device based on causal intervention

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013250663A (en) 2012-05-30 2013-12-12 Denso Corp Driving scene recognition device
US9495874B1 (en) * 2012-04-13 2016-11-15 Google Inc. Automated system and method for modeling the behavior of vehicles and other agents
US10156848B1 (en) * 2016-01-22 2018-12-18 State Farm Mutual Automobile Insurance Company Autonomous vehicle routing during emergencies

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2940042B2 (en) * 1990-01-23 1999-08-25 日産自動車株式会社 Vehicle control strategy device
JP3902543B2 (en) 2002-12-17 2007-04-11 本田技研工業株式会社 Road traffic simulation device
JP2009073465A (en) 2007-08-28 2009-04-09 Fuji Heavy Ind Ltd Safe driving support system
JP2009234442A (en) 2008-03-27 2009-10-15 Equos Research Co Ltd Driving operation support device
JP5856387B2 (en) 2011-05-16 2016-02-09 トヨタ自動車株式会社 Vehicle data analysis method and vehicle data analysis system
WO2016170785A1 (en) 2015-04-21 2016-10-27 パナソニックIpマネジメント株式会社 Information processing system, information processing method, and program
US10345449B2 (en) 2016-12-02 2019-07-09 Verizon Connect Ireland Limited Vehicle classification using a recurrent neural network (RNN)
JP6579495B2 (en) 2017-03-29 2019-09-25 マツダ株式会社 Vehicle driving support system

Also Published As

Publication number Publication date
US20190114345A1 (en) 2019-04-18
JP2019074849A (en) 2019-05-16
JP7053213B2 (en) 2022-04-12

Similar Documents

Publication Publication Date Title
US11061911B2 (en) Driving data analyzer
Hanselmann et al. CANet: An unsupervised intrusion detection system for high dimensional CAN bus data
CN108372857B (en) Efficient context awareness by event occurrence and episode memory review for autonomous driving systems
CN107697070B (en) Driving behavior prediction method and device and unmanned vehicle
Mohammadi et al. Future reference prediction in model predictive control based driving simulators
JP2022089806A (en) Modeling operation profiles of vehicle
CN112396254A (en) Destination prediction method, destination prediction device, destination prediction medium, and electronic device
CN118282780B (en) New energy automobile vehicle-mounted network intrusion detection method, equipment and storage medium
US20230177241A1 (en) Method for determining similar scenarios, training method, and training controller
CN111461426A (en) High-precision travel time length prediction method based on deep learning
CN116415200A (en) Abnormal vehicle track abnormality detection method and system based on deep learning
Khodayari et al. Improved adaptive neuro fuzzy inference system car‐following behaviour model based on the driver–vehicle delay
CN112559585A (en) Traffic space-time sequence single-step prediction method, system and storage medium
CN115909239A (en) Vehicle intention recognition method and device, computer equipment and storage medium
CN117376920A (en) Intelligent network connection automobile network attack detection, safety state estimation and control method
CN116467615A (en) Clustering method and device for vehicle tracks, storage medium and electronic device
KR102196027B1 (en) LSTM-based steering behavior monitoring device and its method
CN112462759B (en) Evaluation method, system and computer storage medium of rule control algorithm
CN118171723A (en) Method, device, equipment, storage medium and program product for deploying intelligent driving strategy
CN111104953A (en) Driving behavior feature detection method and device, electronic equipment and computer-readable storage medium
CN112164223B (en) Intelligent traffic information processing method and device based on cloud platform
CN113078974A (en) Method for neural network sparse channel generation and inference
CN116989818B (en) Track generation method and device, electronic equipment and readable storage medium
Zeng et al. Towards building reliable deep learning based driver identification systems
RU2724596C1 (en) Method, apparatus, a central device and a system for recognizing a distribution shift in the distribution of data and / or features of input data

Legal Events

Date Code Title Description
FEPP Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
AS Assignment
Owner name: DENSO CORPORATION, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MISAWA, HIDEAKI;TAKENAKA, KAZUHITO;TANIGUCHI, TADAHIRO;SIGNING DATES FROM 20181026 TO 20181029;REEL/FRAME:047871/0206
Owner name: THE RITSUMEIKAN TRUST, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MISAWA, HIDEAKI;TAKENAKA, KAZUHITO;TANIGUCHI, TADAHIRO;SIGNING DATES FROM 20181026 TO 20181029;REEL/FRAME:047871/0206
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP Information on status: patent application and granting procedure in general
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED
STPP Information on status: patent application and granting procedure in general
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF Information on status: patent grant
Free format text: PATENTED CASE