US11061911B2 - Driving data analyzer - Google Patents
- Publication number
- US11061911B2 (application US16/158,865)
- Authority
- US
- United States
- Prior art keywords
- data
- driving data
- driving
- vehicle
- sequences
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3082—Vector coding
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W40/09—Driving style or behaviour
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/06—Improving the dynamic response of the control system, e.g. improving the speed of regulation or avoiding hunting or overshoot
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3068—Precoding preceding compression, e.g. Burrows-Wheeler transformation
- H03M7/3079—Context modeling
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/30—Driving style
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
Definitions
- the present disclosure relates to technologies for analyzing driving data of a vehicle.
- a driving assist system aims to assist a driver's driving of a vehicle according to various driving situations.
- the driving assist system suitably analyzes the driving situations of an own vehicle.
- Japanese Patent Application Publication No. 2013-250663 discloses a technology that collects a large amount of various items of data associated with driver's driving operations and behaviors of an own vehicle as a driving data group. This technology then analyzes the driving data group to extract driving scenes, each representing similar driving situations, thus performing driving assistance for the driver in accordance with each of the driving scenes.
- drivers may not necessarily perform the same driving operations, i.e. driving actions, in the same driving scene. For example, when driving an own vehicle to go around a curve, one driver may sufficiently decelerate the own vehicle immediately before the curve and only thereafter turn the steering wheel, while another driver may turn the steering wheel at the same time as decelerating the own vehicle.
- various types of vehicles have individually different travelling characteristics for each type. These driving habits and travelling characteristics are reflected in the driving data group.
- a first driving scene in the extracted driving scenes may be different from a second driving scene in the extracted driving scenes although the first and second driving scenes represent the same or similar driving situations. For example, if different drivers drive the same type of vehicle on the same road, the driving scenes for the different drivers may be extracted as different driving scenes. In addition, if the same driver drives different types of vehicles on the same road, the driving scenes for the different types of vehicles may be extracted as different driving scenes.
- one aspect of the present disclosure seeks to provide technologies, each of which is capable of analyzing a driving data group independently of such external factors.
- the driving data analyzer includes a data collector that collects, from at least one vehicle, driving data sequences while each of the driving data sequences is correlated with identification data.
- Each driving data sequence includes sequential driving data items, and each driving data item represents at least one of a driver's operation of the at least one vehicle and a behavior of the at least one vehicle based on the driver's operation.
- the identification data represents a type of at least one external factor that contributes to variations in the driving data items.
- the driving data analyzer includes a feature extractor that applies a data compression network model to the driving data sequences to thereby extract, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor.
- This configuration of the driving data analyzer extracts, from the driving data sequences, at least one latent feature independently of the type of the at least one external factor. This therefore enables a robust data analysis based on the at least one latent feature to be carried out.
- FIG. 1 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the first embodiment of the present disclosure;
- FIG. 2 is a block diagram schematically illustrating an example of the internal structure of a feature model according to the first embodiment;
- FIG. 3 is a block diagram schematically illustrating an example of how each of an encoder and a decoder of the individual network illustrated in FIG. 2 is operated;
- FIG. 4 is a graph schematically illustrating an example of relationships among a target driving data sequence of a driver, a predicted data sequence for another driver, and a predicted data sequence for the former driver;
- FIG. 5 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the second embodiment of the present disclosure;
- FIG. 6 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the third embodiment of the present disclosure;
- FIG. 7 is a block diagram schematically illustrating an example of the functional structure of a driving data analyzer according to the fourth embodiment of the present disclosure.
- FIG. 8 is a block diagram schematically illustrating multiple LSTM layers of an encoder comprised of the multiple LSTM layers; and
- FIG. 9 is a graph schematically illustrating an example of a training data sequence.
- the following describes a driving data analyzer 1 according to the first embodiment of the present disclosure with reference to FIGS. 1 to 4 .
- the driving data analyzer 1 includes in-vehicle units 10 respectively installed in a plurality of vehicles V 1 , . . . , Vn, and a server 20 communicable by radio with the in-vehicle units 10 .
- the in-vehicle units 10 serve as mobile terminals at least partly provided in the respective vehicles V 1 to Vn.
- Each of the in-vehicle units 10 is configured mainly as at least one known microcomputer including a CPU, i.e. a processor, 10 a , a memory device 10 b , and an input unit 10 c .
- the memory device 10 b includes, for example, at least one of semiconductor memories, such as a RAM, a ROM, and a flash memory. These semiconductor memories include at least one non-transitory computer-readable storage medium.
- each in-vehicle unit 10 can run one or more programs, i.e. sets of program instructions, stored in the memory device 10 b , thus implementing various functions of the in-vehicle unit 10 as software operations.
- the CPU 10 a can run programs stored in the memory device 10 b , thus performing one or more methods in accordance with the corresponding one or more programs.
- At least one of the various functions of at least one in-vehicle unit 10 can be implemented as a hardware electronic circuit.
- the various functions of at least one in-vehicle unit 10 can be implemented by a combination of electronic circuits including digital circuits, which include many logic gates, analog circuits, digital/analog hybrid circuits, or hardware/software hybrid circuits.
- the input unit 10 c of each in-vehicle unit 10 enables a driver of the corresponding vehicle to enter various commands and/or various data items to the CPU 10 a of the corresponding in-vehicle unit 10 .
- the server 20 is configured mainly as at least one known microcomputer including a CPU, i.e. a processor, 20 a and a memory device 20 b .
- the memory device 20 b includes, for example, at least one of semiconductor memories, such as a RAM, a ROM, and a flash memory. These semiconductor memories include at least one non-transitory computer-readable storage medium.
- the CPU 20 a of the server 20 can run one or more programs, i.e. program instructions, stored in the memory device 20 b , thus implementing various functions of the server 20 as software operations.
- the CPU 20 a can run programs stored in the memory device 20 b , thus performing one or more routines in accordance with the corresponding one or more programs.
- At least one of the various functions of the server 20 can be implemented as a hardware electronic circuit.
- the various functions of at least one server 20 can be implemented by a combination of electronic circuits including digital circuits, which include many logic gates, analog circuits, digital/analog hybrid circuits, or hardware/software hybrid circuits.
- the in-vehicle unit 10 installed in each of the vehicles V 1 to Vn includes a data obtainer 11 and a data transmitter 12 .
- the CPU 10 a of the in-vehicle unit 10 runs corresponding one or more programs stored in the memory device 10 b , thus implementing the functional modules 11 and 12 .
- the data obtainer 11 is communicably connected to sensors, i.e. in-vehicle sensors, SS installed in the corresponding vehicle. Note that the data obtainer 11 can include the in-vehicle sensors SS.
- the in-vehicle sensors SS include a first type of sensors each repeatedly measuring a driving data item including at least one of
- Operation data item D 1 indicative of driver's operations of at least one of driver-operable devices installed in the corresponding vehicle
- the operation data items D 1 include
- the behavioral data items D 2 include
- the in-vehicle sensors SS can include a second type of sensors, such as a radar sensor, an image sensor, and a weather sensor, each repeatedly measuring a situation data item D 3 that is useful in specifying a driving situation of the corresponding vehicle, which includes at least one of
- An image data item including an image of a region, such as a front region, located around the corresponding vehicle captured by the image sensor, which is one of the second type of sensors, installed to the corresponding vehicle
- a road attribute item indicative of the attribute of a road on which the corresponding vehicle is travelling such as a straight road, a curved road, an expressway, and/or an ordinary road, which can be expressed from map data and the image data item
- Weather information indicative of a weather condition, such as a fine, i.e. sunny, condition, a cloudy condition, or a rainy condition, around the corresponding vehicle
- the data obtainer 11 obtains, from the first and second types of sensors, a driving data group including the operation data items D 1 , behavioral data items D 2 , and situation data items D 3 in a predetermined measurement cycle. Then, the data obtainer 11 sequentially outputs, to the data transmitter 12 , the driving data groups measured in the measurement cycle as target driving data sequences.
- the data obtainer 11 also obtains an identification data item indicative of the type of at least one external factor that contributes to variations in the driving data items.
- the first embodiment uses, as the at least one external factor, the identity of a driver of the corresponding vehicle. That is, the type of the at least one external factor for each of the vehicles V 1 to Vn represents a corresponding driver.
- Unique identification data (ID) items D 4 are for example assigned to respective drivers that can use at least one of the vehicles V 1 to Vn according to the first embodiment.
- when a driver uses one of the vehicles V 1 to Vn, the driver operates the input unit 10 c of the corresponding one of the in-vehicle units 10 to thereby enter the driver's ID data item D 4 to the CPU 10 a , and the CPU 10 a stores the entered driver's ID data item D 4 in the memory device 10 b.
- unique ID data items can be assigned to respective keys prepared for each of the vehicles V 1 to Vn, and when a driver inserts an assigned key for one of the vehicles V 1 to Vn into an ignition lock of one of the vehicles V 1 to Vn, the CPU 10 a , which is communicable with the ignition lock, reads the ID data item assigned to the inserted key.
- ID data items of drivers who have the authority to use a selected vehicle in the vehicles V 1 to Vn are recorded beforehand in the memory device 10 b of the in-vehicle unit 10 of the selected vehicle. Then, when a driver who has the authority to use the selected vehicle enters information about him or her, the CPU 10 a can receive the information, and extract one of the recorded ID data items; the extracted ID data item matches with the entered information.
- the data obtainer 11 outputs, to the data transmitter 12 , the ID data item of a current driver of the corresponding vehicle.
- the data transmitter 12 is configured to transmit, to the server 20 , the target driving data sequences and the ID data item output from the data obtainer 11 .
- the data transmitter 12 can be configured to indirectly transmit the target driving data sequences and the ID data item to the server 20 via infrastructures provided on, for example, roadsides.
- the data transmitter 12 can be comprised of a mobile communicator, such as a cellular phone, which can communicate with the server via radio communication networks.
- the server 20 serves as a fixed station directly communicable or indirectly communicable via infrastructures or the radio communication networks with the vehicles V 1 to Vn, i.e. their in-vehicle devices 10 .
- the server 20 includes a data collector 21 , a driving data database (DB) 22 , a model generator 23 , a data extractor 24 , an analytical data DB 25 , and an analyzing unit 26 .
- the CPU 20 a of the server 20 executes one or more corresponding programs stored in the memory device 20 b , thus implementing the functional modules 21 , 23 , 24 , and 26 .
- the memory device 20 b can include predetermined storage areas allocated to serve as the respective driving data DB 22 and analytical data DB 25 , or include at least one mass storage, such as a semiconductor memory or a magnetic memory, such as a hard disk drive, that serves as the driving data DB 22 and analytical data DB 25 .
- the data collector 21 collects the target driving data sequences and the ID data item output from each of the in-vehicle devices 10 , and stores, in the driving data DB 22 , the collected target driving data sequences and ID data item for each of the in-vehicle devices 10 such that the collected target driving data sequences for each of the in-vehicle devices 10 are correlated with the ID data item for the corresponding in-vehicle device 10 .
- the target driving data sequences include sequential sets of the operational data items D 1 , the behavior data items D 2 , and the situation data items D 3 . That is, at time (t−1), a set of the operational data items D 1 , the behavior data items D 2 , and the situation data items D 3 , which are obtained by the data obtainer 11 , is collected by the data collector 21 . After the lapse of one measurement cycle since the time (t−1), a next set of the operational data items D 1 , the behavior data items D 2 , and the situation data items D 3 , which are obtained by the data obtainer 11 , is collected by the data collector 21 at time t.
- the target vector sequences are stored in the driving data DB 22 .
- the data collector 21 is also configured to transform the ID data item for each of the vehicles V 1 to Vn, i.e. each of the drivers, into a multidimensional target vector whose elements each have a value of 0 or 1; the number of dimensions of the multidimensional target vector corresponds to the number of drivers of the respective vehicles V 1 to Vn. For example, if the number of vehicles V 1 to Vn is four, so that the number of drivers is four, the respective ID data items are expressed as an identification vector b 1 (1, 0, 0, 0), an identification vector b 2 (0, 1, 0, 0), an identification vector b 3 (0, 0, 1, 0), and an identification vector b 4 (0, 0, 0, 1). That is, any one of the drivers can be expressed as an identification vector bd in which d identifies the corresponding driver.
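The one-hot identification vectors described above can be sketched as follows; this is a minimal illustration, and the function name and driver indexing are assumptions, not part of the patent:

```python
# Hypothetical sketch of building the identification vector bd: a driver with
# index d is encoded as a vector whose d-th element is 1 and the rest are 0.
def identification_vector(driver_index, num_drivers):
    vec = [0] * num_drivers
    vec[driver_index] = 1
    return vec

# With four vehicles/drivers, the four ID data items map to b1..b4.
b1 = identification_vector(0, 4)  # [1, 0, 0, 0]
b3 = identification_vector(2, 4)  # [0, 0, 1, 0]
```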
- the model generator 23 is configured to, for example, perform, at every regular or irregular interval, a training task that trains a feature model 30 , such as a data compression model or a dimensionality reduction model, in accordance with the target vector sequences to thereby generate a trained feature model 30 .
- the feature model 30 includes an encoder 31 , an intermediate layer, i.e. a driver translation layer, 32 , and a decoder 33 . That is, the feature model 30 is designed as a neural network comprised of a common encoder-decoder model to which the intermediate layer (driver translation layer) 32 has been added.
- the encoder 31 is configured to extract latent features, i.e. essential features, from actual input data, and the decoder 33 is configured to use the extracted latent features to thereby reconstruct or predict input data.
- LSTM: long short-term memory
- RNN: recurrent neural network
- the LSTM layer has a memory cell, and the state of the memory cell, which will be referred to as a cell state, and an output in the LSTM layer of the encoder 31 at time t will be respectively expressed as CE t and hE t .
- the cell state and output in the LSTM layer of the decoder 33 at time t will be respectively expressed as CD t and hD t .
- each of the cell state CE t and output hE t shows latent features of the input target vector at time t.
- the encoder 31 includes a single LSTM layer comprised of sequentially connected LSTM nodes 31 a 1 to 31 a M. Note that, in order to easily understand the operations of the encoder 31 , the single LSTM layer is illustrated as the temporally developed LSTM nodes 31 a 1 to 31 a M.
- When M target vectors x t+1 to x t+M , i.e. measured driving data items, obtained within a specified input period from time (t+1) to time (t+M) inclusive are sequentially input to the encoder 31 , the first LSTM node 31 a 1 generates the cell state CE t+1 and output hE t+1 in accordance with the target vector x t+1 when the target vector x t+1 is input thereto, and supplies the cell state CE t+1 and output hE t+1 to the next LSTM node 31 a 2 .
- the interval between the time (t+k) and the time (t+k+1) adjacent thereto is set to one measurement cycle; k is any non-negative integer.
- the second LSTM node 31 a 2 generates the cell state CE t+2 and output hE t+2 in accordance with the cell state CE t+1 , output hE t+1 , and target vector x t+2 when the target vector x t+2 is input thereto, and supplies the cell state CE t+2 and output hE t+2 to the next LSTM node 31 a 3 .
- when the target vector x t+M is input to the M-th LSTM node 31 a M, the M-th LSTM node 31 a M generates the cell state CE t+M and output hE t+M in accordance with the cell state CE t+M−1 , output hE t+M−1 , and target vector x t+M , and supplies the cell state CE t+M and output hE t+M to the intermediate layer 32 .
- the encoder 31 encodes the M target vectors x t+1 to x t+M to thereby output, as encoded, i.e. compressed, data item, the cell state CE t+M and output hE t+M .
- the number of dimensions of each of the cell state CE t+M and output hE t+M is smaller than the total number of dimensions of the multidimensional target vectors x t+1 to x t+M , so that each of the cell state CE t+M and output hE t+M represents the latent features, i.e. common important features, included in the multidimensional target vectors x t+1 to x t+M .
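The encoder recursion above can be sketched as a toy LSTM that folds M input vectors into a single low-dimensional cell state and output. This is a deliberately simplified, scalar-gated cell with placeholder weights, an illustrative assumption rather than the patent's trained parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=0.1, u=0.2, b=0.0):
    # Toy per-dimension gating; a real LSTM layer uses weight matrices over
    # the concatenated [x, h_prev] and separate parameters per gate.
    h, c = [], []
    for i in range(len(h_prev)):
        z = w * x[i % len(x)] + u * h_prev[i] + b
        f = sigmoid(z)        # forget gate
        in_g = sigmoid(z)     # input gate
        g = math.tanh(z)      # candidate cell value
        o = sigmoid(z)        # output gate
        ci = f * c_prev[i] + in_g * g
        c.append(ci)
        h.append(o * math.tanh(ci))
    return h, c

def encode(sequence, hidden_dim=2):
    # Sequentially consume the M target vectors x_{t+1} .. x_{t+M}; the final
    # (hE, CE) pair is the compressed representation of the whole sequence.
    h = [0.0] * hidden_dim
    c = [0.0] * hidden_dim
    for x in sequence:
        h, c = lstm_step(x, h, c)
    return h, c  # hE_{t+M}, CE_{t+M}

hE, CE = encode([[0.5, 1.0], [0.3, 0.8], [0.6, 0.9]])
```

Because the hidden dimension is fixed regardless of M, the output size stays constant even as the input sequence grows, which is the compression property the patent relies on.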
- the encoder 31 can be comprised of multi LSTM layers.
- FIG. 8 schematically illustrates an example of an encoder 31 X having such multiple LSTM layers. Specifically, FIG. 8 illustrates a first layer of LSTM nodes 31 a 11 to 31 a M 1 that are sequentially connected and to which the M target vectors x t+1 to x t+M are respectively input. FIG. 8 also illustrates a second layer of LSTM nodes 31 a 12 to 31 a M 2 that are sequentially connected, and are respectively connected to the LSTM nodes 31 a 11 to 31 a M 1 .
- the LSTM node 31 a M 2 supplies, based on the cell state and output sent from the LSTM node 31 a (M ⁇ 1) 2 and the cell state and output sent from the LSTM node 31 a M 1 , the cell state CE t+M and output hE t+M to the intermediate layer 32 .
- the intermediate layer 32 is configured as, for example, a neural network to which the cell state CE t+M and output hE t+M sent from the encoder 31 , and the identification vector bd, are input.
- the intermediate layer 32 is comprised of two partial networks, that is, a common network 321 and an individual network 322 .
- the common network 321 is configured to receive the cell state CE t+M , and output, to the decoder 33 , the cell state CE t+M , i.e. the latent features included in the target vectors x t+1 to x t+M , without change as an initial cell state CD t+M for the decoder 33 .
- the individual network 322 includes a single dense layer, i.e. a single fully connected layer, or multiple dense layers, i.e. multiple fully connected layers.
- the parameters, which include, for example, connection weights and/or connection biases between nodes of the different layers, as well as parameters of the LSTM layer, have been trained and can be repeatedly trained to increase the analysis accuracy of the driving data analyzer 1 . How to train the parameters included in the feature model 30 will be described later.
- the individual network 322 merges the output hE t+M , i.e. the latent features included in the target vectors x t+1 to x t+M , with the identification vector bd to thereby output a merged result indicative of the latent features included in the target vectors x t+1 to x t+M , which are correlated with the identification vector bd.
- the merged result or its part is input to the decoder 33 as the initial cell state CD t+M for the decoder 33 , and the merged result or its remaining part is output as a latent feature vector Bd of the driver specified by the identification vector bd.
- a merge network configured to merge the output hE t+M with the identification vector bd in accordance with a predetermined rule can be used as the individual network 322 .
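The merge performed by the individual network 322 can be sketched as a concatenation followed by one fully connected layer, whose output is split into the decoder's initial state and the driver's latent feature vector Bd. The weights, dimensions, and split point below are illustrative assumptions only:

```python
# Hedged sketch of the individual network 322: concatenate hE_{t+M} with the
# identification vector bd, apply a dense (fully connected) layer, then split
# the result into a decoder-initialization part and the latent vector Bd.
def dense(inputs, weights, bias):
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, bias)]

def individual_network(hE, bd, weights, bias, split):
    merged = dense(hE + bd, weights, bias)   # list concatenation = merge
    return merged[:split], merged[split:]    # (decoder init part, Bd part)

hE = [0.4, -0.2]
bd = [0, 1, 0, 0]                            # identification vector b2
W = [[0.1] * 6 for _ in range(4)]            # placeholder weights: 6 -> 4
init_part, Bd = individual_network(hE, bd, W, [0.0] * 4, split=2)
```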
- the decoder 33 for example includes a single LSTM layer comprised of sequentially connected LSTM nodes 33 a 1 to 33 a N.
- the single LSTM layer is comprised of the temporally developed LSTM nodes 33 a 1 to 33 a N.
- the cell state CD t+M and the output hE t+M are provided as initial values.
- an initial value of 0 is input to the first LSTM 33 a 1 as its input.
- the decoder 33 When the cell state CD t+M , the output hE t+M , and the initial value of 0 are input to the first LSTM 33 a 1 , the decoder 33 performs a decoding operation N times to thereby sequentially output predicted sequential vectors y t+M+1 to y t+M+N within a specified prediction period from the time (t+M+1) to time (t+M+N) that sequentially follow the specified input period from the time (t+1) to the time (t+M).
- based on the cell state CD t+M , output hD t+M , and the input of the initial value of 0, the first LSTM node 33 a 1 generates, at the first decoding operation, the cell state CD t+M+1 , the output hD t+M+1 , and the predicted sequential vector y t+M+1 .
- the first LSTM node 33 a 1 supplies the cell state CD t+M+1 and the output hD t+M+1 to the next LSTM node 33 a 2 , and also supplies the predicted sequential vector y t+M+1 to the next LSTM node 33 a 2 as its input.
- based on the cell state CD t+M+1 , output hD t+M+1 , and the input of the predicted sequential vector y t+M+1 , the second LSTM node 33 a 2 generates, at the second decoding operation, the cell state CD t+M+2 , the output hD t+M+2 , and the predicted sequential vector y t+M+2 .
- the second LSTM node 33 a 2 supplies the cell state CD t+M+2 and the output hD t+M+2 to the next LSTM node 33 a 3 , and also supplies the predicted sequential vector y t+M+2 to the next LSTM node 33 a 3 as its input.
- the third LSTM node 33 a 3 to the (N−1)-th LSTM node 33 a (N−1) sequentially perform their decoding operations in the same manner as the first and second LSTM nodes 33 a 1 and 33 a 2 .
- the N-th LSTM node 33 a N generates, at the N-th decoding operation, the predicted sequential vector y t+M+N .
- the N-th LSTM node 33 a N outputs the predicted sequential vector y t+M+N .
- the decoder 33 is configured to decode the cell state CD t+M , output hD t+M , and the input of the initial value of 0 into the predicted sequential vectors y t+M+1 to y t+M+N in response to the input data sequential vectors x t+1 to x t+M .
- the features of the predicted sequential vector y t+M+1 output from the first LSTM node 33 a 1 are embedded in the latent features CD t+M+2 and hD t+M+2 of the second LSTM node 33 a 2 ,
- the features of the predicted sequential vector y t+M+2 output from the second LSTM node 33 a 2 are embedded in the latent features CD t+M+3 and hD t+M+3 of the third LSTM node 33 a 3 , . . .
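The decoder's autoregressive loop described above can be sketched as follows: starting from an input of 0, each step's predicted vector is fed back as the next step's input, so each prediction is embedded in the following node's states. The `toy_step` function is a stand-in for a trained LSTM node, not the patent's model:

```python
# Sketch of the N decoding operations: roll a step function forward, feeding
# each predicted vector y_{t+M+k} back in as the next input.
def decoder_rollout(step, h0, c0, n_steps, input_dim):
    h, c = h0, c0
    y = [0.0] * input_dim          # initial input of 0 to the first node
    predictions = []
    for _ in range(n_steps):
        h, c, y = step(y, h, c)    # prediction becomes the next input
        predictions.append(y)
    return predictions             # y_{t+M+1} .. y_{t+M+N}

# Toy step: averages state and input, echoes the new state as the prediction.
def toy_step(x, h, c):
    h2 = [0.5 * (hi + xi) for hi, xi in zip(h, x)]
    c2 = [0.9 * ci for ci in c]
    return h2, c2, h2

preds = decoder_rollout(toy_step, [1.0, 1.0], [0.0, 0.0], n_steps=3, input_dim=2)
```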
- the decoder 33 can be comprised of multi LSTM layers.
- the model generator 23 trains the feature model 30 if the feature model 30 has been untrained or if it is determined that the feature model 30 should be trained.
- the model generator 23 generates, based on the target driving data sequences, i.e. target vector sequences, stored in the driving data DB 22 , training data items for the individual drivers, i.e. for the individual in-vehicle devices 10 , and supervised data items that are respectively paired to the training data items.
- the model generator 23 reads, from the driving data DB 22 , the target vectors x t+1 to x t+M obtained within the specified period from the time (t+1) to the time (t+M) inclusive for each driver, used as the training data items.
- the model generator 23 reads, from the driving data DB 22 , the target vectors x t+M+1 to x t+M+N obtained within the specified period from the time (t+M+1) to the time (t+M+N) inclusive for each driver, used as the supervised data items.
- Note that, although FIG. 9 merely illustrates how the vehicle's speed, as one element of the target vectors, varies over time, each of the other driving data items likewise varies as a corresponding element of the target vectors.
- the model generator 23 trains the feature model 30 using the training data items and the supervised data items that are respectively paired to the training data items.
- the model generator 23 performs a training task of inputting the training data items to the feature model 30 and calculating a square error between the resultant predicted sequential vectors and the corresponding supervised data items.
- the model generator 23 repeats the training task while changing values of the parameters of the feature model 30 until the square error calculated for the current training task has reached a predetermined minimum value. This enables the parameters of the feature model 30 to be trained.
- That is, the model generator 23 repeats the training task for the single neural network comprised of the encoder 31 , the intermediate layer 32 , and the decoder 33 of the feature model 30 while changing values of all the parameters of the single neural network until the square error calculated for the current training task has reached the predetermined minimum value. This enables all the parameters of the feature model 30 to be collectively trained.
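The repeated error-minimizing loop can be illustrated with a one-parameter stand-in model: gradient descent on a squared error between predictions and supervised targets. This is only a sketch of the training principle, not the patent's full network update:

```python
# Toy version of the training task: adjust a single weight w so that the
# squared error between predicted values w*x and supervised targets shrinks.
def squared_error(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target))

def train(inputs, targets, w=0.0, lr=0.01, epochs=200):
    for _ in range(epochs):
        # gradient of sum((w*x - t)^2) with respect to w
        grad = sum(2 * (w * x - t) * x for x, t in zip(inputs, targets))
        w -= lr * grad
    final_err = squared_error([w * x for x in inputs], targets)
    return w, final_err

# Supervised data follows t = 2x, so training should drive w toward 2.
w, err = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```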
- the training algorithm for training the feature model 30 schematically described above is what is called a back propagation through time (BPTT) algorithm, but the training algorithm for training the feature model 30 is not limited to the BPTT algorithm.
- BPTT: back propagation through time
- the feature generator 23 can be configured to perform a known sliding window algorithm for the target driving data sequences, i.e. target vector sequences stored in the driving data DB 22 , to thereby extract training data items and supervised data items that are respectively paired to the training data items. Then, the feature generator 23 can be configured to use the pairs of the training data items and supervised data items as a training data set.
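The sliding-window extraction of paired training and supervised items can be sketched as follows; the window lengths M and N and the stride are parameters of the sketch, not values fixed by the patent:

```python
# Sketch of the sliding-window pairing: each window of M vectors is paired
# with the N vectors that immediately follow it in the same sequence.
def sliding_pairs(sequence, m, n, stride=1):
    pairs = []
    for start in range(0, len(sequence) - m - n + 1, stride):
        training = sequence[start:start + m]            # x_{t+1} .. x_{t+M}
        supervised = sequence[start + m:start + m + n]  # x_{t+M+1} .. x_{t+M+N}
        pairs.append((training, supervised))
    return pairs

pairs = sliding_pairs(list(range(10)), m=4, n=2)
```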
- the above training enables the feature model 30 to predict, based on the target vectors obtained within the period (M×T), target vectors within the period (N×T) that follows the period (M×T), where the measurement cycle is represented by T; the measurement cycle T in the first embodiment is, for example, set to 1 second.
- This training of the feature model 30 enables the at least one latent feature, which is independent of the individual drivers, to be extracted from the target vector sequences.
- the data extractor 24 which serves as, for example, a feature extractor, is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 generated, i.e. trained, by the model generator 23 ; the analytical data is used for the analyzing unit 26 to analyze various phenomena or events associated with driving. Then, the data extractor 24 is configured to store the analytical data in the analytical data DB 25 .
- the following describes how the data extractor 24 extracts, from the collected target driving data sequences, the analytical data using, for example, the following approaches.
- the data extractor 24 extracts, as the analytical data, first common analytical data including the cell state CD t+M , which is output from the common network 321 of the intermediate layer 32 as a result of the input of the target vectors x t+1 to x t+M and the specified identification vector bd to the feature model 30 .
- the extracted common analytical data represents the latent features of the collected target driving data sequences; the latent features are independent from drivers.
- the data extractor 24 extracts, as the analytical data, second common analytical data including the predicted sequential vectors y t+M+1 to y t+M+N within the specified prediction period from the time (t+M+1) to the time (t+M+N), which are output from the decoder 33 as a result of the input of the target vectors x t+1 to x t+M and the specified identification vector bd to the feature model 30 . That is, the extracted second common analytical data represents the latent features of the collected target driving data sequences; the latent features are independent from drivers.
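The two extraction approaches amount to reading different outputs of the trained model; a sketch in which hypothetical dictionary keys stand in for the model's outputs:

```python
def extract_analytical_data(model_outputs, approach):
    """model_outputs is assumed to expose the intermediate cell state and
    the decoder's predicted vector sequence (the keys are hypothetical).

    - "cell_state": first approach, the latent features (cell state CD)
    - "predicted":  second approach, the predicted sequential vectors
    """
    if approach == "cell_state":
        return model_outputs["cell_state"]
    return model_outputs["predicted"]

outputs = {"cell_state": [0.3, -0.7], "predicted": [[1.0], [1.1], [1.2]]}
first = extract_analytical_data(outputs, "cell_state")
second = extract_analytical_data(outputs, "predicted")
```

Either result is then stored in the analytical data DB 25 for the analyzing unit 26.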
- setting each element of the identification vector bd to 0 or to 1 enables any driver to be specified. That is, setting each element of the identification vector bd to 0 represents that no driver is specified, and setting each element to 1 represents that all drivers are specified; in either case, a particular driver is not specified, in other words, any driver is specified.
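A sketch of how the identification vector bd could be built; the driver count and helper name are assumptions for illustration:

```python
NUM_DRIVERS = 3  # assumed number of registered drivers

def identification_vector(driver=None, all_drivers=False):
    """Build the identification vector bd.

    - one element set to 1 -> that particular driver is specified
    - every element 0      -> no driver is specified
    - every element 1      -> all drivers are specified
    The all-0 and all-1 settings both leave a particular driver
    unspecified, i.e. they stand for "any driver".
    """
    if all_drivers:
        return [1.0] * NUM_DRIVERS
    bd = [0.0] * NUM_DRIVERS
    if driver is not None:
        bd[driver] = 1.0
    return bd
```

For example, `identification_vector(driver=1)` specifies the second registered driver, while `identification_vector()` specifies no one.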
- because the analytical data extracted by the second approach shows specific driving data items, i.e. data items dependent on specific driving behaviors themselves, it is easy for a user of the driving data analyzer 1 to intuitively understand what the analytical data mean.
- the data extractor 24 extracts, as the analytical data, the predicted sequential vectors y t+M+1 to y t+M+N within the specified prediction period from the time (t+M+1) to the time (t+M+N), which are output from the decoder 33 as a result of the input of the target vectors x t+1 to x t+M and the identification vector bd to the feature model 30 .
- the identification vector bd represents a driver correlated with the target driving data items, i.e. target vectors, x t+1 to x t+M , which will be referred to as a target driver (driver A in FIG. 4 )
- the data extractor 24 enables the extracted predicted sequential vectors y t+M+1 to y t+M+N , which show a predicted future behavior of the target driver A and/or a predicted future behavior of the vehicle corresponding to the target driver A within the future prediction period (N ⁇ T), to be obtained (see dashed curve in FIG. 4 ).
- the identification vector bd represents a driver who is not correlated with the target driving data items, i.e. target vectors, x t+1 to x t+M , which will be referred to as a non-target driver (driver B in FIG. 4 )
- the data extractor 24 enables the extracted predicted sequential vectors y t+M+1 to y t+M+N , which show a predicted future behavior of the non-target driver B and/or a predicted future behavior of the vehicle corresponding to the non-target driver B within the future prediction period (N ⁇ T), to be obtained (see dot-and-dash curve in FIG. 4 ).
- the data extractor 24 is capable of obtaining predicted driving data items for the specified driver even when the available target data items are those of another driver.
- the solid curve in FIG. 4 represents the target data items of the target driver.
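The swap of identification vectors described above can be sketched with a toy stand-in for the trained feature model 30; the two-driver setup and per-driver habit factors are invented solely for illustration:

```python
# Hypothetical per-driver habit factor the toy "model" has learned.
HABIT = {"A": 1, "B": 2}

def toy_feature_model(target_vectors, bd, N=3):
    """Stand-in for the feature model 30: predicts N future vectors from
    the observed target vectors and the identification vector bd."""
    driver = "A" if bd == [1, 0] else "B"
    return [target_vectors[-1] * HABIT[driver]] * N

history_A = [8, 10, 12]                              # driver A's target vectors
as_driver_A = toy_feature_model(history_A, [1, 0])   # bd specifies driver A
as_driver_B = toy_feature_model(history_A, [0, 1])   # bd specifies driver B
# Same observed data, different bd: the second prediction reflects
# driver B's habits even though the inputs came from driver A.
```

This is the mechanism behind the dashed versus dot-and-dash curves: only bd changes between the two predictions.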
- the analyzing unit 26 which serves as, for example, a data analyzer, is configured to execute, based on the analytical data stored in the analytical data DB 25 , various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
- the analyzing unit 26 can be configured to analyze other common statistical tasks for the target driving data items, i.e. target vectors, in accordance with the analytical data stored in the analytical data DB 25 .
- the analyzing unit 26 can be configured to analyze, based on the first or second common analytical data, the target driving data items, i.e. target vectors, to correspondingly extract driving scenes each representing similar driving situations, thus performing driver's driving assist in accordance with each of the driving scenes.
- This driving scene extracting task is disclosed in Japanese Patent Application Publication No. 2013-250663 and U.S. Pat. No. 9,527,384; the disclosure of each of these publications is incorporated herein by reference in its entirety.
- This driving scene extracting task therefore reduces cases where, although different drivers drive the same type of vehicle on the same road, the driving scenes for the different drivers are extracted as different driving scenes due to, for example, differences between their driving habits. This results in an improvement in the accuracy of analyzing the driving scenes.
- the analyzing unit 26 can be configured to extract, as the analytical data, target driving data items for all the drivers in a specified driving situation, and execute, for example, a common statistical task that analyzes the driving behavior of each driver in the same specified driving situation.
- the driving data analyzer 1 according to the first embodiment described in detail above obtains the following technical benefits.
- the driving data analyzer 1 is configured to extract, from the collected target driving data items of the drivers, common feature data that represents at least one latent feature included in the collected target driving data items using the feature model 30 ; the at least one latent feature is at least one common feature being latent in the collected target driving data items independently from the drivers. Then, the driving data analyzer 1 is configured to analyze, based on the common feature data, at least information included in the collected target driving data items. This therefore enables a robust analyzed result, which is independent from the drivers, to be obtained.
- the driving data analyzer 1 is configured to use, as the feature model 30 , a neural network comprised of the encoder 31 , the intermediate layer 32 linked to the encoder 31 , and the decoder 33 linked to the intermediate layer 32 , in other words, comprised of a common encoder-decoder model to which the intermediate layer 32 has been added.
- the intermediate layer 32 is comprised of the common network 321 and the individual network 322 .
- the common network 321 is configured to
- the individual network 322 is configured to
- Training of the neural network based on a known training approach therefore enables the latent features independent from the drivers to be stored in the common network 321 , and the latent features for each driver to be stored in the individual network 322 .
- the driving data analyzer 1 makes it possible to predict, from the collected driving data items for a specified driver indicated by the identification vector bd, a future behavior of the specified driver using the feature model 30 .
- the driving data analyzer 1 makes it possible to predict a future behavior of the non-target driver using the feature model 30 . This therefore enables the target driving data items for all the drivers to be mutually compared with each other to thereby analyze the compared results.
- a driving data analyzer 1 A according to the second embodiment differs from the driving data analyzer 1 in the following points. So, the following mainly describes the different points of the driving data analyzer 1 A according to the second embodiment, and omits or simplifies descriptions of like parts between the first and second embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
- the first embodiment is configured such that the server 20 generates the analytical data, but the second embodiment is configured such that each in-vehicle device 10 A generates the analytical data.
- the driving data analyzer 1 A includes the in-vehicle units 10 A respectively installed in the vehicles V 1 , . . . , Vn, and a server 20 A communicable by radio with the in-vehicle units 10 A.
- the in-vehicle unit 10 A installed in each of the vehicles V 1 to Vn includes the data obtainer 11 , a data transmitter 12 a , a model receiver 13 , and a data extractor 14 .
- the server 20 A includes a data collector 21 a , the driving data DB 22 , the model generator 23 , the analytical data DB 25 , the analyzing unit 26 , and a model transmitter 27 .
- the model transmitter 27 is configured to cyclically or periodically transmit, i.e. broadcast, to each in-vehicle device 10 A, the feature model 30 each time the training of the feature model 30 is completed.
- the model receiver 13 of each in-vehicle device 10 A is configured to receive the feature model 30 each time the feature model 30 is transmitted from the server 20 A.
- the data extractor 14 is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 received by the model receiver 13 ; the analytical data is used for the analyzing unit 26 of the server 20 A to analyze various phenomena or events associated with driving.
- the extraction task performed by the data extractor 14 is substantially identical to the extraction task performed by the data extractor 24 according to the first embodiment.
- the data transmitter 12 a is configured to
- the data collector 21 a of the server 20 A is configured to
- the driving data analyzer 1 A according to the second embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first embodiment.
- the driving data analyzer 1 A is configured such that each in-vehicle device 10 A extracts, from the collected target driving data sequences, the analytical data, making it possible to reduce the processing load of the server 20 A.
- a driving data analyzer 1 B according to the third embodiment differs from the driving data analyzer 1 in the following points. So, the following mainly describes the different points of the driving data analyzer 1 B according to the third embodiment, and omits or simplifies descriptions of like parts between the first and third embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
- the first embodiment is configured such that the analyzing unit 26 of the server 20 executes, based on the analytical data stored in the analytical data DB 25 , various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
- the third embodiment is configured such that
- a server 20 B transmits the analytical data to each in-vehicle device 10 B
- Each in-vehicle device 10 B performs various cruise assist tasks, i.e. various driving assist tasks, in accordance with the analytical data
- the driving data analyzer 1 B includes the in-vehicle units 10 B respectively installed in the vehicles V 1 , . . . , Vn, and the server 20 B communicable by radio with the in-vehicle units 10 B.
- the in-vehicle unit 10 B installed in each of the vehicles V 1 to Vn includes the data obtainer 11 , an analytical data receiver 15 , and a driving assist executor 16 .
- the server 20 B includes the data collector 21 , the driving data DB 22 , the model generator 23 , the data extractor 24 , the analytical data DB 25 , and an analytical data transmitter 28 in place of the analyzing unit 26 .
- the analytical data transmitter 28 is configured to transmit, i.e. broadcast, to each in-vehicle device 10 B, the analytical data stored in the analytical data DB 25 .
- the analytical data receiver 15 of each in-vehicle device 10 B is configured to receive the analytical data.
- the driving assist executor 16 of each in-vehicle device 10 B is configured to
- This at least one of the various driving assist tasks enables the corresponding vehicle to travel safely.
- the driving data analyzer 1 B according to the third embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first embodiment.
- the driving data analyzer 1 B is configured such that each in-vehicle device 10 B executes, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks. This results in an improvement of the driving safety of each vehicle.
- a driving data analyzer 1 C according to the fourth embodiment differs from each of the driving data analyzers 1 to 1 B in the following points. So, the following mainly describes the different points of the driving data analyzer 1 C according to the fourth embodiment, and omits or simplifies descriptions of like parts between the fourth and each of the first to third embodiments, to which identical or like reference characters are assigned, thus eliminating redundant description.
- the first embodiment is configured such that the server 20 generates the analytical data, and executes, based on the analytical data stored in the analytical data DB 25 , various analytical tasks that analyze information included in the target driving data items, i.e. target vectors.
- each in-vehicle device 10 C generates the analytical data, and performs various cruise assist tasks, i.e. various driving assist tasks, in accordance with the analytical data.
- the driving data analyzer 1 C includes the in-vehicle units 10 C respectively installed in the vehicles V 1 , . . . , Vn, and a server 20 C communicable by radio with the in-vehicle units 10 C.
- the in-vehicle unit 10 C installed in each of the vehicles V 1 to Vn includes the data obtainer 11 , the data transmitter 12 , the model receiver 13 , a data extractor 14 , and the driving assist executor 16 .
- the server 20 C includes the data collector 21 , the driving data DB 22 , the model generator 23 , and the model transmitter 27 .
- the model transmitter 27 is configured to cyclically or periodically transmit, i.e. broadcast, to each in-vehicle device 10 C, the feature model 30 each time the training of the feature model 30 is completed.
- the model receiver 13 of each in-vehicle device 10 C is configured to receive the feature model 30 each time the feature model 30 is transmitted from the server 20 C.
- the data extractor 14 is configured to extract, from the collected target driving data sequences, analytical data based on the feature model 30 received by the model receiver 13 ; the analytical data is used for the driving assist executor 16 to execute various driving assist tasks.
- the extraction task performed by the data extractor 14 is substantially identical to the extraction task performed by the data extractor 24 according to the first embodiment.
- the driving assist executor 16 of each in-vehicle device 10 C is configured to
- This at least one of the various driving assist tasks enables the corresponding vehicle to travel safely.
- the driving data analyzer 1 C according to the fourth embodiment described in detail above obtains the following technical benefit in addition to the above technical benefits obtained by the first and third embodiments.
- the driving data analyzer 1 C is configured such that each in-vehicle device 10 C extracts, from the collected target driving data sequences, the analytical data, and executes, based on the future behavior of the driver and/or the future behavior of the corresponding vehicle, at least one of the various driving assist tasks. This enables each in-vehicle device 10 C to merely perform communications required to receive the feature model 30 , making it possible to obtain
- Each of the first and second embodiments is configured such that the encoder 31 supplies the cell state CE t+M to the common network 321 as its input, and supplies the output hE t+M to the individual network 322 as its input.
- the present disclosure is not limited to this configuration.
- the encoder 31 can supply the cell state CE t+M to the individual network 322 as its input, and supply the output hE t+M to the common network 321 as its input. In addition, the encoder 31 can supply any one of the cell state CE t+M and the output hE t+M to each of the common network 321 and the individual network 322 .
- Each of the encoder 31 and the decoder 33 is comprised of a single LSTM layer or multiple LSTM layers, but can instead be comprised of, for example, a common RNN or a gated recurrent unit (GRU). Because the GRU does not have a cell state, the output of the GRU shows the latent features of an input target vector. For this reason, the encoder 31 using the GRU can be configured to send a predetermined number of elements of the output of the GRU to the common network 321 , and send the remaining elements to the individual network 322 .
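That element split for a GRU-based encoder can be sketched as follows; the split point n_common is an assumed hyperparameter:

```python
def split_gru_output(h, n_common):
    """A GRU has no cell state, so its output h carries all the latent
    features of the input vector; route the first n_common elements to
    the common network 321 and the rest to the individual network 322."""
    return h[:n_common], h[n_common:]

common_part, individual_part = split_gru_output([0.1, 0.2, 0.3, 0.4], n_common=2)
```

The LSTM variant instead routes the cell state and the output to the two sub-networks, so no split of a single vector is needed there.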
- GRU gated recurrent unit
- Each of the first to fourth embodiments is configured to use, as the at least one external factor, the identity of the driver of each vehicle, i.e. the driver's own driving habits, but the present disclosure is not limited thereto.
- each in-vehicle device 10 can obtain the identification data item indicative of another external factor that causes the driving data items to vary, such as
- the data obtainer 11 of each in-vehicle device 10 can obtain the identification data item indicative of a combination of the above external factors.
- the feature generator 23 can be configured to
- the functions of one element in each of the first to fourth embodiments can be distributed as plural elements, and the functions of plural elements can be combined into one element. At least part of the structure of each of the first to fourth embodiments can be replaced with a known structure having the same function as the at least part of the structure of the corresponding embodiment. A part of the structure of each of the first to fourth embodiments can be eliminated. At least part of the structure of each of the first to fourth embodiments can be added to or replaced with the structures of the other embodiments. All aspects included in the technological ideas specified by the language employed by the claims constitute embodiments of the present invention.
- the present disclosure can be implemented by various embodiments in addition to the driving data analyzer; the various embodiments include driving data analyzing systems each including in-vehicle devices and a server, programs for serving a computer as each of the in-vehicle devices and the server, storage media storing the programs, and driving data analyzing methods.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Claims (12)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-199379 | 2017-10-13 | ||
JPJP2017-199379 | 2017-10-13 | ||
JP2017199379A JP7053213B2 (en) | 2017-10-13 | 2017-10-13 | Operation data analysis device |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190114345A1 US20190114345A1 (en) | 2019-04-18 |
US11061911B2 true US11061911B2 (en) | 2021-07-13 |
Family
ID=66097483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/158,865 Active 2039-03-30 US11061911B2 (en) | 2017-10-13 | 2018-10-12 | Driving data analyzer |
Country Status (2)
Country | Link |
---|---|
US (1) | US11061911B2 (en) |
JP (1) | JP7053213B2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6937658B2 (en) * | 2017-10-17 | 2021-09-22 | 日立Astemo株式会社 | Predictive controller and method |
US10832140B2 (en) * | 2019-01-30 | 2020-11-10 | StradVision, Inc. | Method and device for providing information for evaluating driving habits of driver by detecting driving scenarios occurring during driving |
KR20210026112A (en) * | 2019-08-29 | 2021-03-10 | 주식회사 선택인터내셔날 | Detecting method for using unsupervised learning and apparatus and method for detecting vehicle theft using the same |
US11498575B2 (en) * | 2019-08-29 | 2022-11-15 | Suntech International Ltd. | Unsupervised learning-based detection method and driver profile- based vehicle theft detection device and method using same |
CN112677983B (en) * | 2021-01-07 | 2022-04-12 | 浙江大学 | System for recognizing driving style of driver |
CN117422169B (en) * | 2023-10-18 | 2024-06-25 | 酷哇科技有限公司 | Vehicle insurance user driving behavior analysis and prediction method and device based on causal intervention |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013250663A (en) | 2012-05-30 | 2013-12-12 | Denso Corp | Driving scene recognition device |
US9495874B1 (en) * | 2012-04-13 | 2016-11-15 | Google Inc. | Automated system and method for modeling the behavior of vehicles and other agents |
US10156848B1 (en) * | 2016-01-22 | 2018-12-18 | State Farm Mutual Automobile Insurance Company | Autonomous vehicle routing during emergencies |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2940042B2 (en) * | 1990-01-23 | 1999-08-25 | 日産自動車株式会社 | Vehicle control strategy device |
JP3902543B2 (en) | 2002-12-17 | 2007-04-11 | 本田技研工業株式会社 | Road traffic simulation device |
JP2009073465A (en) | 2007-08-28 | 2009-04-09 | Fuji Heavy Ind Ltd | Safe driving support system |
JP2009234442A (en) | 2008-03-27 | 2009-10-15 | Equos Research Co Ltd | Driving operation support device |
JP5856387B2 (en) | 2011-05-16 | 2016-02-09 | トヨタ自動車株式会社 | Vehicle data analysis method and vehicle data analysis system |
WO2016170785A1 (en) | 2015-04-21 | 2016-10-27 | パナソニックIpマネジメント株式会社 | Information processing system, information processing method, and program |
US10345449B2 (en) | 2016-12-02 | 2019-07-09 | Verizon Connect Ireland Limited | Vehicle classification using a recurrent neural network (RNN) |
JP6579495B2 (en) | 2017-03-29 | 2019-09-25 | マツダ株式会社 | Vehicle driving support system |
- 2017-10-13 JP JP2017199379A patent/JP7053213B2/en active Active
- 2018-10-12 US US16/158,865 patent/US11061911B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9495874B1 (en) * | 2012-04-13 | 2016-11-15 | Google Inc. | Automated system and method for modeling the behavior of vehicles and other agents |
JP2013250663A (en) | 2012-05-30 | 2013-12-12 | Denso Corp | Driving scene recognition device |
US10156848B1 (en) * | 2016-01-22 | 2018-12-18 | State Farm Mutual Automobile Insurance Company | Autonomous vehicle routing during emergencies |
Also Published As
Publication number | Publication date |
---|---|
US20190114345A1 (en) | 2019-04-18 |
JP2019074849A (en) | 2019-05-16 |
JP7053213B2 (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11061911B2 (en) | Driving data analyzer | |
Hanselmann et al. | CANet: An unsupervised intrusion detection system for high dimensional CAN bus data | |
CN108372857B (en) | Efficient context awareness by event occurrence and episode memory review for autonomous driving systems | |
CN107697070B (en) | Driving behavior prediction method and device and unmanned vehicle | |
Mohammadi et al. | Future reference prediction in model predictive control based driving simulators | |
JP2022089806A (en) | Modeling operation profiles of vehicle | |
CN112396254A (en) | Destination prediction method, destination prediction device, destination prediction medium, and electronic device | |
CN118282780B (en) | New energy automobile vehicle-mounted network intrusion detection method, equipment and storage medium | |
US20230177241A1 (en) | Method for determining similar scenarios, training method, and training controller | |
CN111461426A (en) | High-precision travel time length prediction method based on deep learning | |
CN116415200A (en) | Abnormal vehicle track abnormality detection method and system based on deep learning | |
Khodayari et al. | Improved adaptive neuro fuzzy inference system car‐following behaviour model based on the driver–vehicle delay | |
CN112559585A (en) | Traffic space-time sequence single-step prediction method, system and storage medium | |
CN115909239A (en) | Vehicle intention recognition method and device, computer equipment and storage medium | |
CN117376920A (en) | Intelligent network connection automobile network attack detection, safety state estimation and control method | |
CN116467615A (en) | Clustering method and device for vehicle tracks, storage medium and electronic device | |
KR102196027B1 (en) | LSTM-based steering behavior monitoring device and its method | |
CN112462759B (en) | Evaluation method, system and computer storage medium of rule control algorithm | |
CN118171723A (en) | Method, device, equipment, storage medium and program product for deploying intelligent driving strategy | |
CN111104953A (en) | Driving behavior feature detection method and device, electronic equipment and computer-readable storage medium | |
CN112164223B (en) | Intelligent traffic information processing method and device based on cloud platform | |
CN113078974A (en) | Method for neural network sparse channel generation and inference | |
CN116989818B (en) | Track generation method and device, electronic equipment and readable storage medium | |
Zeng et al. | Towards building reliable deep learning based driver identification systems | |
RU2724596C1 (en) | Method, apparatus, a central device and a system for recognizing a distribution shift in the distribution of data and / or features of input data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: DENSO CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MISAWA, HIDEAKI;TAKENAKA, KAZUHITO;TANIGUCHI, TADAHIRO;SIGNING DATES FROM 20181026 TO 20181029;REEL/FRAME:047871/0206 Owner name: THE RITSUMEIKAN TRUST, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MISAWA, HIDEAKI;TAKENAKA, KAZUHITO;TANIGUCHI, TADAHIRO;SIGNING DATES FROM 20181026 TO 20181029;REEL/FRAME:047871/0206 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |