US12141607B2 - Machine learning device - Google Patents
Machine learning device
- Publication number
- US12141607B2 (application US17/348,768)
- Authority
- US
- United States
- Prior art keywords
- training
- vehicle
- electric power
- machine learning
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/4893—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues taking into account power or heat criteria
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
- G06F1/3228—Monitoring task completion, e.g. by use of idle timers, stop commands or wait commands
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3243—Power saving in microcontroller unit
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/329—Power saving characterised by the action undertaken by task scheduling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5094—Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/10—Interfaces, programming languages or software development kits, e.g. for simulating neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to a machine learning device.
- an object of the present disclosure is to keep the amount of electric power which can be supplied to the outside from a vehicle at the time of a disaster from decreasing due to processing relating to training of a machine learning model in the vehicle able to supply electric power to the outside.
- a machine learning device provided in a vehicle able to supply electric power to an outside, comprising a processor configured to perform processing relating to training a machine learning model used in the vehicle, wherein the processor is configured to lower an electric power consumption amount in the processing relating to training when acquiring disaster information compared with when not acquiring the disaster information.
- a machine learning device comprising: a communication device able to communicate with a vehicle able to supply electric power to an outside; and a processor configured to train a machine learning model and transmit a trained machine learning model through the communication device to the vehicle, wherein the processor is configured to stop the transmission to the vehicle of the trained machine learning model when acquiring disaster information.
- the present disclosure it is possible to keep the amount of electric power which can be supplied to the outside from a vehicle at the time of a disaster from decreasing due to processing relating to training of a machine learning model in the vehicle able to supply electric power to the outside.
- FIG. 1 is a schematic view of the configuration of a machine learning system according to a first embodiment of the present disclosure.
- FIG. 2 is a view schematically showing the configuration of a vehicle in which a machine learning device according to the first embodiment of the present disclosure is provided.
- FIG. 3 is a functional block diagram of an ECU in a first embodiment.
- FIG. 4 shows one example of a neural network model having a simple configuration.
- FIG. 5 is a flow chart showing a control routine of disaster information transmission processing in the first embodiment of the present disclosure.
- FIG. 6 is a flow chart showing a control routine of training stop processing in the first embodiment of the present disclosure.
- FIG. 7 is a view schematically showing the configuration of a vehicle in which a machine learning device according to the second embodiment of the present disclosure is provided.
- FIG. 8 is a flow chart showing a control routine of training stop processing in a second embodiment of the present disclosure.
- FIG. 9 is a functional block diagram of an ECU in a third embodiment.
- FIG. 10 is a flow chart showing a control routine of vehicle identification processing in the third embodiment of the present disclosure.
- FIG. 11 is a flow chart showing a control routine of training stop processing in a third embodiment of the present disclosure.
- FIG. 12 is a flow chart showing a control routine of training stop processing in a fourth embodiment of the present disclosure.
- FIG. 13 is a functional block diagram of an ECU in a fifth embodiment.
- FIG. 14 is a flow chart showing a control routine of training stop processing in the fifth embodiment of the present disclosure.
- FIG. 15 is a flow chart showing a control routine of vehicle identification processing in a sixth embodiment of the present disclosure.
- FIG. 16 is a flow chart showing a control routine of model transmission stop processing in a seventh embodiment of the present disclosure.
- FIG. 1 is a schematic view of a configuration of a machine learning system 1 according to the first embodiment of the present disclosure.
- the machine learning system 1 is provided with a server 2 and a vehicle 3 .
- the server 2 is provided outside of the vehicle 3 and is provided with a communication interface 21 , a storage device 22 , a memory 23 , and a processor 24 .
- the server 2 may be further provided with an input device such as a keyboard and mouse and an output device such as a display etc. Further, the server 2 may be configured by a plurality of computers.
- the communication interface 21 can communicate with the vehicle 3 and enables the server 2 to communicate with the vehicle 3 .
- the communication interface 21 has an interface circuit for connecting the server 2 to the communication network 5 .
- the server 2 communicates with the vehicle 3 through the communication interface 21 , the communication network 5 , and the wireless base station 6 .
- the communication interface 21 is one example of a communication device.
- the storage device 22 for example, has a hard disk drive (HDD), solid state drive (SSD), or optical storage medium.
- the storage device 22 stores various types of data, for example, stores information relating to the vehicle 3 , computer programs for the processor 24 to perform various processing, etc.
- the memory 23 for example, has a semiconductor memory such as a random access memory (RAM).
- the memory 23 for example, stores various data etc., used when various processing is performed by the processor 24 .
- the communication interface 21 , storage device 22 , and memory 23 are connected through signal wires to the processor 24 .
- the processor 24 has one or more CPUs and peripheral circuits and performs various processing. Note that, the processor 24 may further have processing circuits such as arithmetic logic units or numerical calculation units.
- the processor 24 is an example of a control device.
- FIG. 2 is a view schematically showing the configuration of a vehicle 3 in which a machine learning device according to the first embodiment of the present disclosure is provided.
- the vehicle 3 is a vehicle able to supply electric power to the outside of the vehicle 3 .
- it is a plug-in hybrid vehicle (PHV), an electric vehicle (EV), a fuel cell vehicle (FCV), etc.
- the vehicle 3 is provided with a human machine interface (HMI) 31 , a GPS receiver 32 , a map database 33 , a navigation system 34 , an actuator 35 , a sensor 36 , a communication module 37 , and an electronic control unit (ECU) 40 .
- the HMI 31 , the GPS receiver 32 , the map database 33 , the navigation system 34 , the actuator 35 , the sensor 36 , and the communication module 37 are connected to be able to communicate with the ECU 40 through an internal vehicle network based on the CAN (Controller Area Network) or other standard.
- the HMI 31 is an input/output device for input and output of information between the driver and vehicle 3 .
- the HMI 31 for example, includes a display for showing information, a speaker generating sound, operating buttons or a touch screen for the driver to input information, a microphone for receiving the voice of the driver, etc.
- the output of the ECU 40 is transmitted to the driver through the HMI 31 .
- the input from the driver is transmitted through the HMI 31 to the ECU 40 .
- the GPS receiver 32 receives signals from three or more GPS satellites and detects the current position of the vehicle 3 (for example, the latitude and longitude of the vehicle 3 ). The output of the GPS receiver 32 is transmitted to the ECU 40 .
- the map database 33 stores map information.
- the ECU 40 acquires map information from the map database 33 .
- the navigation system 34 sets the driving route of the vehicle to the destination based on the current position of the vehicle detected by the GPS receiver 32 , map information of the map database 33 , input by the driver of the vehicle, etc.
- the driving route set by the navigation system 34 is transmitted to the ECU 40 .
- the GPS receiver 32 and the map database 33 may be incorporated in the navigation system 34
- the actuator 35 is an actuating part required for driving the vehicle 3 . If the vehicle 3 is a PHV, the actuator 35 , for example, includes a motor, a fuel injector, a spark plug, a throttle valve drive actuator, an EGR control valve, etc.
- the ECU 40 controls the actuator 35 .
- the sensor 36 detects the state quantity of the vehicle 3 , the internal combustion engine, the battery, etc. and includes a vehicle speed sensor, an accelerator opening degree sensor, an air flow meter, an air-fuel ratio sensor, a crank angle sensor, a torque sensor, a voltage sensor, etc.
- the output of the sensor 36 is sent to the ECU 40 .
- the communication module 37 is a device which enables communication between the vehicle 3 and the outside of the vehicle 3 .
- the communication module 37 for example, is a data communication module (DCM) able to communicate with a communication network 5 through a wireless base station 6 .
- a mobile terminal (for example, a smartphone, tablet terminal, WiFi router, etc.) may be used as the communication module 37 .
- the ECU 40 includes a communication interface 41 , a memory 42 , and a processor 43 and performs various control operations of the vehicle 3 . Note that, in the present embodiment, a single ECU 40 is provided, but a plurality of ECUs may be provided for the different functions.
- the communication interface 41 is an interface circuit for connecting the ECU 40 to an internal vehicle network based on the CAN or other standard.
- the ECU 40 communicates with other vehicle-mounted devices as described above through the communication interface 41 .
- the memory 42 for example, has a volatile semiconductor memory (for example, a RAM) and a nonvolatile semiconductor memory (for example, a ROM).
- the memory 42 stores programs run by the processor 43 , various data used when the various processings are performed in the processor 43 , etc.
- the processor 43 has one or more CPUs (central processing units) and their peripheral circuits and performs various processing. Note that, the processor 43 may further have processing circuits such as arithmetic logic units or numerical calculation units.
- the communication interface 41 , the memory 42 and the processor 43 are connected to each other through signal wires.
- FIG. 3 is a functional block diagram of the ECU 40 in the first embodiment.
- the ECU 40 has a training part 51 .
- the training part 51 is a functional block realized by the processor 43 of the ECU 40 running programs stored in the memory 42 of the ECU 40 .
- the training part 51 performs processing relating to training a machine learning model used in the vehicle 3 .
- a neural network model is used as a machine learning model, and the training part 51 performs processing relating to training a neural network model.
- FIG. 4 shows one example of a neural network model having a simple configuration.
- the circle marks in FIG. 4 show artificial neurons.
- An artificial neuron is usually called a “node” or “unit” (in this Description, called a “node”).
- the activation function is, for example, a sigmoid function σ.
- the corresponding weights "w" and biases "b" are used to calculate the total input value "u" (=Σz·w+b), or only the corresponding weights "w" are used to calculate the total input value "u" (=Σz·w).
- an identity function is used as the activation function.
- the total input value “u” calculated at the node of the output layer is output as it is as the output value “y” from the node of the output layer.
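The node computation described above (total input value u = Σz·w+b passed through a sigmoid activation at the hidden-layer nodes, with an identity function at the output node) can be sketched as follows. This is an illustrative sketch only; the layer sizes, random weights, and NumPy implementation are assumptions, not details from the patent.

```python
import numpy as np

def sigmoid(u):
    # Activation function applied at hidden-layer nodes.
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, weights, biases):
    """Forward pass: hidden layers use the sigmoid; the output layer
    uses an identity function, so u is emitted as-is as the output y."""
    z = x
    for w, b in zip(weights[:-1], biases[:-1]):
        u = z @ w + b          # total input value u = sum(z*w) + b
        z = sigmoid(u)         # node output value z
    return z @ weights[-1] + biases[-1]   # identity activation at output

# Illustrative 2-input, 3-hidden-node, 1-output network.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 3)), rng.normal(size=(3, 1))]
biases = [np.zeros(3), np.zeros(1)]
y = forward(np.array([0.5, -1.0]), weights, biases)
```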
- the neural network model used in the vehicle 3 is stored in the memory 42 of the ECU 40 or other storage device provided in the vehicle 3 .
- the ECU 40 inputs a plurality of input parameters to the neural network model to make the neural network model output at least one output parameter.
- as the values of the input parameters, for example, values detected by the sensor 36 etc., or values calculated in the ECU 40 are used.
- by using the neural network model, it is possible to obtain a suitable value of an output parameter corresponding to predetermined values of input parameters.
- the ECU 40 of the vehicle 3 trains the neural network model. That is, the neural network model is trained in the vehicle 3 rather than at the server 2 .
- training data sets comprised of measured values of a plurality of input parameters and measured values (truth data) of at least one output parameter corresponding to these measured values are used.
- the training part 51 of the ECU 40 prepares training data sets as processing relating to training of the neural network model (below, referred to as "training related processing"). Specifically, the training part 51 acquires measured values of a plurality of input parameters and the measured value of at least one output parameter corresponding to these measured values, and prepares a training data set by combining the measured values of the input parameters and the output parameter.
- the measured values of the input parameters and the output parameters are, for example, acquired as values detected by the sensor 36 etc., or values calculated or determined at the ECU 40 .
- the training data sets prepared by the training part 51 are stored in the memory 42 of the ECU 40 or other storage device provided at the vehicle 3 . Note that, the measured values of the input parameters used as the training data set may be normalized or standardized.
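The preparation step above — combining measured input-parameter values with the corresponding measured (truth) output value, optionally normalizing the inputs — might look like the following sketch. The record layout and the example sensor quantities (vehicle speed, accelerator opening degree, torque) are hypothetical.

```python
import numpy as np

def prepare_training_set(records):
    """Combine measured input parameters with the measured (truth)
    output parameter into (X, y) arrays, normalizing each input
    column to zero mean and unit variance."""
    X = np.array([r["inputs"] for r in records], dtype=float)
    y = np.array([r["output"] for r in records], dtype=float)
    mean, std = X.mean(axis=0), X.std(axis=0)
    std[std == 0] = 1.0            # guard against constant columns
    return (X - mean) / std, y

# Hypothetical records: vehicle speed and accelerator opening degree
# as input parameters, a measured torque as the truth output.
records = [
    {"inputs": [40.0, 0.2], "output": 55.0},
    {"inputs": [60.0, 0.5], "output": 90.0},
    {"inputs": [80.0, 0.8], "output": 130.0},
]
X, y = prepare_training_set(records)
```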
- the training part 51 trains the neural network model as training related processing. Specifically, the training part 51 uses a large number of training data sets and repeatedly updates the weights "w" and biases "b" in the neural network model by the known error backpropagation method so that the differences between the output values of the neural network model and the measured values of the output parameters become smaller. As a result, the neural network model is trained and a trained neural network model is produced.
- the information (structure, weights "w", biases "b", etc., of the model) of the trained neural network model is stored in the memory 42 of the ECU 40 or other storage device provided at the vehicle 3 . By using the neural network model trained at the vehicle 3 , it is possible to predict the value of an output parameter corresponding to predetermined values of input parameters without detecting the actual value of the output parameter by the sensor 36 etc.
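As a minimal sketch of the training step, the following updates the weights "w" and biases "b" of a one-hidden-layer network (sigmoid hidden nodes, identity output, matching the forward computation described earlier) by gradient-descent error backpropagation on a toy data set. The hidden size, learning rate, and epoch count are illustrative assumptions, not values from the patent.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train(X, y, hidden=4, lr=0.1, epochs=500, seed=0):
    """Repeatedly update weights and biases by backpropagation so that
    the difference between the model output and the measured output
    parameter (truth data) becomes smaller."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        z = sigmoid(X @ w1 + b1)          # hidden-layer outputs
        out = z @ w2 + b2                 # identity output layer
        err = out - y                     # prediction error
        # Backpropagate the error through both layers.
        grad_w2 = z.T @ err / len(X); grad_b2 = err.mean(axis=0)
        dz = (err @ w2.T) * z * (1 - z)
        grad_w1 = X.T @ dz / len(X); grad_b1 = dz.mean(axis=0)
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
    return (w1, b1, w2, b2), float((err ** 2).mean())

# Toy data: learn y = x1 + x2 from four samples.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = X.sum(axis=1)
params, mse = train(X, y)
```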
- the electric power stored at the battery of the vehicle 3 can be supplied to the outside of the vehicle 3 .
- the vehicle 3 can be effectively used as a source of electric power.
- the training part 51 lowers the amount of electric power consumed in the training related processing when acquiring disaster information compared with when not acquiring disaster information. By doing this, it is possible to keep the amount of electric power able to be supplied to the outside from the vehicle 3 at the time of a disaster from decreasing due to training related processing.
- Disaster information includes information relating to natural disasters (earthquakes, hurricanes, volcanic eruptions, floods, etc.) and manmade disasters (power outages due to mistakes in construction, etc.).
- the training part 51 receives disaster information from the outside of the vehicle 3 to thereby acquire disaster information.
- the server 2 receives disaster information from public institutions (Meteorological Agency, Ministry of Land, Infrastructure, Transport and Tourism, etc.), electric power companies, etc., and transmits the disaster information to the vehicle 3 .
- if the training part 51 of the ECU 40 receives disaster information from the server 2 , it stops the training related processing. That is, the training part 51 stops the training related processing to thereby lower the amount of electric power consumed in the training related processing (rendering it zero).
- FIG. 5 is a flow chart showing the control routine of the disaster information transmission processing in the first embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the processor 24 of the server 2 .
- step S 101 the processor 24 judges whether it has received disaster information. If it is judged that disaster information has not been received, the present control routine ends. On the other hand, if it is judged that disaster information has been received, the control routine proceeds to step S 102 .
- step S 102 the processor 24 transmits the disaster information to the vehicle 3 .
- step S 102 the present control routine ends.
- the disaster information may be input to the server 2 by the operator of the server 2 etc., and at step S 101 , the processor 24 may judge whether disaster information has been input to the server 2 .
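The FIG. 5 routine (judge at step S 101 whether disaster information was received; transmit it to the vehicle at step S 102) might be sketched as one function run at each interval. The callable-based transport and the message shape are assumptions for illustration.

```python
def disaster_transmission_routine(receive_disaster_info, send_to_vehicle):
    """One pass of the FIG. 5 control routine, run repeatedly at
    predetermined intervals by the server's processor.

    receive_disaster_info: callable returning a disaster-info dict,
        or None when nothing was received (step S101's judgement).
    send_to_vehicle: callable transmitting the info (step S102).
    Returns True when information was forwarded to the vehicle.
    """
    info = receive_disaster_info()        # step S101
    if info is None:
        return False                      # routine ends: nothing received
    send_to_vehicle(info)                 # step S102
    return True

# Hypothetical usage with stub transport functions.
sent = []
assumed_info = {"type": "earthquake", "area": "X"}
result = disaster_transmission_routine(lambda: assumed_info, sent.append)
```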
- FIG. 6 is a flow chart showing a control routine of training stop processing in the first embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the ECU 40 of the vehicle 3 .
- step S 201 the training part 51 judges whether disaster information has been received from the server 2 . If it is judged that disaster information has not been received from the server 2 , the present control routine ends. On the other hand, if it is judged that disaster information has been received from the server 2 , the control routine proceeds to step S 202 .
- the training part 51 stops the training related processing. Specifically, the training part 51 stops the preparation of the training data sets and the training of the neural network model. At this time, the fact that the training related processing was stopped to keep down the electric power consumption may be notified to the driver by text or voice through the HMI 31 .
- the present control routine ends. In this case, the training part 51 , for example, resumes the training related processing when a predetermined time has elapsed, when the vehicle 3 is restarted, when the driver of the vehicle 3 instructs resumption of the training related processing through the HMI 31 , or when the server 2 notifies the end of the state of disaster.
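The FIG. 6 stop routine and the resume conditions listed above (elapsed time, vehicle restart, driver instruction through the HMI, or an end-of-disaster notice from the server) could be modelled as a small state machine; the class and method names are hypothetical.

```python
class TrainingController:
    """Sketch of the FIG. 6 training-stop routine together with the
    resume conditions described in the text."""

    def __init__(self):
        self.training_active = True

    def on_routine_tick(self, disaster_info_received):
        # Step S201: judge whether disaster information was received.
        if disaster_info_received and self.training_active:
            # Step S202: stop preparation of training data sets and
            # training of the neural network model.
            self.training_active = False
            return "stopped"
        return "no-op"

    def maybe_resume(self, timer_expired=False, vehicle_restarted=False,
                     driver_requested=False, disaster_ended=False):
        # Any one of the listed conditions resumes training.
        if timer_expired or vehicle_restarted or driver_requested or disaster_ended:
            self.training_active = True

ctrl = TrainingController()
ctrl.on_routine_tick(disaster_info_received=True)   # disaster: stop training
stopped = not ctrl.training_active
ctrl.maybe_resume(disaster_ended=True)              # server notifies end of disaster
```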
- the training part 51 may stop only the training of the neural network model.
- the training part 51 may lower the amount of electric power consumed in the training related processing without stopping the training related processing.
- the training part 51 , for example, reduces the frequency of preparation of training data sets, reduces the frequency of training of the neural network model, or slows down the training speed of the neural network model to thereby lower the amount of electric power consumed.
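One way to reduce the frequency of training rather than stopping it outright is to run the training step only on every Nth interval while disaster information is active. The skip count below is illustrative.

```python
def make_throttled_trainer(train_step, skip=4):
    """Return a callable that runs train_step only on every (skip+1)-th
    call while a disaster flag is set, lowering how often the training
    related processing consumes electric power without stopping it."""
    state = {"count": 0}

    def tick(disaster_active):
        state["count"] += 1
        if disaster_active and state["count"] % (skip + 1) != 0:
            return False          # skipped this interval to save power
        train_step()
        return True
    return tick

runs = []
tick = make_throttled_trainer(lambda: runs.append(1), skip=4)
for _ in range(10):
    tick(disaster_active=True)    # only every 5th call actually trains
```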
- the training part 51 may directly receive disaster information from public institutions (Meteorological Agency, Ministry of Land, Infrastructure, Transport and Tourism, etc.), electric power companies, etc., without going through the server 2 . Further, the training part 51 may use a communication module 37 to receive disaster information from other vehicles by vehicle-to-vehicle communication or acquire disaster information from road side devices by road-to-vehicle communication. In these cases, the control routine of FIG. 5 is omitted and, at step S 201 , the training part 51 judges whether it has received disaster information.
- the configuration and control of the machine learning device according to the second embodiment are basically similar to the configuration and control of the machine learning device according to the first embodiment except for the points explained below. For this reason, below, the second embodiment of the present disclosure will be explained focusing on parts different from the first embodiment.
- FIG. 7 is a view schematically showing the configuration of a vehicle 3 ′ in which the machine learning device according to the second embodiment of the present disclosure is provided.
- the vehicle 3 ′ is further provided with an external camera 38 .
- the external camera 38 captures the surroundings of the vehicle 3 ′ and generates surrounding images of the vehicle 3 ′.
- the external camera 38 is placed at the front of the vehicle 3 ′ (for example, the back surface of the rearview mirror inside the vehicle, the front bumper, etc.) so as to capture the area ahead of the vehicle 3 ′.
- the external camera 38 may be a stereo camera able to measure distance.
- the vehicle 3 ′ detects a disaster. That is, the training part 51 of the ECU 40 detects a disaster to acquire disaster information. For example, the training part 51 judges the presence of any disaster in the surroundings of the vehicle 3 ′ based on surrounding images generated by the external camera 38 . Specifically, the training part 51 uses image recognition techniques such as machine learning (neural networks, support vector machines, etc.,) to analyze the surrounding images and thereby judge the presence of any disaster. For example, the training part 51 judges that a disaster has occurred in the surroundings of the vehicle 3 ′ if power cutoff of the traffic lights, collapse of buildings, fractures in the road surfaces, fallen trees, flooding of roads, avalanches, etc., are recognized from the surrounding images.
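The judgement step above — declaring a disaster when cues such as a power cutoff of traffic lights, collapsed buildings, or flooded roads appear in the surrounding images — can be sketched as a check over recognized labels. The label vocabulary is an assumption; in a real system these labels would come from an image-recognition model (neural network, support vector machine, etc.).

```python
# Disaster cues the patent lists as recognizable in surrounding images.
DISASTER_CUES = {"traffic_light_power_cutoff", "collapsed_building",
                 "fractured_road_surface", "fallen_tree",
                 "flooded_road", "avalanche"}

def judge_disaster(recognized_labels):
    """Judge that a disaster has occurred in the surroundings of the
    vehicle when any disaster cue appears among the labels recognized
    in the camera images."""
    return bool(DISASTER_CUES & set(recognized_labels))

# Hypothetical recognition results from one surrounding image.
detected = judge_disaster(["car", "fallen_tree"])
```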
- the sensor 36 may include a gyro sensor etc., and the training part 51 may detect disasters by detecting earthquakes with the sensor 36 .
- FIG. 8 is a flow chart showing a control routine of training stop processing in the second embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the ECU 40 of the vehicle 3 ′.
- step S 301 the training part 51 judges whether it has detected a disaster in the surroundings of the vehicle 3 ′. If it is judged that a disaster has not been detected, the present control routine ends. On the other hand, if it is judged that a disaster has been detected, the control routine proceeds to step S 302 .
- step S 302 in the same way as step S 202 of FIG. 6 , the training part 51 stops the training related processing.
- the present control routine ends.
- the training part 51 for example, resumes the training related processing when a predetermined time has elapsed, when the vehicle 3 ′ is restarted, or when the driver of the vehicle 3 ′ instructs resumption of the training related processing through the HMI 31 .
- the control routine of FIG. 8 can be modified in the same way as the control routine of FIG. 6 .
- the configuration and control of the machine learning device according to the third embodiment are basically similar to the configuration and control of the machine learning device according to the first embodiment except for the points explained below. For this reason, below, the third embodiment of the present disclosure will be explained focusing on parts different from the first embodiment.
- FIG. 9 is a functional block diagram of an ECU 40 in the third embodiment.
- the ECU 40 has a position information acquiring part 52 in addition to the training part 51 .
- the training part 51 and the position information acquiring part 52 are functional blocks realized by the processor 43 of the ECU 40 running programs stored in the memory 42 of the ECU 40 .
- the position information acquiring part 52 acquires position information of the vehicle 3 .
- the position information acquiring part 52 acquires the current position of the vehicle 3 based on the output of the GPS receiver 32 .
- the position information of the vehicle 3 is periodically transmitted along with the identification information of the vehicle 3 (for example, the identification number) from the vehicle 3 to the server 2 and is stored in the storage device 22 of the server 2 .
- the training part 51 lowers the amount of electric power consumed in the training related processing when supply of electric power from the vehicle 3 is anticipated based on the disaster information and the position information of the vehicle 3 compared with when supply of electric power is not anticipated. By doing this, it is possible to decrease the amount of electric power consumed by the training related processing at a more suitable timing in preparation for supply of electric power at the time of a disaster.
- the server 2 receives disaster information from public institutions (Meteorological Agency, Ministry of Land, Infrastructure, Transport and Tourism, etc.), electric power companies, etc., and identifies a disaster area in which supply of electric power from the vehicle to the outside would be anticipated. Further, position information of a plurality of vehicles being driven is periodically sent to the server 2 and the server 2 compares the position information of the vehicles and disaster area to identify the vehicles inside the disaster area.
- the server 2 transmits a stop command for the training related processing to a vehicle in the disaster area. If the training part 51 of the ECU 40 receives a stop command for the training related processing from the server 2 , the training part 51 stops the training related processing, thereby lowering the amount of electric power consumed by the training related processing to zero.
- FIG. 10 is a flow chart showing the control routine of vehicle identification processing in the third embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the processor 24 of the server 2 .
- step S 401 the processor 24 judges whether it has received disaster information. If it is judged that the disaster information has not been received, the present control routine ends. On the other hand, if it is judged that the disaster information has been received, the control routine proceeds to step S 402 .
- the processor 24 identifies vehicles in the disaster area by comparing position information of the disaster area contained in the disaster information with position information of vehicles stored for every vehicle (current positions of vehicles).
- step S 403 the processor 24 transmits a stop command for training related processing to the vehicles identified at step S 402 .
- the present control routine ends.
- the disaster information may be input by the operator of the server 2 etc., to the server 2 and, at step S 401 , the processor 24 may judge whether disaster information has been input to the server 2 .
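The vehicle identification routine of FIG. 10 (steps S 401 to S 403) can be sketched in Python as follows. The patent does not specify how the disaster area or vehicle positions are encoded, so the bounding-box representation, the function names, and the data layout are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class DisasterArea:
    """Hypothetical rectangular disaster area given as a lat/lon bounding box."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def identify_vehicles_in_area(positions: dict[str, tuple[float, float]],
                              area: DisasterArea) -> list[str]:
    """Corresponds to step S 402: return the IDs of vehicles whose stored
    current position (as periodically transmitted to the server) lies
    inside the disaster area."""
    return [vid for vid, (lat, lon) in positions.items()
            if area.contains(lat, lon)]
```

The server would then, as at step S 403, transmit the stop command to each returned vehicle ID.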
- FIG. 11 is a flow chart showing the control routine of training stop processing in the third embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the ECU 40 of each vehicle 3 .
- step S 501 the training part 51 judges whether a stop command of the training related processing has been received from the server 2 . If it is judged that a stop command of the training related processing has not been received, the present control routine ends. On the other hand, if it is judged that a stop command of the training related processing has been received, the control routine proceeds to step S 502 .
- step S 502 in the same way as step S 202 of FIG. 6 , the training part 51 stops the training related processing. After step S 502 , the present control routine ends. In this case, the training part 51 , for example, resumes the training related processing when a predetermined time has elapsed, when the vehicle 3 is restarted, or when the driver of the vehicle 3 instructs resumption of the training related processing through the HMI 31 .
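The stop/resume behavior above can be sketched as a small state holder. The one-hour default, the method names, and the use of a caller-supplied clock are assumptions for illustration; the patent only names the three resume conditions:

```python
class TrainingController:
    """Minimal sketch of the ECU-side stop/resume logic (steps S 501-S 502):
    training related processing stops on a server command and resumes after a
    predetermined time, on vehicle restart, or on a driver request via the HMI."""

    def __init__(self, resume_after_s: float = 3600.0):
        self.training_active = True
        self.resume_after_s = resume_after_s  # illustrative default
        self._stopped_at: float | None = None

    def on_stop_command(self, now: float) -> None:
        """Step S 502: stop the training related processing."""
        self.training_active = False
        self._stopped_at = now

    def maybe_resume(self, now: float, vehicle_restarted: bool = False,
                     driver_requested: bool = False) -> None:
        """Resume when any of the three conditions named in the text holds."""
        if self.training_active or self._stopped_at is None:
            return
        timed_out = (now - self._stopped_at) >= self.resume_after_s
        if timed_out or vehicle_restarted or driver_requested:
            self.training_active = True
```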
- the training part 51 may stop only the training of the neural network model.
- the processor 24 of the server 2 may transmit a command to keep down the electric power instead of a training stop command to the vehicles in the disaster area.
- if the training part 51 receives a command to keep down the electric power from the server 2 , it lowers the amount of electric power consumed in the training related processing without stopping the training related processing.
- the training part 51 reduces the frequency of preparation of training data sets, reduces the frequency of training of the neural network model, or slows down the training speed of the neural network model.
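One way to realize the frequency-reduction variant above is a cycle counter that lets training related work run only every Nth cycle while in the low-power state. The reduction factor of 4 and all names here are illustrative assumptions, not values from the patent:

```python
class TrainingScheduler:
    """Sketch of frequency reduction: in low-power mode, training-related
    work (data set preparation or a training step) runs only every
    `reduction_factor`-th control cycle instead of every cycle."""

    def __init__(self, reduction_factor: int = 4):
        self.low_power = False          # set True on a keep-down command
        self.reduction_factor = reduction_factor
        self._cycle = 0

    def should_train_this_cycle(self) -> bool:
        self._cycle += 1
        if not self.low_power:
            return True                 # normal operation: run every cycle
        return self._cycle % self.reduction_factor == 0
```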
- the configuration and control of the machine learning device according to the fourth embodiment are basically similar to the configuration and control of the machine learning device according to the first embodiment except for the points explained below. For this reason, below, the fourth embodiment of the present disclosure will be explained focusing on parts different from the first embodiment.
- each vehicle 3 is sent the disaster information and the training part 51 of the ECU 40 acquires the disaster information. That is, the training part 51 receives disaster information from public institutions (Meteorological Agency, Ministry of Land, Infrastructure, Transport and Tourism, etc.), electric power companies, etc., and identifies the disaster area in which supply of electric power from the vehicle to the outside would be anticipated. Note that, the training part 51 uses the communication module 37 to receive disaster information from other vehicles by vehicle-to-vehicle communication or receives disaster information from road side devices by road-to-vehicle communication.
- the training part 51 lowers the amount of electric power consumed in the training related processing when the vehicle 3 is positioned inside a disaster area. Specifically, the training part 51 stops the training related processing when the vehicle 3 is positioned inside a disaster area.
- FIG. 12 is a flow chart showing a control routine of training stop processing in the fourth embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the ECU 40 of the vehicle 3 .
- step S 601 the training part 51 judges whether disaster information has been received. If it is judged that disaster information has not been received, the present control routine ends. On the other hand, if it is judged that disaster information has been received, the control routine proceeds to step S 602 .
- step S 602 the training part 51 judges whether the vehicle 3 is positioned inside a disaster area based on the position information of the disaster area contained in the disaster information and the current position of the vehicle 3 acquired by the position information acquiring part 52 . If it is judged that the vehicle 3 is not positioned in a disaster area, the present control routine ends. On the other hand, if it is judged that the vehicle 3 is positioned inside a disaster area, the control routine proceeds to step S 603 .
- step S 603 in the same way as step S 202 of FIG. 6 , the training part 51 stops the training related processing. After step S 603 , the present control routine ends. In this case, the training part 51 , for example, resumes the training related processing when a predetermined time has elapsed, when the vehicle 3 is restarted, or when the driver of the vehicle 3 instructs resumption of the training related processing through the HMI 31 .
- the training part 51 may stop only the training of the neural network model.
- the training part 51 may lower the amount of electric power consumed in the training related processing without stopping the training related processing.
- the training part 51 , for example, reduces the frequency of preparation of training data sets, reduces the frequency of training of the neural network model, or slows down the training speed of the neural network model.
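The in-area judgment of step S 602 can be sketched as a point-in-area test. Since the patent does not fix how a disaster area is represented, this sketch assumes a circular area (center plus radius) and checks containment with the haversine great-circle distance; every name here is illustrative:

```python
import math

def in_disaster_area(vehicle_lat: float, vehicle_lon: float,
                     center_lat: float, center_lon: float,
                     radius_km: float) -> bool:
    """Sketch of step S 602: judge whether the vehicle's current position
    lies within a disaster area modeled as a circle of `radius_km` around
    (center_lat, center_lon)."""
    r_earth = 6371.0  # mean Earth radius in km
    p1 = math.radians(vehicle_lat)
    p2 = math.radians(center_lat)
    dp = math.radians(center_lat - vehicle_lat)
    dl = math.radians(center_lon - vehicle_lon)
    # haversine formula for great-circle distance
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance_km = 2 * r_earth * math.asin(math.sqrt(a))
    return distance_km <= radius_km
```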
- the configuration and control of the machine learning device according to the fifth embodiment are basically similar to the configuration and control of the machine learning device according to the first embodiment except for the points explained below. For this reason, below, the fifth embodiment of the present disclosure will be explained focusing on parts different from the first embodiment.
- FIG. 13 is a functional block diagram of the ECU 40 in the fifth embodiment.
- the ECU 40 has an output device control part 53 in addition to the training part 51 and the position information acquiring part 52 .
- the training part 51 , the position information acquiring part 52 , and the output device control part 53 are functional blocks realized by the processor 43 of the ECU 40 running programs stored in the memory 42 of the ECU 40 .
- the output device control part 53 controls the output device provided at the vehicle 3 .
- the output device control part 53 controls the HMI 31 .
- the HMI 31 is one example of an output device.
- the training part 51 lowers the amount of electric power consumed in training related processing in preparation for supply of power to the outside of the vehicle 3 at the time of a disaster.
- a power outage does not always occur in a disaster area.
- in some cases, the driver may not want to supply power from the vehicle 3 to the outside at the time of a disaster.
- the output device control part 53 confirms permission to stop the training related processing with the driver of the vehicle 3 through the HMI 31 . Further, when the driver permits the stopping of the training related processing, the training part 51 stops the training related processing, while when the driver does not permit it, the training part 51 does not stop it. This makes it possible to stop the training related processing in accordance with the intent of the driver in preparation for supply of electric power at the time of a disaster.
- FIG. 14 is a flow chart showing the control routine of the training stop processing in the fifth embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the ECU 40 of the vehicle 3 .
- step S 701 the output device control part 53 judges whether disaster information has been received from the server 2 . If it is judged that disaster information has not been received from the server 2 , the present control routine ends. On the other hand, if it is judged that disaster information has been received from the server 2 , the control routine proceeds to step S 702 .
- the output device control part 53 confirms permission to stop the training related processing with the driver of the vehicle 3 through the HMI 31 .
- the output device control part 53 confirms permission with the driver by text or voice through the HMI 31 .
- step S 703 the training part 51 judges whether the driver has permitted stopping of the training related processing based on input to the HMI 31 by the driver. If it is judged that the driver has not permitted stopping of the training related processing, the present control routine ends. On the other hand, if it is judged that the driver permitted stopping of the training related processing, the control routine proceeds to step S 704 .
- step S 704 in the same way as step S 202 of FIG. 6 , the training part 51 stops the training related processing.
- the present control routine ends.
- the training part 51 resumes the training related processing when a predetermined time has elapsed, when the vehicle 3 is restarted, when the driver of the vehicle 3 instructs resumption of the training related processing through the HMI 31 , or when the server 2 instructs resumption of the training related processing.
- the control routine of FIG. 14 can be modified in the same way as the control routine of FIG. 6 .
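The FIG. 14 routine (steps S 701 to S 704) can be sketched as a short decision function. The callback standing in for the HMI prompt and the returned status strings are illustrative assumptions; the patent only describes the text-or-voice confirmation itself:

```python
from typing import Callable

def training_stop_routine(disaster_info_received: bool,
                          ask_driver_permission: Callable[[], bool]) -> str:
    """Sketch of the FIG. 14 control routine. `ask_driver_permission` stands
    in for the HMI 31 prompt (text or voice) to the driver."""
    if not disaster_info_received:       # S 701: no disaster information
        return "no_disaster_info"
    if not ask_driver_permission():      # S 702-S 703: confirm with driver
        return "driver_refused"
    return "training_stopped"            # S 704: stop as at step S 202 of FIG. 6
```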
- the configuration and control of the machine learning device according to the sixth embodiment are basically similar to the configuration and control of the machine learning device according to the third embodiment except for the points explained below. For this reason, below, the sixth embodiment of the present disclosure will be explained focusing on parts different from the third embodiment.
- the training part 51 lowers the amount of electric power consumed in training related processing when supply of electric power from the vehicle 3 to the outside is anticipated based on the disaster information and destination of the vehicle 3 compared to when supply of electric power is not anticipated. By doing this, it is possible to keep down the electric power consumed in a suitable vehicle considering the destination.
- the destination of the vehicle 3 is input by the driver of the vehicle 3 and is stored in, for example, the navigation system 34 etc.
- the position information acquiring part 52 acquires the stored destination of the vehicle 3 .
- the destination of the vehicle 3 is transmitted together with the identification information of the vehicle 3 (for example, the identification number) from the vehicle 3 to the server 2 and is stored in the storage device 22 of the server 2 .
- FIG. 15 is a flow chart showing the control routine of the vehicle identification processing in the sixth embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the processor 24 of the server 2 .
- step S 801 the processor 24 judges whether it has received disaster information. When it is judged that disaster information has not been received, the present control routine ends. On the other hand, when it is judged that disaster information has been received, the control routine proceeds to step S 802 .
- the processor 24 compares the position information of the disaster area included in the disaster information with the destination of the vehicle stored for each vehicle to identify a vehicle with the disaster area as its destination.
- step S 803 the processor 24 transmits a stop command for the training related processing to the vehicle identified at step S 802 .
- the present control routine ends.
- the disaster information may be input by the operator of the server 2 etc., to the server 2 and, at step S 801 , the processor 24 may judge whether disaster information has been input to the server 2 . Further, at step S 803 , the processor 24 may transmit a command to keep down the electric power instead of a command to stop training to a vehicle having the disaster area as its destination. Further, the current location and destination of each vehicle may be periodically transmitted to the server 2 and the processor 24 may transmit a stop command of the training related processing or a command to keep down the electric power to a vehicle inside the disaster area and a vehicle having the disaster area as its destination.
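The destination comparison of step S 802 can be sketched as follows. For illustration only, this assumes destinations and disaster areas are both identified by region names; the patent itself compares position information, and all names here are hypothetical:

```python
def vehicles_bound_for_area(destinations: dict[str, str],
                            disaster_regions: set[str]) -> list[str]:
    """Sketch of step S 802: compare each vehicle's stored destination with
    the disaster area and return the IDs of vehicles heading into it."""
    return [vid for vid, dest in destinations.items()
            if dest in disaster_regions]
```

As at step S 803, the server would then transmit the stop command (or a keep-down command) to the returned vehicles.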
- the configuration and control of the machine learning device according to the seventh embodiment are basically similar to the configuration and control of the machine learning device according to the first embodiment except for the points explained below. For this reason, below, the seventh embodiment of the present disclosure will be explained focusing on parts different from the first embodiment.
- the neural network model is trained in the server 2 instead of the ECU 40 of the vehicle 3 . That is, the server 2 functions as a machine learning device.
- the training data sets used for training the neural network model are prepared in a plurality of vehicles and are transmitted from the plurality of vehicles to the server 2 .
- the processor 24 of the server 2 uses the large number of training data sets to train the neural network model and transmits the trained neural network model through the communication interface 21 to the vehicles.
- electric power is consumed when receiving the trained neural network model from the server 2 and storing it.
- if the processor 24 of the server 2 acquires disaster information, it stops transmitting the trained neural network model to the vehicles 3 . Due to this, at the vehicles 3 , it is possible to keep down the decrease of the amount of electric power which can be supplied from a vehicle 3 to the outside at the time of a disaster.
- FIG. 16 is a flow chart showing a control routine of model transmission stop processing in the seventh embodiment of the present disclosure.
- the present control routine is repeatedly performed at predetermined intervals by the processor 24 of the server 2 .
- step S 901 the processor 24 judges whether disaster information has been received. If it is judged that disaster information has not been received, the present control routine ends. On the other hand, if it is judged that disaster information has been received, the control routine proceeds to step S 902 .
- step S 902 the processor 24 stops transmitting the trained neural network model to the vehicles 3 .
- the present control routine ends.
- the processor 24 , for example, resumes transmission of the trained neural network model when a predetermined time has elapsed or when the state of disaster has ended.
- disaster information may be input by the operator of the server 2 etc., to the server 2 and, at step S 901 , the processor 24 may judge whether the disaster information has been input to the server 2 .
- step S 402 of FIG. 10 may be performed between steps S 901 and S 902 , and, at step S 902 , the processor 24 may stop transmitting the trained neural network model to vehicles identified at step S 402 .
- step S 802 of FIG. 15 may be performed between steps S 901 and S 902 and, at step S 902 , the processor 24 may stop transmitting the trained neural network model to vehicles identified at step S 802 .
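The server-side behavior of FIG. 16 can be sketched as a distributor that withholds the trained model while a disaster state is active, so that vehicles do not consume power receiving and storing it. The class and method names are illustrative assumptions:

```python
class ModelDistributor:
    """Sketch of the FIG. 16 model transmission stop processing
    (steps S 901-S 902) plus the resume condition described in the text."""

    def __init__(self) -> None:
        self.disaster_active = False
        self.sent_log: list[str] = []

    def on_disaster_info(self) -> None:
        """S 901 -> S 902: disaster information received, stop transmission."""
        self.disaster_active = True

    def on_disaster_cleared(self) -> None:
        """Resume transmission when the state of disaster has ended."""
        self.disaster_active = False

    def transmit_model(self, vehicle_id: str) -> bool:
        """Attempt to send the trained model; returns False while withheld."""
        if self.disaster_active:
            return False
        self.sent_log.append(vehicle_id)
        return True
```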
- control may be performed for stopping the training related processing at each vehicle 3 or control may be performed to lower the amount of electric power consumed in training related processing at each vehicle 3 .
- when stopping the training related processing, each training part 51 stops preparing training data sets and stops transmitting training data sets to the server 2 , while when lowering the amount of electric power consumed by the training related processing, it reduces the frequency of preparation of training data sets or reduces the frequency of transmission of training data sets to the server 2 .
- as the input parameters and output parameters of the neural network model, it is possible to use various parameters corresponding to what is covered by the neural network model (internal combustion engine, motor, battery, etc.)
- the sensors for detecting measured values of the input parameters or output parameters are selected in accordance with the types of the input parameters and output parameters.
- a random forest, k-nearest neighbor algorithm, support vector machine, or other machine learning model other than a neural network may be used.
- steps S 702 to S 704 of FIG. 14 are performed.
- at step S 602 of the control routine of the training stop processing of FIG. 12 , the training part 51 may judge whether the destination of the vehicle 3 is a disaster area.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Traffic Control Systems (AREA)
- Electric Propulsion And Braking For Vehicles (AREA)
Abstract
Description
- [PTL 1] Japanese Unexamined Patent Publication No. 2019-183698
- 2 server
- 21 communication interface
- 24 processor
- 3 vehicle
- 40 ECU
- 51 training part
Claims (7)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020-105302 | 2020-06-18 | ||
| JP2020105302 | 2020-06-18 | ||
| JP2021030421A JP7298633B2 (en) | 2020-06-18 | 2021-02-26 | machine learning device |
| JP2021-030421 | 2021-02-26 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210397477A1 US20210397477A1 (en) | 2021-12-23 |
| US12141607B2 true US12141607B2 (en) | 2024-11-12 |
Family
ID=78912508
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/348,768 Active 2043-05-25 US12141607B2 (en) | 2020-06-18 | 2021-06-16 | Machine learning device |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US12141607B2 (en) |
| CN (1) | CN113821101B (en) |
Citations (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004268529A (en) | 2003-03-11 | 2004-09-30 | Ricoh Co Ltd | Printing control method |
| JP2009280139A (en) | 2008-05-23 | 2009-12-03 | Denso Corp | Onboard device and program |
| WO2013154093A1 (en) | 2012-04-09 | 2013-10-17 | 日産自動車株式会社 | Control device for hybrid vehicle, management system for hybrid vehicle, and management method for hybrid vehicle |
| US8909950B1 (en) * | 2010-04-18 | 2014-12-09 | Aptima, Inc. | Systems and methods of power management |
| US20160328661A1 (en) * | 2015-05-05 | 2016-11-10 | Retailmenot, Inc. | Scalable complex event processing with probabilistic machine learning models to predict subsequent geolocations |
| JP2017073915A (en) | 2015-10-08 | 2017-04-13 | トヨタ自動車株式会社 | vehicle |
| US20180367484A1 (en) * | 2017-06-15 | 2018-12-20 | Google Inc. | Suggested items for use with embedded applications in chat conversations |
| WO2019097357A1 (en) | 2017-11-16 | 2019-05-23 | 株式会社半導体エネルギー研究所 | Secondary battery life estimation device, life estimation method, and abnormality detection method |
| JP2019161687A (en) | 2018-03-07 | 2019-09-19 | トヨタ自動車株式会社 | vehicle |
| US20190311262A1 (en) | 2018-04-05 | 2019-10-10 | Toyota Jidosha Kabushiki Kaisha | Machine learning device, machine learning method, electronic control unit and method of production of same, learned model, and machine learning system |
| US20200125042A1 (en) | 2018-10-23 | 2020-04-23 | Toyota Jidosha Kabushiki Kaisha | Control support device, apparatus control device, control support method, recording medium, learned model for causing computer to function, and method of generating learned model |
| US20200130669A1 (en) | 2018-10-31 | 2020-04-30 | Toyota Jidosha Kabushiki Kaisha | Communication control device, communication system, method of controlling communication |
| US20200132011A1 (en) | 2018-10-25 | 2020-04-30 | Toyota Jidosha Kabushiki Kaisha | Control support device, vehicle, control support method, recording medium, learned model for causing computer to function, and method of generating learned model |
| US20200142409A1 (en) * | 2018-11-02 | 2020-05-07 | Aurora Innovation, Inc. | Generating Testing Instances for Autonomous Vehicles |
| US20200364471A1 (en) * | 2019-05-14 | 2020-11-19 | Samsung Electronics Co., Ltd. | Electronic device and method for assisting with driving of vehicle |
| US20210097335A1 (en) * | 2019-09-30 | 2021-04-01 | International Business Machines Corporation | Multiclassification approach for enhancing natural language classifiers |
| US20210103458A1 (en) * | 2019-10-08 | 2021-04-08 | Microsoft Technology Licensing, Llc | Machine learning-based power capping and virtual machine placement in cloud platforms |
| US20220250498A1 (en) * | 2019-05-14 | 2022-08-11 | Tsubakimoto Chain Co. | Charge-discharge apparatus, charge-discharge control method, and computer readable medium |
| US20220292368A1 (en) * | 2019-08-29 | 2022-09-15 | Nippon Telegraph And Telephone Corporation | Classification device, learning device, classification method, learning method, classification program and learning program |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4685459B2 (en) * | 2005-01-21 | 2011-05-18 | パイオニア株式会社 | Processing control apparatus, method thereof, program thereof, and recording medium recording the program |
| JP2013021781A (en) * | 2011-07-08 | 2013-01-31 | Toyota Motor Corp | Power supply system |
| JP5907905B2 (en) * | 2013-02-05 | 2016-04-26 | 三菱電機株式会社 | Earthquake detection device, server device, and earthquake detection method |
| JP5956662B1 (en) * | 2015-07-31 | 2016-07-27 | ファナック株式会社 | Motor control device for adjusting power regeneration, control device for forward converter, machine learning device and method thereof |
| US10496936B2 (en) * | 2018-03-05 | 2019-12-03 | Capital One Services, Llc | Systems and methods for preventing machine learning models from negatively affecting mobile devices through intermittent throttling |
| JP7251116B2 (en) * | 2018-11-21 | 2023-04-04 | トヨタ自動車株式会社 | VEHICLE, VEHICLE CONTROL METHOD, AND VEHICLE CONTROL PROGRAM |
| KR20190075017A (en) * | 2019-06-10 | 2019-06-28 | 엘지전자 주식회사 | vehicle device equipped with artificial intelligence, methods for collecting learning data and system for improving the performance of artificial intelligence |
-
2021
- 2021-06-07 CN CN202110633484.3A patent/CN113821101B/en active Active
- 2021-06-16 US US17/348,768 patent/US12141607B2/en active Active
Patent Citations (24)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004268529A (en) | 2003-03-11 | 2004-09-30 | Ricoh Co Ltd | Printing control method |
| JP2009280139A (en) | 2008-05-23 | 2009-12-03 | Denso Corp | Onboard device and program |
| US8909950B1 (en) * | 2010-04-18 | 2014-12-09 | Aptima, Inc. | Systems and methods of power management |
| WO2013154093A1 (en) | 2012-04-09 | 2013-10-17 | 日産自動車株式会社 | Control device for hybrid vehicle, management system for hybrid vehicle, and management method for hybrid vehicle |
| US20160328661A1 (en) * | 2015-05-05 | 2016-11-10 | Retailmenot, Inc. | Scalable complex event processing with probabilistic machine learning models to predict subsequent geolocations |
| JP2017073915A (en) | 2015-10-08 | 2017-04-13 | トヨタ自動車株式会社 | vehicle |
| US20180367484A1 (en) * | 2017-06-15 | 2018-12-20 | Google Inc. | Suggested items for use with embedded applications in chat conversations |
| WO2019097357A1 (en) | 2017-11-16 | 2019-05-23 | 株式会社半導体エネルギー研究所 | Secondary battery life estimation device, life estimation method, and abnormality detection method |
| US20210190877A1 (en) | 2017-11-16 | 2021-06-24 | Semiconductor Energy Laboratory Co., Ltd. | Lifetime estimation device, lifetime estimation method, and abnormality detection method of secondary battery |
| JP2019161687A (en) | 2018-03-07 | 2019-09-19 | トヨタ自動車株式会社 | vehicle |
| US20190311262A1 (en) | 2018-04-05 | 2019-10-10 | Toyota Jidosha Kabushiki Kaisha | Machine learning device, machine learning method, electronic control unit and method of production of same, learned model, and machine learning system |
| JP2019183698A (en) | 2018-04-05 | 2019-10-24 | トヨタ自動車株式会社 | On-vehicle electronic control unit |
| US20200125042A1 (en) | 2018-10-23 | 2020-04-23 | Toyota Jidosha Kabushiki Kaisha | Control support device, apparatus control device, control support method, recording medium, learned model for causing computer to function, and method of generating learned model |
| JP2020067762A (en) | 2018-10-23 | 2020-04-30 | トヨタ自動車株式会社 | Control assisting device, apparatus controller, control assisting method, control assisting program, prelearned model for making computer function, and method for generating prelearned model |
| US20200132011A1 (en) | 2018-10-25 | 2020-04-30 | Toyota Jidosha Kabushiki Kaisha | Control support device, vehicle, control support method, recording medium, learned model for causing computer to function, and method of generating learned model |
| JP2020067911A (en) | 2018-10-25 | 2020-04-30 | トヨタ自動車株式会社 | Control assistance device, vehicle, and control assistance system |
| US20200130669A1 (en) | 2018-10-31 | 2020-04-30 | Toyota Jidosha Kabushiki Kaisha | Communication control device, communication system, method of controlling communication |
| CN111114466A (en) | 2018-10-31 | 2020-05-08 | 丰田自动车株式会社 | Communication control device, communication system, method performed by communication control device |
| US20200142409A1 (en) * | 2018-11-02 | 2020-05-07 | Aurora Innovation, Inc. | Generating Testing Instances for Autonomous Vehicles |
| US20200364471A1 (en) * | 2019-05-14 | 2020-11-19 | Samsung Electronics Co., Ltd. | Electronic device and method for assisting with driving of vehicle |
| US20220250498A1 (en) * | 2019-05-14 | 2022-08-11 | Tsubakimoto Chain Co. | Charge-discharge apparatus, charge-discharge control method, and computer readable medium |
| US20220292368A1 (en) * | 2019-08-29 | 2022-09-15 | Nippon Telegraph And Telephone Corporation | Classification device, learning device, classification method, learning method, classification program and learning program |
| US20210097335A1 (en) * | 2019-09-30 | 2021-04-01 | International Business Machines Corporation | Multiclassification approach for enhancing natural language classifiers |
| US20210103458A1 (en) * | 2019-10-08 | 2021-04-08 | Microsoft Technology Licensing, Llc | Machine learning-based power capping and virtual machine placement in cloud platforms |
Non-Patent Citations (4)
| Title |
|---|
| Krueger et al. Multi-Layer Event-Based Vehicle-to-Grid (V2G) Scheduling With Short Term Predictive Capability Within a Modular Aggregator Control Structure. [online] (May 5). IEEE., pp. 4727-4739. Retrieved From the Internet (Year: 2020). * |
| Kumaravelan, Priyadharshini Modeling Power Grid Recovery and Resilience Post Extreme Weather Events. [online] (Dec. 2019). Texas State University., pp. 1-98. Retrieved From the Internet (Year: 2019). * |
| Lopez, Karol Lina. A Machine Learning Approach for the Smart Charging of Electric Vehicles. [online]. Universite Laval., pp. 1-119. Retrieved From the Internet <https://dam-oclc.bac-lac.gc.ca/download?is_thesis=1&oclc_number=1132194635&id=fa16dd65-15ef-4384-b9f2-499014e5af2f&fileName=34811.pdf> (Year: 2019). * |
| Mroczek et al. The V2G Process With the Predictive Model. [online] (Apr. 25). IEEE., pp. 86947-86956. Retrieved From the Internet (Year: 2020). * |
Also Published As
| Publication number | Publication date |
|---|---|
| US20210397477A1 (en) | 2021-12-23 |
| CN113821101B (en) | 2024-05-14 |
| CN113821101A (en) | 2021-12-21 |
Similar Documents
| Publication | Title |
|---|---|
| CN114911243B (en) | Control method, device, equipment and vehicle for vehicle-road cooperative automatic driving |
| US11501640B2 (en) | Processing device, processing method, and processing program |
| CN112740134B (en) | Electronic device, vehicle control method of electronic device, server, and method of providing accurate map data of server |
| US20230106791A1 (en) | Control device for vehicle and automatic driving system |
| CN112050792B (en) | Image positioning method and device |
| EP3957091A1 (en) | Vehicle sensor data acquisition and distribution |
| JP2019145077A (en) | System for building vehicle-to-cloud real-time traffic map for autonomous driving vehicle (ADV) |
| KR20190141081A (en) | A V2X communication-based vehicle lane system for autonomous vehicles |
| US11408739B2 (en) | Location correction utilizing vehicle communication networks |
| WO2016121572A1 (en) | Parking lot and destination association |
| US11643031B2 (en) | Allergen map created by vehicle network |
| US10809722B2 (en) | Navigation system with route prediction mechanism and method of operation thereof |
| CN114537141B (en) | Method, device, apparatus and medium for controlling a vehicle |
| US11847385B2 (en) | Variable system for simulating operation of autonomous vehicles |
| CN115100377B (en) | Map construction method, device, vehicle, readable storage medium and chip |
| US11288373B2 (en) | Boot failure recovery scheme for hardware-based system of autonomous driving vehicles |
| Bai et al. | Cyber mobility mirror for enabling cooperative driving automation in mixed traffic: A co-simulation platform |
| JP7298633B2 (en) | Machine learning device |
| US12141607B2 (en) | Machine learning device |
| JP7380633B2 (en) | Monitoring device, monitoring method and monitoring system |
| CN112731912B (en) | System and method for enhancing early detection of performance-induced risk of autonomous driving vehicles |
| CN111857117A (en) | GPS message decoder for decoding GPS messages during autonomous driving |
| CN113432616A (en) | Map making and/or processing method, map storage device and path planning method |
| US20220207209A1 (en) | Deterministic sampling of autonomous vehicle simulation variables at runtime |
| JP7552625B2 (en) | Vehicle communication control device, vehicle communication control method, and computer program for vehicle communication control |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: TOYOTA JIDOSHA KABUSHIKI KAISHA, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: YOKOYAMA, DAIKI; ASAHARA, NORIMI; REEL/FRAME: 056554/0964. Effective date: 20210510 |
| FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| ZAAB | Notice of allowance mailed | ORIGINAL CODE: MN/=. |
| STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | PATENTED CASE |