CN111102043B - Control assistance device and method, vehicle, recording medium, learned model for causing computer to function, and generation method - Google Patents


Info

Publication number
CN111102043B
CN111102043B (application CN201911010548.3A)
Authority
CN
China
Prior art keywords
vehicle
learned model
control
unit
learned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911010548.3A
Other languages
Chinese (zh)
Other versions
CN111102043A (en)
Inventor
北川荣来
横山大树
永坂圭介
Current Assignee
Toyota Motor Corp
Original Assignee
Toyota Motor Corp
Priority date
Filing date
Publication date
Application filed by Toyota Motor Corp
Publication of CN111102043A (application publication)
Application granted
Publication of CN111102043B (granted publication)

Classifications

    • F01N3/2006 Periodically heating or cooling catalytic reactors, e.g. at cold starting or overheating
    • F01N9/005 Electrical control of exhaust gas treating apparatus using models instead of sensors to determine operating characteristics, e.g. calculating catalyst temperature instead of measuring it directly
    • F01N2900/0402 Methods of control or diagnosing using adaptive learning
    • F01N2900/102 Travelling distance
    • F01N2900/1624 Catalyst oxygen storage capacity
    • F02D29/02 Controlling engines peculiar to the devices driven thereby, peculiar to engines driving vehicles
    • F02D41/0235 Introducing corrections for conditions exterior to the engine in relation with the state of the exhaust gas treating apparatus
    • F02D2041/0265 Corrections to decrease the temperature of the exhaust gas treating apparatus
    • F02D41/1401 Introducing closed-loop corrections characterised by the control or regulation method
    • F02D41/1405 Neural network control
    • F02D41/24 Electrical control of supply of combustible mixture characterised by the use of digital means
    • F02D41/2441 Methods of calibrating or learning characterised by the learning conditions
    • F02D41/2451 Methods of calibrating or learning characterised by what is learned or calibrated
    • F02D41/2461 Learning of the air-fuel ratio control by learning a value and then controlling another value
    • F02D41/2474 Characteristics of sensors
    • F02D41/26 Electrical control using a computer, e.g. microprocessor
    • F02D41/30 Controlling fuel injection
    • F02D2200/0814 Oxygen storage amount
    • F02D2200/501 Vehicle speed
    • F02D2200/70 Input parameters for engine control related to the vehicle exterior
    • B60W10/06 Conjoint control of vehicle sub-units including control of combustion engines
    • B60W10/26 Conjoint control of vehicle sub-units including control of energy storage means for electrical energy, e.g. batteries or capacitors
    • B60W20/00 Control systems specially adapted for hybrid vehicles
    • B60W2520/10 Longitudinal speed
    • B60W2520/105 Longitudinal acceleration
    • B60W2710/244 Charge state of energy storage means for electrical energy
    • G06N3/08 Neural network learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N5/01 Dynamic search techniques; heuristics; dynamic trees; branch-and-bound
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G07C5/008 Registering or indicating the working of vehicles, communicating information to a remotely located station
    • H04W4/38 Services for collecting sensor information
    • H04W4/44 Communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • H04W4/46 Vehicle-to-vehicle communication [V2V]

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Transportation (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Toxicology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Automation & Control Theory (AREA)
  • Computer Hardware Design (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Combined Controls Of Internal Combustion Engines (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a control assistance device and method, a vehicle, a recording medium, a learned model for causing a computer to function, and a generation method. Assistance is provided so that even a vehicle not equipped with a machine learning device can execute control substantially equivalent to control using a learned model obtained by machine learning. The control assistance device includes: a data acquisition unit that acquires sensor information on the internal or external state of a supply-side vehicle that supplies parameters for machine learning; a learning unit that generates a learned model by performing machine learning using an input/output data set, consisting of the acquired sensor information, that includes the model's input and output parameters; and a transmission unit that transmits at least one of the learned model and an output parameter obtained by inputting, as an input parameter, sensor information of the vehicle whose control is assisted into the generated learned model.

Description

Control assistance device and method, vehicle, recording medium, learned model for causing computer to function, and generation method
Technical Field
The present invention relates to a control assistance device, a vehicle, a control assistance method, a recording medium, a learned model for causing a computer to function, and a method for generating the learned model.
Background
A known technique controls an internal combustion engine using a learned model obtained by machine learning with a neural network (see, for example, Patent Document 1). In this technique, the flow rate of gas in a predetermined passage of the internal combustion engine is estimated using the learned model, and the engine is controlled based on the estimate.
Documents of the prior art
Patent document
Patent Document 1: Japanese Patent Laid-Open Publication No. 2012-112277
Disclosure of Invention
Problems to be solved by the invention
However, machine learning based on a neural network requires both a sufficient amount of data and substantial computational power.
In particular, when a learned model obtained by machine learning is applied to vehicle control, the number of parameters grows and so does the amount of computation. A machine learning device that can learn over so many parameters with such a computational load is very expensive, so the vehicles on which it can be mounted are limited, and a performance gap arises between vehicles with and without the machine learning device. A technique is therefore desired that allows even a vehicle without a machine learning device to perform the same control as one using a learned model obtained by machine learning, thereby reducing this performance gap.
The present invention has been made in view of the above, and an object thereof is to provide a control assistance device, a vehicle, a control assistance method, a recording medium, a learned model for causing a computer to function, and a method for generating the learned model, each of which can assist a vehicle not equipped with a machine learning device in performing substantially the same control as control that uses a learned model obtained by machine learning.
Means for solving the problems
In order to solve the above problems and achieve the above object, a control assistance device according to an aspect of the present invention assists control of a vehicle using a learned model obtained by machine learning, and includes: a data acquisition unit that acquires sensor information on the internal or external state of a supply-side vehicle that supplies parameters for machine learning; a learning unit that generates a learned model by performing machine learning using an input/output data set, consisting of the acquired sensor information, that includes the model's input and output parameters; and a transmission unit that transmits at least one of the generated learned model and an output parameter calculated by inputting, as an input parameter, sensor information of the vehicle whose control is assisted into the generated learned model.
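The flow of this aspect (acquire an input/output data set, generate a learned model, transmit the model or its output) can be sketched as follows. The linear model, the gradient-descent training rule, and all names below are illustrative assumptions; the patent does not prescribe a concrete model architecture.

```python
# Minimal sketch of the learning unit: fit a model to an input/output
# data set collected from a supply-side vehicle. The linear model and
# every name here are illustrative, not taken from the patent.

def train_learned_model(dataset, lr=0.1, epochs=500):
    """dataset: list of (input_parameter, output_parameter) pairs."""
    w, b = 0.0, 0.0
    n = len(dataset)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in dataset:
            err = (w * x + b) - y      # prediction error
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w               # gradient-descent update
        b -= lr * grad_b
    return {"w": w, "b": b}

def predict(model, input_parameter):
    return model["w"] * input_parameter + model["b"]

# Sensor information as (input, output) pairs, e.g. an engine state
# mapped to a quantity the vehicle wants to estimate (made-up data).
io_dataset = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
model = train_learned_model(io_dataset)
# The transmission unit could now send either `model` itself or
# predict(model, x) for a sensor value x of the assisted vehicle.
```

Sending `predict(model, x)` rather than `model` is what lets a vehicle without any inference hardware benefit: it only ever receives a number, never the model.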
A control assistance device according to an aspect of the present invention assists control of a vehicle using a learned model obtained by machine learning, and includes: a data acquisition unit that acquires sensor information on the internal or external state of each of a plurality of supply-side vehicles that supply parameters for machine learning; a learning unit that generates a plurality of learned models by performing machine learning using input/output data sets, consisting of the sensor information acquired from the plurality of supply-side vehicles, that include the models' input and output parameters; a selection unit that selects, from the plurality of learned models, a learned model to be transmitted to the vehicle whose control is assisted; and a transmission unit that transmits the selected learned model to the vehicle.
In this configuration, the selection unit selects, from the plurality of learned models generated by the learning unit through machine learning, a learned model to be transmitted to the vehicle whose control is assisted. By optimizing this selection, a learned model suited to controlling the assisted vehicle can be transmitted to it.
A control assistance device according to an aspect of the present invention assists control of a vehicle using a learned model obtained by machine learning, and includes: a data acquisition unit that acquires sensor information on the internal or external state of each of a plurality of supply-side vehicles that supply parameters for machine learning; a learning unit that generates a plurality of learned models by performing machine learning using input/output data sets, consisting of the sensor information acquired from the plurality of supply-side vehicles, that include the models' input and output parameters; a selection unit that selects a learned model from the plurality of learned models; a prediction unit that calculates an output parameter by inputting, as an input parameter, sensor information of the vehicle whose control is assisted into the selected learned model; and a transmission unit that transmits the calculated output parameter to the vehicle.
In this configuration, a learned model is selected from the plurality of learned models generated by the learning unit, the vehicle's sensor information is input to that model as input parameters, and the resulting output parameters are transmitted to the vehicle. The vehicle can therefore be controlled using output parameters obtained from a machine-learned model even when it is not equipped with a device that can compute such output parameters itself.
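The selection/prediction/transmission flow of this aspect can be sketched as follows; the callable models and the trivial selection rule are illustrative assumptions, not taken from the patent.

```python
# Sketch of the selection, prediction, and transmission flow for a
# vehicle that cannot run a learned model itself. The models are plain
# callables and the selection rule is a placeholder; all names here
# are illustrative.

def assist_vehicle(learned_models, select, vehicle_sensor_value):
    """Pick one learned model, run inference on the assisted vehicle's
    sensor value, and return the output parameter that the transmission
    unit would send to the vehicle."""
    model = select(learned_models)
    return model(vehicle_sensor_value)

# Two hypothetical learned models (e.g. trained on different fleets).
models = [lambda x: 2 * x, lambda x: 3 * x + 1]

# Placeholder selection rule: take the first model.
output_parameter = assist_vehicle(models, lambda ms: ms[0], 5.0)
# Only `output_parameter` crosses the network; the model stays put.
```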
In the control assistance device according to one aspect of the present invention, the data acquisition unit may further acquire travel history information of the supply-side vehicle associated with the sensor information, the learning unit may associate each generated learned model with that travel history information, and the selection unit may select, from the plurality of generated learned models, the learned model associated with the travel history information that most closely matches the travel history information of the vehicle.
In this configuration, the learning unit generates each learned model from sensor information associated with travel history information and associates the model with that history. The plurality of learned models can thus be classified by travel history, and the selection unit can select a learned model suited to the assisted vehicle based on that vehicle's travel history.
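One way to realize the "degree of matching" between travel histories is a set-overlap score over travelled road segments. This concrete metric and all names below are illustrative assumptions; the patent does not define a matching measure.

```python
# Sketch of the selection unit: choose, from several learned models,
# the one whose associated travel history best matches the assisted
# vehicle's history. The Jaccard-style overlap is an illustrative
# stand-in for the patent's unspecified "degree of matching".

def matching_degree(history_a, history_b):
    """Overlap of travelled road segments (intersection over union)."""
    a, b = set(history_a), set(history_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def select_model(models_with_history, vehicle_history):
    """models_with_history: list of (model_id, travel_history)."""
    return max(
        models_with_history,
        key=lambda mh: matching_degree(mh[1], vehicle_history),
    )[0]

models_with_history = [
    ("model_urban",   ["segment_a", "segment_b", "segment_c"]),
    ("model_highway", ["segment_x", "segment_y"]),
]
chosen = select_model(models_with_history, ["segment_a", "segment_c"])
# → "model_urban": its training history overlaps the vehicle's routes.
```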
A control assistance device according to an aspect of the present invention is mounted on a supply-side vehicle that acquires parameters used for machine learning, can communicate with a server capable of storing a learned model obtained by machine learning, and assists control of a vehicle using the learned model. The control assistance device includes: a data acquisition unit that acquires sensor information on the internal or external state of the supply-side vehicle; a learning unit that generates a learned model by performing machine learning using an input/output data set, consisting of the acquired sensor information, that includes the model's input and output parameters; and a transmission unit that transmits the generated learned model to the server.
In this way, the learned model generated by the control assistance device can be stored in the external server, so that the capacity required for the storage unit in the control assistance device can be reduced, and the processing capacity for machine learning in the server can be reduced because it is not necessary to newly generate a learned model in the server.
In the control assistance device according to an aspect of the present invention, in the above-described invention, the server may transmit, to the vehicle, an output parameter obtained by inputting, as an input parameter, sensor information of the vehicle that assists the control to the learned model found from the plurality of learned models received from the plurality of control assistance devices, or generated from the plurality of learned models received from the plurality of control assistance devices.
In this way, since the output parameter can be calculated by the server based on the sensor information from the vehicle whose control is being assisted, even when the vehicle is not equipped with a device that calculates the output parameter based on the learned model, the output parameter calculated by the learned model can be used for controlling the vehicle.
In the above-described invention, the control support device according to an aspect of the present invention may be configured such that the server includes: a server selection unit that selects a learned model from a plurality of learned models obtained by accumulating, merging, or updating a plurality of learned models received from the plurality of control support apparatuses, respectively; a server prediction unit that calculates an output parameter obtained by inputting sensor information of a vehicle for support control as an input parameter to the selected learned model; and a server communication unit that transmits the output parameter to the vehicle. In the control assistance device according to one aspect of the present invention, in the configuration, the server may further receive travel history information associated with the learned model in the supply-side vehicle and travel history information in the vehicle, and the server selection unit may select a learned model associated with travel history information having a high degree of matching with the travel history information of the vehicle from among a plurality of learned models obtained by the accumulation, merger, or update.
In this way, the plurality of learned models acquired from the plurality of control support apparatuses can be accumulated, merged, or updated, so the learned models can be optimized. Further, by transmitting to the vehicle whose control is supported the output parameters calculated by inputting that vehicle's input parameters into the learned model, the vehicle can be controlled using output parameters calculated by a machine-learned model even when the vehicle is not equipped with a device that calculates output parameters from a learned model. Further, the server selection unit can select the learned model based on the travel history of the vehicle whose control is assisted, and can therefore select a learned model suited to that vehicle from among the optimized learned models.
In the above-described invention, the control support device according to an aspect of the present invention may further include a charging processing unit that executes a process of calculating a price for supporting the control for the user of the vehicle, a process of calculating a reward for providing the sensor information to the user of the supply-side vehicle, and a process of outputting a result of the calculation.
Thus, the user of the supply-side vehicle can receive a reward for the transmission of the sensor information, which lowers the user's reluctance to transmit information; the administrator of the control support apparatus can therefore collect a large amount of information and optimize the learned models and output parameters to be provided. Further, the user of the vehicle whose control is assisted can use learned models generated from the various information transmitted from the supply-side vehicles, and the manager of the control assistance device can secure the reward paid to the users of the supply-side vehicles. The provision of services using the control support apparatus therefore spreads easily.
In the above-described invention, the control support apparatus pertaining to an aspect of the present invention may be configured such that the sensor information is at least one type of information selected from information on the maximum oxygen storage amount of the catalyst, information on the travel distance, information on the average speed, and information on the average acceleration in the vehicle, and the calculated output parameter is a catalyst warm-up delay amount, an SOC value at which constant-output driving is started, or a catalyst bed temperature at which fuel-increase control is executed.
Thus, in the vehicle whose control is assisted, control based on the catalyst warm-up delay amount, the SOC value at which constant-output driving is started, or the catalyst bed temperature at which fuel-increase control is executed, each obtained using the learned model, can be performed.
A vehicle according to an aspect of the present invention is a vehicle including a vehicle control device that can communicate with a server capable of storing a learned model obtained by machine learning, the server including: a server storage unit that stores the learned model generated by machine learning using sensor information relating to a state of an inside or an outside of a supply-side vehicle to which parameters for the machine learning are supplied as an input/output data set including input parameters and output parameters of the learned model; and a server communication unit that transmits at least one of the generated learned model and an output parameter calculated by inputting sensor information of the vehicle as an input parameter to the learned model when the sensor information of the vehicle is received, wherein the vehicle control device includes: a data acquisition unit that acquires sensor information relating to a state inside or outside the vehicle; and a communication unit that transmits a model request signal requesting transmission of the learned model or a parameter request signal requesting transmission of an output parameter calculated in accordance with the sensor information acquired by the data acquisition unit, and the sensor information to the server, and receives the generated learned model or the calculated output parameter in accordance with the model request signal or the parameter request signal.
In this way, in the vehicle, the learned model can be received from the server by transmitting the model request signal to the server, and the calculated output parameter can be received from the server by transmitting the parameter request signal and the sensor information of the vehicle.
A vehicle according to an aspect of the present invention is a vehicle including a vehicle control device capable of communicating with a server capable of storing a learned model obtained by machine learning, the server including a server learning unit that performs the machine learning by using an input/output data set that is sensor information related to a state inside or outside the vehicle and includes input parameters and output parameters of the learned model, the vehicle control device including: a data acquisition unit that acquires sensor information relating to a state inside or outside the vehicle; and a transmission unit that transmits the sensor information acquired by the data acquisition unit to the server.
In this way, the input/output data set for machine learning in the server can be transmitted from the supply-side vehicle, and therefore, a learned model obtained by machine learning can be generated in the server.
A vehicle according to an aspect of the present invention is a vehicle including a vehicle control device that can communicate with a server capable of storing a learned model obtained by machine learning, the vehicle control device including: a data acquisition unit that acquires sensor information relating to a state inside or outside the vehicle; a learning unit that performs the machine learning by using an input/output data set that is the sensor information acquired by the data acquisition unit and that includes input parameters and output parameters of the learned model to generate a learned model; and a transmission unit that transmits the learned model generated by the learning unit to the server.
In this way, the learned model can be generated on the vehicle side and stored in an external server, so that the required capacity of the storage unit in the vehicle control device of the vehicle can be reduced, and the processing capacity of machine learning in the server can be reduced.
A control assistance method according to an aspect of the present invention is a control assistance method executed by a control assistance apparatus that assists control of a vehicle using a learned model obtained by machine learning, the control assistance method including: a data acquisition step of acquiring sensor information relating to a state of an inside or outside of a supply-side vehicle to which a parameter for the machine learning is supplied; a learning step of reading an input/output data set that is the sensor information acquired in the data acquisition step and includes input parameters and output parameters of the learned model from a storage unit, and performing the machine learning using the read input/output data set to generate a learned model; and a transmission step of transmitting at least one of the generated learned model and an output parameter obtained by inputting sensor information of the vehicle for support control as an input parameter to the learned model.
A control assistance program according to an aspect of the present invention is a control assistance program for causing a control assistance device that assists control of a vehicle using a learned model obtained by machine learning to execute: a data acquisition step of acquiring sensor information relating to a state of an inside or outside of a supply-side vehicle to which a parameter for the machine learning is supplied; a learning step of reading an input/output data set that is the sensor information acquired in the data acquisition step and includes input parameters and output parameters of the learned model from a storage unit, and performing the machine learning using the read input/output data set to generate a learned model; and a transmission step of transmitting at least one of the generated learned model and an output parameter obtained by inputting sensor information of the vehicle for support control as an input parameter to the learned model.
A learned model for causing a computer to function according to an aspect of the present invention is constituted by a neural network having: an input layer to which input parameters quantifying a state of an interior or exterior of a vehicle are input; an intermediate layer, which is inputted with a signal outputted from the input layer, and has a multilayer structure; and an output layer to which a signal output from the intermediate layer is input and which outputs an output parameter that quantifies a predetermined state of the vehicle, each layer being configured by one or more nodes, the learned model being associated with travel history information of the vehicle, the input parameter being input to the input layer, the learned network parameter being calculated based on the neural network, and a value that quantifies the predetermined state of the vehicle being output from the output layer.
Thus, it is possible to associate the learned model generated by the deep learning using the neural network with the travel history information and provide the learned model, and it is possible to appropriately support the control of the vehicle using the learned model.
A method for generating a learned model according to an aspect of the present invention generates a learned model for causing a computer to function so as to output a value for quantifying a predetermined state of a vehicle, the computer uses a neural network having an input layer to which input parameters for quantifying the state of the inside or outside of the vehicle are input, an intermediate layer to which signals output by the input layer are input and which has a multilayer structure, and an output layer to which signals output by the intermediate layer are input and which outputs output parameters, each layer being configured by one or more nodes, and updates network parameters of the neural network based on output parameters output by the output layer in accordance with the input of the input parameters and output parameters of a data group configuring input/output together with the input parameters, stores the network parameters in a storage unit in association with travel history information of the vehicle, and learns the network parameters.
This enables appropriate support of vehicle control and provision of a learned model associated with the travel history information.
Effects of the invention
According to the control assistance device, the vehicle, the control assistance method, the recording medium, the learned model for causing the computer to function, and the method for generating the learned model of the present invention, it is possible to perform assistance so that control substantially equivalent to control using the learned model obtained by machine learning can be executed even in a vehicle not equipped with a machine learning device.
Drawings
Fig. 1 is a schematic diagram showing a control assistance system to which a control assistance device according to a first embodiment of the present invention can be applied.
Fig. 2 is a block diagram schematically showing the configuration of the control support device according to the first embodiment of the present invention shown in fig. 1.
Fig. 3 is a diagram schematically showing the configuration of the neural network learned by the learning unit.
Fig. 4 is a diagram illustrating an outline of input and output of a node included in a neural network.
Fig. 5 is a block diagram schematically showing the structure of the supply-side vehicle shown in fig. 1.
Fig. 6 is a block diagram schematically showing the structure of the request side vehicle shown in fig. 1.
Fig. 7 is a flowchart showing a flow of processing of the control support method according to the first embodiment.
Fig. 8 is a flowchart showing a flow of processing of the control support method according to the second embodiment.
Fig. 9 is a block diagram schematically showing the configuration of a supply-side vehicle according to a third embodiment.
Fig. 10 is a flowchart showing a flow of processing of the control support method according to the third embodiment.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the drawings of the following embodiments, the same or corresponding portions are denoted by the same reference numerals. The present invention is not limited to the embodiments described below.
(first embodiment)
First, the control support system according to the first embodiment will be described. Fig. 1 shows a control support system according to the first embodiment. As shown in fig. 1, the control support system 1 includes a control support server 2 including a storage unit 23, a plurality of vehicles 3 including a communication unit 33 and a sensor group 36, and a vehicle 4 including a communication unit 43 and a sensor group 36, which are capable of communicating with each other via a network 10.
The network 10 is a public communication network such as the internet, and is configured by one network or a combination of plural networks, such as a local area network (LAN), a wide area network (WAN), a telephone communication network such as a mobile phone network, a public line, a virtual private network (VPN), and a private line. The network 10 combines wired and wireless communication as appropriate.
(control support server)
The control support server 2 executes data collection processing for collecting various information transmitted from the plurality of vehicles 3 having the communication unit 33 as a transmission unit via the network 10. The control support server 2 can execute machine learning by using the collected various information. The control support server 2 transmits a predetermined learned model for controlling each part of the vehicle 4 to the vehicle 4. Fig. 2 is a block diagram schematically showing the configuration of the control support server 2.
As shown in fig. 2, the control support server 2 includes a communication unit 21, a control unit 22, and a storage unit 23. The communication unit 21 is, for example, a Local Area Network (LAN) interface board or a wireless communication circuit for wireless communication. The LAN interface board and the wireless communication circuit are connected to a network 10 such as the internet as a public communication network. The communication unit 21, which is a transmitting unit and a receiving unit, is connected to the network 10 and communicates with the plurality of vehicles 3 and the predetermined vehicle 4. The communication unit 21, which is a server communication unit, receives vehicle identification information, travel history information, and vehicle information unique to the vehicles 3 and 4 from the vehicles 3 and 4, and transmits the learned model and the control signal to the vehicles 3 and 4.
The vehicle identification information includes various information for mutually identifying the respective vehicles 3 and 4. The travel history information includes information such as the travel time zone, travel route, travel region, congestion, weather, outside air temperature, and outside air humidity for each of the vehicles 3 and 4. The information on the travel time zone indicates, for example, morning or evening, whether the time zone is a commuting time zone, the day of the week, and the like. The information on the travel route includes information on the uphill/downhill gradients of specific roads, such gradient information combined with the travel time zone, and the like. The information on the travel region is information on the travel route, on the municipality, on the prefecture, or on a wider region such as Kanto or Tokai. The congestion information is, for example, actual congestion information associated with the travel time zone, or actual congestion information associated with a congestion cause acquired from a road traffic information communication system (VICS (registered trademark): Vehicle Information and Communication System). The weather information is, for example, information correlating wind direction, wind speed, and traveling direction, or information on changes in road surface conditions due to rain, snow, or the like. The outside air temperature and humidity include not only values during traveling but also actually measured values of the outside air. The vehicle information includes information related to the vehicles 3 and 4, in particular information on the state, inputs, and outputs of the internal combustion engine.
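The travel history items enumerated above can be pictured as a simple record type. The following is a hypothetical sketch in Python; the class and field names are illustrative and do not appear in the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the travel history information described above;
# field names are illustrative, not taken from the patent.
@dataclass
class TravelHistory:
    time_zone: str                       # e.g. "morning_commute"
    route: str                           # route identifier, may encode gradient info
    region: str                          # travel region (municipality, prefecture, wider area)
    congestion: Optional[str] = None     # e.g. VICS-derived congestion cause
    weather: Optional[str] = None        # wind, rain, snow, road-surface change
    outside_temp_c: Optional[float] = None
    outside_humidity: Optional[float] = None

history = TravelHistory(time_zone="morning_commute", route="R246_uphill",
                        region="Kanto", outside_temp_c=18.5)
print(history.region)
```

Such a record would be associated with an input/output data set (and later with the learned model generated from it), as described in the sections that follow.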
The vehicle information may further include information such as a total travel distance, position information, speed information, acceleration information, load information, sensor group acquisition information, and a vehicle type.
Specifically, the control Unit 22 includes a Processor such as a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or a Field Programmable Gate Array (FPGA), and a main Memory Unit (not shown) such as a RAM (Random Access Memory) or a ROM (Read Only Memory).
The storage unit 23 is configured by a storage medium selected from a volatile memory such as a RAM, a nonvolatile memory such as a ROM, an EPROM (Erasable Programmable ROM), a Hard Disk Drive (HDD), a removable medium, and the like. The removable medium is, for example, a disk recording medium such as a USB (Universal Serial Bus) memory, a CD (Compact Disc), a DVD (Digital Versatile Disc), or a BD (Blu-ray (registered trademark) Disc). The storage unit 23 may be configured using a computer-readable recording medium such as a memory card that can be attached from the outside. The storage unit 23 can store an Operating System (OS) for executing operations of the control support server 2, various programs, various tables, various databases, and the like. The various programs also include the control support program of the first embodiment. These various programs can also be recorded on a computer-readable recording medium such as a hard disk, a flash memory, a CD-ROM, a DVD-ROM, or a flexible disk, and are widely distributed.
The control unit 22 loads a program stored in the storage unit 23 into the work area of the main storage unit and executes it, controlling the respective components through the execution of the program to realize a function suitable for a predetermined purpose. In the first embodiment, the control unit 22 executes programs to realize the functions of the data acquisition unit 221, the learning unit 222, the selection unit 223, the prediction unit 224, and the charging processing unit 225.
The data acquisition unit 221 acquires, from among the various sensor information on the internal or external state of the vehicle 3 obtained by the sensor group 36 of the vehicle 3, an input parameter, for example a predetermined control signal, and the output parameter obtained as the result of the control corresponding to that input parameter. The data acquisition unit 221 writes and stores the combination of the input parameter and the output parameter as an input/output data set in the input/output data set storage unit 232 of the storage unit 23. Note that the input/output data set is also referred to as training data. The data acquisition unit 221 also writes and stores input parameters from the various sensor information acquired by the sensor group 36 of the vehicle 3 as the input parameters of the learning data in the learning data storage unit 233 of the storage unit 23.
The learning unit 222, which is also a server learning unit, performs machine learning based on the input/output data set acquired by the data acquisition unit 221. The learning unit 222 writes and stores the learning result in the learned model storage unit 231 of the storage unit 23. At a predetermined timing, the learning unit 222 stores the latest learned model in the learned model storage unit 231 of the storage unit 23 independently of the neural network that is still learning. When storing a learned model in the learned model storage unit 231, the old learned model may be deleted and the latest learned model stored, or the latest learned model may be stored while part or all of the old learned models are retained.
The selection unit 223, which is also a server selection unit, selects a predetermined learned model from the plurality of learned models stored in the learned model storage unit 231. The selection unit 223 selects a predetermined learned model based on, for example, the travel history information transmitted from the vehicle 4 and the travel history information associated with the learned model stored in the learned model storage unit 231. When the selection unit 223 selects the learned model based on the travel history information, the learned model associated with the travel history information having the highest degree of matching with the travel history information transmitted from the vehicle 4 may be selected.
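The selection by degree of matching described above can be sketched as follows. The patent does not define how the degree of matching is computed; counting the travel-history fields on which two histories agree is an assumption used here purely for illustration, and all identifiers are hypothetical.

```python
# A minimal sketch of how the selection unit 223 might rank stored learned
# models by degree of matching with a requesting vehicle's travel history.
def matching_degree(history_a: dict, history_b: dict) -> int:
    """Count the fields on which the two travel histories agree."""
    return sum(1 for k in history_a if history_a.get(k) == history_b.get(k))

def select_model(models: list, vehicle_history: dict) -> dict:
    """Return the stored model whose associated history matches best."""
    return max(models, key=lambda m: matching_degree(m["history"], vehicle_history))

models = [
    {"id": "model_A", "history": {"time_zone": "night", "region": "Tokai"}},
    {"id": "model_B", "history": {"time_zone": "morning", "region": "Kanto"}},
]
best = select_model(models, {"time_zone": "morning", "region": "Kanto"})
print(best["id"])
```

In this toy example, model_B matches the requesting vehicle's history on both fields and is therefore selected.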
The prediction unit 224, which is also a server prediction unit, calculates an output parameter that quantifies a physical quantity required for controlling the vehicle 4 by inputting the input parameter acquired by the data acquisition unit 221 to a predetermined learned model. The learned model used by the prediction unit 224 to quantify the physical quantity will be described later.
The charging processing unit 225 performs a series of processes for calculating, for the user of the supply-side vehicle 3, a reward corresponding to the transmission amount of the sensor information, or a fixed reward independent of the transmission amount, for example based on a predetermined contract, and for outputting the calculation result. Note that the reward is not limited to money, and may be points or discount coupons usable in a predetermined application. The reward may also be calculated based on the transmission amounts of the vehicle information and the travel history information. The output reward information is transmitted to a conventionally known server of a financial institution such as a credit card company or a bank, a server of a loyalty card company, or the like (none of them shown), and the reward is provided to the user of the vehicle 3 by a predetermined method. The user of the supply-side vehicle 3 can thus receive a reward for the transmission of the sensor information, the travel history information, and the vehicle information from the vehicle 3. Since the user of the vehicle 3 therefore has less aversion to transmitting information, the control support server 2 can collect more information. The charging processing unit 225 also performs a series of processes for calculating, for the user of the request-side vehicle 4, a price corresponding to the reception amount of the learned model received by the vehicle 4 and to the amount of prediction data described later, or a fixed price independent of the reception amount and the data amount, for example based on a predetermined contract, and for outputting the calculation result. The output price information is transmitted to a conventionally known server of a financial institution such as a credit card company or a bank (not shown), for example, and the price is charged to the user of the vehicle 4 by a predetermined method.
Thus, the user of the request-side vehicle 4 can use the learned model in the vehicle 4 by paying the price, so that even when the vehicle 4 is not equipped with a machine learning device, the use of the learned model can be secured. Further, the manager of the control support server 2 can secure the reward to be paid to the user of the vehicle 3. These points contribute to the spread of the control support system 1.
The storage unit 23, which is also a server storage unit, includes a learned model storage unit 231, an input/output data set storage unit 232, and a learning data storage unit 233. The learned model storage unit 231 stores the learned models so as to be retrievable. The learned model storage unit 231 accumulates, updates, and stores the learned models generated by the learning unit 222 of the control unit 22. A learned model in an initial state is initially stored in the learned model storage unit 231. The learned model is a learned model generated by deep learning using a neural network. Storing a learned model means storing information such as its network parameters and arithmetic algorithm. A learned model is stored in association with the travel history information transmitted from the vehicle 3. That is, the travel history information associated with the input/output data set transmitted from a predetermined vehicle 3 is associated with the learned model generated from that input/output data set and stored in the learned model storage unit 231. The learned model may further be associated with the vehicle information of the vehicle 3.
The input/output data set storage unit 232 stores an input/output data set composed of the above-described set of input parameters and output parameters. The learning data storage unit 233 stores the output parameters calculated by the learning unit 222 based on the input parameters, together with the input parameters, as learning data.
In the following, deep learning using a neural network will be described as a specific example of machine learning. Fig. 3 is a diagram schematically showing the configuration of the neural network learned by the learning unit 222. As shown in fig. 3, the neural network 100 is a feedforward propagation type neural network, and includes an input layer 101, an intermediate layer 102, and an output layer 103. The input layer 101 is composed of a plurality of nodes, and different input parameters are input to the respective nodes. The intermediate layer 102 is input with an output from the input layer 101. The intermediate layer 102 has a multilayer configuration including a layer constituted by a plurality of nodes that accept input from the input layer 101. The output layer 103 is input with the output from the intermediate layer 102 and outputs an output parameter. Machine learning using a neural network in which the intermediate layer 102 has a multilayer structure is called deep learning.
Fig. 4 is a diagram illustrating an outline of input and output at a node provided in the neural network 100. In fig. 4, a part of input and output of data in the input layer 101 having I nodes, the first intermediate layer 121 having J nodes, and the second intermediate layer 122 having K nodes in the neural network 100 is schematically shown (I, J, K is a positive integer). Input parameter x to the ith node from the side on input layer 101i(I ═ 1, 2, …, I). Hereinafter, the set of all input parameters will be referred to as "input parameters { xi}”。
Each node of the input layer 101 outputs a signal having a value obtained by multiplying the input parameter by a predetermined weight to each node of the adjacent first intermediate layer 121. For example, the ith node from the top of the input layer 101 has a pair input parameter x for the jth (J is 1, 2, …, J) node output from the top of the first intermediate layer 121iMultiplying by a weight αijTo obtain a value alphaijxiOf the signal of (1). The input to the jth node from the upper side of the first intermediate layer 121 is determined by adding a predetermined bias b to the total output from each node of the input layer 101(1) jThe obtained value sigmai=1~Iαijxi+b(1) j. Here, the first item ∑i=1~IMeaning that I is the sum of 1, 2, …, I.
The output value y of the jth node from the upper side of the first intermediate layer 121jAs an input value Σ from the input layer 101 to the nodei=1~Iαijxi+b(1) jIs expressed as yj=S(Σi=1~Iαijxi+b(1) j). This function S is called an activation function. Specific examples of the activation function include Sigmoid function s (u) ═ 1/{1+ exp (-u) }, modified linear function (ReLU) s (u) ═ max (0, u), and the like. Nonlinear functions are often used as activation functions.
Each node of the first intermediate layer 121 outputs a signal having a value obtained by multiplying the input parameter by a predetermined weight to each node of the adjacent second intermediate layer 122. For example, the jth node from the top of the first intermediate layer 121 has a pair input value y for the kth (K ═ 1, 2, …, K) node output from the top of the second intermediate layer 122jMultiplying by a weight betajkTo obtain a value betajkyjOf the signal of (1). The predetermined bias b is added to the output from each node of the first intermediate layer 121 by summing up the input to the kth node from the upper side of the second intermediate layer 122(2) kThe obtained value sigmaj=1~Jβjkyj+b(2) k. Here, the first item ∑j=1~JMeaning that J is the sum of 1, 2, …, J.
The output value z_k of the k-th node from the top of the second intermediate layer 122 is expressed, using the activation function whose variable is the input value Σ_{j=1…J} β_jk·y_j + b(2)_k from the first intermediate layer 121 to that node, as z_k = S(Σ_{j=1…J} β_jk·y_j + b(2)_k).
In this way, by repeating the operations in the forward direction from the input layer 101 side to the output layer 103 side, one output parameter Y is finally output from the output layer 103. Hereinafter, the weights and biases included in the neural network 100 are collectively referred to as network parameters w. The network parameter w is a vector having all the weights and biases of the neural network 100 as components.
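The forward propagation described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the layer sizes, the ReLU activation, and a linear output layer are all illustrative assumptions.

```python
import numpy as np

def relu(u):
    # Rectified linear activation S(u) = max(0, u)
    return np.maximum(0.0, u)

def forward(x, alpha, b1, beta, b2, gamma, b3):
    """Forward propagation: input layer -> first intermediate layer ->
    second intermediate layer -> single output parameter Y.
    alpha (I x J) and beta (J x K) are the weights; b1, b2 the biases."""
    y = relu(x @ alpha + b1)   # y_j = S(sum_i alpha_ij * x_i + b1_j)
    z = relu(y @ beta + b2)    # z_k = S(sum_j beta_jk * y_j + b2_k)
    return z @ gamma + b3      # output-layer combination (assumed linear)

# Hypothetical sizes: I = 3 input parameters, J = 4 and K = 2 intermediate nodes.
rng = np.random.default_rng(0)
I, J, K = 3, 4, 2
Y = forward(rng.normal(size=I),
            rng.normal(size=(I, J)), np.zeros(J),
            rng.normal(size=(J, K)), np.zeros(K),
            rng.normal(size=K), 0.0)
print(float(Y))
```

Repeating the `y = S(x @ alpha + b)` pattern once per layer is exactly the "operations in the forward direction" that produce the single output parameter Y.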
The learning unit 222 performs an operation of updating the network parameters based on the output parameter Y calculated from the input parameters {x_i} and the target output Y0 that, together with the input parameters {x_i}, constitutes the input/output data set. Specifically, the network parameter w is updated by an operation that minimizes the error between the two output parameters Y and Y0. A stochastic gradient descent method is often used at this time. Hereinafter, the set ({x_i}, Y) of the input parameters {x_i} and the output parameter Y is collectively referred to as "learning data".
The outline of the stochastic gradient descent method is as follows. The stochastic gradient descent method updates the network parameter w so as to minimize the error function E(w), which is defined using the two output parameters Y and Y0, based on the gradient ∇E(w) obtained by differentiating E(w) with respect to the components of w. The error function is defined, for example, by the squared error |Y − Y0|² between the output parameter Y computed on the learning data and the output parameter Y0 of the input/output data set. The gradient ∇E(w) is the vector whose components are the partial derivatives of E(w) with respect to the respective weights and biases, such as ∂E(w)/∂α_ij and ∂E(w)/∂β_jk (here, i = 1 to I, j = 1 to J, and k = 1 to K).
In the stochastic gradient descent method, the network parameter w is sequentially updated, using a predetermined learning rate η determined automatically or manually, as w ← w − η∇E(w).
Note that the learning rate η may be changed during learning. In the more general stochastic gradient descent method, the error function E(w) is defined from samples randomly extracted from the entire learning data. The number of pieces of learning data extracted at this time is not limited to 1, and may be a part of the learning data stored in the learning data storage unit 233.
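The update rule w ← w − η∇E(w) can be illustrated with a deliberately tiny case; the one-parameter model, the learning rate, and the target value below are all made-up example values.

```python
def sgd_step(w, grad, eta):
    # One update w <- w - eta * grad E(w), applied component-wise.
    return [wi - eta * gi for wi, gi in zip(w, grad)]

# Illustrative one-parameter case: output Y = w0 * x with squared error
# E(w) = (Y - Y0)^2, whose gradient is dE/dw0 = 2 * (Y - Y0) * x.
w = [0.0]
x, Y0, eta = 1.0, 3.0, 0.1
for _ in range(100):
    Y = w[0] * x
    w = sgd_step(w, [2.0 * (Y - Y0) * x], eta)
print(round(w[0], 4))  # approaches Y0 / x = 3.0
```

Each iteration moves w against the gradient of the squared error, which is why the parameter settles at the value that makes Y match the target output Y0.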
As a method of efficiently calculating the gradient ∇E(w), the error back-propagation method is known. In the error back-propagation method, after the output for the learning data ({x_i}, Y) is calculated, the gradient ∇E(w) is calculated in the reverse direction, output layer → intermediate layer → input layer, based on the error between the target output Y0 and the output parameter Y at the output layer. After calculating all the components of the gradient ∇E(w) by the error back-propagation method, the learning unit 222 updates the network parameter w by the stochastic gradient descent method described above using the calculated gradient ∇E(w).
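Error back-propagation can be sketched for a smaller network with a single sigmoid hidden layer and a scalar output; the shapes, the scalar output, and the function names are illustrative assumptions, not the patent's code.

```python
import numpy as np

def sigmoid(u):
    # Sigmoid activation S(u) = 1 / (1 + exp(-u))
    return 1.0 / (1.0 + np.exp(-u))

def forward_backward(x, alpha, b1, v, c, Y0):
    """One back-propagation pass for a small network:
    inputs x -> sigmoid hidden layer -> scalar output Y.
    Returns the squared error E = (Y - Y0)^2 and its gradients,
    computed in reverse order (output layer -> hidden layer -> input side)."""
    u = x @ alpha + b1             # pre-activation of hidden nodes
    y = sigmoid(u)                 # hidden outputs y_j
    Y = y @ v + c                  # scalar output parameter
    E = (Y - Y0) ** 2

    dY = 2.0 * (Y - Y0)            # dE/dY: error signal at the output layer
    grad_v = dY * y                # dE/dv_j
    grad_c = dY                    # dE/dc
    du = (dY * v) * y * (1.0 - y)  # back through the sigmoid: S'(u) = y(1-y)
    grad_alpha = np.outer(x, du)   # dE/dalpha_ij = x_i * du_j
    grad_b1 = du                   # dE/db1_j
    return E, grad_alpha, grad_b1, grad_v, grad_c

# With zero weights into the hidden layer, every y_j = 0.5, so Y = 1.5 here.
E, gA, gb1, gv, gc = forward_backward(np.array([1.0, -0.5]),
                                      np.zeros((2, 3)), np.zeros(3),
                                      np.ones(3), 0.0, 2.0)
print(E)  # 0.25
```

The returned gradients are exactly the components of ∇E(w) that the stochastic gradient descent step consumes.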
(supply side vehicle)
The vehicle 3 as the supply-side vehicle is a vehicle driven by a driver, or an autonomous vehicle configured to be capable of autonomous travel in accordance with a provided operation command. Fig. 5 is a block diagram schematically showing the structure of the vehicle 3. As shown in fig. 5, the vehicle 3 includes a drive unit 31, an electronic control unit 32, a communication unit 33, a storage unit 34, an input/output unit 35, a sensor group 36, and a GPS unit 37.
The drive unit 31 is a conventionally known drive unit required for traveling of the vehicle 3. Specifically, the vehicle 3 includes an engine that is an internal combustion engine serving as a drive source, a drive transmission mechanism that transmits the driving force of the engine, drive wheels for traveling, and the like. The engine of the vehicle 3 is configured to be driven by combustion of fuel and to generate electric power using an electric motor or the like. The generated electric power is stored in a rechargeable battery.
The electronic control unit 32 and the storage unit 34 are physically similar to the control unit 22 and the storage unit 23, respectively. The electronic control unit 32 integrally controls operations of various components mounted on the vehicle 3. The electronic control unit 32 executes the function of the data acquisition unit 321 by executing the program stored in the storage unit 34. The data acquisition unit 321 acquires various data detected by the sensor group 36, and stores the data as sensor information in the sensor group acquisition information storage unit 343.
The communication unit 33, serving as the transmission unit and the reception unit, is constituted by, for example, an on-board Data Communication Module (DCM) or the like that communicates at least with the control support server 2 by wireless communication via the network 10.
The storage unit 34 includes a travel history information storage unit 341, a vehicle information storage unit 342, and a sensor group acquisition information storage unit 343. The travel history information storage unit 341 stores, in a storable and updatable manner, travel history information of the vehicle 3 including speed, acceleration, travel time zone, travel route, travel area, traffic congestion information, outside air temperature, humidity, weather, and the like. The vehicle information storage unit 342 stores, in a storable and updatable manner, vehicle information including a vehicle type, a total travel distance, a remaining fuel amount, a current position, and the like. The sensor group acquisition information storage unit 343 stores, in a storable and updatable manner, various data detected by the sensor group 36 as sensor information.
The input/output unit 35 is constituted by a touch panel display, a speaker microphone, and the like. The input/output unit 35, as input means, is configured using a user interface such as a keyboard, input buttons, a lever, or a touch panel stacked on a display such as a liquid crystal display. The electronic control unit 32 is configured so that predetermined information can be input by a user operating the touch panel display or speaking into the speaker microphone. The input/output unit 35, as output means, is configured to be able to display characters, graphics, and the like on the screen of the touch panel display, output voice from the speaker microphone, and notify predetermined information to the outside, under the control of the electronic control unit 32.
The sensor group 36 includes sensors for measuring the state of the vehicle 3, such as a water temperature sensor for detecting the temperature of the engine cooling water (cooling water temperature), an intake air temperature sensor for detecting the intake air temperature of the engine, an atmospheric pressure sensor for detecting the atmospheric pressure, an oil temperature sensor for detecting the oil temperature of the engine, an A/F sensor for detecting the oxygen concentration in the exhaust gas, and a current sensor for detecting the state of charge of the battery. The sensor group 36 includes sensors related to the travel of the vehicle 3, such as a vehicle speed sensor and an acceleration sensor that detect the speed and the acceleration of the vehicle 3, respectively. The sensor group 36 may include an outside air temperature sensor for detecting the outside air temperature, a humidity sensor for detecting the humidity of the outside air, and the like. The sensor group 36 may further include, for example, an in-vehicle sensor capable of detecting various conditions in the vehicle, an imaging device such as an imaging camera, and the like.
The GPS unit 37 receives radio waves from GPS (Global Positioning System) satellites (not shown) and detects the position of the vehicle 3. The detected position is stored in the vehicle information storage unit 342 as position information in the vehicle information so as to be retrievable. As a method of detecting the position of the vehicle 3, a method of combining LiDAR (Laser Imaging Detection and Ranging) with a three-dimensional digital map may be employed.
(requirement side vehicle)
The vehicle 4 on the request side, which is the vehicle whose control is supported, is a vehicle driven by a driver, or an autonomous vehicle configured to be capable of autonomous travel in accordance with a provided operation command. Fig. 6 is a block diagram schematically showing the structure of the vehicle 4. As shown in fig. 6, the vehicle 4 includes a drive unit 41, an electronic control unit 42, a communication unit 43, a storage unit 44, an input/output unit 45, a sensor group 46, and a GPS unit 47. The drive unit 41, the communication unit 43, the input/output unit 45, the sensor group 46, and the GPS unit 47 are similar to the drive unit 31, the communication unit 33, the input/output unit 35, the sensor group 36, and the GPS unit 37 in the vehicle 3, respectively.
The electronic control unit 42 and the storage unit 44 are physically similar to the control unit 22 and the storage unit 23, respectively. The electronic control unit 42 and the storage unit 44 constitute a vehicle control device that controls each unit of the vehicle 4. The electronic control unit 42 integrally controls operations of various components mounted on the vehicle 4. The electronic control unit 42 executes the functions of the data acquisition unit 421 and the prediction unit 422 by executing the program stored in the storage unit 44. The data acquisition unit 421 is similar to the data acquisition unit 321 in the electronic control unit 32 of the vehicle 3. The prediction unit 422 calculates a predicted value as an output parameter by inputting various data acquired by the data acquisition unit 421 as input parameters to the learned model stored in the control information storage unit 444 of the storage unit 44.
The storage unit 44 includes a travel history information storage unit 441, a vehicle information storage unit 442, a sensor group acquisition information storage unit 443, and a control information storage unit 444. The travel history information storage unit 441, the vehicle information storage unit 442, and the sensor group acquisition information storage unit 443 are similar to the travel history information storage unit 341, the vehicle information storage unit 342, and the sensor group acquisition information storage unit 343, respectively. The control information storage unit 444 stores at least 1 piece of information of the learned model used for the electronic control unit 42 to control each unit, the predicted value calculated by the prediction unit 422, and the predicted value received from the control support server 2.
Fig. 7 is a flowchart showing a flow of a process of the control support method in the control support system 1 according to the first embodiment. As shown in fig. 7, in step ST1, in the vehicle 3 on the supply side, the data acquisition unit 321 of the electronic control unit 32 acquires the travel history information and the vehicle information. The data acquisition unit 321 stores the acquired travel history information and vehicle information in the travel history information storage unit 341 and the vehicle information storage unit 342, respectively. In step ST2, the data acquisition unit 321 acquires data detected by the sensor group 36 and stores the acquired data as sensor information in the sensor group acquisition information storage unit 343. Steps ST1 and ST2 may be performed in reverse order or in parallel.
Next, the process proceeds to step ST3, and the electronic control unit 32 determines whether or not the current time is a predetermined time set in advance. The predetermined timing is a periodic time at predetermined time intervals or a predetermined time set in advance. When the electronic control unit 32 determines that the current time is not the predetermined timing (no in step ST 3), steps ST1 and ST2 are repeatedly executed. When the electronic control unit 32 determines that the timing is the predetermined timing (yes in step ST 3), the process proceeds to step ST4, and the data acquisition unit 321 associates the acquired and stored sensor information with the travel history information and the vehicle information and transmits the sensor information to the control support server 2 via the communication unit 33. The association of the sensor information with the travel history information and the vehicle information may be performed after the acquisition of the sensor information, the travel history information, and the vehicle information and before the transmission of the information.
In step ST5, the data acquisition unit 221 of the control unit 22 of the control support server 2 classifies the sensor information received from the vehicle 3 based on the travel history information and the vehicle information, and stores it in the input/output data set storage unit 232. In this way, the sensor information is organized into input/output data sets according to the travel history information and the vehicle information.
Next, the process proceeds to step ST6, where the learning unit 222 extracts sensor information for control from among various sensor information stored in the input/output data set storage unit 232 in association with the travel history information and the vehicle information. The learning unit 222 performs machine learning using the neural network 100 using the extracted sensor information as an input/output data set. Thus, the learning unit 222 generates a learned model, and stores the learned model in the learned model storage unit 231 in association with the travel history information and the vehicle information.
The above-described processing of steps ST1 to ST6 is repeatedly executed between the control support server 2 and the plurality of supply-side vehicles 3. Thus, learned models associated with various types of travel history information are accumulated in the learned model storage unit 231 of the control support server 2. The learning unit 222 of the control unit 22 may also store the generated learned model in the learned model storage unit 231 in association with the vehicle information. A learned model generated in the past may be updated with a new learned model having a high degree of matching with the travel history information associated with it. Further, a plurality of learned models whose associated travel history information is close to one another may be combined and averaged to generate a new learned model. When learned models are averaged, the network parameters w of the plurality of learned models can be averaged node by node. The number of nodes may also be changed at this time. The learning unit 222 may further refer to the vehicle information when combining or updating the plurality of learned models. In this way, the learned models in the learned model storage unit 231 are stored in association with the travel history information while being accumulated, updated, or combined and averaged. With the above, the process of generating a learned model in the control assistance system 1 is completed.
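Averaging the network parameters w of several learned models node by node could be sketched as below; the dict-of-arrays representation of a learned model is an illustrative assumption, not a format defined in the patent.

```python
import numpy as np

def average_models(models):
    """Combine learned models whose associated travel history information is
    close, by averaging each named parameter (weights and biases) element-wise.
    Assumes all models share the same architecture (same node counts)."""
    return {k: np.mean([m[k] for m in models], axis=0) for k in models[0]}

# Two hypothetical models with one weight matrix and one bias vector each.
m1 = {"alpha": np.array([[1.0, 2.0]]), "b1": np.array([0.0, 2.0])}
m2 = {"alpha": np.array([[3.0, 4.0]]), "b1": np.array([2.0, 0.0])}
merged = average_models([m1, m2])
print(merged["alpha"])  # [[2. 3.]]
```

Because the average is taken parameter by parameter, the merged model has the same architecture as its inputs, which is why the patent notes that changing the number of nodes is a separate, further step.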
The same processing as in steps ST1 and ST2 is performed in the vehicle 4 on the request side independently of the processing in steps ST1 to ST 6. That is, in step ST7, the data acquisition unit 421 of the electronic control unit 42 in the vehicle 4 acquires the travel history information of the own vehicle and the vehicle information. The data acquisition unit 421 stores the acquired travel history information and vehicle information in the travel history information storage unit 441 and the vehicle information storage unit 442, respectively. In step ST8, the data acquisition unit 421 acquires data detected by the sensor group 46 and stores the acquired data as sensor information in the sensor group acquisition information storage unit 443. Steps ST7 and ST8 may be performed in reverse order or in parallel.
After the processing of steps ST1 to ST8 is executed, the control assistance server 2 executes the control assistance processing for the vehicle 4. First, in step ST9, the communication unit 43 of the vehicle 4 transmits the model request signal, the travel history information, the vehicle information, and the vehicle identification information to the control support server 2. The vehicle identification information may be included in the vehicle information.
In step ST10, the selection unit 223 of the control unit 22 first selects, from the learned model storage unit 231, at least one learned model associated with the travel history information having the highest degree of matching with the travel history information received from the vehicle 4. The selection unit 223 may further select at least one learned model based on the degree of matching of the vehicle information as well, using both the travel history information and the vehicle information. The selection unit 223 refers to the vehicle identification information and transmits the selected learned model to the vehicle 4 that transmitted the model request signal.
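This selection step can be sketched by ranking the stored models with a matching-degree score over the travel history fields. The scoring function, the field names, and the model labels below are all hypothetical; a real implementation would weight numeric fields such as average speed or travel area.

```python
def matching_degree(hist_a, hist_b):
    """Hypothetical matching degree: the number of shared travel history
    fields whose values coincide between the two records."""
    shared = set(hist_a) & set(hist_b)
    return sum(1 for k in shared if hist_a[k] == hist_b[k])

def select_models(stored, request_history, n=1):
    # stored: list of (travel_history, learned_model) pairs.
    ranked = sorted(stored,
                    key=lambda p: matching_degree(p[0], request_history),
                    reverse=True)
    return [model for _, model in ranked[:n]]

stored = [({"area": "north", "season": "winter"}, "model-A"),
          ({"area": "south", "season": "winter"}, "model-B")]
print(select_models(stored, {"area": "south", "season": "winter"}))  # ['model-B']
```

Passing n > 1 returns the "at least 1" best-matching models that the selection unit 223 may transmit.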
In step ST11, the prediction unit 422 of the electronic control unit 42 calculates a predicted value using the learned model received from the control support server 2 via the communication unit 43, and control based on the prediction is performed. That is, first, the data acquisition unit 421 stores the received learned model in the control information storage unit 444. Next, the prediction unit 422 selects, from the sensor group acquisition information storage unit 443, the sensor information necessary for control as input parameters. The prediction unit 422 inputs the input parameters to the received learned model to calculate a predicted value. The electronic control unit 42 controls the drive unit 41 and the like using the calculated predicted value.
The vehicle 3 on the supply side and the vehicle 4 on the request side may be the same vehicle. When the vehicles 3 and 4 are the same vehicle, the learned model may be generated in the control support server 2 based on the vehicle identification information, instead of transmitting and receiving the travel history information. The steps ST1 to ST11 described above are repeatedly executed in the control assistance system 1. As a result, the generated learned model is further optimized.
(first embodiment)
Hereinafter, a specific example of control using the learned model will be described. In the first embodiment, the input parameters are the maximum in-catalyst oxygen absorption (maximum OSA) and the travel distance of the vehicle, and the output parameter is the delay amount of catalyst warm-up at the next engine start (hereinafter also referred to as the catalyst warm-up delay amount). In this case, the learned model is generated in the control support server 2 by using, as the input/output data set, the data of the maximum OSA measured by the sensor group 36 of the vehicle 3 and the total travel distance of the vehicle 3, and the data of the catalyst warm-up delay amount detected and controlled in the vehicle 3 based on the maximum OSA and the total travel distance. In the vehicle 4, the prediction unit 422 inputs the maximum OSA and the total travel distance measured by the sensor group 46 as input parameters to the learned model received from the control support server 2, and calculates a predicted value of the catalyst warm-up delay amount. The electronic control unit 42 of the vehicle 4 controls the catalyst warm-up delay amount in accordance with the predicted value calculated by the prediction unit 422. By this control, even if deterioration of the exhaust gas purifying catalyst mounted on the vehicle 4 has advanced, catalyst warm-up control can be performed in accordance with the degree of deterioration, so discharge of large amounts of HC and CO as exhaust emissions can be suppressed.
(second embodiment)
In the second embodiment, the input parameters are the average vehicle speed and the average acceleration, and the output parameters are the SOC (State of Charge) value (hereinafter also referred to as the start SOC value) at which constant output driving is started. In this case, the learned model is generated in the control support server 2 by using, as input/output data sets, data of the average vehicle speed and the average acceleration measured by the sensor group 36 of the vehicle 3 and derived by the control unit 22 or the electronic control unit 32, and data of start SOC values detected and controlled in the plurality of vehicles 3 based on the average vehicle speed and the average acceleration. In the vehicle 4, the prediction unit 422 inputs the average vehicle speed and the average acceleration measured by the sensor group 46 and calculated by the electronic control unit 42 as input parameters to the learned model received from the control support server 2, and calculates a predicted value of the start SOC value. The electronic control unit 42 of the vehicle 4 sets the start SOC value in accordance with the predicted value calculated by the prediction unit 422, and controls the constant output driving.
In a vehicle such as a PHV (Plug-in Hybrid Vehicle), when the SOC value becomes smaller than a predetermined value, constant output driving is started and catalyst warm-up begins. When the driver of the vehicle 4 drives at high speed or with frequent rapid acceleration, the SOC value decreases and the vehicle switches to normal running before the catalyst warm-up is complete, so the catalyst warm-up may be insufficient. By inputting the average vehicle speed and the average acceleration to the learned model as input parameters and calculating the predicted value of the start SOC value as an output parameter, the start SOC value can be set to an optimum value. Therefore, the possibility of insufficient catalyst warm-up in a vehicle such as a PHV can be reduced.
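The start-SOC logic above can be sketched as follows. The threshold function stands in for the learned model, and its coefficients are invented for illustration only; the patent does not specify any numeric relationship.

```python
def predicted_start_soc(avg_speed_kmh, avg_accel_ms2):
    """Stand-in for the learned model: faster or more aggressive driving
    drains the battery sooner, so constant output driving (and catalyst
    warm-up) must start at a higher SOC. Coefficients are illustrative."""
    base = 20.0  # percent, hypothetical
    return min(60.0, base + 0.1 * avg_speed_kmh + 5.0 * avg_accel_ms2)

def should_start_constant_output(current_soc, avg_speed_kmh, avg_accel_ms2):
    # Start once the SOC falls below the predicted start SOC value.
    return current_soc < predicted_start_soc(avg_speed_kmh, avg_accel_ms2)

# A fast, aggressive driving pattern raises the start SOC threshold to 35%,
# so warm-up begins already at 25% SOC.
print(should_start_constant_output(25.0, 90.0, 1.2))  # True
```

The point of the prediction is visible here: for a gentle driving pattern the same 25% SOC would not yet trigger constant output driving.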
(third embodiment)
In the third embodiment, the input parameters are set to the maximum OSA and the travel distance of the vehicle, and the output parameter is set to the catalyst bed Temperature (hereinafter also referred to as catalyst estimated bed Temperature) at which the increase control of the fuel is performed in order to suppress the OT (Over Temperature) of the catalyst. In this case, the learned model is generated in the control support server 2 by using, as input/output data sets, data of the maximum OSA detected by the sensor group 36 of the vehicle 3 and the measured total travel distance of the vehicle 3, and data of the estimated bed temperature of the catalyst detected and controlled in the vehicle 3 based on the maximum OSA and the total travel distance. In the vehicle 4, the prediction unit 422 inputs the maximum OSA and the total travel distance measured by the sensor group 46 as input parameters to the learned model received from the control support server 2, and calculates a predicted value of the estimated catalyst bed temperature. The electronic control unit 42 of the vehicle 4 sets the estimated catalyst bed temperature in accordance with the predicted value calculated by the prediction unit 422, and performs control for increasing the amount of fuel supplied to the engine when the catalyst bed temperature reaches the set estimated catalyst bed temperature. By this control, the catalyst estimated bed temperature can be reduced in accordance with the state of deterioration of the exhaust gas purification catalyst mounted on the vehicle 4, and therefore the occurrence of OT of the catalyst can be suppressed.
(fourth embodiment)
In the fourth embodiment, the input parameters are the maximum OSA in the vehicle 4, the travel distance, the average speed, and the average acceleration of the vehicle, and the output parameters are the estimated catalyst bed temperature. In this case, the learned model is generated in the control support server 2 by using the maximum OSA related to the vehicle 3, the data of the total travel distance and the average vehicle speed and the average acceleration derived by the control unit 22 or the electronic control unit 32, and the data of the estimated bed temperature of the catalyst detected and controlled in the vehicle 3 based on these 4 kinds of data as the input/output data set. In the vehicle 4, the prediction unit 422 inputs the maximum OSA, the total travel distance, the average vehicle speed, and the average acceleration relating to the vehicle 4 as input parameters to the learned model received from the control support server 2, and calculates a predicted value of the catalyst estimated bed temperature. The electronic control unit 42 of the vehicle 4 sets the estimated catalyst bed temperature in accordance with the predicted value calculated by the prediction unit 422, and performs control for increasing the fuel when the catalyst bed temperature reaches the set estimated catalyst bed temperature. By this control, the same effects as those of the third embodiment can be obtained. Further, according to the fourth embodiment, since there is a possibility that the catalyst bed temperature may rapidly increase when the driver of the vehicle 4 performs driving at a high speed or driving with a large amount of rapid acceleration, the catalyst estimated bed temperature can be lowered in such a case, and the increase in the fuel supplied to the engine can be executed in a state where the catalyst bed temperature is lower. 
This can suppress the occurrence of OT in the catalyst even during high-speed driving or during driving with a large amount of rapid acceleration.
Further, a plurality of embodiments selected from the first to fourth embodiments described above may be executed in parallel in the same vehicle 4.
According to the first embodiment described above, the learned model is generated by machine learning using the sensor information acquired from the plurality of vehicles 3 as the input/output data group, and when the model request signal is received from the vehicle 4 that executes the predetermined control, the learned model associated with the travel history information that matches the travel history information in the vehicle 4 to the highest degree is selected and transmitted to the vehicle 4, and the predicted value is calculated in the vehicle 4. Thus, the predicted value can be calculated by the vehicle 4 using the learned model of the traveling condition closest to the traveling condition of the own vehicle, and the control of the vehicle 4 based on the predicted value can be appropriately supported.
(second embodiment)
Next, a control support method according to a second embodiment will be described. The control support system 1, the control support server 2, and the vehicles 3 and 4 according to the second embodiment have the same configurations as those of the first embodiment. However, the second embodiment differs from the first embodiment in the processing of the control support server 2 and the information received by the vehicle 4. In the second embodiment, the control support server 2 calculates a predicted value for control in the vehicle 4 using the generated learned model, and the vehicle 4 acquires the predicted value.
Fig. 8 is a flowchart for explaining a control support method according to the second embodiment. In fig. 8, steps ST21 to ST28 are the same as steps ST1 to ST8 shown in fig. 7. In the second embodiment, the control support server 2 generates a learned model in steps ST21 to ST 28. Steps ST21 to ST28 are repeatedly executed by the control support server 2 and the vehicle 3. Thus, the learned model generated in the learned model storage unit 231 of the control support server 2 is stored in association with the travel history information, and the plurality of learned models having a high degree of coincidence in the associated travel history information are combined and averaged, and updated to a new learned model. In this case, a plurality of learned models having high matching degrees with respect to the travel history information and the vehicle information may be combined and averaged.
After the learned model is generated by the control support server 2, the control support server 2 executes the control support process for the vehicle 4. That is, in step ST29, the communication unit 43 of the vehicle 4 transmits the predicted value request signal, the sensor information, the travel history information, the vehicle information, and the vehicle identification information as the parameter request signal to the control support server 2.
When the control support server 2 receives the predicted value request signal, the sensor information, the travel history information, the vehicle information, and the vehicle identification information from the vehicle 4, the process proceeds to step ST 30. In step ST30, the selection unit 223 of the control unit 22 selects at least 1 learned model associated with the travel history information having the highest degree of matching with the travel history information received from the vehicle 4 from the learned models stored in the learned model storage unit 231. The selection unit 223 may further refer to the vehicle information received from the vehicle 4, and select at least 1 learned model associated with the travel history information and the vehicle information having the highest degree of matching.
After proceeding to step ST31, the prediction unit 224 calculates a predicted value using the selected learned model and transmits the predicted value. That is, the prediction unit 224 first selects sensor information necessary for control from among the sensor information received from the vehicle 4 as an input parameter. The prediction unit 224 inputs the input parameters to the selected learned model to calculate a predicted value. The prediction unit 224 transmits the calculated predicted value to the vehicle 4 that has transmitted the predicted value request signal based on the vehicle identification information and the vehicle information via the communication unit 21.
When the electronic control unit 42 of the vehicle 4 receives the predicted value from the control support server 2 via the communication unit 43, the electronic control unit controls the drive unit 41 and the like using the obtained predicted value in step ST 32. Specific examples of control of the driving unit 41 and the like using the predicted values are the same as those of the first to fourth embodiments described above. The supply-side vehicle 3 and the request-side vehicle 4 may be the same vehicle.
When a part of the sensor information, which is the input parameter received from the vehicle 4 and input to the learned model by the control support server 2, is missing, the prediction unit 224 may calculate a provisional value serving as the missing sensor information as the input parameter. This is effective in the case where a sensor that acquires sensor information necessary as an input parameter of the learned model is not provided in the vehicle 4. The steps ST21 to ST32 described above are repeatedly executed in the control assistance system 1. This optimizes the calculated predicted value.
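Substituting a provisional value for a missing input parameter before invoking the learned model could look like the sketch below; the field names and default values are hypothetical.

```python
# Hypothetical provisional values used when a requesting vehicle lacks
# the sensor that would supply a required input parameter.
DEFAULTS = {"max_osa": 0.8, "total_distance_km": 50000.0,
            "avg_speed_kmh": 40.0, "avg_accel_ms2": 0.5}

def complete_inputs(sensor_info, required):
    """Return the required input parameters, filling any value missing
    from the received sensor information with its provisional default."""
    return {k: sensor_info.get(k, DEFAULTS[k]) for k in required}

# The vehicle reported only max_osa; the distance is filled provisionally.
inputs = complete_inputs({"max_osa": 0.6},
                         ["max_osa", "total_distance_km"])
print(inputs)  # {'max_osa': 0.6, 'total_distance_km': 50000.0}
```

The completed dictionary can then be fed to the selected learned model exactly as if all sensors had been present.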
According to the second embodiment, the control support server 2 generates the learned model, calculates a predicted value from the sensor information received from the request-side vehicle 4, and transmits the calculated predicted value to the vehicle 4. The vehicle 4 can thus be controlled using a predicted value obtained from the learned model generated in the control support server 2, so the same effects as in the first embodiment can be obtained. Further, since the control support server 2 calculates the predicted value, the vehicle 4 does not need its own prediction unit. The processing load on the vehicle 4 is therefore lower than in the first embodiment, which increases the number of vehicles 4 capable of executing control using the learned model.
(third embodiment)
Next, a vehicle and a control assistance method according to a third embodiment will be described. The control support system 1, the control support server 2, and the vehicle 4 of the third embodiment have the same configurations as in the first embodiment, and the information received by the vehicle 4 is the same as in the second embodiment. However, the third embodiment differs from the first and second embodiments in the processing performed by the control support server 2 and the supply-side vehicle 3. In the third embodiment, the learned model is generated by the supply-side vehicle 3, and the control support server 2 uses the generated learned model to calculate a predicted value for control in the request-side vehicle 4.
(supply side vehicle)
Fig. 9 is a block diagram schematically showing the configuration of a supply-side vehicle 3A in the third embodiment. As shown in fig. 9, the vehicle 3A includes a drive unit 31, an electronic control unit 32, a communication unit 33, a storage unit 34, an input/output unit 35, a sensor group 36, and a GPS unit 37, like the vehicle 3 of the first embodiment shown in fig. 5. Unlike the vehicle 3, in the vehicle 3A the electronic control unit 32 further includes a learning unit 322, and the storage unit 34 further includes an input/output data set storage unit 344, a learned model storage unit 345, and a learning data storage unit 346. The learning unit 322, the input/output data set storage unit 344, the learned model storage unit 345, and the learning data storage unit 346 correspond to the learning unit 222, the input/output data set storage unit 232, the learned model storage unit 231, and the learning data storage unit 233 in the control support server 2, respectively. That is, the vehicle 3A according to the third embodiment is a vehicle on which a control assistance device is mounted.
Fig. 10 is a flowchart showing a flow of processing of the control support method according to the third embodiment. In fig. 10, steps ST41, ST42, ST44, ST47, and ST48 are the same as steps ST1, ST2, ST3, ST7, and ST8 shown in fig. 7. The processing in steps ST41 and ST42 may be performed in reverse order, or may be performed in parallel, and the processing in steps ST47 and ST48 may be performed in reverse order, or may be performed in parallel.
After steps ST41 and ST42 are executed, the process proceeds to step ST43, where the learning unit 322 of the electronic control unit 32 generates a learned model. Specifically, the data acquisition unit 321 of the electronic control unit 32 extracts sensor information used for control from the various sensor information stored in the sensor group acquisition information storage unit 343. The data acquisition unit 321 further reads the travel history information and the vehicle information associated with the extracted sensor information from the travel history information storage unit 341 and the vehicle information storage unit 342, respectively, and stores the extracted sensor information together with the associated travel history information and vehicle information in the input/output data set storage unit 344. Next, the learning unit 322 reads the sensor information from the input/output data set storage unit 344 and performs machine learning with the neural network 100, using the read sensor information as an input/output data set. The learning unit 322 thus generates a learned model and stores it in the learned model storage unit 345 in association with the travel history information and the vehicle information.
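As a rough, self-contained sketch of the learning step, a minimal gradient-descent fit on (input, output) pairs can stand in for the training of the neural network 100. The actual embodiment trains a multi-layer network on vehicle sensor data, so this linear stand-in is purely illustrative:

```python
def train_model(dataset, epochs=500, lr=0.01):
    # Fit a minimal linear model y = w*x + b to an input/output data set
    # of (input, output) pairs by stochastic gradient descent -- a
    # stand-in for the neural-network training of the learning unit 322.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in dataset:
            err = (w * x + b) - y   # prediction error for this sample
            w -= lr * err * x       # gradient step on the weight
            b -= lr * err           # gradient step on the bias
    return w, b
```

The returned parameters play the role of the "learned model" that is then stored with its associated travel history and vehicle information.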
Thereafter, the process proceeds to step ST44, where the electronic control unit 32 determines whether or not the current time is a predetermined timing set in advance. When the electronic control unit 32 determines that it is not the predetermined timing (No in step ST44), steps ST41 to ST43 are repeatedly executed.
If the electronic control unit 32 determines that it is the predetermined timing (Yes in step ST44), the process proceeds to step ST45. In step ST45, the electronic control unit 32 transmits the generated and stored learned model to the control support server 2 in association with the travel history information and the vehicle information. The association of the learned model with the travel history information and the vehicle information may be performed at any point after generation of the learned model and before its transmission.
In step ST46, the data acquisition unit 221 of the control support server 2 classifies the received learned models based on the travel history information and the vehicle information, and stores the classified learned models in the learned model storage unit 231. The processing of steps ST47 and ST48 is performed in the vehicle 4 on the request side independently of the processing of steps ST41 to ST 46.
The above-described processing of steps ST41 to ST48 is repeatedly executed between the plurality of supply-side vehicles 3 and the control support server 2. As a result, in the learned model storage unit 231 of the control support server 2, as in steps ST1 to ST6 shown in fig. 7, the generated learned models are stored in association with the travel history information, are updated by new learned models whose travel history information has a high degree of matching, and a plurality of learned models whose associated travel history information has a high degree of matching are combined and averaged.
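One plausible reading of "combined and averaged", assuming models of identical structure whose parameters can be flattened into equal-length lists, is element-wise parameter averaging (as in federated averaging); the patent does not give the exact procedure, so this is a sketch under that assumption:

```python
def merge_models(models):
    # Combine learned models of identical structure, each given as a flat
    # list of parameters, by element-wise averaging into one common model.
    n = len(models)
    return [sum(params) / n for params in zip(*models)]
```

For example, merging the parameter lists [1.0, 2.0] and [3.0, 4.0] yields the common model [2.0, 3.0].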
After the above processing of steps ST41 to ST48 is performed, steps ST49 to ST52 are performed. Steps ST49 to ST52 correspond to steps ST29 to ST32 shown in fig. 8 of the second embodiment, respectively.
The supply-side vehicle 3A and the request-side vehicle 4 may be the same vehicle. In that case, if the learned model generated in the vehicle 3A (vehicle 4) is used as-is and the predicted value is calculated by the prediction unit 422 in the vehicle 4 (vehicle 3A), communication between the vehicles 3A, 4 and the control support server 2 can be omitted. However, a learned model trained only on a single vehicle's data may become over-learned (overfitted). Even when the vehicles 3A and 4 are the same vehicle, therefore, a common learned model obtained by combining and averaging a plurality of learned models in the control support server 2 may be used instead.
According to the third embodiment, the learned model is generated in the supply-side vehicle 3, and the control support server 2 calculates a predicted value based on the sensor information received from the request-side vehicle 4 and transmits the calculated predicted value to the vehicle 4. The vehicle 4 can thus be controlled using a predicted value obtained by the control support server 2 from a learned model generated in the vehicle 3, so the same effects as in the first and second embodiments can be obtained.
Although the embodiments of the present invention have been specifically described above, the present invention is not limited to the above-described embodiments, and various modifications based on the technical idea of the present invention can be made. For example, the input parameters and the output parameters described in the above embodiments are merely examples, and input parameters and output parameters different from these may be used as necessary.
For example, in the above-described embodiments, deep learning using a neural network has been described as an example of machine learning, but machine learning by other methods may be performed. For example, other supervised learning methods such as support vector machines, decision trees, naive Bayes, and k-nearest neighbors may be used. Alternatively, semi-supervised learning may be used instead of supervised learning.
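For instance, k-nearest-neighbor regression, one of the alternative supervised methods mentioned above, could replace the neural network for scalar predictions. This sketch over one-dimensional inputs is illustrative only; the embodiments' actual input parameters are multi-dimensional sensor vectors:

```python
def knn_predict(dataset, query, k=3):
    # Predict the output for `query` as the mean output of the k training
    # samples whose inputs are nearest to it (1-D k-NN regression).
    nearest = sorted(dataset, key=lambda xy: abs(xy[0] - query))[:k]
    return sum(y for _, y in nearest) / len(nearest)
```

Unlike the neural-network case, no separate training step is needed; the stored input/output data set itself serves as the "learned model".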
As the input parameters constituting a part of the input/output data group and the learning data, data obtained by road-to-vehicle communication, inter-vehicle communication, or the like may be used in addition to data obtained from the sensor group 46 of the vehicle 4.
Further effects and modifications can be easily derived by those skilled in the art. The broader aspects of the present invention are not limited to the specific details and representative embodiments shown and described above. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
Description of the reference symbols
1 control support system
2 control support server
3, 3A, 4 vehicle
21, 33, 43 communication unit
22 control unit
23, 34, 44 storage unit
100 neural network
101 input layer
102 intermediate layer
103 output layer
121 first intermediate layer
122 second intermediate layer
221, 321, 421 data acquisition unit
222, 322 learning unit
223 selection unit
224, 422 prediction unit
225 charging processing unit
231, 345 learned model storage unit
232, 344 input/output data set storage unit
233, 346 learning data storage unit

Claims (10)

1. A control assistance device that assists control of a vehicle using a learned model obtained by machine learning, the control assistance device comprising:
a data acquisition unit that acquires sensor information relating to a state of the inside or outside of a supply-side vehicle to which a parameter for machine learning is supplied;
a learning unit that performs the machine learning by using an input/output data set that is the sensor information acquired by the data acquisition unit from the plurality of supply-side vehicles and that includes input parameters and output parameters of the learned model to generate a plurality of learned models;
a selection unit configured to select a learned model to be transmitted to the vehicle for assist control from the plurality of learned models; and
a transmission unit that transmits the selected learned model to the vehicle,
the data acquisition unit further acquires travel history information of the supply-side vehicle associated with the sensor information,
the learning unit associates the generated learned model with the travel history information,
the selection unit selects, from the plurality of generated learned models, learned models associated with travel history information having a high degree of matching with the travel history information of the vehicle, and combines and averages the selected learned models with each other.
2. A control assistance device that assists control of a vehicle using a learned model obtained by machine learning, the control assistance device comprising:
a data acquisition unit that acquires sensor information relating to a state of the inside or outside of a supply-side vehicle to which a parameter for machine learning is supplied;
a learning unit that performs the machine learning by using an input/output data set that is the sensor information acquired by the data acquisition unit from the plurality of supply-side vehicles and that includes input parameters and output parameters of the learned model to generate a plurality of learned models;
a selection unit that selects a learned model from the plurality of learned models;
a prediction unit that calculates an output parameter obtained by inputting sensor information of the vehicle subjected to assist control as an input parameter to the selected learned model;
a transmission unit that transmits the calculated output parameter to the vehicle,
the data acquisition unit further acquires travel history information of the supply-side vehicle associated with the sensor information,
the learning unit associates the generated learned model with the travel history information,
the selection unit selects, from the plurality of generated learned models, learned models associated with travel history information having a high degree of matching with the travel history information of the vehicle, and combines and averages the selected learned models with each other.
3. A control support device capable of communicating with a server capable of storing a learned model obtained by machine learning and supporting control of a vehicle using the learned model,
a supply-side vehicle mounted on a vehicle for acquiring parameters used for the machine learning includes:
a data acquisition unit that acquires sensor information relating to a state of the inside or outside of the supply-side vehicle;
a learning unit that performs the machine learning by using an input/output data set that is the sensor information acquired by the data acquisition unit and that includes input parameters and output parameters of the learned model to generate a plurality of learned models; and
a transmission unit that transmits the generated learned model to the server,
the server is provided with:
a server selection unit that selects a learned model from a plurality of learned models obtained by accumulating, merging, or updating a plurality of learned models received from the plurality of control support apparatuses, respectively;
a server prediction unit that calculates an output parameter obtained by inputting sensor information of the vehicle subjected to assist control as an input parameter to the selected learned model; and
a server communication unit that transmits the output parameter to the vehicle,
the server also receives travel history information in the supply-side vehicle and travel history information in the vehicle that are associated with the learned model,
the server selection unit selects learned models associated with travel history information having a high degree of matching with the travel history information of the vehicle from among the plurality of learned models obtained by the accumulation, combination, or update, and combines and averages the learned models with each other.
4. The control support apparatus according to claim 3,
the server transmits, to the vehicle, an output parameter obtained by using, as an input parameter, sensor information of the vehicle, which is found from a plurality of learned models received from the plurality of control assistance apparatuses, or which is input to assist control based on a learned model generated from a plurality of learned models received from the plurality of control assistance apparatuses.
5. The control support apparatus according to any one of claims 1 to 4,
the vehicle control system further includes a charging processing unit that performs a process of calculating a price for assisting the control of the user of the vehicle, a process of calculating a reward for providing the sensor information to the user of the supply-side vehicle, and a process of outputting a result of the calculation.
6. The control support device according to any one of claims 1 to 4,
the sensor information is at least 1 type of information selected from information of a maximum catalyst internal oxygen absorption amount, information of a travel distance, information of an average speed, and information of an average acceleration in the vehicle,
the output parameter is a catalyst warm-up delay amount, an SOC value at which constant output driving is started, or a catalyst bed temperature at which incremental control of fuel is performed.
7. A vehicle having a vehicle control device capable of communicating with a server capable of storing a learned model obtained by machine learning,
the server is provided with:
a server storage unit that stores a plurality of learned models generated by machine learning using sensor information relating to a state of an inside or an outside of a supply-side vehicle to which parameters for the machine learning are supplied as an input/output data set including input parameters and output parameters of the learned models;
a server selection unit configured to select a learned model to be transmitted to the vehicle for assist control from among the plurality of learned models; and
a server communication unit that transmits at least one of the generated learned model and an output parameter calculated by inputting sensor information of the vehicle as an input parameter to the learned model when the sensor information of the vehicle is received,
the server selecting unit selects a learned model associated with travel history information having a high degree of matching with the travel history information of the vehicle from among the plurality of generated learned models, and combines and averages the learned models with each other,
the vehicle control device includes:
a data acquisition unit that acquires sensor information relating to a state inside or outside the vehicle; and
and a communication unit that transmits a model request signal requesting transmission of the learned model or a parameter request signal requesting transmission of an output parameter calculated in accordance with the sensor information acquired by the data acquisition unit, and the sensor information to the server, and receives the generated learned model or the calculated output parameter in accordance with the model request signal or the parameter request signal.
8. A vehicle having a vehicle control device capable of communicating with a server capable of storing a learned model obtained by machine learning,
the vehicle control device includes:
a data acquisition unit that acquires sensor information relating to a state inside or outside the vehicle;
a learning unit that performs the machine learning by using an input/output data set that is the sensor information acquired by the data acquisition unit and that includes input parameters and output parameters of the learned model to generate a plurality of learned models; and
a transmission unit that transmits the learned model generated by the learning unit to the server,
the server is provided with:
a server selection unit that selects a learned model from a plurality of learned models obtained by accumulating, merging, or updating a plurality of learned models received from the plurality of vehicle control devices, respectively;
a server prediction unit that calculates an output parameter obtained by inputting sensor information of a vehicle for support control as an input parameter to the selected learned model; and
a server communication unit that transmits the output parameter to the assist-controlled vehicle,
the server further receives travel history information in the vehicle associated with the learned model and travel history information in the assist-controlled vehicle,
the server selection unit selects, from among the plurality of learned models obtained by the accumulation, combination, or update, learned models associated with travel history information having a high degree of matching with the travel history information of the assist-controlled vehicle, combines the learned models with each other, and averages the learned models.
9. A control assistance method executed by a control assistance apparatus that assists control of a vehicle using a learned model obtained by machine learning, the control assistance method comprising:
a data acquisition step of acquiring sensor information relating to a state of an inside or outside of a supply-side vehicle to which a parameter for the machine learning is supplied;
a learning step of reading an input/output data set that is the sensor information acquired in the data acquisition step and includes input parameters and output parameters of the learned model from a storage unit, and performing the machine learning using the read input/output data set to generate a plurality of learned models;
a transmission step of transmitting at least one of the generated learned model and an output parameter obtained by inputting sensor information of the vehicle for support control as an input parameter to the learned model; and
a selection step of selecting a learned model to be transmitted to the vehicle for assist control from among the plurality of learned models,
in the selecting step, learned models associated with travel history information having a high degree of matching with the travel history information of the vehicle are selected from the plurality of generated learned models, and the learned models are combined with each other and averaged.
10. A computer-readable storage medium storing a control assistance program for causing a control assistance apparatus that assists control of a vehicle using a learned model obtained by machine learning to execute:
a data acquisition step of acquiring sensor information relating to a state of an inside or outside of a supply-side vehicle to which a parameter for the machine learning is supplied;
a learning step of reading an input/output data set that is the sensor information acquired in the data acquisition step and includes input parameters and output parameters of the learned model from a storage unit, and performing the machine learning using the read input/output data set to generate a plurality of learned models;
a transmission step of transmitting at least one of the generated learned model and an output parameter obtained by inputting sensor information of the vehicle for support control as an input parameter to the learned model; and
a selection step of selecting a learned model to be transmitted to the vehicle for assist control from among the plurality of learned models,
in the selecting step, learned models associated with travel history information having a high degree of matching with the travel history information of the vehicle are selected from the plurality of generated learned models, and the learned models are combined with each other and averaged.
CN201911010548.3A 2018-10-25 2019-10-23 Control assistance device and method, vehicle, recording medium, learned model for causing computer to function, and generation method Active CN111102043B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-201218 2018-10-25
JP2018201218A JP6848949B2 (en) 2018-10-25 2018-10-25 Control assist devices, vehicles, and control assist systems

Publications (2)

Publication Number Publication Date
CN111102043A CN111102043A (en) 2020-05-05
CN111102043B true CN111102043B (en) 2022-03-11

Family

ID=70328495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911010548.3A Active CN111102043B (en) 2018-10-25 2019-10-23 Control assistance device and method, vehicle, recording medium, learned model for causing computer to function, and generation method

Country Status (4)

Country Link
US (1) US10968855B2 (en)
JP (1) JP6848949B2 (en)
CN (1) CN111102043B (en)
DE (1) DE102019126147A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020503612A (en) * 2016-12-22 2020-01-30 ニッサン ノース アメリカ,インク Autonomous vehicle service system
US11875371B1 (en) 2017-04-24 2024-01-16 Skyline Products, Inc. Price optimization system
JP6852141B2 (en) * 2018-11-29 2021-03-31 キヤノン株式会社 Information processing device, imaging device, control method of information processing device, and program
JP6744597B1 (en) * 2019-10-18 2020-08-19 トヨタ自動車株式会社 Vehicle control data generation method, vehicle control device, vehicle control system, and vehicle learning device
JP7322804B2 (en) * 2020-05-13 2023-08-08 トヨタ自動車株式会社 Dispatch device and vehicle
JP7322802B2 (en) * 2020-05-13 2023-08-08 トヨタ自動車株式会社 Dispatch device and vehicle
JP7322810B2 (en) * 2020-05-21 2023-08-08 トヨタ自動車株式会社 Fuel temperature estimation system, data analysis device, control device for fuel supply device
JP6795116B1 (en) * 2020-06-08 2020-12-02 トヨタ自動車株式会社 Vehicles and servers
JP2021196777A (en) * 2020-06-11 2021-12-27 トヨタ自動車株式会社 Machine learning apparatus, machine learning system, machine learning method, and program
JP7151743B2 (en) * 2020-06-18 2022-10-12 トヨタ自動車株式会社 Vehicle machine learning system
JP7298633B2 (en) * 2020-06-18 2023-06-27 トヨタ自動車株式会社 machine learning device
JP2022007027A (en) * 2020-06-25 2022-01-13 トヨタ自動車株式会社 Vehicle control device, vehicle control system and vehicle learning device
JP7074166B2 (en) * 2020-08-07 2022-05-24 トヨタ自動車株式会社 Servers, vehicle controls, and vehicle machine learning systems
JP7093031B2 (en) * 2020-09-23 2022-06-29 ダイキン工業株式会社 Information processing equipment, information processing methods, and programs
US20210188306A1 (en) * 2020-12-23 2021-06-24 Nageen Himayat Distributed learning to learn context-specific driving patterns
US11760376B2 (en) * 2020-12-29 2023-09-19 Ford Global Technologies, Llc Machine learning updating with sensor data
US11953896B2 (en) 2021-02-15 2024-04-09 Argo AI, LLC System and method for a modular and continually learning remote guidance system for autonomous vehicles
WO2023073910A1 (en) * 2021-10-29 2023-05-04 日本電気株式会社 Model analysis device, model analysis method, and recording medium
US11952014B2 (en) * 2021-10-29 2024-04-09 Waymo Llc Behavior predictions for active emergency vehicles
CN114999021A (en) * 2022-05-17 2022-09-02 中联重科股份有限公司 Method, processor, device and storage medium for determining cause of oil temperature abnormality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104105947A (en) * 2012-02-27 2014-10-15 丰田自动车工程及制造北美公司 Systems and methods for determining available providers
CN106114507A (en) * 2016-06-21 2016-11-16 百度在线网络技术(北京)有限公司 Local path planning method and device for intelligent vehicle
US9779557B2 (en) * 2015-08-18 2017-10-03 Carfit Corp. Automotive activity monitor
JP2017215898A (en) * 2016-06-02 2017-12-07 株式会社マーズスピリット Machine learning system
CN108216261A (en) * 2016-12-20 2018-06-29 现代自动车株式会社 Method and system based on prediction destination control vehicle
US10026506B1 (en) * 2015-02-06 2018-07-17 Brain Trust Innovations I, Llc System, RFID chip, server and method for capturing vehicle data

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003050605A (en) * 2001-08-07 2003-02-21 Mazda Motor Corp Server, method and program for changing control gain for automobile
JP5426520B2 (en) 2010-11-24 2014-02-26 本田技研工業株式会社 Control device for internal combustion engine
JP6044556B2 (en) * 2014-01-16 2016-12-14 株式会社デンソー Learning system, in-vehicle device, and server
JP6471106B2 (en) * 2016-01-19 2019-02-13 日立オートモティブシステムズ株式会社 Vehicle control device, vehicle control parameter learning system
WO2020018394A1 (en) * 2018-07-14 2020-01-23 Moove.Ai Vehicle-data analytics
JP6741087B1 (en) * 2019-02-01 2020-08-19 トヨタ自動車株式会社 Internal combustion engine control device, in-vehicle electronic control unit, machine learning system, internal combustion engine control method, electronic control unit manufacturing method, and output parameter calculation device


Also Published As

Publication number Publication date
US20200132011A1 (en) 2020-04-30
DE102019126147A1 (en) 2020-04-30
JP6848949B2 (en) 2021-03-24
CN111102043A (en) 2020-05-05
JP2020067911A (en) 2020-04-30
US10968855B2 (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN111102043B (en) Control assistance device and method, vehicle, recording medium, learned model for causing computer to function, and generation method
TWI638328B (en) Electricity demand prediction device, electricity supply system, method of predicting electricity demand, program , electricity supply menage device
US9090255B2 (en) Hybrid vehicle fuel efficiency using inverse reinforcement learning
US8538686B2 (en) Transport-dependent prediction of destinations
CN110850861A (en) Attention-based hierarchical lane change depth reinforcement learning
CN104280039B (en) For providing the system and method for the driving information of electric vehicle
CN111532166A (en) Electric vehicle charging path planning method and device, vehicle and computer storage medium
CN110991757A (en) Comprehensive prediction energy management method for hybrid electric vehicle
CN101837775A (en) Be used to optimize the system and method that energy storage component uses
US10801848B2 (en) Crowd sourcing to predict vehicle energy consumption
US20240159551A1 (en) Navigation Map Learning for Intelligent Hybrid-Electric Vehicle Planning
CN114078287A (en) Method for operating a motor vehicle having a drive train with at least one traction battery-driven traction motor
US11614335B2 (en) Route planner optimization for hybrid-electric vehicles
US20190232943A1 (en) Vehicle control system, vehicle control method, and storage medium
JP6257923B2 (en) Battery degradation prediction system and route search system
JP2020071611A (en) Machine learning device
JPWO2012004842A1 (en) Driving operation support device and driving operation support method
JP7400837B2 (en) Energy consumption estimation program, energy consumption estimation method, and energy consumption estimation device
US11378412B2 (en) Device and method for outputting navigation information, and vehicle
US20230174042A1 (en) Intelligent Engine Activation Planner
CN113222248B (en) Automatic taxi-driving charging pile selection method
KR102480915B1 (en) Operating method of intelligent vehicle driving control system
JP2022067454A (en) Vehicle control support system
Hong et al. Personalized Energy Consumption Prediction of CAEVs With Personal Driving Cycle Selection
CN117670372A (en) Wind control method based on malicious detour of driver

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant