WO2017057528A1 - Non-robot car, robot car, road traffic system, vehicle sharing system, robot car training system, and robot car training method - Google Patents


Info

Publication number
WO2017057528A1
WO2017057528A1, PCT/JP2016/078747, JP2016078747W
Authority
WO
WIPO (PCT)
Prior art keywords
driving
behavior information
driving behavior
robot car
vehicle
Prior art date
Application number
PCT/JP2016/078747
Other languages
English (en)
Japanese (ja)
Inventor
佐藤 謙治
Original Assignee
株式会社発明屋
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社発明屋 filed Critical 株式会社発明屋
Priority to JP2017543533A (JPWO2017057528A1)
Publication of WO2017057528A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • The present invention relates to a non-robot car having a driving support control function for supporting driving by a human driver, to a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and to a road traffic system in which non-robot cars and robot cars travel on a road.
  • The present invention also relates to a vehicle sharing system in which a vehicle is shared by a plurality of users.
  • The present invention further relates to a robot car training system and a robot car training method for improving the automatic driving performance of a robot car.
  • Many automatic driving control techniques for automobiles have been proposed (Patent Documents 1-13, Non-patent Documents 1-4).
  • A number of techniques relating to shared vehicle systems have also been proposed (Patent Documents 14-26, Non-patent Documents 1-4). These include techniques related to car rental services, car sharing services, taxi services, and the like.
  • The vehicles used in shared vehicle systems include vehicles equipped with a driving support system and vehicles having an automatic driving function (Patent Documents 1-13). Among these techniques are some related to machine learning of driving operations (Patent Documents 27-44).
  • Machine learning in a conventional automobile improves automatic driving performance by learning the driving behavior of the human driver of the own vehicle and reflecting the learning result in the automatic driving control of the own vehicle. For this reason, machine learning in a conventional automobile cannot be applied to a robot car, that is, a vehicle that travels autonomously without a driving operation by a human driver. In addition, in a situation that the own vehicle has not experienced (that is, has not learned), the vehicle can only exhibit its initial, default driving performance.
  • The problems to be solved by the present invention are as follows. (1) To provide a non-robot car that can improve its driving support performance by utilizing not only the experience of the own vehicle but also the experience of other vehicles. (2) To provide a robot car that can improve its automatic driving performance by utilizing not only the experience of the own vehicle but also the experience of other vehicles. (3) To provide a road traffic system capable of improving the driving support performance of non-robot cars and the automatic driving performance of robot cars. (4) To provide a vehicle sharing system in which each vehicle can improve its driving performance by utilizing not only the experience of the own vehicle but also the experience of other vehicles. (5) To provide a robot car training system and a robot car training method capable of improving the automatic driving performance of a robot car by making the robot car learn the driving behavior of human drivers.
  • The road traffic system of the present invention includes systems having the following configurations.
  • In this road traffic system, a non-robot car performs driving support control based on the traveling condition of the own vehicle while referring to driving behavior information (experience information) of another vehicle. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as the other vehicle by performing driving support control while referring to the driving behavior information of the other vehicle that has experienced the situation.
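The specification describes driving behavior information only as an association between a traveling condition and a driving operation and does not fix a concrete data format. Purely as an illustration, one record of that association might be represented as in the following Python sketch; all field names and value types here are assumptions, not terms taken from the specification.

```python
# Illustrative only: a minimal "driving behavior information" record pairing a
# recognized traveling condition with the driving operation performed in it.
# Field names are assumptions for this sketch, not terms from the specification.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class TravelingCondition:
    speed_kmh: float                 # own-vehicle speed
    road_curvature: float            # e.g. 1/radius of the lane ahead
    gap_to_lead_vehicle_m: float     # headway to the preceding vehicle
    weather: str = "clear"


@dataclass
class DrivingOperation:
    steering_angle_deg: float
    throttle: float                  # 0.0 .. 1.0
    brake: float                     # 0.0 .. 1.0


@dataclass
class DrivingBehaviorRecord:
    vehicle_id: str
    condition: TravelingCondition
    operation: DrivingOperation
    extra: Dict[str, str] = field(default_factory=dict)


# Example: one "experience" shared by another vehicle.
record = DrivingBehaviorRecord(
    vehicle_id="vehicle-42",
    condition=TravelingCondition(speed_kmh=60.0, road_curvature=0.002,
                                 gap_to_lead_vehicle_m=35.0, weather="rain"),
    operation=DrivingOperation(steering_angle_deg=1.5, throttle=0.1, brake=0.0),
)
print(record)
```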
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles include a non-robot car having a driving support control function for supporting driving by a human driver, and the non-robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and a driving support control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs driving support control so that the driving operation is performed, the driving support control unit including a driving knowledge unit that stores knowledge information to be referred to when determining the driving operation to be performed, and a learning processing unit (knowledge update processing unit) that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • In this road traffic system, a non-robot car performs a learning process of updating knowledge information (such as the determination criteria for determining the driving operation to be performed) based on driving behavior information of another vehicle, determines the driving operation according to the traveling condition with reference to that knowledge information, and performs driving support control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can perform driving support control by learning the driving behavior of another vehicle that has experienced the situation.
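As a rough, non-authoritative sketch of this knowledge-update style of learning, the driving knowledge unit could be modeled as a lookup table keyed by a coarsely discretized traveling condition, with the learning processing unit refreshing entries from received driving behavior information. The discretization and the overwrite-style update rule are assumptions of this sketch, not the method of the specification.

```python
# Sketch of a "driving knowledge unit" with a "learning processing unit"
# (knowledge update).  Knowledge is a table keyed by a coarsely discretized
# traveling condition; records received from other vehicles refresh it.
# The discretization and the overwrite-style update are illustrative assumptions.
from typing import Dict, Tuple

SituationKey = Tuple[int, int]          # (speed bin, headway bin)
DrivingOperation = Dict[str, float]     # e.g. {"steering": deg, "throttle": 0..1, "brake": 0..1}


def situation_key(speed_kmh: float, gap_m: float) -> SituationKey:
    """Coarsely discretize a traveling condition into a lookup key."""
    return (int(speed_kmh // 10), int(gap_m // 10))


class DrivingKnowledgeUnit:
    def __init__(self) -> None:
        self._table: Dict[SituationKey, DrivingOperation] = {}

    def learn(self, speed_kmh: float, gap_m: float, operation: DrivingOperation) -> None:
        # Learning processing unit: update the stored knowledge with the
        # driving behavior information acquired from another vehicle.
        self._table[situation_key(speed_kmh, gap_m)] = operation

    def decide(self, speed_kmh: float, gap_m: float) -> DrivingOperation:
        # Driving support control refers to this knowledge when determining
        # the driving operation; unknown situations fall back to "do nothing".
        return self._table.get(situation_key(speed_kmh, gap_m),
                               {"steering": 0.0, "throttle": 0.0, "brake": 0.0})


knowledge = DrivingKnowledgeUnit()
# Experience reported by another vehicle: gentle braking at 60 km/h, 20 m headway.
knowledge.learn(60.0, 20.0, {"steering": 0.0, "throttle": 0.0, "brake": 0.3})
print(knowledge.decide(62.0, 24.0))     # same situation bin -> reuse that operation
```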
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles include a non-robot car having a driving support control function for supporting driving by a human driver, and the non-robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and a driving support control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs driving support control so that the driving operation is performed, the driving support control unit including a driving operation determination unit that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • In this road traffic system, the non-robot car performs a learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of another vehicle, determines the driving operation according to the traveling condition with that driving operation determination function, and performs driving support control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can perform driving support control by learning the driving behavior of another vehicle that has experienced the situation.
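The specification does not specify the form of the driving operation determination function or how its parameters are adjusted. As one hedged illustration: if the function were a simple linear car-following rule, the learning processing unit (parameter adjustment unit) could nudge its gains toward the operations observed in other vehicles, as below. The linear form and the gradient-style update are assumptions of this sketch.

```python
# Illustration only: a "driving operation determination function" as a linear
# car-following rule, with a "learning processing unit (parameter adjustment
# unit)" that nudges its gains toward driving behavior observed in other
# vehicles.  The linear form and the LMS-style update are assumptions.


class DrivingOperationDeterminer:
    def __init__(self, k_gap: float = 0.05, k_speed: float = 0.02) -> None:
        self.k_gap = k_gap          # parameter: sensitivity to headway error
        self.k_speed = k_speed      # parameter: sensitivity to relative speed

    def determine(self, gap_error_m: float, rel_speed_mps: float) -> float:
        """Return a longitudinal command in [-1, 1] (negative = brake)."""
        cmd = self.k_gap * gap_error_m + self.k_speed * rel_speed_mps
        return max(-1.0, min(1.0, cmd))

    def adjust(self, gap_error_m: float, rel_speed_mps: float,
               observed_cmd: float, rate: float = 0.01) -> None:
        # Parameter adjustment: move the gains so the function's output comes
        # closer to the command another vehicle actually applied here.
        error = observed_cmd - self.determine(gap_error_m, rel_speed_mps)
        self.k_gap += rate * error * gap_error_m
        self.k_speed += rate * error * rel_speed_mps


determiner = DrivingOperationDeterminer()
# Driving behavior information from another vehicle: in this situation it
# braked a bit less (-0.4) than the current parameters would suggest.
determiner.adjust(gap_error_m=-8.0, rel_speed_mps=-2.0, observed_cmd=-0.4)
print(determiner.determine(gap_error_m=-8.0, rel_speed_mps=-2.0))
```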
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles include a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of other vehicles; and an automatic driving control unit that performs automatic driving control based on the traveling condition recognized by the traveling condition recognition unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit.
  • In this road traffic system, the robot car performs automatic driving control based on the traveling condition of the own vehicle while referring to the driving behavior information (experience information) of other vehicles. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as another vehicle by performing automatic driving control based on the driving behavior information of the other vehicle that has experienced the situation.
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles include a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of other vehicles; and an automatic driving control unit that determines the driving operation to be executed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is executed, the automatic driving control unit including a driving knowledge unit that stores knowledge information to be referred to when determining the driving operation to be performed, and a learning processing unit (knowledge update processing unit) that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • In this road traffic system, the robot car performs a learning process of updating the knowledge information (such as the determination criteria for determining the driving operation to be performed) based on the driving behavior information of other vehicles, determines the driving operation according to the traveling condition with reference to that knowledge information, and performs automatic driving control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as another vehicle by learning the driving behavior of the other vehicle that has experienced the situation and performing automatic driving control.
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles include a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of other vehicles; and an automatic driving control unit that determines the driving operation to be executed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is executed, the automatic driving control unit including a driving operation determination unit that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • In this road traffic system, the robot car performs a learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of other vehicles, determines the driving operation according to the traveling condition with that driving operation determination function, and performs automatic driving control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can perform automatic driving control by learning the driving behavior of another vehicle that has experienced the situation.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information, the vehicles include a non-robot car having a driving support control function for supporting driving by a human driver, and the non-robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives driving behavior information of another vehicle from the computing system; and a driving support control unit that performs driving support control based on the traveling condition recognized by the traveling condition recognition unit while referring to the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from one or more vehicles and transmits the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information. The non-robot car that has received the driving behavior information from the computing system performs driving support control based on the traveling condition of the own vehicle while referring to that driving behavior information. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as another vehicle by performing driving support control while referring to the driving behavior information of the other vehicle that has experienced the situation.
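The computing system's role here is described only functionally (receive driving behavior information, transmit it to vehicles other than the sender). A toy, in-memory relay along those lines might look like the following sketch; the queue-based delivery and the class and method names are assumptions standing in for an actual network transport.

```python
# Toy sketch of the computing system's two functions: receive driving behavior
# information from one or more vehicles, and transmit it to one or more
# vehicles other than the sender.  In-memory queues stand in for whatever
# network transport a real deployment would use (an assumption of this sketch).
from collections import defaultdict
from typing import Any, Dict, List


class ComputingSystem:
    def __init__(self) -> None:
        self._inbox: Dict[str, List[Any]] = defaultdict(list)
        self._vehicles: List[str] = []

    def register(self, vehicle_id: str) -> None:
        self._vehicles.append(vehicle_id)

    def receive_driving_behavior(self, sender_id: str, info: Any) -> None:
        # Receiving function + transmitting function: forward the information
        # to every registered vehicle except the transmission source.
        for vehicle_id in self._vehicles:
            if vehicle_id != sender_id:
                self._inbox[vehicle_id].append(info)

    def fetch(self, vehicle_id: str) -> List[Any]:
        # Called by a vehicle's driving behavior information receiving unit.
        pending, self._inbox[vehicle_id] = self._inbox[vehicle_id], []
        return pending


system = ComputingSystem()
for vid in ("robot-1", "non-robot-2", "non-robot-3"):
    system.register(vid)

system.receive_driving_behavior("robot-1", {"situation": "merge", "operation": "yield"})
print(system.fetch("non-robot-2"))   # other vehicles receive robot-1's experience
print(system.fetch("robot-1"))       # the sender does not receive its own info back
```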
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information, the vehicles include a non-robot car having a driving support control function for supporting driving by a human driver, and the non-robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives driving behavior information of another vehicle from the computing system; and a driving support control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs driving support control so that the driving operation is performed, the driving support control unit including a driving knowledge unit that stores knowledge information to be referred to when determining the driving operation, and a learning processing unit (knowledge update processing unit) that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from one or more vehicles and transmits the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information. The non-robot car that has received the driving behavior information from the computing system performs a learning process of updating knowledge information (such as the determination criteria for determining the driving operation to be performed) based on the driving behavior information of the other vehicle, determines the driving operation according to the traveling condition with reference to that knowledge information, and performs driving support control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as another vehicle by learning the driving behavior of the other vehicle that has experienced the situation and performing driving support control.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information, the vehicles include a non-robot car having a driving support control function for supporting driving by a human driver, and the non-robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and a driving support control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs driving support control so that the driving operation is performed, the driving support control unit including a driving operation determination unit that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the non-robot car performs a learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of another vehicle, determines the driving operation according to the traveling condition with that driving operation determination function, and performs driving support control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as another vehicle by learning the driving behavior of the other vehicle that has experienced the situation and performing driving support control.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information, the vehicles include a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and an automatic driving control unit that performs automatic driving control based on the traveling condition recognized by the traveling condition recognition unit while referring to the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from one or more vehicles and transmits the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information. The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling condition of the own vehicle while referring to that driving behavior information. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car performs automatic driving control while referring to the driving behavior information of other vehicles that have experienced the situation.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information, the vehicles include a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and an automatic driving control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is performed, the automatic driving control unit including a driving knowledge unit that stores knowledge information to be referred to when determining the driving operation, and a learning processing unit (knowledge update processing unit) that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from one or more vehicles and transmits the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information. The robot car that has received the driving behavior information from the computing system performs a learning process of updating knowledge information (such as the determination criteria for determining the driving operation to be performed) based on the driving behavior information of the other vehicle, determines the driving operation according to the traveling condition with reference to that knowledge information, and performs automatic driving control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as another vehicle by learning the driving behavior of the other vehicle that has experienced the situation and performing automatic driving control.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles other than the transmission source of the driving behavior information, the vehicles include a robot car in which the driving operation is performed by automatic driving control instead of by a human driver, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives driving behavior information of another vehicle from the computing system; and an automatic driving control unit that determines the driving operation to be executed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is performed, the automatic driving control unit including a driving operation determination unit that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the robot car performs a learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of another vehicle, determines the driving operation according to the traveling condition with that driving operation determination function, and performs automatic driving control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as another vehicle by learning the driving behavior of the other vehicle that has experienced the situation and performing automatic driving control.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a non-robot car, the robot car is a vehicle in which the driving operation is performed by automatic driving control instead of by a human driver and has a driving behavior information transmitting unit that transmits driving behavior information of the own vehicle to the computing system, and the non-robot car is a vehicle having a driving support control function for supporting driving by a human driver and comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and a driving support control unit that performs driving support control based on the traveling condition recognized by the traveling condition recognition unit while referring to the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from a robot car and transmits the driving behavior information to a non-robot car. The non-robot car that has received the driving behavior information from the computing system performs driving support control based on the traveling condition of the own vehicle while referring to that driving behavior information. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as the robot car by performing driving support control while referring to the driving behavior information of the robot car that has experienced the situation.
  • According to this road traffic system, in a situation where robot cars and non-robot cars coexist, the non-robot cars learn the driving techniques of the robot cars, so that the driving support performance of the non-robot cars can be improved efficiently. As the driving support performance of the non-robot cars improves, the operating efficiency and safety of the entire road traffic system can also be improved.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a non-robot car, the robot car is a vehicle in which the driving operation is performed by automatic driving control instead of by a human driver and has a driving behavior information transmitting unit that transmits driving behavior information of the own vehicle to the computing system, and the non-robot car is a vehicle having a driving support control function for supporting driving by a human driver and comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and a driving support control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs driving support control so that the driving operation is performed, the driving support control unit including a driving knowledge unit that stores knowledge information (such as determination criteria) to be referred to when determining the driving operation, and a learning processing unit (knowledge update processing unit) that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from a robot car and transmits the driving behavior information to a non-robot car. The non-robot car that has received the driving behavior information from the computing system performs a learning process of updating the knowledge information (such as the determination criteria for determining the driving operation to be performed) based on the driving behavior information of the robot car, determines the driving operation according to the traveling condition with reference to that knowledge information, and performs driving support control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as the robot car by learning the driving behavior of the robot car that has experienced the situation and performing driving support control. Moreover, according to this road traffic system, in a situation where robot cars and non-robot cars coexist, the non-robot cars learn the driving techniques of the robot cars, so that the driving support performance of the non-robot cars can be improved efficiently.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a non-robot car, the robot car is a vehicle in which the driving operation is performed by automatic driving control instead of by a human driver and has a driving behavior information transmitting unit that transmits driving behavior information of the own vehicle to the computing system, and the non-robot car is a vehicle having a driving support control function for supporting driving by a human driver and comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and a driving support control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs driving support control so that the driving operation is performed, the driving support control unit including a driving operation determination unit that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the non-robot car performs a learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of the robot car, determines the driving operation according to the traveling condition of the own vehicle with that driving operation determination function, and performs driving support control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the non-robot car can handle the situation with the same level of driving performance as the robot car by learning the driving behavior of the robot car that has experienced the situation and performing driving support control.
  • According to this road traffic system, in a situation where robot cars and non-robot cars coexist, the non-robot cars learn the driving techniques of the robot cars, so that the driving support performance of the non-robot cars can be improved efficiently. As the driving support performance of the non-robot cars improves, the operating efficiency and safety of the entire road traffic system can also be improved.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a non-robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a robot car, the non-robot car is a vehicle whose driving operation is performed by a human driver and has a driving behavior information transmitting unit that transmits driving behavior information of the own vehicle to the computing system, and the robot car is a vehicle in which the driving operation is performed by automatic driving control instead of by a human driver and comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and an automatic driving control unit that performs automatic driving control based on the traveling condition recognized by the traveling condition recognition unit while referring to the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from a non-robot car and transmits the driving behavior information to a robot car. The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling condition of the own vehicle while referring to that driving behavior information. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as the non-robot car by performing automatic driving control while referring to the driving behavior information of the non-robot car that has experienced the situation.
  • According to this road traffic system, in a situation where robot cars and non-robot cars coexist, the robot cars learn the driving techniques of the human drivers who drive the non-robot cars, so that the automatic driving performance of the robot cars can be improved efficiently. As the automatic driving performance of the robot cars improves, the operating efficiency and safety of the entire road traffic system can also be improved.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a non-robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a robot car, the non-robot car is a vehicle whose driving operation is performed by a human driver and has a driving behavior information transmitting unit that transmits driving behavior information of the own vehicle to the computing system, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and an automatic driving control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is performed, the automatic driving control unit including a driving knowledge unit that stores knowledge information to be referred to when determining the driving operation, and a learning processing unit that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from a non-robot car and transmits the driving behavior information to a robot car. The robot car that has received the driving behavior information from the computing system performs a learning process of updating the knowledge information (such as the determination criteria for determining the driving operation to be performed) based on the driving behavior information of the non-robot car, determines the driving operation according to the traveling condition with reference to that knowledge information, and performs automatic driving control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as the non-robot car by learning the driving behavior of the non-robot car that has experienced the situation and performing automatic driving control.
  • According to this road traffic system, in a situation where robot cars and non-robot cars coexist, the robot cars learn the driving techniques of the human drivers who drive the non-robot cars, so that the automatic driving performance of the robot cars can be improved efficiently. As the automatic driving performance of the robot cars improves, the operating efficiency and safety of the entire road traffic system can also be improved.
  • A road traffic system in which a plurality of vehicles travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a non-robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a robot car, the non-robot car is a vehicle whose driving operation is performed by a human driver and has a driving behavior information transmitting unit that transmits driving behavior information of the own vehicle to the computing system, and the robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and an automatic driving control unit that determines the driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is performed, the automatic driving control unit including a driving operation determination unit that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information received by the driving behavior information receiving unit.
  • In this road traffic system, the robot car performs a learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of the non-robot car, determines the driving operation according to the traveling condition of the own vehicle with that driving operation determination function, and performs automatic driving control so that the driving operation is performed. Therefore, according to this road traffic system, even in a situation that the own vehicle has not experienced, the robot car can handle the situation with the same level of driving performance as the non-robot car by learning the driving behavior of the non-robot car that has experienced the situation and performing automatic driving control.
  • The non-robot car has a driving behavior information output unit that outputs driving behavior information of the own vehicle to the outside, and the driving behavior information of the own vehicle is information in which the traveling condition of the own vehicle and the driving operation performed in the own vehicle are associated. The road traffic system according to any one of Configurations 1.1 to 1.3.
  • [Configuration 1.21] The robot car has a driving behavior information output unit that outputs driving behavior information of the own vehicle to the outside, and the driving behavior information of the own vehicle is information in which the traveling condition of the own vehicle and the driving operation performed in the own vehicle are associated. The road traffic system according to any one of Configurations 1.4 to 1.6.
  • The non-robot car has a driving operation detection unit that detects a driving operation by the human driver of the own vehicle, and a driving behavior information transmitting unit that transmits to the computing system driving behavior information in which the traveling condition recognized by the traveling condition recognition unit and the driving operation detected by the driving operation detection unit are associated. The road traffic system according to any one of Configurations 1.16 to 1.18.
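As a hedged sketch of the non-robot-car side just described, the driving operation detection unit and the traveling condition recognition unit could be sampled together and the associated pair transmitted to the computing system. Sensor access and the uplink are stubbed out here; the function and field names are assumptions, not terms from the specification.

```python
# Sketch of the non-robot-car side: a driving operation detection unit samples
# the human driver's inputs, the traveling condition recognition unit supplies
# the current situation, and the driving behavior information transmitting
# unit sends the associated pair to the computing system.  Sensors and the
# transport are stand-ins (assumptions of this sketch).
import json
import time
from typing import Callable, Dict


def capture_and_transmit(recognize_situation: Callable[[], Dict[str, float]],
                         detect_operation: Callable[[], Dict[str, float]],
                         send: Callable[[str], None],
                         period_s: float = 0.1,
                         samples: int = 3) -> None:
    """Associate each recognized traveling condition with the detected driving
    operation and transmit the resulting driving behavior information."""
    for _ in range(samples):
        info = {
            "timestamp": time.time(),
            "traveling_situation": recognize_situation(),
            "driving_operation": detect_operation(),
        }
        send(json.dumps(info))
        time.sleep(period_s)


# Stand-in sensors and uplink for the sketch.
capture_and_transmit(
    recognize_situation=lambda: {"speed_kmh": 48.0, "gap_m": 22.0},
    detect_operation=lambda: {"steering_deg": -2.0, "throttle": 0.15, "brake": 0.0},
    send=print,            # a real system would transmit to the computing system
)
```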
  • The computing system has a driving behavior information receiving function of receiving driving behavior information from one or more vehicles, an optimization information generating function of generating optimized driving behavior information based on that driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information so that it reflects the latest information, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to one or more vehicles.
  • The computing system has a driving behavior information receiving function of receiving driving behavior information from a robot car, an optimization information generating function of generating optimized driving behavior information based on that driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information so that it reflects the latest information, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to a non-robot car. The road traffic system according to any one of Configurations 1.13 to 1.15.
  • The computing system has a driving behavior information receiving function of receiving driving behavior information from a non-robot car, an optimization information generating function of generating optimized driving behavior information based on that driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information so that it reflects the latest information, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to a robot car. The road traffic system according to any one of Configurations 1.16 to 1.18.
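The optimization information generating and updating functions are described only functionally. The sketch below keeps, per coarse situation, the received driving behavior report that involves the least braking and replaces it whenever a better report arrives; choosing "least braking" as the criterion is just one of the optimization targets listed later in this specification, and the grouping key and scoring are assumptions of the sketch.

```python
# Sketch of the optimization information generation / update functions: collect
# driving behavior reports, keep one "optimized" operation per situation, and
# refresh it whenever a better report arrives.  The "least braking" criterion,
# the grouping key, and the flat report format are illustrative assumptions.
from typing import Dict, Tuple

Report = Dict[str, float]               # flat report: situation fields + operation fields


class OptimizedBehaviorStore:
    def __init__(self) -> None:
        self._best: Dict[Tuple[int, int], Report] = {}

    @staticmethod
    def _key(report: Report) -> Tuple[int, int]:
        return (int(report["speed_kmh"] // 10), int(report["gap_m"] // 10))

    def update(self, report: Report) -> None:
        # Optimization information update function: keep the report with the
        # least braking for each situation, i.e. always hold the current best.
        key = self._key(report)
        current = self._best.get(key)
        if current is None or report["brake"] < current["brake"]:
            self._best[key] = report

    def optimized_for(self, speed_kmh: float, gap_m: float) -> Report:
        # The driving behavior information transmitting function would send this.
        return self._best[(int(speed_kmh // 10), int(gap_m // 10))]


store = OptimizedBehaviorStore()
store.update({"speed_kmh": 55.0, "gap_m": 30.0, "throttle": 0.0, "brake": 0.4})
store.update({"speed_kmh": 57.0, "gap_m": 33.0, "throttle": 0.1, "brake": 0.1})
print(store.optimized_for(56.0, 31.0))   # the smoother (less braking) report wins
```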
  • The robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information; an automatic driving control unit that performs automatic driving control based on the traveling condition recognized by the traveling condition recognition unit of the own vehicle while referring to the driving behavior information received by the driving behavior information receiving unit; and a driving behavior information transmitting unit that transmits to the computing system driving behavior information in which the traveling condition recognized by the traveling condition recognition unit and the driving operation performed by automatic driving control are associated. In this road traffic system, the computing system receives driving behavior information (experience information) from a robot car and transmits the driving behavior information to a robot car other than the transmission source of the driving behavior information.
  • The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling condition of the own vehicle while referring to the driving behavior information of the other robot car. Therefore, even in a situation that the own vehicle has not experienced, the robot car performs automatic driving control according to the traveling condition of the own vehicle while referring to the driving behavior information of other robot cars that have experienced the situation, and can handle the situation with the same level of driving performance as those other robot cars.
  • According to this road traffic system, by sharing driving behavior information among robot cars, the learning efficiency of the robot cars in the road traffic system can be increased and their automatic driving performance can be improved rapidly. Since the automatic driving performance of all the robot cars in the road traffic system can be improved rapidly, the operating efficiency, safety, and the like of the entire road traffic system also improve rapidly.
  • A road traffic system in which robot cars, in which the driving operation is performed by automatic driving control instead of by a human driver, travel on a road, the system comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from one or more robot cars, an optimization information generating function of generating optimized driving behavior information based on that driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to one or more robot cars, and each robot car comprises: a traveling condition recognition unit that recognizes the traveling condition of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information; an automatic driving control unit that performs automatic driving control based on the traveling condition recognized by the traveling condition recognition unit of the own vehicle while referring to the driving behavior information received by the driving behavior information receiving unit; and a driving behavior information transmitting unit that transmits to the computing system driving behavior information in which the traveling condition recognized by the traveling condition recognition unit and the driving operation performed by automatic driving control are associated.
  • In this road traffic system, the computing system receives driving behavior information (experience information) from the robot cars, generates optimized driving behavior information based on that driving behavior information, keeps the optimized driving behavior information updated to the latest information, and transmits it to the robot cars.
  • The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling condition of the own vehicle while referring to the driving behavior information of the other robot cars. Even in a situation that the own vehicle has not experienced, the robot car performs automatic driving control with reference to driving behavior information optimized on the basis of the driving behavior information of other robot cars that have experienced the situation, and can therefore handle the situation with driving performance equal to or better than that of those other robot cars.
  • According to this road traffic system, by sharing driving behavior information among robot cars, the learning efficiency of the robot cars in the road traffic system can be increased and their automatic driving performance can be improved rapidly. Since the automatic driving performance of all the robot cars in the road traffic system can be improved rapidly, the operating efficiency, safety, and the like of the entire road traffic system also improve rapidly.
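Seen from the vehicle side, the exchange above amounts to a simple loop: report the own vehicle's driving behavior after each automatic-driving decision and prefer any optimized driving behavior information received for a matching situation. The following sketch is one such loop under assumed, simplified interfaces; the list-based uplink and the dictionary of optimized operations are placeholders, not the specification's design.

```python
# Vehicle-side view of the exchange, as a rough sketch: after each automatic-
# driving decision the robot car reports its own driving behavior information,
# and any optimized driving behavior information received from the computing
# system takes precedence in matching situations.  The queue-based
# "uplink/downlink" is a placeholder assumption.
from typing import Dict, List, Tuple

Situation = Tuple[int, int]


class RobotCarClient:
    def __init__(self, uplink: List[Dict], default_brake: float = 0.2) -> None:
        self.uplink = uplink                                  # -> computing system
        self.optimized: Dict[Situation, Dict] = {}            # <- computing system
        self.default_brake = default_brake

    def on_downlink(self, situation: Situation, operation: Dict) -> None:
        # Driving behavior information receiving unit: store optimized info.
        self.optimized[situation] = operation

    def drive_step(self, situation: Situation) -> Dict:
        # Automatic driving control refers to the optimized information when a
        # matching situation is known; otherwise it uses its own default rule.
        operation = self.optimized.get(situation, {"brake": self.default_brake})
        # Driving behavior information transmitting unit: report what was done.
        self.uplink.append({"situation": situation, "operation": operation})
        return operation


uplink: List[Dict] = []
car = RobotCarClient(uplink)
car.on_downlink((5, 3), {"brake": 0.05})        # optimized: brake much less here
print(car.drive_step((5, 3)))                   # uses the optimized operation
print(uplink)                                   # and reports its own behavior back
```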
  • The optimized driving behavior information may be driving behavior information optimized according to the vehicle attributes of the vehicle that receives the driving behavior information, driving behavior information optimized so as to minimize the possibility that the vehicle receiving the driving behavior information contacts an obstacle, driving behavior information optimized so as to maximize regenerative energy, driving behavior information optimized so as to minimize the number of accelerations or the acceleration time within a predetermined traveling distance or predetermined traveling time, driving behavior information optimized so as to minimize the number of braking operations or the braking time within a predetermined traveling distance or predetermined traveling time, or driving behavior information optimized so as to minimize the traveling distance from the departure point to the arrival point. The road traffic system according to any one of Configurations 1.23, 1.24, 1.25, and 1.27.
  • The optimization information generating function includes a function of correcting the driving behavior information so as to minimize the possibility that the vehicle receiving the driving behavior information contacts an obstacle, based on the traveling condition and the vehicle attributes of that vehicle. The road traffic system according to any one of Configurations 1.23, 1.24, 1.25, and 1.27.
  • In this road traffic system, when there is a difference between the vehicle attributes of the vehicle providing the driving behavior information (providing source vehicle) and the vehicle attributes of the vehicle receiving the driving behavior information (provision destination vehicle), driving behavior information corrected so as to minimize the possibility that the provision destination vehicle contacts an obstacle is provided to the provision destination vehicle. The vehicles can therefore perform driving support control or automatic driving control with reference to the corrected driving behavior information.
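As a hedged illustration of this correction, suppose the relevant vehicle attributes are mass and maximum deceleration (an assumption; the specification does not enumerate the attributes). A provided braking operation could then be scaled up for a provision destination vehicle with weaker stopping ability, reducing the chance of contacting an obstacle. The scaling heuristic below is illustrative only, not the specification's correction rule.

```python
# Hedged illustration of correcting provided driving behavior information for
# the provision destination vehicle's attributes (here, mass and braking
# capability -- attribute choices are assumptions).  The brake command is
# scaled by a rough stopping-ability ratio so the heavier, weaker-braking
# destination vehicle is less likely to contact an obstacle.
from typing import Dict


def correct_for_destination(operation: Dict[str, float],
                            source_attrs: Dict[str, float],
                            dest_attrs: Dict[str, float]) -> Dict[str, float]:
    """Scale the brake command by a stopping-ability ratio, as a rough
    safety-margin heuristic (not the patent's actual correction rule)."""
    ratio = (dest_attrs["mass_kg"] / source_attrs["mass_kg"]) * \
            (source_attrs["max_decel_mps2"] / dest_attrs["max_decel_mps2"])
    corrected = dict(operation)
    corrected["brake"] = min(1.0, operation["brake"] * max(1.0, ratio))
    return corrected


source = {"mass_kg": 1200.0, "max_decel_mps2": 8.0}        # light, strong brakes
destination = {"mass_kg": 2000.0, "max_decel_mps2": 6.0}   # heavier, weaker brakes
print(correct_for_destination({"throttle": 0.0, "brake": 0.3}, source, destination))
# -> the heavier destination vehicle is told to brake harder in the same situation
```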
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles have a function of providing driving behavior information of the own vehicle to another vehicle, a function of receiving driving behavior information of another vehicle, and a function of performing driving control of the own vehicle based on that driving behavior information, the driving behavior information of the own vehicle being information in which the traveling condition of the own vehicle and the driving operation performed in the own vehicle are associated, and the driving behavior information of the other vehicle being information in which the traveling condition of the other vehicle and the driving operation performed in the other vehicle are associated.
  • The vehicles of this road traffic system can mutually provide driving behavior information (experience information) to one another, and the driving behavior information of another vehicle can be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, a vehicle of this road traffic system can handle the situation with the same level of driving performance as another vehicle by performing driving control with reference to the driving behavior information of the other vehicle that has experienced the situation.
  • A vehicle of this road traffic system can receive driving behavior information (experience information) of another vehicle by inter-vehicle communication, and the driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, a vehicle of this road traffic system can handle the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of the other vehicle that has experienced the situation.
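A minimal sketch of this direct inter-vehicle exchange, with no computing system in the path: each vehicle broadcasts its own driving behavior information to peers in range and keeps what it hears for use in its own driving control. The "peers in range" list is a stand-in for the actual inter-vehicle radio link, and the class and field names are assumptions.

```python
# Minimal sketch of direct inter-vehicle exchange of driving behavior
# information (no computing system in the path).  The explicit peer list
# stands in for the inter-vehicle communication link.
from typing import Dict, List


class Vehicle:
    def __init__(self, vehicle_id: str) -> None:
        self.vehicle_id = vehicle_id
        self.heard: List[Dict] = []      # other vehicles' driving behavior info

    def broadcast(self, info: Dict, peers_in_range: List["Vehicle"]) -> None:
        # Provide the own vehicle's driving behavior information to peers.
        for peer in peers_in_range:
            peer.heard.append({"from": self.vehicle_id, **info})


a, b = Vehicle("A"), Vehicle("B")
a.broadcast({"situation": "icy curve", "operation": {"speed_kmh": 30}}, [b])
print(b.heard)   # B can now refer to A's experience in its own driving control
```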
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles have a function of receiving driving behavior information of another vehicle by communication between the own vehicle and a stationary object on the ground, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, the driving behavior information of the other vehicle being information in which the traveling condition of the other vehicle and the driving operation performed in the other vehicle are associated.
  • A vehicle of this road traffic system can receive driving behavior information (experience information) of another vehicle through communication with a stationary object on the ground, and the driving behavior information of the other vehicle can then be used for driving control of the own vehicle.
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles have a function of receiving driving behavior information of another vehicle by road-to-vehicle communication between the own vehicle and the road, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, the driving behavior information of the other vehicle being information in which the traveling condition of the other vehicle and the driving operation performed in the other vehicle are associated.
  • A vehicle of this road traffic system can receive driving behavior information (experience information) of another vehicle through road-to-vehicle communication, and the driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, a vehicle of this road traffic system can handle the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of the other vehicle that has experienced the situation.
  • A road traffic system in which a plurality of vehicles travel on a road, wherein the vehicles have a function of receiving driving behavior information of another vehicle by communication between the own vehicle and a portable terminal, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, the driving behavior information of the other vehicle being information in which the traveling condition of the other vehicle and the driving operation performed in the other vehicle are associated.
  • A vehicle of this road traffic system can receive driving behavior information (experience information) of another vehicle through communication with a portable terminal, and the driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, a vehicle of this road traffic system can handle the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of the other vehicle that has experienced the situation.
  • A road traffic system in which a plurality of vehicles travel on a road, wherein each vehicle has a function of uploading driving behavior information of the own vehicle to a computing system on a network, a function of downloading driving behavior information of other vehicles from the computing system on the network, and a function of performing driving control of the own vehicle with reference to the driving behavior information downloaded from the computing system on the network, wherein the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed in the own vehicle are associated, and the driving behavior information of the other vehicles is information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated.
  • The vehicles of this road traffic system can exchange driving behavior information (experience information) with a large number of vehicles via the network. The driving behavior information of other vehicles can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, a vehicle of this road traffic system can cope with the situation with the same level of driving performance as the other vehicles by performing driving control with reference to the driving behavior information of other vehicles that have experienced that situation.
  • A road traffic system in which a plurality of vehicles travel on a road, wherein each vehicle has a function of uploading driving behavior information of the own vehicle to a computing system on a network, a function of downloading, from the computing system on the network, driving behavior information generated based on the driving behavior information of the own vehicle and the driving behavior information of other vehicles, and a function of performing driving control of the own vehicle with reference to the driving behavior information downloaded from the computing system on the network,
  • wherein the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed in the own vehicle are associated, and the driving behavior information of the other vehicles is information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated.
  • A vehicle of this road traffic system uploads the driving behavior information (experience information) of the own vehicle to the computing system on the network, and can download from the computing system driving behavior information generated based on the driving behavior information of the own vehicle and the driving behavior information (experience information) of other vehicles.
  • The driving behavior information generated based on the driving behavior information of the own vehicle and of other vehicles can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, a vehicle of this road traffic system can cope with the situation with driving performance equal to or higher than that of the other vehicles by performing driving control with reference to driving behavior information generated based on the driving behavior information of the own vehicle and of other vehicles that have experienced that situation.
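  • The following minimal sketch illustrates how a vehicle-side client might upload its own driving behavior information to a computing system on a network and download information generated from many vehicles' experience. The endpoint URL, the JSON payload layout, and the use of the requests library are assumptions made for illustration; the disclosure does not prescribe any particular transport or API.

```python
import requests  # assumed transport; the patent does not prescribe a protocol

BASE_URL = "https://example.invalid/driving-behavior"  # placeholder endpoint, not a real service

def upload_own_behavior(vehicle_id: str, records: list[dict]) -> None:
    # Upload the own vehicle's driving behavior information (situation + operation pairs).
    requests.post(f"{BASE_URL}/{vehicle_id}", json={"records": records}, timeout=5)

def download_generated_behavior(situation: dict) -> list[dict]:
    # Download driving behavior information generated by the computing system from the
    # own vehicle's and other vehicles' uploads, filtered to the current traveling situation.
    resp = requests.get(BASE_URL, params=situation, timeout=5)
    resp.raise_for_status()
    return resp.json()["records"]
```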
  • A road traffic system in which a plurality of vehicles travel on a road, the plurality of vehicles including a robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, wherein the robot car includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a learning processing unit that learns about objects around the own vehicle and about driving operations; a driving behavior information acquisition unit that acquires driving behavior information of other vehicles; and an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle and the learning result of the learning processing unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit, and wherein the driving behavior information of the other vehicles includes operation history information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated.
  • The robot car of this road traffic system performs automatic driving control based on the traveling situation of the own vehicle and the learning result of the own vehicle while referring to the driving behavior information (experience information) of other vehicles. Therefore, even in a situation that the own vehicle has not experienced, the robot car of this road traffic system can cope with the situation with the same level of driving performance as the other vehicles by performing automatic driving control with reference to the driving behavior information of other vehicles that have experienced that situation.
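  • One way an automatic driving control unit might combine the own vehicle's learning result with driving behavior information acquired from other vehicles is sketched below: the learned policy is preferred, and when the current traveling situation is unexperienced the most similar record from other vehicles is consulted. The function names, the similarity threshold, and the fallback operation are assumptions introduced for illustration, not the control logic defined in this disclosure.

```python
from typing import Callable, Optional

# A "policy" maps a traveling situation (dict of features) to a driving operation (dict).
Policy = Callable[[dict], Optional[dict]]

def control_step(situation: dict,
                 learned_policy: Policy,
                 other_vehicle_records: list[dict],
                 similarity: Callable[[dict, dict], float],
                 min_similarity: float = 0.8) -> dict:
    """One decision of a hypothetical automatic driving control unit.

    Prefer the own vehicle's learning result; if the situation is unexperienced
    (the learned policy returns None), fall back to the most similar record in
    the driving behavior information acquired from other vehicles.
    """
    operation = learned_policy(situation)
    if operation is not None:
        return operation

    best = max(other_vehicle_records,
               key=lambda r: similarity(situation, r["situation"]),
               default=None)
    if best is not None and similarity(situation, best["situation"]) >= min_similarity:
        return best["operation"]

    # Last resort: a conservative default operation.
    return {"steering_angle_deg": 0.0, "throttle": 0.0, "brake": 0.3}
```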
  • A road traffic system in which a plurality of vehicles travel on a road, the plurality of vehicles including a robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, wherein the robot car includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a learning processing unit that learns about objects around the own vehicle and about driving operations; a driving behavior information acquisition unit that acquires driving behavior information of other vehicles; an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle and the learning result of the learning processing unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit; and a driving behavior information output unit that outputs driving behavior information of the own vehicle to the outside,
  • wherein the driving behavior information of the other vehicles is information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated, and the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed by the automatic driving control of the own vehicle are associated.
  • The robot car of this road traffic system performs automatic driving control based on the traveling situation of the own vehicle and the learning result of the own vehicle while referring to the driving behavior information (experience information) of other vehicles. Therefore, even in a situation that it has not experienced, the robot car of this road traffic system can cope with the situation with the same level of driving performance as the other vehicles by performing automatic driving control with reference to the driving behavior information of other vehicles that have experienced that situation.
  • In addition, the robot car of this road traffic system outputs the driving behavior information (experience information) of the own vehicle to the outside, so that other vehicles can perform driving support control or automatic driving control with reference to the driving behavior information output from the robot car.
  • Therefore, even in a situation that they have not experienced, the other vehicles can cope with the situation with the same level of driving performance as the robot car by performing driving support control or automatic driving control with reference to the driving behavior information of the robot car that has experienced that situation.
  • A road traffic system in which a plurality of vehicles travel on a road, comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a non-robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a robot car,
  • wherein the non-robot car is a vehicle whose driving operation is performed by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle;
  • a driving operation detection unit that detects the driving operation performed by the human driver of the own vehicle; and a driving behavior information transmission unit that transmits, to the computing system, driving behavior information including operation history information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation detected by the driving operation detection unit are associated,
  • and wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a machine learning unit that learns about objects around the own vehicle and about driving operations; a driving behavior information receiving unit that receives the driving behavior information from the computing system; and an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle and the learning result of the machine learning unit while referring to the received driving behavior information.
  • The computing system of this road traffic system receives driving behavior information (experience information) from the non-robot car and transmits the driving behavior information to the robot car.
  • The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling situation of the own vehicle and the learning result of the own vehicle while referring to the driving behavior information of the non-robot car. Therefore, even in a situation that the own vehicle has not experienced, the robot car of this road traffic system can cope with the situation with the same level of driving performance as the non-robot car by performing automatic driving control with reference to the driving behavior information of a non-robot car that has experienced that situation.
  • In this road traffic system, the robot car learns the driving technique of the human driver who drives the non-robot car, so the automatic driving performance of the robot car can be improved efficiently. As the automatic driving performance of the robot car improves, the operation efficiency, safety, and the like of the entire road traffic system can also be improved.
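  • The receive-and-transmit behavior of such a computing system could be prototyped, for example, as a small in-memory relay like the one below. The class and method names (DrivingBehaviorRelay, receive_from_non_robot_car, and so on) are hypothetical and are used only to illustrate the two functions described above; a real system would of course persist and distribute records over a network.

```python
from collections import defaultdict

class DrivingBehaviorRelay:
    """In-memory sketch of the computing system's receive/transmit functions."""

    def __init__(self) -> None:
        self._store: list[dict] = []  # records received from non-robot cars
        self._subscribers: dict[str, list[dict]] = defaultdict(list)  # robot-car outboxes

    def receive_from_non_robot_car(self, record: dict) -> None:
        # Driving behavior information receiving function.
        self._store.append(record)
        for outbox in self._subscribers.values():
            outbox.append(record)

    def register_robot_car(self, robot_car_id: str) -> None:
        # Create an (initially empty) outbox for a robot car.
        self._subscribers[robot_car_id]

    def transmit_to_robot_car(self, robot_car_id: str) -> list[dict]:
        # Driving behavior information transmitting function: hand over pending records.
        pending, self._subscribers[robot_car_id] = self._subscribers[robot_car_id], []
        return pending

relay = DrivingBehaviorRelay()
relay.register_robot_car("robot-1")
relay.receive_from_non_robot_car({"situation": {"speed_mps": 8.0}, "operation": {"brake": 0.4}})
print(relay.transmit_to_robot_car("robot-1"))
```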
  • A road traffic system in which a plurality of vehicles travel on a road, comprising a computing system, wherein the computing system has a driving behavior information receiving function of receiving driving behavior information from a non-robot car, an optimization information generating function of generating driving behavior information optimized based on the received driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information so that it is kept up to date, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to a robot car,
  • wherein the non-robot car is a vehicle whose driving operation is performed by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle;
  • a driving operation detection unit that detects the driving operation of the own vehicle by the human driver; and a driving behavior information transmission unit that transmits, to the computing system, driving behavior information including operation history information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation detected by the driving operation detection unit are associated,
  • and wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver.
  • The computing system of this road traffic system receives driving behavior information (experience information) from the non-robot car, generates driving behavior information optimized based on that driving behavior information, updates and manages the optimized driving behavior information so that it is kept up to date, and transmits the optimized driving behavior information to the robot car.
  • The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling situation of the own vehicle and the learning result of the own vehicle while referring to the driving behavior information of the non-robot car.
  • Therefore, even in a situation that the own vehicle has not experienced, the robot car of this road traffic system can cope with the situation with driving performance equal to or higher than that of the non-robot car by performing automatic driving control with reference to driving behavior information optimized based on the driving behavior information of a non-robot car that has experienced that situation.
  • the vehicle of the present invention includes a vehicle having the following configuration.
  • A non-robot car having a driving support control function for supporting driving by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and a driving support control unit that performs driving support control based on the traveling situation recognized by the traveling situation recognition unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit.
  • This non-robot car performs driving support control based on the traveling situation of the own vehicle while referring to the driving behavior information (experience information) of another vehicle.
  • A non-robot car having a driving support control function for supporting driving by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and a driving support control unit that determines the driving operation to be performed based on the traveling situation recognized by the traveling situation recognition unit and performs driving support control so that the driving operation is performed,
  • wherein the driving support control unit includes a driving knowledge unit that stores knowledge information (judgment criteria) to be referred to when determining the driving operation, and a learning processing unit (knowledge update processing unit) that updates the knowledge information stored in the driving knowledge unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • This non-robot car performs learning processing that updates the knowledge information (such as the judgment criteria for determining the driving operation to be performed) based on the driving behavior information of other vehicles, determines the driving operation according to the traveling situation with reference to that knowledge information, and performs driving support control so that the determined driving operation is performed. Therefore, even in a situation that the own vehicle has not experienced, this non-robot car can cope with the situation with the same level of driving performance as the other vehicles by learning the driving behavior of other vehicles that have experienced that situation and performing driving support control accordingly.
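  • As an illustration of what a knowledge update processing unit could look like if the stored judgment criteria were simple threshold rules, the sketch below moves a braking-distance criterion toward the distances at which other vehicles actually began to brake. The rule representation, field names, and learning rate are assumptions for illustration only; the disclosure does not specify how the knowledge information is encoded.

```python
class DrivingKnowledgeUnit:
    """Illustrative judgment criterion: the distance at which braking should begin."""

    def __init__(self, braking_distance_m: float = 30.0) -> None:
        self.braking_distance_m = braking_distance_m

    def decide(self, situation: dict) -> dict:
        # Driving operation determined by referring to the stored knowledge.
        if situation["distance_to_obstacle_m"] <= self.braking_distance_m:
            return {"brake": 0.5, "throttle": 0.0}
        return {"brake": 0.0, "throttle": 0.3}

def update_knowledge(knowledge: DrivingKnowledgeUnit,
                     other_vehicle_records: list[dict],
                     learning_rate: float = 0.2) -> None:
    """Knowledge update processing unit (sketch): move the braking-distance criterion
    toward the distances at which other vehicles actually began to brake."""
    observed = [r["situation"]["distance_to_obstacle_m"]
                for r in other_vehicle_records
                if r["operation"].get("brake", 0.0) > 0.0]
    if observed:
        target = sum(observed) / len(observed)
        knowledge.braking_distance_m += learning_rate * (target - knowledge.braking_distance_m)
```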
  • A non-robot car having a driving support control function for supporting driving by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and a driving support control unit that determines the driving operation to be performed based on the traveling situation recognized by the traveling situation recognition unit and performs driving support control so that the driving operation is performed,
  • wherein the driving support control unit includes a driving operation determination unit that determines, by calculation, the driving operation according to the traveling situation recognized by the traveling situation recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • This non-robot car performs learning processing that adjusts the parameters of the driving operation determination function based on the driving behavior information of other vehicles, determines the driving operation according to the traveling situation with the driving operation determination function, and performs driving support control so that the determined driving operation is performed. Therefore, even in a situation that the own vehicle has not experienced, this non-robot car can cope with the situation with the same level of driving performance as the other vehicles by learning the driving behavior of other vehicles that have experienced that situation and performing driving support control accordingly.
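  • To illustrate parameter adjustment of a driving operation determination function, the following sketch assumes a simple linear function from situation features (road curvature and speed) to a steering command and fits its parameters to (traveling situation, driving operation) pairs taken from other vehicles by least squares. Both the model form and the feature names are assumptions; the disclosure does not specify the determination function.

```python
import numpy as np

def determine_steering(params: np.ndarray, features: np.ndarray) -> float:
    # Driving operation determination function: steering = params . [features, 1].
    return float(params @ np.append(features, 1.0))

def adjust_parameters(records: list[dict]) -> np.ndarray:
    """Parameter adjustment unit (sketch): least-squares fit of the determination
    function to (traveling situation, driving operation) pairs from other vehicles."""
    X = np.array([[r["situation"]["curvature"], r["situation"]["speed_mps"], 1.0]
                  for r in records])
    y = np.array([r["operation"]["steering_angle_deg"] for r in records])
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    return params

records = [
    {"situation": {"curvature": 0.02, "speed_mps": 10.0},
     "operation": {"steering_angle_deg": 4.0}},
    {"situation": {"curvature": 0.05, "speed_mps": 8.0},
     "operation": {"steering_angle_deg": 9.5}},
    {"situation": {"curvature": 0.00, "speed_mps": 15.0},
     "operation": {"steering_angle_deg": 0.2}},
]
params = adjust_parameters(records)
print(determine_steering(params, np.array([0.03, 9.0])))
```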
  • The non-robot car according to any one of Configurations 2.1 to 2.3, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of the other vehicle by inter-vehicle communication between the own vehicle and the other vehicle.
  • The non-robot car according to any one of Configurations 2.1 to 2.3, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle by communication between the own vehicle and a ground stationary object.
  • The non-robot car according to any one of Configurations 2.1 to 2.3, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle by communication between the own vehicle and the road.
  • The non-robot car according to any one of Configurations 2.1 to 2.3, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle by communication between the own vehicle and a portable terminal.
  • [Configuration 2.8] The non-robot car according to any one of Configurations 2.1 to 2.3, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle from a computing system on a network.
  • The non-robot car according to any one of Configurations 2.1 to 2.3, further comprising a driving behavior information output unit that outputs the driving behavior information of the own vehicle to the outside, wherein the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed in the own vehicle are associated.
  • The non-robot car according to any one of Configurations 2.1 to 2.3, further comprising: a driving operation detection unit that detects the driving operation performed by the human driver of the own vehicle; and a driving behavior information transmission unit that transmits, to a computing system, driving behavior information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation detected by the driving operation detection unit are associated.
  • A robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit.
  • This robot car performs automatic driving control based on the traveling situation of the own vehicle while referring to the driving behavior information (experience information) of another vehicle.
  • Therefore, even in a situation that the own vehicle has not experienced, this robot car can cope with the situation with the same level of driving performance as the other vehicle by performing automatic driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and an automatic driving control unit that determines the driving operation to be performed based on the traveling situation recognized by the traveling situation recognition unit and performs automatic driving control so that the driving operation is performed,
  • wherein the automatic driving control unit includes a driving knowledge unit that stores knowledge information to be referred to when determining the driving operation, and a learning processing unit (knowledge update processing unit) that updates, as appropriate, the knowledge information stored in the driving knowledge unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • This robot car performs learning processing that updates the knowledge information (such as the judgment criteria for determining the driving operation to be performed) based on the driving behavior information of other vehicles, determines the driving operation according to the traveling situation with reference to that knowledge information, and performs automatic driving control so that the determined driving operation is performed.
  • A robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and an automatic driving control unit that determines the driving operation to be performed based on the traveling situation recognized by the traveling situation recognition unit and performs automatic driving control so that the driving operation is performed,
  • wherein the automatic driving control unit includes a driving operation determination unit that determines, by calculation, the driving operation according to the traveling situation recognized by the traveling situation recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function used in the driving operation determination unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • This robot car performs learning processing that adjusts the parameters of the driving operation determination function based on the driving behavior information of other vehicles, determines the driving operation according to the traveling situation with the driving operation determination function, and performs automatic driving control so that the determined driving operation is performed.
  • The robot car according to any one of Configurations 2.11 to 2.13, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of the other vehicle by inter-vehicle communication between the own vehicle and the other vehicle.
  • [Configuration 2.15]
  • The robot car according to any one of Configurations 2.11 to 2.13, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle by communication between the own vehicle and a ground stationary object.
  • The robot car according to any one of Configurations 2.11 to 2.13, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle by communication between the own vehicle and a portable terminal.
  • [Configuration 2.18] The robot car according to any one of Configurations 2.11 to 2.13, wherein the driving behavior information acquisition unit is a driving behavior information receiving unit that receives the driving behavior information of another vehicle from a computing system on a network.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of performing driving control of the own vehicle with reference to driving behavior information of another vehicle, wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated.
  • This vehicle performs driving control of the own vehicle with reference to the driving behavior information of another vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of providing driving behavior information of the own vehicle to other vehicles, a function of receiving provision of driving behavior information of other vehicles, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicles, wherein the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed in the own vehicle are associated,
  • and the driving behavior information of the other vehicles is information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated. This vehicle can mutually exchange driving behavior information (experience information) with other vehicles.
  • The driving behavior information of other vehicles can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicles by performing driving control with reference to the driving behavior information of other vehicles that have experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of receiving driving behavior information of another vehicle by inter-vehicle communication between the own vehicle and the other vehicle, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated.
  • This vehicle can receive driving behavior information (experience information) of another vehicle by inter-vehicle communication. The driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of receiving driving behavior information of another vehicle by communication between the own vehicle and a ground stationary object, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated.
  • This vehicle can receive driving behavior information (experience information) of another vehicle by communicating with a ground stationary object. The driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of receiving driving behavior information of another vehicle by road-to-vehicle communication between the own vehicle and the road, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated.
  • This vehicle can receive driving behavior information (experience information) of another vehicle by road-to-vehicle communication.
  • The driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of receiving driving behavior information of another vehicle by communication between the own vehicle and a portable terminal, and a function of performing driving control of the own vehicle with reference to the driving behavior information of the other vehicle, wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated.
  • This vehicle can receive driving behavior information (experience information) of another vehicle through communication with a portable terminal. The driving behavior information of the other vehicle can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicle by performing driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of uploading driving behavior information of the own vehicle to a computing system on a network, a function of downloading driving behavior information of other vehicles from the computing system on the network, and a function of performing driving control of the own vehicle with reference to the driving behavior information downloaded from the computing system on the network, wherein the driving behavior information of the own vehicle includes operation history information in which the traveling situation of the own vehicle and the driving operation performed in the own vehicle are associated, and the driving behavior information of the other vehicles is information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated.
  • This vehicle can exchange driving behavior information (experience information) with a large number of vehicles via the network. The driving behavior information of other vehicles can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with the same level of driving performance as the other vehicles by performing driving control with reference to the driving behavior information of other vehicles that have experienced that situation.
  • A vehicle of a road traffic system in which a plurality of vehicles travel on a road, the vehicle having a function of uploading driving behavior information of the own vehicle to a computing system on a network, a function of downloading, from the computing system on the network, driving behavior information generated based on the driving behavior information of the own vehicle and the driving behavior information of other vehicles, and a function of performing driving control of the own vehicle with reference to the downloaded driving behavior information,
  • wherein the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed in the own vehicle are associated, and the driving behavior information of the other vehicles is information in which the traveling situation of the other vehicles and the driving operations performed in the other vehicles are associated.
  • This vehicle uploads the driving behavior information (experience information) of the own vehicle to the computing system on the network and can download from the computing system driving behavior information generated based on the driving behavior information of the own vehicle and the driving behavior information of other vehicles. The generated driving behavior information can then be used for driving control of the own vehicle. Therefore, even in a situation that the own vehicle has not experienced, this vehicle can cope with the situation with driving performance equal to or higher than that of the other vehicles by performing driving control with reference to driving behavior information generated based on the driving behavior information of the own vehicle and of other vehicles that have experienced that situation.
  • A robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a learning processing unit that learns about objects around the own vehicle and about driving operations; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; and an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle and the learning result of the learning processing unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit,
  • wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated.
  • This robot car performs automatic driving control based on the traveling situation of the own vehicle and the learning result of the own vehicle while referring to the driving behavior information (experience information) of another vehicle. Therefore, even in a situation that the own vehicle has not experienced, this robot car can cope with the situation with the same level of driving performance as the other vehicle by performing automatic driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • A robot car in which the driving operation is performed by automatic driving control instead of a driving operation by a human driver, comprising: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a learning processing unit that learns about objects around the own vehicle and about driving operations; a driving behavior information acquisition unit that acquires driving behavior information of another vehicle; an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle and the learning result of the learning processing unit while referring to the driving behavior information acquired by the driving behavior information acquisition unit; and a driving behavior information output unit that outputs driving behavior information of the own vehicle to the outside,
  • wherein the driving behavior information of the other vehicle is information in which the traveling situation of the other vehicle and the driving operation performed in the other vehicle are associated, and the driving behavior information of the own vehicle is information in which the traveling situation of the own vehicle and the driving operation performed by the automatic driving control of the own vehicle are associated.
  • This robot car performs automatic driving control based on the traveling situation of the own vehicle and the learning result of the own vehicle while referring to the driving behavior information (experience information) of another vehicle. Therefore, even in a situation that it has not experienced, this robot car can cope with the situation with the same level of driving performance as the other vehicle by performing automatic driving control with reference to the driving behavior information of another vehicle that has experienced that situation.
  • In addition, this robot car outputs the driving behavior information (experience information) of the own vehicle to the outside, so that other vehicles can perform driving support control or automatic driving control with reference to the driving behavior information output from the robot car.
  • Therefore, even in a situation that they have not experienced, the other vehicles can cope with the situation with the same level of driving performance as the robot car by performing driving support control or automatic driving control with reference to the driving behavior information of the robot car that has experienced that situation.
  • the computing system of the present invention includes a computing system having the following configuration.
  • [Configuration 3.1] A computing system for a road traffic system in which a plurality of vehicles travel on a road, the computing system having a driving behavior information receiving function of receiving driving behavior information of one or more vehicles, and a driving behavior information transmitting function of transmitting the driving behavior information to one or more vehicles different from the transmission source of the driving behavior information.
  • the computing system receives driving behavior information (experience information) of one or more vehicles, and transmits the driving behavior information to one or more vehicles different from the transmission source of the driving behavior information.
  • the vehicle that has received the driving behavior information from the computing system can use the driving behavior information for driving control of the own vehicle.
  • A computing system for a road traffic system in which a plurality of vehicles travel on a road, the computing system having a driving behavior information receiving function of receiving driving behavior information of one or more vehicles, an optimization information generating function of generating driving behavior information optimized based on the received driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information so that it is kept up to date, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to one or more vehicles.
  • This computing system receives driving behavior information (experience information) of one or more vehicles, generates driving behavior information optimized based on that driving behavior information, updates and manages the optimized driving behavior information so that it is kept up to date, and transmits it to one or more vehicles.
  • A vehicle that has received the driving behavior information from the computing system can use that driving behavior information for driving control of the own vehicle. That is, even in a situation that the own vehicle has not experienced, the vehicle can cope with the situation with driving performance equal to or higher than that of the other vehicles by performing driving control with reference to driving behavior information optimized based on the driving behavior information of other vehicles that have experienced that situation.
  • A computing system for a road traffic system in which a plurality of vehicles travel on a road, having a driving behavior information receiving function of receiving driving behavior information from a robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a non-robot car, wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle;
  • and a driving behavior information transmission unit that transmits, to the computing system, driving behavior information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation performed by the automatic driving control are associated; and wherein the non-robot car is a vehicle having a driving support function for supporting driving by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information; and a driving support control unit that performs driving support control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle while referring to the driving behavior information received by the driving behavior information receiving unit.
  • This computing system receives driving behavior information (experience information) from the robot car and transmits the driving behavior information to the non-robot car.
  • The non-robot car that has received the driving behavior information from the computing system performs driving support control based on the traveling situation of the own vehicle while referring to that driving behavior information. Therefore, even in a situation that the own vehicle has not experienced, the non-robot car can cope with the situation with the same level of driving performance as the robot car by performing driving support control with reference to the driving behavior information of a robot car that has experienced that situation.
  • According to this computing system, in a situation where robot cars and non-robot cars coexist, the non-robot car learns the driving technique of the robot car, so the driving support performance of the non-robot car can be improved efficiently.
  • A computing system for a road traffic system in which a plurality of vehicles travel on a road, having a driving behavior information receiving function of receiving driving behavior information from a non-robot car and a driving behavior information transmitting function of transmitting the driving behavior information to a robot car, wherein the non-robot car is a vehicle whose driving operation is performed by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving operation detection unit that detects the driving operation performed by the human driver of the own vehicle;
  • and a driving behavior information transmission unit that transmits, to the computing system, driving behavior information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation detected by the driving operation detection unit are associated; and wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information; and an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle while referring to the driving behavior information received by the driving behavior information receiving unit.
  • This computing system receives driving behavior information (experience information) from the non-robot car and transmits the driving behavior information to the robot car.
  • The robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling situation of the own vehicle while referring to that driving behavior information.
  • Therefore, even in a situation that the own vehicle has not experienced, the robot car can cope with the situation with the same level of driving performance as the non-robot car by performing automatic driving control while referring to the driving behavior information of a non-robot car that has experienced that situation.
  • According to this computing system, the robot car learns the driving technique of the human driver who drives the non-robot car, so the automatic driving performance of the robot car can be improved efficiently.
  • As the automatic driving performance of the robot car improves, the operation efficiency, safety, and the like of the entire road traffic system can also be improved.
  • A computing system for a road traffic system in which a plurality of vehicles travel on a road, having a driving behavior information receiving function of receiving driving behavior information of one or more robot cars and a driving behavior information transmitting function of transmitting the driving behavior information to one or more robot cars different from the transmission source of the driving behavior information,
  • wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information; an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit while referring to the driving behavior information received by the driving behavior information receiving unit; and a driving behavior information transmission unit that transmits, to the computing system, driving behavior information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation performed by the automatic driving control are associated,
  • and wherein the driving behavior information is information in which the traveling situation of the robot car and the driving operation are associated.
  • This computing system receives driving behavior information (experience information) from a robot car and transmits the driving behavior information to robot cars different from the transmission source of the driving behavior information.
  • A robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling situation of the own vehicle while referring to the driving behavior information of the other robot car. Therefore, even in a situation that the own vehicle has not experienced, the robot car can cope with the situation with the same level of driving performance as the other robot cars by performing automatic driving control according to the traveling situation of the own vehicle while referring to the driving behavior information of other robot cars that have experienced that situation.
  • Since the robot cars in the road traffic system share driving behavior information in this way, the learning efficiency of the robot cars in the road traffic system can be improved and their automatic driving performance can be improved rapidly.
  • A computing system for a road traffic system in which a plurality of vehicles travel on a road, having a driving behavior information receiving function of receiving driving behavior information of one or more robot cars, an optimization information generating function of generating driving behavior information optimized based on the received driving behavior information, an optimization information updating function of updating and managing the optimized driving behavior information so that it is kept up to date, and a driving behavior information transmitting function of transmitting the optimized driving behavior information to one or more robot cars,
  • wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle; a driving behavior information receiving unit that receives the driving behavior information; an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle while referring to the driving behavior information received by the driving behavior information receiving unit; and a driving behavior information transmission unit that transmits, to the computing system, driving behavior information in which the traveling situation recognized by the traveling situation recognition unit and the driving operation performed by the automatic driving control are associated,
  • and wherein the driving behavior information is information in which the traveling situation of the robot car and the driving operation are associated.
  • This computing system receives driving behavior information (experience information) from robot cars, generates driving behavior information optimized based on that driving behavior information, updates and manages the optimized driving behavior information so that it is kept up to date, and transmits the optimized driving behavior information to the robot cars.
  • A robot car that has received the driving behavior information from the computing system performs automatic driving control based on the traveling situation of the own vehicle while referring to the driving behavior information of other robot cars. Therefore, even in a situation that the own vehicle has not experienced, the robot car can cope with the situation with driving performance equal to or higher than that of the other robot cars by performing automatic driving control with reference to driving behavior information optimized based on the driving behavior information of other robot cars that have experienced that situation.
  • The computing system according to Configuration 3.2 or 3.6, wherein the optimized driving behavior information is any of the following: driving behavior information optimized according to a vehicle attribute of the vehicle receiving the provision of the driving behavior information; driving behavior information optimized so as to minimize the possibility that the vehicle receiving the driving behavior information comes into contact with an obstacle; driving behavior information optimized so as to minimize the energy consumption of the vehicle receiving the driving behavior information; driving behavior information optimized so as to maximize the regenerative energy of the vehicle receiving the driving behavior information; driving behavior information optimized so as to minimize the number of accelerations or the acceleration time within a predetermined traveling distance or a predetermined traveling time; driving behavior information optimized so as to minimize the number of braking operations or the braking time within a predetermined traveling distance or a predetermined traveling time; driving behavior information optimized so as to minimize the traveling distance from a departure point to an arrival point; or driving behavior information optimized so as to minimize the traveling time from a departure point to an arrival point.
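  • Purely as an illustration of how an optimization information generating function might apply such criteria, the sketch below selects, among candidate driving behavior records for the same traveling situation, the one whose observed outcome best satisfies a chosen objective. The objective names, the "outcome" fields, and the idea of annotating records with measured outcomes are assumptions introduced for this example and are not specified in the disclosure.

```python
from typing import Callable

# Candidate driving behavior records for one traveling situation, each annotated
# (hypothetically) with measured outcomes of the driving operation it describes.
Objective = Callable[[dict], float]

OBJECTIVES: dict[str, Objective] = {
    # Lower is better for all of these illustrative objectives.
    "energy_consumption": lambda r: r["outcome"]["energy_wh"],
    "braking_count":      lambda r: r["outcome"]["brake_events"],
    "travel_time":        lambda r: r["outcome"]["travel_time_s"],
    # Maximizing regenerative energy is expressed as minimizing its negative.
    "regenerative_energy": lambda r: -r["outcome"]["regen_wh"],
}

def optimize(records: list[dict], objective_name: str) -> dict:
    """Optimization information generation (sketch): pick the record whose
    observed outcome best satisfies the chosen optimization criterion."""
    objective = OBJECTIVES[objective_name]
    return min(records, key=objective)
```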
  • The computing system according to Configuration 3.2 or 3.6, wherein the optimization information generating function includes a function of correcting the driving behavior information, based on a vehicle attribute of the vehicle receiving the provision of the driving behavior information, so as to minimize the possibility that the vehicle comes into contact with an obstacle.
  • According to this computing system, even when the vehicle attribute of the vehicle providing the driving behavior information (providing-source vehicle) differs from the vehicle attribute of the vehicle receiving the provision of the driving behavior information (providing-destination vehicle), driving behavior information corrected so as to minimize the possibility of the providing-destination vehicle coming into contact with an obstacle is provided to the providing-destination vehicle.
  • Such vehicles can therefore perform driving support control or automatic driving control with reference to the corrected driving behavior information.
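  • One very simple way to picture such a correction is shown below: the braking-start distance in a record is scaled up when the destination vehicle is heavier than the source vehicle, with an added safety margin. The assumption that the required braking distance scales with vehicle mass, and the field and parameter names, are illustrative only and are not a correction formula given in this disclosure.

```python
def correct_for_vehicle_attributes(record: dict,
                                   source_mass_kg: float,
                                   dest_mass_kg: float,
                                   margin: float = 1.1) -> dict:
    """Correct a braking-related record so the destination vehicle, which may be
    heavier than the source vehicle, still avoids contact with the obstacle.
    Simplifying assumption: required braking distance scales with vehicle mass."""
    corrected = {**record, "situation": dict(record["situation"])}
    scale = margin * dest_mass_kg / source_mass_kg
    corrected["situation"]["braking_start_distance_m"] = (
        record["situation"]["braking_start_distance_m"] * scale
    )
    return corrected

provided = {"situation": {"braking_start_distance_m": 25.0},
            "operation": {"brake": 0.5}}
print(correct_for_vehicle_attributes(provided, source_mass_kg=1200.0, dest_mass_kg=1800.0))
```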
  • the computer program of the present invention includes a program having the following configuration.
  • [Configuration 4.1] A computer program for realizing the road traffic system according to any one of the configurations 1.1 to 1.39 using one or more computers.
  • the road traffic system according to any one of configurations 1.1 to 1.39 is realized by executing this computer program by one or more computers constituting the road traffic system.
  • [Configuration 4.2] A computer program for realizing the non-robot car according to any one of configurations 2.1 to 2.10 using one or more computers.
  • the non-robot car according to any one of the configurations 2.1 to 2.10 is realized by executing this computer program by one or more computers constituting the non-robot car.
  • [Configuration 4.3] A computer program for realizing the robot car according to any one of Configurations 2.11 to 2.22, 2.30, and 2.31 using one or more computers.
  • The robot car according to any one of Configurations 2.11 to 2.22, 2.30, and 2.31 is realized by executing this computer program on one or more computers constituting the robot car.
  • [Configuration 4.4] A computer program for realizing the computing system according to any one of Configurations 3.1 to 3.8 using one or more computers. The computing system according to any one of Configurations 3.1 to 3.8 is realized by executing this computer program on one or more computers.
  • the road traffic system according to any one of the configurations 1.1 to 1.40, wherein a vehicle is shared by a plurality of users.
  • the road traffic system of the present invention can be used to construct a vehicle sharing system.
  • According to the vehicle sharing system of the present invention, in a situation where robot cars and non-robot cars coexist, the non-robot cars learn the driving techniques of the robot cars, so the driving support performance of the non-robot cars can be improved efficiently.
  • As the driving support performance of the non-robot cars improves, the operation efficiency, safety, customer satisfaction, and the like of the entire vehicle sharing system can also be improved.
  • Likewise, because the non-robot cars learn the driving techniques of the robot cars, the driving support performance of the non-robot cars can be improved efficiently, and as that performance improves, the operation efficiency, safety, customer satisfaction, and the like of the entire vehicle sharing system can also be improved.
  • Furthermore, since the robot cars in the vehicle sharing system share driving behavior information, the learning efficiency of the robot cars in the vehicle sharing system can be improved and their automatic driving performance can be improved rapidly. Since the automatic driving performance of all the robot cars in the shared vehicle system can be improved rapidly, the operation efficiency, safety, customer satisfaction, and the like of the entire shared vehicle system also improve rapidly.
  • the robot car training system of the present invention includes a system having the following configuration.
  • A robot car training system having a robot car and a non-robot car that travels on the same route as the robot car, wherein the non-robot car is a vehicle whose driving operation is performed by a human driver and includes a traveling situation recognition unit that recognizes the traveling situation of the own vehicle and a driving behavior information output unit that outputs driving behavior information in which the traveling situation of the own vehicle and the driving operation performed by the human driver of the own vehicle are associated,
  • and wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of a driving operation by a human driver and includes: a traveling situation recognition unit that recognizes the traveling situation of the own vehicle;
  • a driving behavior information acquisition unit that acquires the driving behavior information of the non-robot car;
  • an automatic driving control unit that performs automatic driving control based on the traveling situation recognized by the traveling situation recognition unit of the own vehicle; and a learning processing unit that learns the driving behavior of the non-robot car based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • the non-robot car outputs driving behavior information in which the traveling state of the host vehicle and the driving operation performed by the human driver of the host vehicle are associated.
  • the robot car acquires driving behavior information output from the non-robot car.
  • the robot car performs automatic driving control based on the traveling condition of the host vehicle, and learns the driving behavior of the non-robot car based on the acquired driving behavior information.
  • According to this robot car training system, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car, and to improve the automatic driving performance of the robot car.
  • As the automatic driving performance of the robot car improves, the safety and reliability of the robot car are improved, and thus the safety and reliability of the entire road traffic system in which robot cars and non-robot cars coexist are improved.
  • [Configuration 7.2] The robot car training system according to configuration 7.1, wherein the automatic driving control unit uses the driving behavior information acquired by the driving behavior information acquisition unit as a learning data set (combinations of a traveling situation and the driving operation performed in that situation) and performs learning processing (learning processing by supervised learning) such that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car.
  • According to this robot car training system, the robot car can learn the driving behavior of the human driver who drives the non-robot car by supervised learning in which the driving behavior information of the non-robot car is used as a learning data set.
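  • As a minimal sketch of this kind of supervised learning (the feature names, values, and linear least-squares model below are illustrative assumptions, not the patent's implementation), the driving behavior information can be treated as pairs of traveling-situation features and the human driver's operation amounts, and a model fitted so the robot car reproduces the "correct operation" for each situation:

```python
# Supervised learning on a driving behavior data set: each sample pairs a
# traveling-situation feature vector with the driving operation the human
# driver performed in that situation. All numbers are hypothetical.
import numpy as np

# Hypothetical situations: [speed (m/s), distance to curve (m), road curvature]
situations = np.array([
    [12.0, 40.0, 0.02],
    [15.0, 25.0, 0.05],
    [ 8.0, 10.0, 0.10],
    [20.0, 80.0, 0.01],
])
# Corresponding "correct operations": [steering angle (rad), brake amount 0..1]
operations = np.array([
    [0.05, 0.0],
    [0.12, 0.2],
    [0.30, 0.5],
    [0.02, 0.0],
])

# Fit a linear mapping (with bias) by least squares so that the robot car
# reproduces the human driver's operation for each traveling situation.
X = np.hstack([situations, np.ones((len(situations), 1))])  # add bias column
W, *_ = np.linalg.lstsq(X, operations, rcond=None)

def predict_operation(situation):
    """Estimate the driving operation for a new traveling situation."""
    x = np.append(np.asarray(situation, dtype=float), 1.0)
    return x @ W

print(predict_operation([10.0, 15.0, 0.08]))  # estimated steering / brake
```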
  • [Configuration 7.3] The robot car training system according to configuration 7.1, wherein the automatic driving control unit gives a more positive reward when the host vehicle takes a driving action closer to the driving action of the non-robot car obtained from the driving behavior information of the non-robot car, gives a more negative reward (punishment) when the host vehicle takes a driving action farther from that driving action, and performs learning processing (learning processing by reinforcement learning) so that the host vehicle takes the driving action that is likely to obtain the most reward.
  • According to this robot car training system, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car by reinforcement learning based on the driving behavior information of the non-robot car.
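  • A minimal sketch of the reward shaping described above, assuming a simple distance-based reward and a small set of discrete candidate actions (both illustrative choices, not the patent's design):

```python
# Reward shaping: actions closer to the human driver's action (taken from the
# driving behavior information) earn higher reward, distant actions earn a
# negative reward (punishment).
import numpy as np

def reward(robot_action, human_action, scale=1.0):
    """Positive when close to the human action, negative when far from it."""
    distance = np.linalg.norm(np.asarray(robot_action) - np.asarray(human_action))
    return scale * (1.0 - 2.0 * distance)  # > 0 when distance < 0.5, else < 0

# Tiny tabular example: choose among three discrete steering actions for one
# traveling situation and reinforce the action with the highest expected reward.
candidate_actions = [(-0.2,), (0.0,), (0.2,)]
human_action = (0.15,)                       # from driving behavior information
q_values = np.zeros(len(candidate_actions))  # expected reward per action
alpha = 0.5                                  # learning rate

for _ in range(20):
    a = np.random.randint(len(candidate_actions))   # explore
    r = reward(candidate_actions[a], human_action)
    q_values[a] += alpha * (r - q_values[a])         # running estimate

best = candidate_actions[int(np.argmax(q_values))]
print("action most likely to obtain reward:", best)
```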
  • [Configuration 7.4] A robot car training system comprising a robot car and a non-robot car that travels the same route as the robot car, wherein the non-robot car is a vehicle whose driving operation is performed by a human driver and comprises a traveling condition recognition unit that recognizes the traveling condition of its own vehicle, a driving operation detection unit that detects the driving operation performed by the human driver, and a driving behavior information output unit that outputs driving behavior information in which the traveling condition and the driving operation are associated; wherein the robot car is a vehicle whose driving operation is performed by automatic driving control instead of driving operation by a human driver and comprises a traveling condition recognition unit that recognizes the traveling condition of its own vehicle, an automatic driving control unit that determines the driving operation to be executed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is executed, and a driving behavior information acquisition unit that acquires the driving behavior information output from the non-robot car; and wherein the automatic driving control unit comprises a driving knowledge unit that stores knowledge information (judgment criteria etc.) to be referred to when determining the driving operation, and a learning processing unit (knowledge updating processing unit) that performs learning processing for updating the knowledge information of the driving knowledge unit based on the driving behavior information acquired by the driving behavior information acquisition unit.
  • the non-robot car outputs driving behavior information in which the traveling state of the host vehicle and the driving operation performed by the human driver of the host vehicle are associated.
  • the robot car acquires driving behavior information output from the non-robot car.
  • the robot car refers to the knowledge information to determine the driving operation according to the traveling situation, performs automatic driving control so that the driving operation is executed, and performs learning processing that updates the knowledge information (the judgment criteria etc. referred to when determining the driving operation to be executed) based on the acquired driving behavior information.
  • According to this robot car training system, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car, and to improve the automatic driving performance of the robot car.
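  • One possible, simplified representation of such a driving knowledge unit is a table mapping discretized traveling situations to driving operations, updated from the acquired driving behavior information; the discretization and update rate below are assumptions for illustration:

```python
# A "driving knowledge unit" sketched as a lookup table: discretized traveling
# situation -> driving operation, updated from non-robot-car behavior records.
from collections import defaultdict

def situation_key(speed_mps, curvature):
    """Discretize a traveling situation into a knowledge-table key."""
    return (round(speed_mps / 5.0), round(curvature, 1))

# knowledge: situation key -> (steering angle, brake amount)
knowledge = defaultdict(lambda: (0.0, 0.0))

def update_knowledge(driving_behavior_info, rate=0.5):
    """Move the stored operation toward the human driver's operation."""
    for situation, operation in driving_behavior_info:
        key = situation_key(*situation)
        old = knowledge[key]
        knowledge[key] = tuple(o + rate * (n - o) for o, n in zip(old, operation))

# Hypothetical driving behavior information: ((speed, curvature), (steer, brake))
acquired = [((12.0, 0.1), (0.25, 0.3)), ((20.0, 0.0), (0.02, 0.0))]
update_knowledge(acquired)

def decide_operation(situation):
    """Refer to the knowledge information to determine the driving operation."""
    return knowledge[situation_key(*situation)]

print(decide_operation((12.0, 0.1)))
```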
  • [Configuration 7.5] The robot car training system according to configuration 7.4, wherein the learning processing unit uses the driving behavior information acquired by the driving behavior information acquisition unit as a learning data set (combinations of a traveling situation and the driving operation performed in that situation) and performs learning processing (learning processing by supervised learning) that updates the knowledge information stored in the driving knowledge unit so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car.
  • According to this robot car training system, the robot car can learn the driving behavior of the human driver who drives the non-robot car by supervised learning in which the driving behavior information of the non-robot car is used as a learning data set.
  • [Configuration 7.6] The robot car training system according to configuration 7.4, wherein the learning processing unit gives a more positive reward when the host vehicle takes a driving action closer to the driving action of the non-robot car obtained from the driving behavior information of the non-robot car, gives a more negative reward (punishment) when the host vehicle takes a driving action farther from that driving action, and performs learning processing (learning processing by reinforcement learning) that updates the knowledge information stored in the driving knowledge unit so that the host vehicle takes the driving action that is likely to obtain the most reward.
  • [Configuration 7.7] A robot car training system comprising a non-robot car that travels the same route as the robot car and outputs driving behavior information in which its traveling condition and the driving operation performed by its human driver are associated, and a robot car that is a vehicle whose driving operation is performed by automatic driving control instead of driving operation by a human driver and that comprises a traveling condition recognition unit that recognizes the traveling condition of its own vehicle, an automatic driving control unit that determines the driving operation to be executed based on the traveling condition recognized by the traveling condition recognition unit and performs automatic driving control so that the driving operation is executed, and a driving behavior information acquisition unit that acquires the driving behavior information output from the non-robot car; wherein the automatic driving control unit comprises a driving operation determination unit that determines, by calculation using a driving operation determination function, the driving operation according to the traveling condition recognized by the traveling condition recognition unit, and a learning processing unit (parameter adjustment unit) that adjusts the parameters of the driving operation determination function using the driving behavior information acquired by the driving behavior information acquisition unit.
  • the non-robot car outputs driving behavior information in which the traveling state of the host vehicle and the driving operation performed by the human driver of the host vehicle are associated.
  • the robot car acquires driving behavior information output from the non-robot car.
  • the robot car determines the driving operation according to the traveling condition of its own vehicle by the driving operation determination function, performs automatic driving control so that the driving operation is executed, and performs learning processing for adjusting the parameters of the driving operation determination function based on the acquired driving behavior information.
  • According to this robot car training system, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car, and to improve the automatic driving performance of the robot car.
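  • A minimal sketch of a parameterized driving operation determination function and of its adjustment by the learning processing unit (parameter adjustment unit), assuming a simple linear function and gradient descent on the error against the non-robot car's recorded operations; both are illustrative assumptions:

```python
# A driving operation determination function with adjustable parameters; the
# parameters are nudged so the function's output approaches the non-robot
# car's recorded operations. The features and learning rate are hypothetical.
import numpy as np

theta = np.zeros(3)  # parameters of the driving operation determination function

def determine_operation(situation, params):
    """Steering command computed from [speed, curvature] features and a bias."""
    x = np.array([situation[0], situation[1], 1.0])
    return float(x @ params)

def adjust_parameters(params, behavior_info, lr=0.001, epochs=1000):
    """Gradient descent on squared error against the human driver's operations."""
    for _ in range(epochs):
        for situation, human_op in behavior_info:
            x = np.array([situation[0], situation[1], 1.0])
            error = determine_operation(situation, params) - human_op
            params = params - lr * error * x
    return params

# Hypothetical driving behavior information: ((speed, curvature), steering)
behavior = [((10.0, 0.10), 0.30), ((20.0, 0.02), 0.05), ((15.0, 0.05), 0.15)]
theta = adjust_parameters(theta, behavior)
print(determine_operation((12.0, 0.08), theta))
```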
  • [Configuration 7.8] The robot car training system according to configuration 7.7, wherein the learning processing unit uses the driving behavior information acquired by the driving behavior information acquisition unit as a learning data set (combinations of a traveling situation and the driving operation performed in that situation) and performs learning processing (learning processing by supervised learning) that adjusts the parameters of the driving operation determination function so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car.
  • According to this robot car training system, the robot car can learn the driving behavior of the human driver who drives the non-robot car by supervised learning in which the driving behavior information of the non-robot car is used as a learning data set.
  • [Configuration 7.9] The robot car training system according to configuration 7.7, wherein the learning processing unit gives a more positive reward when the host vehicle takes a driving action closer to the driving action of the non-robot car obtained from the driving behavior information of the non-robot car, gives a more negative reward (punishment) when the host vehicle takes a driving action farther from that driving action, and performs learning processing (learning processing by reinforcement learning) that adjusts the parameters of the driving operation determination function so that the host vehicle takes the driving action that is likely to obtain the most reward.
  • According to this robot car training system, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car by reinforcement learning based on the driving behavior information of the non-robot car.
  • [Configuration 7.10] The robot car training system according to any one of configurations 7.1 to 7.9, wherein the non-robot car travels the route before the robot car.
  • In this robot car training system, the robot car is made to learn the driving behavior of the human driver who drives the non-robot car based on the driving behavior information of a non-robot car that has traveled the same route earlier.
  • In other words, while the robot car experiences a new situation, it can be made to learn the driving behavior of the human driver of a non-robot car that has already experienced that situation (learning based on prior information).
  • [Configuration 7.11] The robot car training system according to any one of configurations 7.1 to 7.9, wherein the non-robot car travels the route after the robot car.
  • In this robot car training system, the robot car is made to learn the driving behavior of the human driver who drives the non-robot car based on the driving behavior information of a non-robot car that travels the same route later.
  • [Configuration 7.12] The robot car training system according to any one of configurations 7.1 to 7.11, further comprising a computing system, wherein the computing system comprises a driving behavior information reception unit that receives the driving behavior information from the non-robot car and a driving behavior information transmission unit that transmits the driving behavior information to the robot car; the driving behavior information output unit of the non-robot car is a driving behavior information transmission unit that transmits the driving behavior information of the non-robot car to the computing system; and the driving behavior information acquisition unit of the robot car is a driving behavior information reception unit that receives the driving behavior information from the computing system.
  • In this robot car training system, the computing system receives driving behavior information (experience information) from the non-robot car and transmits the driving behavior information to the robot car.
  • The robot car acquires the driving behavior information of the non-robot car through the computing system and, based on that driving behavior information, can learn the driving behavior of the human driver who drives the non-robot car.
  • [Configuration 7.13] The robot car training system according to configuration 7.12, wherein the computing system comprises an optimization information generation unit that generates optimized driving behavior information based on the driving behavior information received by the driving behavior information reception unit, and a driving behavior information transmission unit that transmits the optimized driving behavior information generated by the optimization information generation unit to the robot car.
  • In this robot car training system, the computing system receives driving behavior information (experience information) from the non-robot car, generates optimized driving behavior information based on it, and transmits the optimized driving behavior information to the robot car.
  • The robot car that has received the optimized driving behavior information from the computing system can learn the driving behavior of the human driver who drives the non-robot car based on the optimized driving behavior information.
  • [Configuration 7.14] The robot car training system according to configuration 7.13, wherein the optimized driving behavior information is driving behavior information optimized according to the vehicle attributes of the robot car that receives the driving behavior information, driving behavior information optimized to minimize the possibility that the robot car receiving the driving behavior information contacts an obstacle, driving behavior information optimized to minimize the energy consumption of the robot car receiving the driving behavior information, driving behavior information optimized to maximize the regenerative energy of the robot car receiving the driving behavior information, driving behavior information optimized to minimize the number of accelerations or the acceleration time within a predetermined traveling distance or a predetermined traveling time, driving behavior information optimized to minimize or maximize the number of braking operations or the braking time within a predetermined traveling distance or a predetermined traveling time, driving behavior information optimized to minimize the traveling distance from the departure point to the arrival point, or driving behavior information optimized to minimize the traveling time from the departure point to the arrival point.
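  • As an illustrative sketch of the optimization information generation unit (the record fields and optimization targets below are assumptions), the computing system might select, among the driving behavior records collected for a route, the record that best satisfies the requested optimization target before transmitting it to the robot car:

```python
# Server-side selection of "optimized driving behavior information": from
# several recorded traversals of the same route, pick the one that best
# matches the requested optimization target. Field names are hypothetical.
def generate_optimized_info(records, target="min_energy"):
    """records: list of dicts with 'trace', 'energy_kwh', 'near_miss_count',
    and 'travel_time_s' summarizing one traversal of the route."""
    keys = {
        "min_energy": lambda r: r["energy_kwh"],
        "min_contact_risk": lambda r: r["near_miss_count"],
        "min_travel_time": lambda r: r["travel_time_s"],
    }
    return min(records, key=keys[target])

records = [
    {"trace": "V2 run 1", "energy_kwh": 1.8, "near_miss_count": 2, "travel_time_s": 610},
    {"trace": "V2 run 2", "energy_kwh": 2.1, "near_miss_count": 0, "travel_time_s": 640},
]
print(generate_optimized_info(records, "min_contact_risk")["trace"])  # V2 run 2
```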
  • In this robot car training system, the robot car may be provided with a deep learning function realized by a neuromorphic chip, so that the robot car itself extracts and learns the features of the driving behavior (recognition, judgment, planning, and operation) of the human driver who drives the non-robot car.
  • In this robot car training system, the robot car may be provided with a learning function imitating a real brain, realized by a spiking neural network, so that the robot car itself learns the driving behavior (recognition, judgment, planning, and operation) of the human driver who drives the non-robot car.
  • the robot car training method of the present invention includes a robot car training method having the following configuration.
  • the non-robot car outputs driving behavior information in which the traveling state of the host vehicle and the driving operation performed by the human driver of the host vehicle are associated.
  • the robot car acquires driving behavior information output from the non-robot car.
  • the robot car performs automatic driving control based on the traveling condition of the host vehicle, and learns driving behavior of a human driver who drives a non-robot car based on the acquired driving behavior information.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of a human driver who drives a non-robot car, and to improve the automatic driving performance of the robot car.
  • [Configuration 8.2] The robot car training method according to configuration 8.1, wherein the learning step is a step of using the driving behavior information acquired in the driving behavior information acquisition step as a learning data set (combinations of a traveling situation and the driving operation performed in that situation) and performing learning processing (learning processing by supervised learning) such that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car by supervised learning in which the driving behavior information of the non-robot car is used as a learning data set.
  • [Configuration 8.3] The robot car training method according to configuration 8.1, wherein the learning step is a step of giving a more positive reward when the host vehicle takes a driving action closer to the driving action of the non-robot car obtained from the driving behavior information, giving a more negative reward (punishment) when it takes a driving action farther from that driving action, and performing learning processing (learning processing by reinforcement learning) so that the host vehicle takes the driving action that is likely to obtain the most reward.
  • [Configuration 8.4] A robot car training method for training a robot car by teaching it the driving behavior of a human driver who drives a non-robot car, comprising: a non-robot car traveling step in which the non-robot car travels the same route as the robot car; a non-robot car traveling condition recognition step in which the non-robot car recognizes the traveling condition of its own vehicle while traveling on the route; a driving operation detection step in which the non-robot car detects the driving operation performed on its own vehicle by the human driver while traveling; a driving behavior information output step in which the non-robot car outputs driving behavior information in which the traveling condition and the driving operation are associated; a robot car traveling step in which the robot car travels the route; a robot car traveling condition recognition step in which the robot car recognizes the traveling condition of its own vehicle while traveling on the route; a driving behavior information acquisition step in which the robot car acquires the driving behavior information output from the non-robot car; a driving knowledge storage step in which the robot car stores knowledge information (judgment criteria etc.) to be referred to when determining the driving operation; an automatic driving control step in which the robot car determines the driving operation to be executed by referring to the knowledge information according to the recognized traveling condition and performs automatic driving control so that the driving operation is executed; and a learning step in which the robot car performs learning processing for updating the knowledge information based on the acquired driving behavior information.
  • the non-robot car outputs driving behavior information in which the traveling state of the host vehicle and the driving operation performed by the human driver of the host vehicle are associated.
  • the robot car acquires driving behavior information output from the non-robot car.
  • the robot car refers to the knowledge information to determine the driving operation according to the traveling situation, performs automatic driving control so that the driving operation is executed, and performs learning processing that updates the knowledge information (the judgment criteria etc. referred to when determining the driving operation to be executed) based on the acquired driving behavior information.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car, and to improve the automatic driving performance of the robot car.
  • [Configuration 8.5] The robot car training method according to configuration 8.4, wherein the learning step is a step of using the driving behavior information acquired in the driving behavior information acquisition step as a learning data set (combinations of a traveling situation and the driving operation performed in that situation) and performing learning processing (learning processing by supervised learning) that updates the knowledge information stored in the driving knowledge storage step so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car by supervised learning in which the driving behavior information of the non-robot car is used as a learning data set.
  • [Configuration 8.6] The robot car training method according to configuration 8.4, wherein the learning step is a step of giving a more positive reward when the host vehicle takes a driving action closer to the driving action of the non-robot car obtained from the driving behavior information, giving a more negative reward (punishment) when it takes a driving action farther from that driving action, and performing learning processing (learning processing by reinforcement learning) that updates the knowledge information stored in the driving knowledge storage step so that the host vehicle takes the driving action that is likely to obtain the most reward.
  • [Configuration 8.7] A robot car training method for training a robot car by teaching it the driving behavior of a human driver who drives a non-robot car, comprising: a non-robot car traveling step in which the non-robot car travels the same route as the robot car; a non-robot car traveling condition recognition step in which the non-robot car recognizes the traveling condition of its own vehicle while traveling on the route; a driving operation detection step in which the non-robot car detects the driving operation performed on its own vehicle by the human driver while traveling; a driving behavior information output step in which the non-robot car outputs driving behavior information in which the traveling condition and the driving operation are associated; a robot car traveling step in which the robot car travels the route; a robot car traveling condition recognition step in which the robot car recognizes the traveling condition of its own vehicle while traveling on the route; a driving behavior information acquisition step in which the robot car acquires the driving behavior information output from the non-robot car; an automatic driving control step in which the robot car determines, by calculation using a driving operation determination function, the driving operation according to the recognized traveling condition and performs automatic driving control so that the driving operation is executed; and a learning step in which the robot car performs learning processing for adjusting the parameters of the driving operation determination function based on the acquired driving behavior information.
  • the non-robot car outputs driving behavior information in which the traveling state of the host vehicle and the driving operation performed by the human driver of the host vehicle are associated.
  • the robot car acquires driving behavior information output from the non-robot car.
  • the robot car determines the driving operation according to the traveling condition of its own vehicle by the driving operation determination function, performs automatic driving control so that the driving operation is executed, and performs learning processing for adjusting the parameters of the driving operation determination function based on the acquired driving behavior information.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of a human driver who drives a non-robot car, and to improve the automatic driving performance of the robot car.
  • [Configuration 8.8] The robot car training method according to configuration 8.7, wherein the learning step is a step of using the driving behavior information acquired in the driving behavior information acquisition step as a learning data set (combinations of a traveling situation and the driving operation performed in that situation) and performing learning processing (learning processing by supervised learning) that adjusts the parameters of the driving operation determination function so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car by supervised learning in which the driving behavior information of the non-robot car is used as a learning data set.
  • [Configuration 8.9] The robot car training method according to configuration 8.7, wherein the learning step is a step of giving a more positive reward when the host vehicle takes a driving action closer to the driving action of the non-robot car obtained from the driving behavior information, giving a more negative reward (punishment) when it takes a driving action farther from that driving action, and performing learning processing (learning processing by reinforcement learning) that adjusts the parameters of the driving operation determination function so that the host vehicle takes the driving action that is likely to obtain the most reward.
  • According to this robot car training method, it is possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car by reinforcement learning based on the driving behavior information of the non-robot car.
  • [Configuration 8.10] The robot car training method according to any one of configurations 8.1 to 8.9, wherein the non-robot car travels the route before the robot car.
  • the robot car is made to learn the driving behavior of the human driver who drives the non-robot car, based on the driving behavior information of the non-robot car traveling on the same route earlier.
  • According to this robot car training method, while the robot car experiences a new situation, it can be made to learn (learning based on prior information) the driving behavior of the human driver of a non-robot car that has already experienced that situation.
  • [Configuration 8.11] The robot car training method according to any one of configurations 8.1 to 8.9, wherein the non-robot car travels the route after the robot car.
  • the robot car is made to learn the driving behavior of the human driver who drives the non-robot car, based on the driving behavior information of the non-robot car traveling on the same route later.
  • [Configuration 8.12] The robot car training method according to any one of configurations 8.1 to 8.11, wherein the driving behavior information output step is a step in which the non-robot car transmits the driving behavior information of its own vehicle to a computing system, and the driving behavior information acquisition step is a step in which the robot car receives the driving behavior information from the computing system.
  • the computing system receives driving behavior information (experience information) from the non-robot car, and transmits the driving behavior information to the robot car.
  • the robot car acquires the driving behavior information of the non-robot car through the computing system and, based on that driving behavior information, can learn the driving behavior of the human driver who drives the non-robot car.
  • [Configuration 8.13] The robot car training method according to configuration 8.12, further comprising an optimization information generation step in which the computing system generates optimized driving behavior information based on the driving behavior information received in the driving behavior information reception step, and a driving behavior information transmission step in which the computing system transmits the optimized driving behavior information generated in the optimization information generation step to the robot car.
  • In this robot car training method, the computing system generates optimized driving behavior information based on the driving behavior information (experience information) received from the non-robot car and transmits the optimized driving behavior information to the robot car.
  • the robot car that has received the optimized driving behavior information from the computing system can learn the driving behavior of the human driver driving the non-robot car based on the optimized driving behavior information.
  • [Configuration 8.14] The robot car training method according to configuration 8.13, wherein the optimized driving behavior information is any of the forms of optimized driving behavior information described for configuration 7.14 (driving behavior information optimized according to the vehicle attributes of the robot car receiving it; optimized to minimize the possibility of that robot car contacting an obstacle; optimized to minimize its energy consumption; optimized to maximize its regenerative energy; optimized to minimize the number of accelerations or the acceleration time, or to minimize or maximize the number of braking operations or the braking time, within a predetermined traveling distance or traveling time; optimized to minimize the traveling distance from the departure point to the arrival point; or optimized to minimize the traveling time from the departure point to the arrival point).
  • the computer program of the present invention includes a program having the following configuration.
  • [Configuration 9.1] A computer program for realizing the robot car teaching system according to any one of configurations 7.1 to 7.17 using one or more computers. By executing this computer program by one or more computers, a robot car training system according to any one of configurations 7.1 to 7.17 is realized.
  • [Configuration 9.2] A computer program for implementing the robot car teaching method according to any one of configurations 8.1 to 8.15 using one or more computers. By executing this computer program by one or more computers, the robot car training method according to any one of configurations 8.1 to 8.15 is realized.
  • According to the present invention, even in a situation that the own vehicle has not experienced, each vehicle can perform driving control based on the driving behavior information of another vehicle that has experienced that situation, and can therefore cope with the situation with the same level of driving performance as the other vehicle.
  • It is also possible to make the robot car learn the driving behavior of the human driver who drives the non-robot car, and to improve the automatic driving performance of the robot car.
  • As the automatic driving performance of the robot car improves, the safety and reliability of the robot car are improved, and thus the safety and reliability of the entire road traffic system in which robot cars and non-robot cars coexist are improved.
  • Brief description of the drawings: conceptual diagrams showing configuration examples of the road traffic system of the present invention; a functional block diagram showing an example of the system configuration of a vehicle (automobile) in the road traffic system; an explanatory view of an embodiment of the road traffic system; a conceptual diagram illustrating the data structure of driving behavior information ((A): explanatory view of the data ID, (B): explanatory view of the route ID); an explanatory view of a travel route; an explanatory view following FIG. 12; and an explanatory view of the embodiment based on FIG. 12 and FIG. 13.
  • Further drawings: functional block diagrams showing configuration examples of the automatic driving control unit of the robot car and of the driving support control unit of the non-robot car in FIGS. 17 to 21; a conceptual diagram showing a configuration example of the robot car training system of the present invention; a flow diagram illustrating the operation of the robot car training system of FIG. 24; flow diagrams illustrating the contents of the learning step executed in the configuration examples of FIG. 22 and FIG. 23; and a flow chart illustrating the operation of the robot car training system of FIG. 32.
  • A robot car is a car that can travel automatically without being driven by a human. In Japan, it is also called an "automated driving car". In English, it is written as "autonomous car", and it is also called a "UGV (unmanned ground vehicle)", "driverless car", or "self-driving car". (Quoted from Wikipedia)
  • a non-robot car is a car other than a robot car. The non-robot car is operated by a human driver. A car that does not have the function (automatic driving function) that can automatically travel without human driving is a non-robot car.
  • Non-robot cars include vehicles having a manual driving function and a driving support function but not having an automatic driving function.
  • the driving situation includes a self situation and a non-self situation (external environment).
  • the self status includes the position (latitude, longitude) of the vehicle on the earth, the motion status of the vehicle (internal environment), the relative status with surrounding objects, and the like.
  • Examples of the motion status of the vehicle include the center-of-gravity position (x, y, z), the yaw, roll, and pitch angles, the speed (first-order time derivative of the center-of-gravity position), the acceleration (second-order time derivative of the center-of-gravity position), the angular velocity (yaw rate), etc.
  • the peripheral object is an object present around the vehicle.
  • Surrounding objects include vehicles, pedestrians, stationary objects on the ground, and the like.
  • the relative situation with the surrounding object includes the positional relationship between the vehicle and the surrounding object, the distance between the vehicle and the surrounding object, and the like.
  • Ground stationary objects include traffic signals, road signs, pedestrian crossings, road shoulders, guard rails, telephone poles, fences, garages, houses, and the like.
  • Examples of the non-self status (external environment) include the travel route, the travel lane, the width of the travel lane, the number of lanes, the road shape, the road slope, the road surface type, the road surface condition, the surrounding brightness, the weather, the display contents of traffic lights, the number of surrounding vehicles, the speed and acceleration of the vehicle ahead, surrounding obstacles, the type of traveling lane, and the like.
  • the driving operation is a concept including the content of the operation and the operation amount of the operation.
  • Examples of the driving operation include an operation for adjusting the propulsive force of the vehicle (accelerator operation), an operation for adjusting the braking force of the vehicle (brake operation), an operation for adjusting the steering angle or steering angular velocity of the vehicle (steering operation), and an operation for changing the combination of gears of the transmission of the vehicle (shift operation).
  • the driving behavior information is information in which the traveling condition of the vehicle is associated with the driving operation performed on the vehicle, and includes position on the route-driving operation correspondence information, entering and leaving route position-driving operation correspondence information, and the like.
  • Examples of the entry and exit route position-driving operation correspondence information include information on the parking operations (driving operations) performed at each point on the movement route for entering (parking in) a parking space (an entry route position-driving operation correspondence table), information on the departure operations (driving operations) performed at each point on the movement route for leaving the parking space (a leaving route position-driving operation correspondence table), and the like.
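  • A minimal sketch of one possible schema for such a position-driving operation correspondence table (the field names and values are illustrative assumptions, not a schema defined in the patent):

```python
# Driving behavior information as a position-on-route / driving-operation
# correspondence table, serialized as it might be exchanged between vehicles
# and the computing system. All fields and values are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class DrivingBehaviorRecord:
    route_id: str        # identifier of the traveling route
    latitude: float      # position on the route
    longitude: float
    speed_mps: float     # traveling condition at that position
    accelerator: float   # driving operations performed there (0..1)
    brake: float
    steering_rad: float

table = [
    DrivingBehaviorRecord("R", 35.6581, 139.7017, 8.3, 0.10, 0.00, 0.02),
    DrivingBehaviorRecord("R", 35.6583, 139.7021, 5.1, 0.00, 0.35, 0.28),
]

print(json.dumps([asdict(r) for r in table], indent=2))
```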
  • the learning includes learning based on various data obtained during manual driving, learning based on various data obtained during driving assistance, and learning based on various data obtained during automatic driving.
  • the learning includes learning of action plans, learning about operation tendencies, learning about surrounding objects, and the like.
  • the learning of the action plan includes learning of knowledge (data) for determining the driving operation to be performed, learning of a calculation formula (program) for determining the driving operation to be performed, and the like.
  • As an example of the learning of knowledge, the knowledge is updated so that, for a given traveling situation, the own vehicle performs the same driving operation (correct operation) as that performed in the other vehicle (learning by supervised learning).
  • As another example, reinforcement learning can be mentioned: a more positive reward is given when the own vehicle takes a driving action closer to the driving action of the other vehicle obtained from the driving behavior information of the other vehicle, a more negative reward (punishment) is given when it takes a driving action farther from that driving action, and the vehicle learns how much reward is likely to be obtained for each action.
  • In this case, the knowledge is updated so that the action that is likely to obtain the most reward is taken, and as a result, the optimal driving action is performed.
  • As an example of the learning of a calculation formula (program), so-called parameter learning can be mentioned, in which a driving operation determination function (calculation formula) that determines (estimates) the driving operation to be executed is learned based on the driving behavior information of another vehicle.
  • adjustment of the parameters of the driving operation determination function is performed such that an error between the driving behavior information of the other vehicle and the driving operation given by the driving operation determination function is minimized.
  • The parameter learning includes learning processing (learning processing by supervised learning) in which the driving behavior information of the other vehicle is used as a learning data set and the parameters of the driving operation determination function are adjusted so that, in each traveling situation included in the driving behavior information, the own vehicle performs the same driving operation (correct operation) as the other vehicle.
  • The parameter learning also includes reinforcement learning in which a more positive reward is given when the own vehicle takes a driving action closer to the driving action of the other vehicle obtained from the driving behavior information, a more negative reward (punishment) is given when it takes a driving action farther from that driving action, the vehicle learns how much reward is likely to be obtained for each action, and the parameters of the driving operation determination function are adjusted so that the action that is likely to obtain the most reward is taken. In this case as well, the result is that the optimal driving action is performed.
  • Examples of the learning about operation tendencies include learning based on the number of times each point is passed and the number and amount of driving operations performed at each point. For example, the ratio of the number of specific driving operations performed at a point to the number of times the point is passed is calculated; if the ratio is equal to or greater than a predetermined value, the point is set as an operation-required point, and if the ratio is less than the predetermined value, the point is set as an operation-unnecessary point. As a result of the learning, driving support control or automatic driving control is performed at points set as operation-required points.
  • Examples of the specific driving operation include a brake operation with an operation amount equal to or greater than a predetermined value, an accelerator operation with an operation amount equal to or greater than a predetermined value, a steering operation with an operation amount equal to or greater than a predetermined value, and a shift operation with an operation amount (amount of change in gear ratio) equal to or greater than a predetermined value.
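  • A minimal sketch of this ratio-based setting of operation-required points, with an illustrative brake-amount threshold and ratio threshold (both values are assumptions):

```python
# Count how often a "specific driving operation" (here, a strong brake) is
# performed at each point, relative to how often the point is passed, and
# mark points whose ratio exceeds a threshold as operation-required points.
from collections import Counter

pass_counts = Counter()       # point id -> times passed
operation_counts = Counter()  # point id -> times a strong brake was applied

def record_pass(point_id, brake_amount, brake_threshold=0.5):
    pass_counts[point_id] += 1
    if brake_amount >= brake_threshold:          # the specific driving operation
        operation_counts[point_id] += 1

def classify_points(ratio_threshold=0.3):
    """Points whose operation ratio meets the threshold need support/control."""
    required = set()
    for point, passes in pass_counts.items():
        if operation_counts[point] / passes >= ratio_threshold:
            required.add(point)
    return required

for brake in (0.7, 0.6, 0.1, 0.8):   # four passes of point "P1"
    record_pass("P1", brake)
record_pass("P2", 0.0)

print(classify_points())  # {'P1'}: support or automatic control applied here
```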
  • Examples of the learning about surrounding objects include learning based on the number of times each point is passed and the number of times surrounding objects are detected at each point.
  • For example, the ratio of the number of times a specific surrounding object is detected at a point to the number of times the point is passed is calculated; if the ratio is equal to or greater than a predetermined value, the point is set as a caution-required point, and if the ratio is less than the predetermined value, the point is set as a standard-caution point.
  • At a point set as a caution-required point, detection processing of surrounding objects is executed with higher accuracy than at a standard-caution point, and driving support control or automatic driving control giving priority to safety is performed based on the detection result.
  • Examples of the driving support control giving priority to safety include driving support control that reduces the possibility of the vehicle approaching the specific surrounding object, and braking support control that reduces the impact if the vehicle contacts the specific surrounding object.
  • Examples of the automatic driving control giving priority to safety include automatic driving control that reduces the possibility of the vehicle approaching the specific surrounding object, and braking operation control that reduces the impact if the vehicle contacts the specific surrounding object.
  • the learning about peripheral objects includes time zone learning about peripheral objects. In the case of learning for each time zone for objects in the vicinity, the ratio is calculated for each time zone of a day, and the point of caution is set for each time zone.
  • the learning about driving operations and the learning about surrounding objects include learning in consideration of vehicle attributes.
  • Examples of specific peripheral objects include pedestrians and bicycles on pedestrian crossings in front of vehicles, pedestrians and bicycles crossing roads in front of vehicles, oncoming vehicles, passing vehicles, obstacles on roads, and the like.
  • Examples of obstacles on the road include vehicles parked and stopped on the roadside, utility poles on the roadside and at corners, trash cans and signs placed on the street, and signs and trees overhanging the road.
  • the vehicle attributes include vehicle type, vehicle size, inner / outer ring difference, vehicle weight, usage mode of vehicle, classification of vehicle type, vehicle number, door opening width, engine type, and the like.
  • Examples of the usage mode of the vehicle include a private car, a sales car, a freight carrier, a passenger transporter (taxi), a passenger transporter (bus), and the like.
  • the classification of vehicle types includes large vehicles, small vehicles, two-wheelers, and the like.
  • In the learning in consideration of vehicle attributes, for example, the ratio at which a driving operation that should be refrained from for the vehicle attribute is performed is calculated, and a point where the ratio is equal to or greater than a predetermined value is set as a caution-required point.
  • Likewise, the ratio at which surrounding objects that are highly likely to approach within a predetermined distance for the vehicle attribute are detected is calculated, and a point where the ratio is equal to or greater than a predetermined value is set as a caution-required point.
  • Examples of driving operations that should be refrained from depending on the vehicle attribute include high-speed traveling and sudden steering operations on curved roads by vehicles with a high center of gravity (tall vehicles, heavily loaded vehicles, etc.), and sudden acceleration or sudden braking by a bus or taxi.
  • Examples of surrounding objects that are likely to approach within a predetermined distance depending on the vehicle attribute include passengers in the case of a bus or taxi, and telephone poles and billboards overhanging the road in the case of a large vehicle.
  • For this kind of machine learning and driving control, known methods can be used.
  • General-purpose image recognition systems capable of detecting arbitrary objects such as surrounding vehicles and pedestrians are known.
  • a known general-purpose image recognition system can be used.
  • Known methods for detecting pedestrians and other vehicles include the extraction of HOG (Histograms of Oriented Gradients) feature quantities and classifier training by SVM (Support Vector Machine), which is one of the machine learning methods.
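  • A minimal sketch of pedestrian detection with HOG features and an SVM classifier, using OpenCV's pretrained people detector rather than training an SVM from scratch (the opencv-python package and the image file name are assumptions):

```python
# HOG + SVM pedestrian detection on a single camera frame using OpenCV's
# built-in default people detector. "front_camera_frame.jpg" is a placeholder.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("front_camera_frame.jpg")        # frame from camera 121c
if image is None:
    raise SystemExit("no camera frame available")

# Each detection is a bounding box of a pedestrian candidate.
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    print("pedestrian candidate at", (int(x), int(y)), "size", (int(w), int(h)))
```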
  • Computing systems in the road traffic system, the vehicle sharing system, and the robot car training system of the present invention include a Cloud Computing System.
  • Cloud computing is one of distributed computing using the Internet.
  • the cloud refers to a data center for realizing cloud computing and a server computer group operated in the data center.
  • Cloud technology makes it possible to process large volumes of data without the user being aware of where the data is on the Internet.
  • By using cloud technology, the big data in the road traffic system, the vehicle sharing system, and the robot car training system of the present invention, that is, the huge amount of data transmitted from the large number of vehicles traveling on the earth (data on traveling conditions, data on driving operations, etc.), can be processed.
  • the road traffic system, the vehicle sharing system, and the robot car training system of the present invention can be applied to similar systems described in Patent Documents 1-44, Non-Patent Documents 1-4, and the like.
  • FIG. 1 is a conceptual view showing a configuration example of a road traffic system of the present invention.
  • the road traffic system 1 illustrated in FIG. 1 includes a car 100 and a computing system 200.
  • Computing system 200 comprises server computer 210 and database 220.
  • the server computer 210 receives driving behavior information of a number of vehicles including the automobile 100 via the Internet 300.
  • the server computer 210 stores the received driving behavior information in the database 220.
  • the server computer 210 transmits the driving behavior information extracted from the database 220 to a number of vehicles including the automobile 100 via the Internet 300.
  • the server computer 210 may be single or plural.
  • the database 220 may be disposed on one server computer or distributed on a plurality of server computers.
  • the vehicle 100 includes an on-vehicle gateway 110.
  • the on-vehicle gateway 110 is an information processing apparatus having a wireless communication function and configured mainly of a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), and the like (not shown).
  • the in-vehicle gateway 110 executes various processes by the CPU executing a control program stored in the ROM.
  • the on-vehicle gateway 110 uploads various data to the computing system 200 via the Internet 300 (sends them to the server computer 210) and downloads various data from the computing system 200 via the Internet 300 (receives them from the server computer 210).
  • the data transmitted and received between the automobile 100 and the computing system 200 includes data of driving behavior information of the host vehicle and data of driving behavior information of another vehicle.
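  • A minimal sketch of how the on-vehicle gateway 110 might exchange this data with the computing system 200 over HTTP; the endpoint URL, payload schema, and use of the requests package are assumptions for illustration only:

```python
# Uploading the host vehicle's driving behavior information and downloading
# another vehicle's driving behavior information via a hypothetical REST API.
import requests

SERVER = "https://example.invalid/driving-behavior"   # placeholder address

def upload_own_behavior(vehicle_id, records):
    resp = requests.post(SERVER,
                         json={"vehicle_id": vehicle_id, "records": records},
                         timeout=5)
    resp.raise_for_status()

def download_other_behavior(route_id):
    resp = requests.get(SERVER, params={"route_id": route_id}, timeout=5)
    resp.raise_for_status()
    return resp.json()   # driving behavior information of other vehicles

if __name__ == "__main__":
    upload_own_behavior("V2", [{"lat": 35.6581, "lon": 139.7017, "brake": 0.35}])
    print(download_other_behavior("R"))
```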
  • FIG. 2 is a functional block diagram showing an example of a system configuration of a vehicle (car) in the road traffic system of the present invention.
  • the automobile 100 has an on-board gateway 110 and a traveling control system 120.
  • the in-vehicle gateway 110 communicates with the computing system 200 under the control of the travel control system 120.
  • the in-vehicle gateway 110 inputs data received from the computing system 200 to the traveling control system 120.
  • the in-vehicle gateway 110 transmits the data input from the traveling control system 120 to the computing system 200.
  • the traveling control system 120 includes a detection unit 121, a vehicle information input unit 122, a positioning unit 123, a map information input unit 124, an operation unit 125, a communication unit 126, a display unit 127, a storage unit 128, a control unit 129, and the like.
  • the detection unit 121 is configured of sensors for detecting the presence of objects in the vicinity (other vehicles, pedestrians, stationary objects on the ground, etc.), and the position, size, relative velocity, etc. of objects in the vicinity. .
  • the detection unit 121 is embodied by, for example, a sonar 121a, a radar 121b, a camera 121c, a three-dimensional range sensor, and the like.
  • the sonar 121a transmits an ultrasonic wave to a predetermined area from each antenna directed in the front, rear, left, and right directions of the host vehicle, and receives the reflected wave. Then, based on the received reflected wave, the positional relationship with the host vehicle, the distance, and the like are output for an object existing in the front, rear, left, and right directions of the host vehicle.
  • the radar 121 b irradiates laser light or a millimeter wave from an antenna directed in the front, rear, left, and right directions of the host vehicle, scans a predetermined detection area, and receives the reflected wave.
  • the camera 121c is provided at a predetermined position in the front, rear, left, and right directions of the host vehicle, and outputs imaging data in which surrounding vehicles present in the front, rear, left, and right directions of the host vehicle are captured.
  • a plurality of such sensors such as sonars, radars, cameras 121c, and three-dimensional range sensors may be used in combination or may be used alone.
  • the vehicle information input unit 122 inputs information related to the motion status of the host vehicle (center-of-gravity position, yaw, roll, pitch, speed, acceleration, angular velocity, etc.) and the driving operation (accelerator operation, brake operation, steering operation, shift operation) to the control unit 129.
  • the positioning unit 123 measures the position (latitude, longitude) of the host vehicle on the earth and inputs the position to the control unit 129.
  • the positioning unit 123 is embodied by, for example, a high precision positioning receiver or the like compatible with high precision GPS (Global Positioning System).
  • the map information input unit 124 acquires information on the road on which the vehicle is currently traveling from a storage medium storing road map information and inputs the information to the control unit 129.
  • the operation unit 125 is an input device for inputting operation instructions such as on/off of travel control, switching of control modes, and switching of various displays on the display unit 127, and is realized by, for example, switches provided in the spokes of the steering wheel of the vehicle.
  • the communication unit 126 is a communication device for communicating with a communication device provided on the ground stationary object and a communication device mounted on a nearby vehicle. Ground stationary objects include garages and roads.
  • the display unit 127 is a display device including a center display provided in the center of the instrument panel and an indicator provided in the meter panel.
  • the display unit 127 displays on / off of travel control and a control mode together with information indicating the state of the host vehicle.
  • the control modes include a manual driving mode, a driving support mode, and an automatic driving mode.
  • the storage unit 128 is a storage device that stores driving behavior information of the host vehicle and driving behavior information of another vehicle.
  • the control unit 129 is an information processing apparatus configured mainly by a CPU, a ROM, a RAM, and the like (not shown), and centrally controls each part of the traveling control system 120.
  • the control unit 129 executes various processes by the CPU executing a control program stored in the ROM.
  • Based on the position (latitude, longitude) of the host vehicle input from the positioning unit 123 and the road map information input from the map information input unit 124, the control unit 129 obtains information on road structures such as telephone poles and traffic signals, and the three-dimensional range sensor of the detection unit 121 detects the three-dimensional distances of surrounding objects. The 3D distance data and the road map are then synthesized in real time, and whether an object detected by the 3D range sensor is a road structure or an object on the road (vehicle, pedestrian, etc.) is accurately identified. High-accuracy estimation of the position of the host vehicle is realized by a known method such as Monte Carlo localization, with GPS position information used as secondary information.
  • estimation of the relative situation with surrounding objects is realized by a known method such as a Kalman filter.
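  • A minimal sketch of such a Kalman filter, tracking the relative distance and relative speed of one surrounding object from noisy range measurements under a constant-velocity model (the sensor period and noise values are illustrative assumptions):

```python
# Constant-velocity Kalman filter for the relative state [distance, speed]
# of one surrounding object, updated with noisy range measurements.
import numpy as np

dt = 0.1                                   # sensor period [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (distance, speed)
H = np.array([[1.0, 0.0]])                 # only distance is measured
Q = np.diag([0.01, 0.1])                   # process noise
R = np.array([[0.5]])                      # measurement noise

x = np.array([[20.0], [0.0]])              # initial relative distance / speed
P = np.eye(2)

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured relative distance z
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [19.8, 19.5, 19.3, 18.9, 18.6]:   # object closing in on the host vehicle
    x, P = kalman_step(x, P, z)

print("relative distance %.2f m, relative speed %.2f m/s" % (x[0, 0], x[1, 0]))
```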
  • the control unit 129 stores, in the storage unit 128, the driving behavior information of the host vehicle based on the various information input from the detection unit 121, the vehicle information input unit 122, the positioning unit 123, and the map information input unit 124.
  • the driving behavior information of the host vehicle includes the operation history information obtained by the detection unit 121, the vehicle information input unit 122, the positioning unit 123, and the map information input unit 124 (the position-on-traveling-route/driving-operation correspondence table, the entry and exit route position/driving-operation correspondence table, etc.).
  • the control unit 129 communicates with the computing system 200 via the onboard gateway 110.
  • the control unit 129 transmits the driving behavior information of the vehicle stored in the storage unit 128 to the computing system 200 via the on-vehicle gateway 110.
  • the control unit 129 stores the driving behavior information of the other vehicle received via the on-vehicle gateway 110 in the storage unit 128.
  • the driving behavior information of the other vehicle includes the operation history information obtained by the detection unit 121, the vehicle information input unit 122, the positioning unit 123, and the map information input unit 124 of the other vehicle (the position-on-traveling-route/driving-operation correspondence table, the entry and exit route position/driving-operation correspondence table, etc.).
  • the control unit 129 communicates with surrounding ground stationary objects and surrounding vehicles via the communication unit 126.
  • Control unit 129 transmits the driving behavior information of the host vehicle stored in storage unit 128 to the ground stationary object and surrounding vehicles via communication unit 126.
  • Control unit 129 stores, in storage unit 128, driving behavior information of another vehicle received via communication unit 126.
  • the control unit 129 is connected to a vehicle control unit 130 that is a target of driving control.
  • the vehicle control unit 130 includes various electronic control devices such as an engine ECU (Electronic Control Unit) 130a, a brake ECU 130b, a steering angle ECU 130c, and a stability ECU 130d.
  • the engine ECU 130a controls the output of the engine by issuing a control command according to the operation amount of the accelerator pedal and the state of the engine.
  • the brake ECU 130 b controls the braking force of the brake according to the operation amount of the brake pedal.
  • the steering angle ECU 130 c controls the steering angle of the steering.
  • the stability ECU 130 d controls the traveling stability of the vehicle.
  • the control unit 129 controls the traveling of the vehicle by giving commands to the respective ECUs in the vehicle control unit 130 according to the amount of driving operation (accelerator operation amount, brake operation amount, steering operation amount, etc.).
  • In the driving support mode, the control unit 129 analyzes in real time the traveling situation of the host vehicle, which changes from moment to moment, generates driving support information based on the analysis result and the driving behavior information of the host vehicle and/or the driving behavior information of other vehicles, and notifies the driver of the driving support information using the display unit 127 or the like.
  • In the automatic operation mode, the control unit 129 analyzes in real time the traveling condition of the host vehicle, which changes from moment to moment, determines the driving operation amounts based on the analysis result, the driving behavior information of the host vehicle, and/or the driving behavior information of other vehicles, and gives commands to each ECU in the vehicle control unit 130.
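  • As an illustrative sketch only (not part of the disclosed embodiment), the following Python fragment shows one way a control unit could turn a position-to-driving-operation correspondence into ECU commands: the recorded point nearest to the current position is looked up and its operation amounts are mapped onto the engine, brake, and steering ECUs. The names OperationRecord, nearest_record, and to_ecu_commands are hypothetical, and real control would involve far more than a nearest-point lookup.

      # Minimal sketch: look up the recorded driving operation nearest the current
      # position and convert it into ECU commands. Hypothetical names throughout.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class OperationRecord:
          lat: float
          lon: float
          accel: float      # accelerator operation amount (0.0 - 1.0)
          brake: float      # brake operation amount (0.0 - 1.0)
          steering: float   # steering operation amount in degrees (+: right, -: left)

      def nearest_record(records: List[OperationRecord], lat: float, lon: float) -> OperationRecord:
          # Pick the recorded point closest to the host vehicle's current position.
          return min(records, key=lambda r: (r.lat - lat) ** 2 + (r.lon - lon) ** 2)

      def to_ecu_commands(record: OperationRecord) -> dict:
          # Map the recorded operation amounts onto commands for the individual ECUs.
          return {
              "engine_ecu": {"accelerator": record.accel},
              "brake_ecu": {"brake": record.brake},
              "steering_angle_ecu": {"angle": record.steering},
          }

      if __name__ == "__main__":
          route_r = [
              OperationRecord(35.6810, 139.7670, accel=0.3, brake=0.0, steering=0.0),
              OperationRecord(35.6815, 139.7675, accel=0.0, brake=0.4, steering=-15.0),
          ]
          print(to_ecu_commands(nearest_record(route_r, 35.6814, 139.7674)))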
  • Although the onboard gateway 110 and the traveling control system 120 are described as separate components, the onboard gateway 110 can also be integrated with the traveling control system 120.
  • the automobile 100 configured as described above performs driving assistance and automatic driving based on the traveling state of the host vehicle and the driving behavior information of the host vehicle or the driving behavior information of another vehicle.
  • the automobile 100 functions as a non-robot car when traveling in the manual operation mode or the driving support mode, and functions as a robot car when traveling in the automatic operation mode.
  • FIG. 3 is an explanatory view of an embodiment of a vehicle (motor vehicle) of the present invention.
  • the automobile V1 (100) has no experience of traveling on the traveling route R.
  • the automobile V2 (100) has experience of traveling on the traveling route R.
  • the vehicle V2 acquires various data related to driving when traveling on the traveling route R, and stores the various data in the storage unit 128 of the host vehicle.
  • the vehicle V2 provides the vehicle V1 with driving behavior information including the various data stored in the storage unit 128.
  • The driving behavior information in this case includes information in which the traveling route R and the driving operation performed by the vehicle V2 at each point on the route R are associated with each other.
  • the automobile (own vehicle) V1 can use the driving behavior information of the automobile (other vehicle) V2 for driving support control and automatic driving control of the own vehicle V1.
  • Although the automobile V1 has no experience of traveling on the traveling route R, by performing driving support control and automatic driving control based on the driving behavior information of the automobile (other vehicle) V2, which has experience traveling on the route R, the automobile V1 can exhibit the same level of driving support performance and automatic driving performance as the automobile (other vehicle) V2.
  • When the traveling route R is a narrow, winding road or a narrow road with many obstacles such as telephone poles, it is not easy for a driver who is unfamiliar with driving, or for a driver who boards a different vehicle each time through a car sharing service or the like, to travel the route R smoothly. A robot car (automated driving car) is also not good at traveling smoothly on this kind of road.
  • If the vehicle V2 travels smoothly along the route R every day, then by performing driving assistance control with reference to the driving behavior information of the vehicle V2, the vehicle V1 can travel smoothly on the route R with the same level of driving performance as the vehicle V2, even when the driver of the vehicle V1 is unfamiliar with driving or the vehicle V1 is a shared vehicle of a car sharing service or the like. Likewise, by performing automatic driving control with reference to the driving behavior information of the vehicle V2, the vehicle V1 can travel smoothly along the route R with the same level of driving performance as the vehicle V2.
  • FIG. 4 is a conceptual view exemplifying a position on a driving route-driving operation correspondence table included in driving behavior information.
  • the driving behavior information is managed by data ID: Data ID.
  • the data ID is a unique value that can specify data of one piece of driving behavior information out of a huge amount of driving behavior information.
  • a unique combination of a vehicle ID (Car ID) and a route ID (Root ID) is specified by the data ID (see FIG. 5A).
  • the vehicle ID is a unique value that can identify one vehicle among many vehicles.
  • The vehicle that produced the driving behavior information is therefore also specified via the vehicle ID.
  • The route ID (Root ID) is a unique value that can identify one route out of a huge number of routes.
  • The combination of the starting point (Starting Point), the arrival point (Destination Point), and the passing points (Pass Point) is specified by the route ID (see FIG. 5B).
  • The driving behavior information illustrated in FIG. 4 is obtained when the vehicle V2 travels on the route R shown on the map. The starting point (Starting Point) of the route R is S1, the destination point (Destination Point) is D1, and the pass points (Pass Point) are PP1, PP2, and PP3.
  • The driving behavior information illustrated in FIG. 4 indicates the correspondence between each point (P1, P2, ..., Pn) on the route R and the driving operation performed by the vehicle V2 at that point.
  • For example, a start operation and acceleration operation at point P1, an acceleration operation at point P2, a deceleration operation (braking operation) at point P3, a left turning operation (operation to turn the steering wheel to the left) at point P4, a right turning operation (operation to return the steering wheel) at the following point, an acceleration operation at point P6, a deceleration operation (braking operation) at point Pn-3, a right turning operation (operation to turn the steering wheel to the right) followed by a left turning operation (operation to return the steering wheel) at the subsequent points, and a deceleration operation (braking operation) near the arrival point are recorded.
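  • The record layout suggested by FIGS. 4 and 5 can be pictured with a small data model. The following sketch is an assumed illustration only: names such as DrivingBehaviorInfo, Route, and point_to_operation are hypothetical and do not reflect the actual data format of the embodiment.

      # Minimal sketch of driving behavior information keyed by Data ID, Car ID and
      # Root ID, with a route and a point-to-operation correspondence table.
      from dataclasses import dataclass, field
      from typing import Dict, List, Tuple

      Point = Tuple[float, float]  # (latitude, longitude)

      @dataclass
      class Route:
          root_id: str
          starting_point: Point
          destination_point: Point
          pass_points: List[Point] = field(default_factory=list)

      @dataclass
      class DrivingBehaviorInfo:
          data_id: str                 # unique among all driving behavior records
          car_id: str                  # identifies the vehicle that produced the data
          route: Route                 # identifies the traveled route
          point_to_operation: Dict[str, str] = field(default_factory=dict)

      if __name__ == "__main__":
          route_r = Route("R", (35.00, 139.00), (35.10, 139.10),
                          [(35.02, 139.02), (35.05, 139.05), (35.08, 139.08)])
          info = DrivingBehaviorInfo(
              data_id="D-0001",
              car_id="V2",
              route=route_r,
              point_to_operation={"P1": "acceleration", "P3": "deceleration (braking)",
                                  "P4": "left turn (turn wheel left)"},
          )
          print(info.data_id, info.car_id, info.route.root_id, info.point_to_operation["P3"])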
  • a method of delivering driving behavior information from the automobile V2 to the automobile V1 is arbitrary.
  • Examples include communication between the host vehicle V1 and the other vehicle V2 (see FIG. 7), communication between the host vehicle V1 and the ground stationary object 410 (see FIG. 8), communication between the host vehicle V1 and the road 420 (see FIG. 9), communication between the host vehicle V1 and the portable terminal 500 (see FIG. 10), delivery of information via the computing system (cloud system) 200 (see FIG. 11), and so on.
  • a traffic light on a road is illustrated as the ground stationary object 410.
  • the vehicles V1 and V2 transmit and receive driving behavior information by using the traffic light 410 as an access point (AP).
  • the driving behavior information of the vehicle V2 transmitted from the vehicle V2 is received by the vehicle V1 via the traffic light 410.
  • the traffic light 410 that receives the driving behavior information and the traffic light 410 that transmits the driving behavior information may be the same traffic light or may be different traffic lights.
  • access points (APs) are arranged at predetermined intervals along the road.
  • the access point (AP) may be embedded in the road surface or may be provided on the side of the road.
  • the driving behavior information of the automobile (other vehicle) V2 is stored in the portable terminal 500 of the driver of the automobile (own vehicle) V1.
  • For example, the driving behavior information is downloaded from the computing system 200 to the portable terminal 500.
  • The configuration shown in FIG. 11 can be realized, for example, by using the ground stationary object 410 shown in FIG. 8 or the access points (AP) provided on the road 420 shown in FIG. 9 as access points through which the vehicles V1 and V2 communicate with the computing system 200 via the Internet 300.
  • FIG. 12 is an explanatory view of another embodiment of the automobile of the present invention.
  • The garage G is a garage that the automobile V2 (100) uses daily. Since the garage G faces the narrow road ST, storing a vehicle in the garage is difficult for a driver who is unfamiliar with driving or a driver who boards a different vehicle each time through a car sharing service or the like.
  • On the left front side of the garage G there is a recess D having a shape branched from the road ST.
  • In order to store the vehicle V2 in the garage G, first, as shown in FIG. 13A, the vehicle V2 has to be advanced diagonally to the right until the right end of the vehicle V2 enters the recess D. Thereafter, the vehicle V2 must be advanced while carefully adjusting the steering angle so that its trajectory draws an arc.
  • the vehicle V2 acquires various data related to driving when it is stored in the garage G, and stores the various data in the storage unit 128 of the own vehicle.
  • the vehicle V2 provides the vehicle V1 (100) with driving behavior information including the various data stored in the storage unit 128.
  • The driving behavior information in this case includes information in which the movement route for storing the vehicle in the garage G and the driving operation performed by the vehicle V2 at each point on that route are associated with each other (an entry/exit route position to driving-operation correspondence table).
  • the manner in which the vehicle V2 provides its driving behavior information to the vehicle V1 is arbitrary.
  • FIG. 14 exemplifies a case where driving behavior information is delivered by communication between the vehicles V1 and V2 and the garage G.
  • a garage storage experience providing device 440 is provided in the vicinity of the entrance of the garage G.
  • The garage storage experience providing device 440 receives the driving behavior information (experience information) from the automobile V2 (driving behavior information receiving function) and stores the driving behavior information in its storage unit (driving behavior information storage function). Then, when the vehicle V1 approaches the garage G, the garage storage experience providing device 440 transmits the driving behavior information stored in the storage unit to the vehicle V1.
  • the vehicle V1 uses the driving behavior information of the vehicle V2 for the driving support control and the automatic driving control of the own vehicle.
  • Although the car V1 has no experience of entering the garage G, by performing driving support control and automatic driving control based on the driving behavior information of the car V2, which uses the garage G on a daily basis, the car V1 can be stored in the garage G with the same level of driving support performance and automatic driving performance as the car V2. The same applies to leaving the garage.
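  • The receive/store/forward behaviour attributed to the garage storage experience providing device 440 can be sketched as follows. This is an assumed illustration: the class name, the approach-radius threshold, and the use of plain method calls in place of short-range vehicle communication are all hypothetical.

      # Minimal sketch of a garage-side device that stores one vehicle's garage-entry
      # experience and hands it to another vehicle that approaches the garage.
      class GarageExperienceDevice:
          def __init__(self, approach_radius_m: float = 30.0):
              self.approach_radius_m = approach_radius_m
              self._stored_info = None  # driving behavior information of a vehicle using the garage

          def receive_driving_behavior_info(self, info: dict) -> None:
              # Driving behavior information receiving/storage function: keep the latest experience.
              self._stored_info = info

          def on_vehicle_approach(self, distance_to_garage_m: float):
              # When an approaching vehicle comes within range, hand over the stored experience.
              if self._stored_info is not None and distance_to_garage_m <= self.approach_radius_m:
                  return self._stored_info  # delivered to the approaching vehicle (e.g. V1)
              return None

      if __name__ == "__main__":
          device = GarageExperienceDevice()
          device.receive_driving_behavior_info(
              {"car_id": "V2", "entry_route": ["advance diagonally right into recess D", "arc into garage G"]})
          print(device.on_vehicle_approach(distance_to_garage_m=12.0))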
  • FIG. 15 is a conceptual view showing a configuration example of a computing system in a road traffic system according to the present invention.
  • The server computer 210 in the computing system 200 receives driving behavior information (experience information) of one or more vehicles V1, V2, V3, ... via the Internet 300 (driving behavior information receiving function 210a), and transmits the driving behavior information via the Internet 300 to one or more vehicles V1, V2, V3, ... different from the transmission source of the driving behavior information (driving behavior information transmitting function 210b).
  • the vehicle that has received the driving behavior information from the server computer 210 can use the driving behavior information for driving support control and automatic driving control of the own vehicle.
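  • The relay behaviour of the server computer 210 in FIG. 15 can be sketched as a receiving function (210a) plus a transmitting function (210b) that forwards each record to every registered vehicle other than its source. The sketch below is illustrative only; networking is reduced to callbacks and all names are hypothetical.

      # Minimal sketch of a server that relays driving behavior information between vehicles.
      from typing import Callable, Dict, List

      class DrivingBehaviorRelayServer:
          def __init__(self):
              self._records: List[dict] = []
              self._subscribers: Dict[str, Callable[[dict], None]] = {}  # car_id -> delivery callback

          def register_vehicle(self, car_id: str, deliver: Callable[[dict], None]) -> None:
              self._subscribers[car_id] = deliver

          def receive_driving_behavior_info(self, info: dict) -> None:
              # Driving behavior information receiving function (210a).
              self._records.append(info)
              self.transmit_driving_behavior_info(info)

          def transmit_driving_behavior_info(self, info: dict) -> None:
              # Driving behavior information transmitting function (210b):
              # forward to all registered vehicles except the source vehicle.
              for car_id, deliver in self._subscribers.items():
                  if car_id != info.get("car_id"):
                      deliver(info)

      if __name__ == "__main__":
          server = DrivingBehaviorRelayServer()
          server.register_vehicle("V1", lambda info: print("V1 received", info))
          server.register_vehicle("V2", lambda info: print("V2 received", info))
          server.receive_driving_behavior_info({"car_id": "V2", "route": "R", "operations": {"P3": "brake"}})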
  • FIG. 16 is a conceptual diagram showing another configuration example of the computing system in the road traffic system of the present invention.
  • The server computer 210 in the computing system 200 receives driving behavior information (experience information) of one or more vehicles via the Internet 300 (driving behavior information receiving function 210a), generates driving behavior information optimized based on the received driving behavior information (optimization information generation function 210c), updates and manages the optimized driving behavior information as the latest information (optimization information updating function 210d), and transmits it to one or more vehicles via the Internet (driving behavior information transmission function 210b).
  • the vehicle that has received the driving behavior information from the server computer 210 can use the driving behavior information for driving control of the vehicle.
  • Even in a situation that the own vehicle has not experienced, driving assistance control and automatic driving control are performed based on driving behavior information optimized from the driving behavior information of other vehicles that have experienced that situation, so the situation can be coped with at a driving support performance and automatic driving performance equal to or higher than those of the experienced vehicles.
  • A large number of vehicles V1, V2, V3, ..., Vn mutually utilize driving behavior information, so the driving support performance and the automatic driving performance of each vehicle can be optimized efficiently. That is, by utilizing not only the experience of the own vehicle but also the experiences of other vehicles, an efficient road traffic system can be realized.
  • Targets of optimization include consumed energy, regenerative energy, accident incidence rate, and the like.
  • When the vehicle attributes (vehicle type, vehicle size, inner wheel difference, etc.) of the vehicle that provided the driving behavior information and the vehicle to which it is provided are different, the server computer 210 corrects the driving behavior information to values optimal for the vehicle attributes of the destination vehicle. Therefore, even when the source vehicle and the destination vehicle have different vehicle attributes, the destination vehicle is provided with driving behavior information optimized for that vehicle.
  • As the optimized driving behavior information, there can be mentioned driving behavior information that has been corrected so as to minimize the possibility of the destination vehicle coming into contact with an obstacle, according to the current traveling situation.
  • The correspondence relationship between source vehicles and a destination vehicle may be many-to-one. In the many-to-one case, it is desirable to provide the destination vehicle with driving behavior information obtained by correcting the average of the driving behavior information of the plurality of source vehicles.
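  • Two of the ideas above can be sketched together: averaging the driving behavior information of several source vehicles (the many-to-one case) and correcting the averaged values for the attributes of the destination vehicle. The specific correction rule used here (scaling the steering amount by a wheelbase ratio) is an assumed example only and is not the optimization actually performed by the server computer 210.

      # Minimal sketch: average operation amounts from several source vehicles, then
      # apply an assumed attribute correction for the destination vehicle.
      from statistics import mean
      from typing import Dict, List

      def average_operations(per_vehicle_ops: List[Dict[str, float]]) -> Dict[str, float]:
          # Average the operation amounts reported by several source vehicles at the same point.
          keys = set().union(*per_vehicle_ops)
          return {k: mean(ops.get(k, 0.0) for ops in per_vehicle_ops) for k in keys}

      def correct_for_destination(ops: Dict[str, float],
                                  source_wheelbase_m: float,
                                  dest_wheelbase_m: float) -> Dict[str, float]:
          # Assumed rule: a longer wheelbase needs a larger steering input for the same turn.
          corrected = dict(ops)
          if "steering" in corrected:
              corrected["steering"] *= dest_wheelbase_m / source_wheelbase_m
          return corrected

      if __name__ == "__main__":
          sources = [
              {"accel": 0.20, "steering": -14.0},
              {"accel": 0.30, "steering": -16.0},
              {"accel": 0.25, "steering": -15.0},
          ]
          averaged = average_operations(sources)
          print(correct_for_destination(averaged, source_wheelbase_m=2.6, dest_wheelbase_m=2.9))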
  • FIG. 17 is a conceptual view showing another configuration example of the road traffic system of the present invention.
  • the vehicles 100 constituting the road traffic system 1 are roughly classified into robot cars (automatically driven cars) 100A and non-robot cars (manually operated cars or cars with a driving support function) 100B. Although only one robot car 100A and one non-robot car 100B are shown in FIG. 17, in an actual system, there are a plurality of robot cars 100A and one or more non-robot cars 100B.
  • The server computer 210 in the computing system 200 has a driving behavior information receiving function 210a for receiving driving behavior information (experience information) from the non-robot car 100B, and a driving behavior information transmitting function 210b for transmitting the driving behavior information to the robot car 100A.
  • the robot car 100A is a vehicle in which a driving operation is performed by automatic driving control instead of the driving operation by a human driver.
  • The robot car 100A includes a driving condition recognition unit 100Aa for recognizing the driving condition of the host vehicle, a driving behavior information receiving unit 100Ab for receiving the driving behavior information of the non-robot car 100B from the computing system 200, and an automatic driving control unit 100Ac that performs automatic driving control according to the traveling condition recognized by the traveling condition recognition unit 100Aa of the own vehicle while referring to the driving behavior information received by the driving behavior information receiving unit 100Ab.
  • the robot car 100A performs automatic driving control while learning the driving operation based on various data obtained at the time of automatic driving travelling.
  • The robot car 100A includes so-called driver-assisted self-driving cars, in which a human driver can perform an avoidance operation in an emergency.
  • the non-robot car 100B is a vehicle on which a human driver performs a driving operation.
  • The non-robot car 100B includes a traveling condition recognition unit 100Ba that recognizes the traveling condition of the host vehicle, a driving operation detection unit 100Bb that detects a driving operation by the human driver of the host vehicle, and a driving behavior information transmitting unit 100Bc for transmitting to the computing system 200 driving behavior information in which the traveling condition recognized by the traveling condition recognition unit 100Ba and the driving operation detected by the driving operation detection unit 100Bb are associated with each other.
  • the non-robot car 100B executes driving support control while learning the driving operation of the driver of the host vehicle.
  • the traveling condition recognition units 100Aa and 100Ba are realized by the detection unit 121, the vehicle information input unit 122, the positioning unit 123, the map information input unit 124, the operation unit 125, the communication unit 126, and the control unit 129.
  • the driving operation detection unit 100Bb is realized by the driving operation detection function of the vehicle information input unit 122.
  • the driving behavior information reception unit 100Ab and the driving behavior information transmission unit 100Bc are realized by the on-vehicle gateway 110.
  • the automatic operation control unit 100Ac is realized by the control unit 129.
  • the non-robot car 100B transmits the driving behavior information of the host vehicle to the computing system 200.
  • the server computer 210 in the computing system 200 receives the driving behavior information of the non-robot car 100B via the Internet, and transmits the driving behavior information to the robot car 100A via the Internet 300.
  • the robot car 100A that has received the driving behavior information of the non-robot car 100B from the server computer 210 uses the driving behavior information for automatic driving control of the own vehicle. That is, the robot car 100A performs automatic driving control according to the traveling condition of the own vehicle while referring to the driving behavior information of the non-robot car 100B received from the server computer 210.
  • The driving behavior information that the robot car 100A receives from the computing system 200 is driving behavior information in which the learning result of the driving operation by the driver of the non-robot car 100B is reflected. Therefore, according to this system, even in a situation that the host vehicle has not experienced (has not learned), if the non-robot car 100B has experienced that situation, the robot car 100A can cope with it at the same level of driving performance as the non-robot car 100B by performing automatic driving control based on the driving behavior information of the non-robot car 100B.
  • FIG. 18 is a conceptual view showing still another configuration example of the road traffic system of the present invention.
  • the same components as in FIG. 17 will be assigned the same reference numerals and descriptions thereof will be omitted as appropriate.
  • The server computer 210 in the computing system 200 has a driving behavior information receiving function 210a for receiving driving behavior information (experience information) from the non-robot car 100B, an optimization information generation function 210c for generating driving behavior information optimized based on the received driving behavior information, an optimization information update function 210d for updating and managing the optimized driving behavior information as the latest information, and a driving behavior information transmission function 210b for transmitting the optimized driving behavior information to the robot car 100A.
  • the non-robot car 100B transmits the driving behavior information of the host vehicle to the computing system 200.
  • The server computer 210 in the computing system 200 receives the driving behavior information of the non-robot car 100B via the Internet, optimizes the driving behavior information, and transmits the latest optimized driving behavior information to the robot car 100A via the Internet 300, always keeping it up to date.
  • the robot car 100A that has received the driving behavior information of the non-robot car 100B from the server computer 210 can use the driving behavior information for automatic driving control of the own vehicle.
  • Even in a situation that the own vehicle has not experienced, if the non-robot car 100B is a vehicle that has experienced that situation, the robot car 100A can cope with the situation at a driving performance equal to or higher than that of the non-robot car 100B by performing automatic driving control based on the latest driving behavior information optimized from the driving behavior information of the non-robot car 100B.
  • As the non-robot car 100B learns the driving operations of its driver every day and improves its driving support performance daily, the automatic driving performance of the robot car 100A can also be improved daily. That is, in a situation where the robot car 100A and the non-robot car 100B coexist, the robot car 100A learns the driving technique of the driver driving the non-robot car 100B, and the automatic driving performance of the robot car 100A is improved efficiently. As the automatic driving performance of the robot car 100A improves, the operation efficiency of the entire road traffic system 1 can be improved, safety can be improved, customer satisfaction can be improved, and so on.
  • According to the systems described above, driving behavior information obtained when a professional driver such as a taxi driver or a bus driver drives the non-robot car 100B can be used to improve the operation efficiency, safety, and the like of the entire road traffic system 1. By setting up a scheme in which a professional driver who has provided driving behavior information useful for improving the automatic driving performance of the robot car 100A can obtain compensation for providing driving behavior information when driving the non-robot car 100B, incentives can be given to professional drivers to encourage them to provide driving behavior information drawing on their advanced driving techniques.
  • This system is therefore convenient both for companies that want to improve the automatic driving performance of the robot car 100A in an environment where the robot car 100A and the non-robot car 100B coexist (a transitional environment until all vehicles become robot cars) and for taxi drivers and bus drivers.
  • In a group of robot cars 100A traveling in a coordinated manner, a certain inter-vehicle distance is maintained.
  • In the above description, the robot car 100A has the driving behavior information receiving unit (driving behavior information acquiring unit) 100Ab for receiving the driving behavior information of the non-robot car 100B via the Internet, but the function of acquiring driving behavior information can also be realized by other methods, for example, inter-vehicle communication (see FIG. 7), communication between the host vehicle and a ground stationary object (see FIG. 8), road-to-vehicle communication (see FIG. 9), or communication between the host vehicle and a portable terminal (see FIG. 10).
  • FIG. 19 is a conceptual view showing still another configuration example of the road traffic system of the present invention.
  • In the examples described above, only the robot car 100A uses the driving behavior information (experience information) of the non-robot car 100B, but it is also possible to adopt a system configuration in which the robot car 100A and the non-robot car 100B mutually use each other's driving behavior information.
  • the robot car 100A in this case has a driving behavior information output unit (driving behavior information transmitting unit, etc.) 100Ad for providing driving behavior information of the own vehicle to another vehicle as illustrated in FIG. 19A.
  • The non-robot car 100B in this case is a car with a driving support function and, as illustrated in FIG. 19B, includes a driving condition recognition unit 100Ba for recognizing the driving condition of the own vehicle, a driving behavior information acquiring unit (driving behavior information receiving unit, etc.) 100Bd that acquires the driving behavior information of the robot car 100A, and a driving support control unit 100Be that performs driving support control according to the driving situation recognized by the driving situation recognition unit 100Ba of the own vehicle while referring to the driving behavior information acquired from the robot car 100A.
  • the driving support control unit 100Be is realized by the control unit 129 (see FIG. 2).
  • According to this configuration, the automatic driving performance of the robot car 100A can be improved daily as the non-robot car 100B learns the daily driving operations of its human driver and improves its driving support performance, and the driving support performance of the non-robot car 100B can also be improved daily. That is, in a situation where the robot car 100A and the non-robot car 100B coexist, the robot car 100A can learn the driving technique of the human driver who drives the non-robot car 100B to improve its automatic driving performance with high efficiency, and the non-robot car 100B can learn the driving operation of the robot car 100A to improve its driving support performance with high efficiency. As the driving performance of both the robot car 100A and the non-robot car 100B improves, the operation efficiency of the entire road traffic system 1 can be improved, safety can be improved, and so on.
  • The system configuration in which the non-robot car 100B can use the driving behavior information of the robot car 100A is particularly suitable for the road traffic system 1 of the era after the automatic driving performance of the robot car 100A surpasses the driving technique of human drivers. This is because, once the robot car 100A is better at driving than a human driver, there is little point in having the robot car 100A learn the driving operation of the human driver.
  • FIGS. 20 and 21 are conceptual diagrams showing still another configuration example of the road traffic system of the present invention.
  • In the examples described above, the robot car 100A and the non-robot car 100B are mixed, but it is also possible to adopt a system configuration in which the vehicles 100 constituting the road traffic system 1 are only robot cars 100A and the robot cars 100A in the system mutually use each other's driving behavior information. As in the example of FIG. 19A, each robot car 100A in this case has a driving behavior information output unit (driving behavior information transmitting unit, etc.) 100Ad for providing the driving behavior information of the own vehicle to other vehicles.
  • By having the robot cars 100A in the road traffic system 1 mutually use driving behavior information, the learning efficiency of each robot car 100A is increased and its automatic driving performance can be improved rapidly. Since the automatic driving performance of all the robot cars 100A in the road traffic system 1 can be improved rapidly, the operation efficiency, safety, customer satisfaction, and the like of the entire road traffic system 1 improve rapidly.
  • FIG. 22A is a functional block diagram showing a configuration example of the automatic operation control unit 100Ac of the robot car 100A in FIG. 17 and the subsequent figures.
  • FIG. 22B is a functional block diagram showing a configuration example of the driving support control unit 100Be of the non-robot car 100B in FIG. 19.
  • the automatic driving control unit 100Ac is a functional block that determines a driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit 100Aa, and performs automatic driving control so that the driving operation is performed.
  • The driving support control unit 100Be is a functional block that determines a driving operation to be performed based on the traveling state recognized by the traveling state recognition unit 100Ba, and performs driving support control so that the driving operation is performed.
  • As illustrated in FIG. 22A, the automatic driving control unit 100Ac includes a driving knowledge unit 101a that stores knowledge information to be referred to when deciding the driving operation to be performed, and a learning processing unit (knowledge update processing unit) 102a that updates the knowledge information stored in the driving knowledge unit 101a based on the driving behavior information acquired by the driving behavior information acquisition unit (driving behavior information receiving unit, etc.) 100Ab.
  • The robot car 100A updates the knowledge information (such as the determination criteria for deciding the driving operation to be performed) based on the driving behavior information of another vehicle (another robot car 100A or a non-robot car 100B), determines the driving operation according to the traveling situation with reference to the knowledge information, and performs automatic driving control so that the driving operation is performed. Therefore, even in a situation that the own vehicle has not experienced, the robot car 100A can learn the driving behavior of another vehicle that has experienced the situation and perform automatic driving control.
  • Likewise, the driving support control unit 100Be includes a driving knowledge unit 101b that stores knowledge information to be referred to when deciding the driving operation to be performed, and a learning processing unit (knowledge update processing unit) 102b that updates the knowledge information stored in the driving knowledge unit 101b based on the driving behavior information acquired by the driving behavior information acquisition unit (driving behavior information receiving unit, etc.) 100Bd.
  • The non-robot car 100B updates the knowledge information (determination criteria for deciding the driving operation to be performed, etc.) based on the driving behavior information of another vehicle (the robot car 100A or another non-robot car 100B), determines the driving operation according to the traveling situation with reference to the knowledge information, and performs driving assistance control so that the driving operation is performed. Therefore, even in a situation that the own vehicle has not experienced, the non-robot car 100B can learn the driving behavior of another vehicle that has experienced the situation and perform driving support control.
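  • The arrangement of FIG. 22 can be sketched as a driving knowledge unit holding situation-to-operation knowledge, a learning processing unit that rewrites that knowledge from another vehicle's driving behavior information, and a decision step that consults the knowledge. The situation key and the overwrite-style update policy below are assumptions made purely for illustration.

      # Minimal sketch of knowledge-update learning: knowledge is a situation -> operation map
      # that is updated from acquired driving behavior information and consulted when deciding.
      from typing import Dict

      class DrivingKnowledgeUnit:
          # Stores knowledge information referred to when deciding a driving operation.
          def __init__(self):
              self._knowledge: Dict[str, str] = {}  # situation key -> recommended driving operation

          def lookup(self, situation_key: str, default: str = "maintain speed") -> str:
              return self._knowledge.get(situation_key, default)

          def update(self, situation_key: str, operation: str) -> None:
              self._knowledge[situation_key] = operation

      class LearningProcessingUnit:
          # Updates the driving knowledge unit based on acquired driving behavior information.
          def __init__(self, knowledge: DrivingKnowledgeUnit):
              self._knowledge = knowledge

          def learn_from(self, driving_behavior_info: Dict[str, str]) -> None:
              for situation_key, operation in driving_behavior_info.items():
                  self._knowledge.update(situation_key, operation)

      def decide_operation(knowledge: DrivingKnowledgeUnit, recognized_situation_key: str) -> str:
          # Driving operation decision made with reference to the knowledge information.
          return knowledge.lookup(recognized_situation_key)

      if __name__ == "__main__":
          knowledge = DrivingKnowledgeUnit()
          learner = LearningProcessingUnit(knowledge)
          # Driving behavior information of another vehicle that has experienced route R.
          learner.learn_from({"route R, point P3": "brake", "route R, narrow bend at P4": "left turn, low speed"})
          print(decide_operation(knowledge, "route R, point P3"))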
  • FIG. 23A is a functional block diagram showing another configuration example of the automatic operation control unit 100Ac of the robot car 100A in FIG. 17 and the subsequent figures.
  • FIG. 23B is a functional block diagram showing another configuration example of the driving support control unit 100Be of the non-robot car 100B in FIG. 19.
  • the automatic driving control unit 100Ac is a functional block that determines a driving operation to be performed based on the traveling condition recognized by the traveling condition recognition unit 100Aa, and performs automatic driving control so that the driving operation is performed.
  • The driving support control unit 100Be is a functional block that determines a driving operation to be performed based on the traveling state recognized by the traveling state recognition unit 100Ba, and performs driving support control so that the driving operation is performed.
  • As illustrated in FIG. 23A, the automatic driving control unit 100Ac includes a driving operation determination unit 103a that determines by calculation the driving operation according to the traveling condition recognized by the traveling condition recognition unit 100Aa, and a learning processing unit 104a that adjusts the parameters of the driving operation determination function used in the driving operation determination unit 103a based on the driving behavior information acquired by the driving behavior information acquisition unit (driving behavior information receiving unit, etc.) 100Ab.
  • The robot car 100A performs the learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of another vehicle (another robot car 100A or a non-robot car 100B), determines the driving operation corresponding to the traveling situation by the driving operation determination function, and performs automatic driving control so that the driving operation is performed. Therefore, even in a situation that the own vehicle has not experienced, the robot car 100A can learn the driving behavior of another vehicle that has experienced the situation and perform automatic driving control.
  • By configuring the learning processing unit 104a of the automatic driving control unit 100Ac as a deep neural network, a robot car 100A having general-purpose driving knowledge and driving ability (strong AI) like a human being may be realized in the future.
  • As illustrated in FIG. 23B, the driving assistance control unit 100Be includes a driving operation determination unit 103b that determines by calculation the driving operation according to the traveling situation recognized by the traveling situation recognition unit 100Ba, and a learning processing unit 104b that adjusts the parameters of the driving operation determination function used in the driving operation determination unit 103b based on the driving behavior information acquired by the driving behavior information acquisition unit (driving behavior information receiving unit, etc.) 100Bd.
  • The non-robot car 100B performs the learning process of adjusting the parameters of the driving operation determination function based on the driving behavior information of another vehicle (the robot car 100A or another non-robot car 100B), determines the driving operation according to the traveling situation by the driving operation determination function, and performs driving support control so that the driving operation is performed. Therefore, even in a situation that the own vehicle has not experienced, the non-robot car 100B can learn the driving behavior of another vehicle that has experienced the situation and perform driving support control.
  • Similarly, by configuring the learning processing unit 104b of the driving support control unit 100Be as a deep neural network, a non-robot car 100B having general-purpose driving knowledge and driving ability (strong AI) like a human being may be realized in the future.
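  • The parameter-adjusting arrangement of FIG. 23 can be sketched with a single driving operation determination function whose parameters are tuned so that its output approaches the operations recorded in another vehicle's driving behavior information. A linear function trained by plain gradient descent stands in below for the (possibly deep neural network) function mentioned above; the features, learning rate, and iteration counts are assumptions made for illustration.

      # Minimal sketch of adjusting the parameters of a driving operation determination
      # function so that predicted operations match the recorded (teacher) operations.
      from typing import List, Tuple

      def determine_operation(params: List[float], features: List[float]) -> float:
          # Driving operation determination function: a linear map from situation features
          # to a steering amount (stand-in for a more expressive function).
          return sum(p * f for p, f in zip(params, features))

      def adjust_parameters(params: List[float],
                            examples: List[Tuple[List[float], float]],
                            learning_rate: float = 0.05,
                            epochs: int = 2000) -> List[float]:
          # Learning process: nudge the parameters toward the recorded operations.
          for _ in range(epochs):
              for features, recorded_operation in examples:
                  error = determine_operation(params, features) - recorded_operation
                  params = [p - learning_rate * error * f for p, f in zip(params, features)]
          return params

      if __name__ == "__main__":
          # (situation features, steering recorded in the other vehicle's driving behavior information)
          # features: [bias, road curvature, lateral offset from lane centre]
          examples = [([1.0, 0.8, 0.1], -14.0), ([1.0, 0.0, 0.0], 0.0), ([1.0, -0.5, -0.1], 9.0)]
          params = adjust_parameters([0.0, 0.0, 0.0], examples)
          print([round(p, 2) for p in params],
                round(determine_operation(params, [1.0, 0.8, 0.1]), 1))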
  • It is desirable for the vehicle that is provided with driving behavior information to have a function of correcting the provided driving behavior information to values optimal for the vehicle attributes of the own vehicle and the like, and of performing driving support control or automatic driving control with reference to the corrected driving behavior information.
  • the embodiment of the road traffic system of the present invention shown in FIGS. 1 to 23 is also an embodiment of the vehicle sharing system of the present invention. That is, the description of the embodiment of the road traffic system can be taken as the description of the embodiment of the vehicle sharing system by replacing "road traffic system" in the text with "vehicle sharing system".
  • Examples of the vehicle sharing system include systems that can provide a car rental service, a car sharing service, a robot taxi service, a robot bus service, and the like.
  • the robot car 100A is a vehicle shared by a plurality of users.
  • the non-robot car 100B is a vehicle other than a vehicle shared by a plurality of users.
  • FIG. 24 is a conceptual view showing a configuration example of a robot car training system according to the present invention.
  • the robot car training system 1 illustrated in FIG. 24 includes a robot car 100A, a non-robot car 100B, and a computing system 200.
  • Computing system 200 comprises server computer 210 and database 220.
  • The server computer 210 includes a driving behavior information receiving unit 210a that receives the driving behavior information of the non-robot car 100B via the Internet 300, and a driving behavior information transmission unit 210b that transmits the driving behavior information received by the driving behavior information receiving unit 210a to the robot car 100A via the Internet 300.
  • the database 220 accumulates and manages driving behavior information received by the server computer 210.
  • the server computer 210 may be single or plural.
  • The database 220 may be disposed on one server computer or may be distributed over a plurality of server computers.
  • The robot car 100A includes a driving condition recognition unit 100Aa for recognizing the driving condition of the own vehicle, a driving behavior information receiving unit (driving behavior information acquisition unit) 100Ab for receiving the driving behavior information of the non-robot car 100B, and an automatic operation control unit 100Ac that performs automatic driving control based on the traveling situation recognized by the traveling condition recognition unit 100Aa of the own vehicle and performs learning processing for learning the driving behavior of the non-robot car 100B based on the driving behavior information received by the driving behavior information receiving unit 100Ab.
  • The non-robot car 100B includes a traveling condition recognition unit 100Ba that recognizes the traveling condition of the host vehicle, a driving operation detection unit 100Bb that detects a driving operation by the human driver of the host vehicle, and a driving behavior information transmission unit (driving behavior information output unit) 100Bc that transmits driving behavior information in which the traveling condition recognized by the traveling condition recognition unit 100Ba and the driving operation detected by the driving operation detection unit 100Bb are associated with each other.
  • FIG. 25 is a functional block diagram showing an example of a system configuration of the robot car 100A.
  • FIG. 26 is a functional block diagram showing an example of a system configuration of the non-robot car 100B.
  • the robot car 100A has an on-vehicle gateway 110A and a traveling control system 120A.
  • the in-vehicle gateway 110A communicates with the computing system 200 under the control of the traveling control system 120A.
  • the in-vehicle gateway 110A inputs data received from the computing system 200 to the traveling control system 120A.
  • The on-vehicle gateway 110A transmits the data input from the traveling control system 120A to the computing system 200.
  • The traveling control system 120A includes a detection unit 121A, a vehicle information input unit 122A, a positioning unit 123A, a map information input unit 124A, an operation unit 125A, a communication unit 126A, a display unit 127A, a storage unit 128A, a control unit 129A, and the like.
  • The traveling condition recognition unit 100Aa in FIG. 24 is realized by the detection unit 121A, the vehicle information input unit 122A, the positioning unit 123A, the map information input unit 124A, the communication unit 126A, and the like, together with the control unit 129A.
  • the driving behavior information receiving unit 100Ab is realized by the on-vehicle gateway 110A.
  • The detection unit 121A is configured of sensors for detecting the presence of objects in the vicinity (other vehicles, pedestrians, ground stationary objects, etc.) and the position, size, relative velocity, etc. of those objects.
  • the detection unit 121A is embodied by, for example, a sonar 121a, a radar 121b, a camera 121c, a three-dimensional range sensor, and the like.
  • the sonar 121a transmits an ultrasonic wave to a predetermined area from each antenna directed in the front, rear, left, and right directions of the host vehicle, and receives the reflected wave.
  • the radar 121 b irradiates laser light or a millimeter wave from an antenna directed in the front, rear, left, and right directions of the host vehicle, scans a predetermined detection area, and receives the reflected wave. Then, based on the received reflected wave, the positional relationship with the host vehicle, the distance, the relative velocity, and the like are output for an object existing in the front, rear, left and right direction of the vehicle.
  • the camera 121c is provided at a predetermined position in the front, rear, left, and right directions of the host vehicle, and outputs imaging data in which surrounding vehicles present in the front, rear, left, and right directions of the host vehicle are captured.
  • a plurality of such sensors such as sonars, radars, cameras 121c, and three-dimensional range sensors may be used in combination or may be used alone.
  • The vehicle information input unit 122A inputs, to the control unit 129A, information related to the movement status of the host vehicle (center of gravity, yaw, roll, pitch, speed, acceleration, angular velocity, etc.) and the driving operation (accelerator operation, brake operation, steering operation, shift operation).
  • The positioning unit 123A measures the position (latitude, longitude) of the vehicle on the earth, and inputs the position to the control unit 129A.
  • the positioning unit 123A is embodied by, for example, a high accuracy positioning receiver or the like compatible with high accuracy GPS (Global Positioning System).
  • The map information input unit 124A acquires, from a storage medium storing road map information, information on the road on which the vehicle is currently traveling, and inputs the information to the control unit 129A.
  • Examples of the road information input by the map information input unit 124A include the number of lanes, the lane width, curves, slopes, merging, restrictions, and the like.
  • the operation unit 125A is an input device for inputting an operation instruction such as switching of various displays in the display unit 127A.
  • the communication unit 126A is a communication device for communicating with a communication device provided on the ground stationary object or a communication device mounted on a nearby vehicle. Ground stationary objects include garages and roads.
  • the display unit 127A is a display device including a center display provided in the center of the instrument panel and an indicator provided in the meter panel. Information indicating the state of the host vehicle is displayed on the display unit 127A.
  • the storage unit 128A is a storage device that stores recognition related information of the host vehicle, driving behavior information of the host vehicle, and driving behavior information of another vehicle.
  • the control unit 129A is an information processing apparatus mainly configured with a CPU, a ROM, a RAM, and the like (not shown), and centrally controls each part of the traveling control system 120A.
  • the control unit 129A executes various processes by the CPU executing a control program stored in the ROM.
  • Based on the position (latitude, longitude) of the host vehicle input from the positioning unit 123A and the road map information input from the map information input unit 124A, the control unit 129A grasps information on road structures such as telephone poles and traffic signals, while the three-dimensional range sensor of the detection unit 121A detects the three-dimensional distances of surrounding objects, so that detected objects are identified as road structures or as objects on the road.
  • The control unit 129A stores recognition related information of the host vehicle in the storage unit 128A.
  • the recognition related information includes recognition results of peripheral objects and the like, and various data used for the recognition processing.
  • The control unit 129A stores, in the storage unit 128A, driving behavior information of the host vehicle based on the various information input from the detection unit 121A, the vehicle information input unit 122A, the positioning unit 123A, and the map information input unit 124A.
  • The driving behavior information of the own vehicle includes position-on-route to driving-operation correspondence information (a position-on-route to driving-operation correspondence table, etc.) and entry/exit route position to driving-operation correspondence information (an entry/exit route position to driving-operation correspondence table, etc.) obtained by the detection unit 121A, the vehicle information input unit 122A, the positioning unit 123A, and the map information input unit 124A.
  • the controller 129A communicates with the computing system 200 via the in-vehicle gateway 110A.
  • The control unit 129A stores, in the storage unit 128A, the driving behavior information of the non-robot car 100B received via the on-vehicle gateway 110A.
  • The driving behavior information of the non-robot car 100B includes position-on-route to driving-operation correspondence information (a position-on-route to driving-operation correspondence table, etc.) and entry/exit route position to driving-operation correspondence information (an entry/exit route position to driving-operation correspondence table, etc.) obtained by the detection unit 121B, the vehicle information input unit 122B, the positioning unit 123B, and the map information input unit 124B of the non-robot car 100B.
  • Control unit 129A communicates with surrounding ground stationary objects and surrounding vehicles via communication unit 126A.
  • Control unit 129A stores the driving behavior information of non-robot car 100B received via communication unit 126A in storage unit 128A.
  • a vehicle control unit 130A to be subjected to driving control is connected to the control unit 129A.
  • the vehicle control unit 130A includes various electronic control devices such as an engine ECU (Electronic Control Unit) 130a, a brake ECU 130b, a steering angle ECU 130c, and a stability ECU 130d.
  • the engine ECU 130a controls the output of the engine by issuing a control command according to the operation amount of the accelerator pedal and the state of the engine.
  • The brake ECU 130b controls the braking force of the brake according to the operation amount of the brake pedal.
  • The steering angle ECU 130c controls the steering angle of the steering wheel.
  • The stability ECU 130d controls the traveling stability of the vehicle.
  • Control unit 129A controls the traveling of the vehicle by giving commands to the respective ECUs in vehicle control unit 130A according to the amount of driving operation (accelerator operation amount, brake operation amount, steering operation amount, etc.).
  • The control unit 129A analyzes in real time the traveling condition of the subject vehicle, which changes from moment to moment and is detected by the detection unit 121A and the like, determines the driving operation amounts based on the analysis result and the driving behavior information of the subject vehicle and/or the driving behavior information of the non-robot car 100B, and gives commands to each ECU in the vehicle control unit 130A.
  • the control unit 129A performs a learning process of learning the driving behavior of the non-robot car 100B based on the driving behavior information received by the on-vehicle gateway 110A or the communication unit 126A.
  • Although the in-vehicle gateway 110A and the traveling control system 120A are described as separate components, the in-vehicle gateway 110A can also be integrated with the traveling control system 120A.
  • The robot car 100A configured as described above learns the driving behavior of the human driver who drives the non-robot car 100B while performing automatic driving control based on the traveling state of the host vehicle and the driving behavior information of the host vehicle or the driving behavior information of the non-robot car 100B.
  • the non-robot car 100B has an on-vehicle gateway 110B and a traveling control system 120B.
  • the on-vehicle gateway 110B communicates with the computing system 200 under the control of the traveling control system 120B.
  • The in-vehicle gateway 110B inputs data received from the computing system 200 to the traveling control system 120B.
  • The on-vehicle gateway 110B transmits the data input from the traveling control system 120B to the computing system 200.
  • The traveling control system 120B includes a detection unit 121B, a vehicle information input unit 122B, a positioning unit 123B, a map information input unit 124B, an operation unit 125B, a communication unit 126B, a display unit 127B, a storage unit 128B, a control unit 129B, and the like.
  • the traveling condition recognition unit 100Ba in FIG. 24 is realized by a detection unit 121B, a vehicle information input unit 122B, a positioning unit 123B, a map information input unit 124B, an operation unit 125B, a communication unit 126B, and the like and a control unit 129B.
  • the driving operation detection unit 100Bb is realized by the driving operation detection function of the vehicle information input unit 122B.
  • the driving behavior information transmission unit 100Bc is realized by the in-vehicle gateway 110B.
  • The configuration and functions of the detection unit 121B, the vehicle information input unit 122B, the positioning unit 123B, the map information input unit 124B, the communication unit 126B, and the vehicle control unit 130B are the same as those of the detection unit 121A, the vehicle information input unit 122A, the positioning unit 123A, the map information input unit 124A, the communication unit 126A, and the vehicle control unit 130A of the robot car 100A.
  • the operation unit 125B is an input device for inputting operation instructions such as on / off of travel control, switching of control modes, switching of various displays on the display unit 127B, and the like.
  • the operation unit 125B is embodied by, for example, a switch provided on a spoke portion of a steering wheel of a vehicle.
  • the display unit 127B is a display device including a center display provided in the center of the instrument panel and an indicator provided in the meter panel. Information indicating the state of the host vehicle is displayed on the display unit 127B, and on / off of travel control and a control mode are displayed.
  • the control mode includes a manual operation mode and a driving support mode.
  • the storage unit 128B is a storage device that stores driving behavior information and recognition related information of the host vehicle.
  • the control unit 129B is an information processing apparatus mainly configured with a CPU, a ROM, a RAM, and the like (not shown), and centrally controls each part of the traveling control system 120B.
  • the control unit 129B executes various processes by the CPU executing the control program stored in the ROM.
  • Based on the position of the host vehicle input from the positioning unit 123B and the road map information input from the map information input unit 124B, the control unit 129B grasps information on road structures such as telephone poles and traffic signals, while the three-dimensional range sensor of the detection unit 121B detects the three-dimensional distances of surrounding objects, so that detected objects are identified as road structures or as objects on the road.
  • The control unit 129B stores, in the storage unit 128B, driving behavior information of the host vehicle based on the various information input from the detection unit 121B, the vehicle information input unit 122B, the positioning unit 123B, and the map information input unit 124B.
  • The driving behavior information of the own vehicle includes position-on-route to driving-operation correspondence information (a position-on-route to driving-operation correspondence table, etc.) and entry/exit route position to driving-operation correspondence information (an entry/exit route position to driving-operation correspondence table, etc.) obtained by the detection unit 121B, the vehicle information input unit 122B, the positioning unit 123B, and the map information input unit 124B.
  • the control unit 129B is connected to a vehicle control unit 130B which is a target of driving control.
  • Control unit 129B controls the traveling of the vehicle by giving commands to the respective ECUs in vehicle control unit 130B in accordance with the amount of driving operation (accelerator operation amount, brake operation amount, steering operation amount, etc.).
  • The control unit 129B analyzes in real time the traveling condition of the host vehicle, which changes from moment to moment, and performs a learning process of learning the driving behavior of the human driver driving the host vehicle based on the analysis result and the driving behavior information of the host vehicle.
  • In the driving support mode, the control unit 129B analyzes in real time the traveling condition of the vehicle, which changes from moment to moment, generates driving support information based on the analysis result and the driving behavior information of the vehicle, and notifies the driver of the driving support information using the display unit 127B or the like. The control unit 129B also performs the learning process of learning the driving behavior of the human driver driving the own vehicle in the driving support mode. The control unit 129B causes the storage unit 128B to store the driving behavior information of the host vehicle, so the storage unit 128B stores driving behavior information in which the learning result of the driving behavior of the human driver driving the own vehicle is reflected. The control unit 129B communicates with the computing system 200 via the on-vehicle gateway 110B.
  • Control unit 129B transmits the driving behavior information of the host vehicle accumulated in storage unit 128B to computing system 200 via on-vehicle gateway 110B.
  • Control unit 129B communicates with surrounding ground stationary objects and surrounding vehicles via communication unit 126B.
  • Control unit 129B transmits the driving behavior information of the host vehicle stored in storage unit 128B to the stationary ground object and surrounding vehicles via communication unit 126B.
  • The non-robot car 100B configured as described above performs traveling control according to the driving operation of the host vehicle by the human driver, while also performing various processing such as learning the driving behavior of the human driver driving the host vehicle and transmitting the driving behavior information of the host vehicle.
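  • How driving support information might be produced in the driving support mode described above can be sketched as follows: the current position is compared with the stored position-to-operation correspondence, and if a recorded operation lies just ahead, a message for the display unit is generated. The distance approximation, the notification threshold, and the message wording are assumptions for illustration only.

      # Minimal sketch: generate a driver notification when a recorded operation point is near.
      import math
      from typing import Dict, Optional, Tuple

      Point = Tuple[float, float]  # (latitude, longitude)

      def distance_m(a: Point, b: Point) -> float:
          # Rough planar distance in metres, adequate for the short ranges used here.
          return math.hypot((a[0] - b[0]) * 111_000.0, (a[1] - b[1]) * 91_000.0)

      def driving_support_message(current: Point,
                                  point_to_operation: Dict[Point, str],
                                  notify_within_m: float = 50.0) -> Optional[str]:
          # Return a notification if a recorded operation point is within the notification range.
          for point, operation in point_to_operation.items():
              d = distance_m(current, point)
              if d <= notify_within_m:
                  return f"In about {int(d)} m: {operation} was performed here previously."
          return None

      if __name__ == "__main__":
          table = {(35.6815, 139.7675): "deceleration (braking) before the narrow bend"}
          print(driving_support_message((35.6812, 139.7673), table))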
  • FIG. 27 is a flow chart illustrating the operation of the robot car training system of FIG. 24.
  • This flow chart also illustrates the contents of a robot car training method implemented by the robot car training system of FIG. 24.
  • In this robot car training system 1, the robot car 100A and the non-robot car 100B travel on the same route R (see FIG. 29), and driving instruction of the robot car 100A is performed by having the robot car 100A learn the driving of the non-robot car 100B.
  • The modes in which the robot car 100A and the non-robot car 100B travel along the same route R include a mode in which the non-robot car 100B travels ahead of the robot car 100A (FIG. 29A) and a mode in which the robot car 100A travels ahead of the non-robot car 100B.
  • the non-robot car 100B travels the same route R as the robot car 100A (S1: non-robot car travel step). While traveling on the route R, the non-robot car 100B recognizes the traveling condition of the host vehicle (S2: non-robot car traveling condition recognition step). The non-robot car 100B detects a driving operation by the human driver of the host vehicle while traveling along the route R (S3: driving operation detection step). The non-robot car 100B transmits, to the server 210, driving action information in which the traveling state of the host vehicle and the driving operation are associated (S4: driving action information transmission step, driving action information output step). The non-robot car 100B determines whether or not the own vehicle is traveling (S5), and if it is traveling (Yes in S5), the steps S2, S3 and S4 are repeatedly executed.
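  • The non-robot car side of this flow (steps S2 to S5) can be sketched as a loop that associates each recognized traveling situation with the detected human driving operation and sends the pair to the server. The plain functions and list used below stand in for the traveling condition recognition unit 100Ba, the driving operation detection unit 100Bb, and the driving behavior information transmission unit 100Bc; they are illustrative assumptions only.

      # Minimal sketch of the non-robot car loop: recognize (S2), detect (S3), transmit (S4),
      # repeated while the vehicle is traveling (S5).
      from typing import Dict, List

      def run_non_robot_car(samples: List[Dict[str, Dict[str, float]]], send_to_server) -> None:
          for sample in samples:                         # each sample is one moment on route R
              traveling_situation = sample["situation"]  # S2: traveling condition of the host vehicle
              driving_operation = sample["operation"]    # S3: operation performed by the human driver
              driving_behavior_info = {                  # S4: associate situation and operation
                  "car_id": "100B",
                  "situation": traveling_situation,
                  "operation": driving_operation,
              }
              send_to_server(driving_behavior_info)

      if __name__ == "__main__":
          server_inbox: List[dict] = []
          run_non_robot_car(
              samples=[{"situation": {"lat": 35.681, "lon": 139.767, "speed_kmh": 28.0},
                        "operation": {"accel": 0.0, "brake": 0.4, "steering": -12.0}}],
              send_to_server=server_inbox.append,
          )
          print(server_inbox)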
  • the computing system 200 receives driving behavior information from the non-robot car 100B (S11: driving behavior information receiving step).
  • the computing system 200 transmits the driving behavior information received from the non-robot car 100B to the robot car 100A (S12: driving behavior information transmitting step).
  • The robot car 100A travels the same route R as the non-robot car 100B (S21: robot car traveling step). While traveling on the route R, the robot car 100A recognizes the traveling situation of the host vehicle (S22: robot car traveling situation recognition step). The robot car 100A determines the driving operation to be performed based on the traveling situation of the host vehicle (S23: driving operation determination step). The robot car 100A performs automatic driving control so that the determined driving operation is executed (S24: automatic driving control step). The robot car 100A receives the driving behavior information of the non-robot car 100B from the server 210 (S25: driving behavior information receiving step, driving behavior information acquiring step). The robot car 100A learns the driving behavior of the human driver who drives the non-robot car 100B based on the driving behavior information received from the server 210 (S26: learning step).
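  • As a concrete picture of the data handled in steps S2-S4 and S11, S12, and S25, the following is a minimal Python sketch under assumptions not stated in the patent (the record fields and the in-memory stand-in for the server 210 are illustrative): driving behavior information is treated as a list of records, each pairing a traveling situation with the driving operation performed in it, produced by the non-robot car 100B and relayed to the robot car 100A.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DrivingBehaviorRecord:
    # A traveling situation paired with the driving operation performed in it.
    # Field names are illustrative assumptions, not taken from the patent.
    point_id: str          # e.g. "P3" on route R
    speed_kmh: float       # recognized traveling situation (simplified)
    obstacle_ahead: bool
    steering_deg: float    # driving operation by the human driver
    brake_amount: float    # 0.0 (no braking) .. 1.0 (full braking)

@dataclass
class Server:
    # Stands in for computing system 200: receives (S11) and forwards (S12).
    received: List[DrivingBehaviorRecord] = field(default_factory=list)

    def receive_driving_behavior(self, records):          # S11
        self.received.extend(records)

    def transmit_driving_behavior(self):                  # S12
        return list(self.received)

# Non-robot car 100B side: associate situation with operation and transmit (S2-S4).
non_robot_log = [
    DrivingBehaviorRecord("P1", 40.0, False, 0.0, 0.0),
    DrivingBehaviorRecord("P3", 35.0, True, -5.0, 0.4),
]
server = Server()
server.receive_driving_behavior(non_robot_log)

# Robot car 100A side: receive the driving behavior information (S25).
teacher_records = server.transmit_driving_behavior()
print(f"robot car received {len(teacher_records)} records, "
      f"first point: {teacher_records[0].point_id}")
```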
  • FIG. 28 is a flow chart illustrating the contents of the learning step S26 of FIG.
  • In the learning step, the robot car 100A first generates driving behavior information for the driving of the host vehicle performed under the automatic driving control of S24 (S26a1: driving behavior information generation step). Then, based on the difference between the driving behavior information (learning data set) of the non-robot car 100B received in the driving behavior information receiving step S25 and the driving behavior information of the host vehicle, learning processing is performed so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car 100B (S26a2: supervised learning step).
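  • The supervised learning step S26a2 can be pictured with the following minimal sketch, assuming a single scalar brake amount as the driving operation and a mean squared difference as the learning signal (both are illustrative assumptions, not the patent's specification): the non-robot car's operations serve as correct labels, and the robot car's own driving behavior information generated in S26a1 supplies the predictions to be corrected.

```python
# Minimal sketch of supervised learning step S26a2 (assumed representation:
# each record is (point id, brake amount); the loss is the mean squared
# difference between the robot car's operation and the non-robot car's).
teacher = {"P1": 0.0, "P2": 0.2, "P3": 0.4}   # non-robot car 100B (correct operations)
own     = {"P1": 0.0, "P2": 0.5, "P3": 0.1}   # robot car 100A (output of S26a1)

def supervised_loss(own, teacher):
    diffs = [(own[k] - teacher[k]) ** 2 for k in teacher]
    return sum(diffs) / len(diffs)

print("loss before learning:", supervised_loss(own, teacher))

# One illustrative correction pass: move each of the robot car's operations
# toward the teacher label (a stand-in for updating the controller itself).
learning_rate = 0.5
for k in teacher:
    own[k] -= learning_rate * (own[k] - teacher[k])

print("loss after one update:", supervised_loss(own, teacher))
```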
  • According to this robot car training system 1 and method, it is possible to make the robot car 100A learn the driving behavior of the human driver who drives the non-robot car 100B and thereby improve the automatic driving performance of the robot car 100A.
  • As the automatic driving performance of the robot car 100A improves, the safety and reliability of the robot car 100A improve, and hence the safety and reliability of the entire road traffic system in which the robot car 100A and the non-robot car 100B coexist also improve.
  • Whether the non-robot car 100B travels ahead of the robot car 100A (FIG. 29A) or the robot car 100A travels ahead of the non-robot car 100B, training of the robot car 100A can be carried out. In the mode of FIG. 29(A), the robot car 100A can be made to learn the driving behavior of the human driver who drives the non-robot car 100B based on the driving behavior information of the non-robot car 100B that has traveled the same route R ahead of it.
  • the robot car 100A can learn the driving behavior of the human driver who drives the non-robot car 100B that has already experienced the situation.
  • The driving behavior information of the non-robot car 100B received by the robot car 100A is driving behavior information in which the learning result of the driving behavior of the human driver obtained by the non-robot car 100B is reflected. Therefore, when the robot car 100A performs learning based on the driving behavior information of the non-robot car 100B, it can efficiently learn the driving behavior of the human driver driving the non-robot car 100B.
  • In the mode in which the robot car 100A travels ahead, the driving behavior of the human driver driving the non-robot car 100B can likewise be learned. That is, after the robot car 100A has itself experienced a new situation, it can learn the driving behavior of the human driver of the non-robot car 100B who experienced the same situation (reinforcement learning: learning based on a posteriori information). Also in this mode, the robot car 100A can efficiently learn the driving behavior of the human driver who drives the non-robot car 100B.
  • In addition, by performing automatic driving control based on the driving behavior information of the non-robot car 100B that has already traveled the same route R, the robot car 100A can travel the route R with the same high level of driving performance as the non-robot car 100B.
  • Suppose, for example, that the route R is a narrow, winding road, or a narrow road with many obstacles such as utility poles. It is not easy for a driver who is unaccustomed to driving, or one who rides a different vehicle each time through a car sharing service, to travel such a route R smoothly, and the robot car 100A is likewise not good at traveling smoothly on this kind of road.
  • Even so, by performing automatic driving control while referring to the driving behavior information of the non-robot car 100B, the robot car 100A can travel the route R smoothly with the same level of driving performance as the non-robot car 100B.
  • The training of the robot car 100A is most preferably performed by having the robot car 100A and the non-robot car 100B travel on the same route R under one and the same traveling situation.
  • For this purpose, a mock city for robot car training can be used. In this mock city, narrow and winding roads, narrow roads with many obstacles such as telephone poles, uneven roads, intersections with poor visibility, urban expressways, garages that are difficult for inexperienced drivers to enter and leave, and the like are provided.
  • Other vehicles, pedestrians, livestock, and the like can be freely arranged in this mock city.
  • In this mock city, it is also possible to make pedestrians (dummies) suddenly jump out onto the road, to throw in baseballs and balloons, or to scatter fallen leaves.
  • The robot car 100A compares the driving operation of the non-robot car 100B performed at each point (P1, P2, ..., Pn) on the route R with the driving operation of the host vehicle, and performs learning processing so that, at each point (P1, P2, ..., Pn), the host vehicle performs the same driving operation (correct operation) as the non-robot car 100B.
  • For example, when the operation amount yyy of the deceleration operation (braking operation) performed by the host vehicle at point P3 is larger (or smaller) than the operation amount xxx of the deceleration operation (braking operation) of the non-robot car 100B, learning processing is performed so that the operation amount yyy of the deceleration operation (braking operation) of the host vehicle at point P3 matches (or comes as close as possible to) xxx.
  • Among the elements of driving behavior, this learning processing is performed on the "recognition" of the traveling situation and on the "judgment/planning" for the recognized traveling situation.
  • Specifically, the robot car 100A first performs a process of finding an error in the "recognition" of the traveling situation. If a "recognition" error is found, a learning process is performed to correct that error. If no "recognition" error is found, a process is performed to find an error in the "judgment/planning" for the recognized traveling situation. If a "judgment/planning" error is found, a learning process is performed to correct that error. If no "judgment/planning" error is found, the process returns to finding a "recognition" error for another recognition target, or this learning process is ended.
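  • A minimal sketch of this error-search procedure follows, using the two examples described next (points PP3 and Pn-2) as illustrative data; the predicates for detecting errors are stand-ins, since the patent does not specify how errors are detected.

```python
# Minimal sketch of the error-search procedure (illustrative predicates only).
def find_recognition_error(point):
    # e.g. compare the recognized object against the reconfirmed sensor data
    return point.get("recognized") != point.get("reconfirmed")

def find_judgment_error(point):
    # e.g. the recognition was right but the planned operation differed from the teacher's
    return point.get("own_operation") != point.get("teacher_operation")

def learning_pass(points):
    for p in points:
        if find_recognition_error(p):
            print(p["id"], "-> correct 'recognition' parameters/data")
        elif find_judgment_error(p):
            print(p["id"], "-> correct 'judgment/planning' parameters/data")
        else:
            print(p["id"], "-> no error found, move to next recognition target")

learning_pass([
    {"id": "PP3",  "recognized": "stone", "reconfirmed": "leaf",
     "own_operation": "brake", "teacher_operation": "none"},
    {"id": "Pn-2", "recognized": "curve", "reconfirmed": "curve",
     "own_operation": "brake", "teacher_operation": "steer_right"},
])
```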
  • For example, when the non-robot car 100B did not perform a deceleration operation (braking operation) at point PP3 whereas the robot car 100A did perform a deceleration operation (braking operation) there, the robot car 100A reconfirms the object it recognized at point PP3 and the process that led to that recognition. This reconfirmation can be performed by reading out the driving behavior information and the recognition information of the host vehicle accumulated in the storage unit 128A.
  • The fact that the non-robot car 100B did not perform a deceleration operation (braking operation) at point PP3 means that there is a high possibility that no high-risk object, such as a fallen stone, was actually present in front of the vehicle. If the robot car 100A, based on what was recognized (detected) by the traveling situation recognition unit 100Aa (detection unit 121A), erroneously recognized an object with a low degree of risk, such as a leaf lying in front of the host vehicle, as an object with a high degree of risk, such as a stone, a learning process is performed to correct the processing content (program parameters and/or data) of the "recognition" that led to recognizing that object as a stone.
  • Similarly, when at point Pn-2 the non-robot car 100B merely traveled while steering to the right and did not perform a deceleration operation (braking operation), whereas the robot car 100A did perform a deceleration operation (braking operation), the robot car 100A reconfirms the object it recognized at point Pn-2 and the process that led to that recognition. If it is judged that the "recognition" of the traveling situation of the host vehicle was made correctly, a learning process is performed to correct the processing content (program parameters and/or data) of the "judgment/planning" that led to the deceleration operation (braking operation).
  • In the above description, the driving behavior information is transferred from the non-robot car 100B to the robot car 100A through the computing system 200, but the method of transferring the driving behavior information from the non-robot car 100B to the robot car 100A is arbitrary.
  • Examples include direct communication between the robot car 100A and the non-robot car 100B (inter-vehicle communication, see FIG. 7), delivery via the stationary ground object 410 (see FIG. 8), delivery via the road 420 (road-to-vehicle communication, see FIG. 9), and delivery via the portable terminal 500 (see FIG. 10).
  • FIG. 22A is also a block diagram showing a configuration example of the automatic driving control unit 100Ac of the robot car 100A in the robot car training system 1 of FIG.
  • The automatic driving control unit 100Ac in this configuration example includes a driving knowledge unit 101a that stores knowledge information (judgment criteria and the like) to be referred to when determining the driving operation of the host vehicle, and a learning processing unit (knowledge update processing unit) 102a that performs learning processing for updating the knowledge information stored in the driving knowledge unit 101a based on the driving behavior information received (acquired) by the driving behavior information receiving unit 100Ab.
  • FIG. 30 is a flow diagram illustrating the contents of the learning step S26 in this configuration example.
  • In the learning step of this configuration example, the robot car 100A first generates driving behavior information for the driving of the host vehicle performed under the automatic driving control of S24 (S26b1: driving behavior information generation step).
  • Then, using the driving behavior information of the non-robot car 100B received in the driving behavior information receiving step S25 as a learning data set (combinations of a traveling situation and the driving operation performed in that situation), learning processing (learning processing by supervised learning) is performed to update the knowledge information stored in the driving knowledge unit 101a so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car 100B (S26b2: supervised learning step).
  • In this way, the robot car 100A can be made to learn the driving behavior of the human driver who drives the non-robot car 100B by supervised learning using the driving behavior information of the non-robot car 100B as the learning data set.
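  • A minimal sketch of this knowledge-update style of learning (S26b1/S26b2) is shown below, assuming the driving knowledge unit 101a can be modeled as a lookup table from a discretized traveling situation to a recommended driving operation; the situation keys, operation fields, and blending rule are illustrative assumptions.

```python
# Driving knowledge unit 101a modeled as a table: situation key -> operation.
driving_knowledge = {
    ("narrow_road", "obstacle_ahead"): {"brake": 0.2, "steer_deg": 0.0},
}

# Learning data set from non-robot car 100B: (situation key, correct operation).
teacher_data = [
    (("narrow_road", "obstacle_ahead"), {"brake": 0.4, "steer_deg": -5.0}),
    (("narrow_road", "clear"),          {"brake": 0.0, "steer_deg": -3.0}),
]

def update_knowledge(knowledge, teacher_data, blend=0.5):
    """Move stored operations toward the teacher's; add unseen situations."""
    for situation, correct_op in teacher_data:
        if situation not in knowledge:
            knowledge[situation] = dict(correct_op)      # new knowledge entry
            continue
        stored = knowledge[situation]
        for key, target in correct_op.items():
            stored[key] = stored.get(key, 0.0) + blend * (target - stored.get(key, 0.0))

update_knowledge(driving_knowledge, teacher_data)
for situation, op in driving_knowledge.items():
    print(situation, op)
```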
  • FIG. 23A is also a block diagram showing another configuration example of the automatic driving control unit 100Ac of the robot car 100A in the robot car training system 1 of FIG.
  • The automatic driving control unit 100Ac of this other configuration example includes a driving operation determination unit 103a that determines, by calculation, the driving operation corresponding to the traveling situation recognized by the traveling situation recognition unit 100Aa, and a learning processing unit (parameter adjustment unit) 104a that adjusts the parameters of the driving operation determination function used in the driving operation determination unit 103a based on the driving behavior information received (acquired) by the driving behavior information receiving unit 100Ab.
  • FIG. 31 is a flow chart illustrating the contents of the learning step S26 in this other configuration example.
  • In this learning step, the robot car 100A first generates driving behavior information for the driving of the host vehicle performed under the automatic driving control of S24 (S26c1: driving behavior information generation step). Then, using the driving behavior information of the non-robot car 100B received in the driving behavior information receiving step S25 as a learning data set (combinations of a traveling situation and the driving operation performed in that situation), learning processing is performed to adjust the parameters of the driving operation determination function used in the driving operation determination unit 103a so that, in each traveling situation included in the driving behavior information, the host vehicle performs the same driving operation (correct operation) as the non-robot car 100B (supervised learning step).
  • In this way, the robot car 100A can be made to learn the driving behavior of the human driver who drives the non-robot car 100B by supervised learning using the driving behavior information of the non-robot car 100B as the learning data set.
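  • A minimal sketch of the parameter adjustment performed by the learning processing unit (parameter adjustment unit) 104a follows, assuming the driving operation determination function is a simple linear function of numeric situation features and that the adjustment is stochastic gradient descent on the squared difference from the human driver's operation (both assumptions are for illustration only).

```python
import random

# Driving operation determination function: brake = w . features + b
# (the traveling situation is assumed to be encoded as numeric features).
def determine_operation(w, b, features):
    return sum(wi * xi for wi, xi in zip(w, features)) + b

# Learning data set from non-robot car 100B: (situation features, human brake amount).
teacher = [
    ([1.0, 0.0], 0.0),   # e.g. [distance factor, obstacle flag]
    ([0.5, 1.0], 0.4),
    ([0.2, 1.0], 0.7),
]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(200):                       # parameter adjustment loop
    random.shuffle(teacher)
    for features, target in teacher:
        pred = determine_operation(w, b, features)
        err = pred - target                    # difference from the human driver
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
        b -= lr * err

for features, target in sorted(teacher, key=lambda t: t[1]):
    print(features, "human:", target, "robot:", round(determine_operation(w, b, features), 2))
```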
  • FIG. 32 is a conceptual diagram showing another configuration example of the robot car training system according to the present invention.
  • the configuration of the server 210 is different from that of the system of FIG.
  • The server 210 of the computing system 200 in this robot car training system 1 includes the driving behavior information receiving unit 210a, a unit that generates optimized driving behavior information based on the driving behavior information received by the driving behavior information receiving unit 210a, an optimization information updating unit 210d that updates and manages the optimized driving behavior information so that it is always the latest information, and a driving behavior information transmitting unit 210b that transmits the latest optimized driving behavior information to the robot car 100A.
  • FIG. 33 is a flow chart illustrating the operation of the robot car training system of FIG. This flow chart also illustrates the contents of a robot car training method implemented by that robot car training system.
  • the operation of the computing system 200 is different from the flow diagram of FIG.
  • the computing system 200 receives driving behavior information from the non-robot car 100B (S11: driving behavior information receiving step).
  • the computing system 200 generates driving behavior information optimized based on the driving behavior information received from the non-robot car 100B (S13: optimization information generating step).
  • the computing system 200 updates and manages the optimized driving behavior information to the latest information (S14: optimization information updating step).
  • the computing system 200 transmits the optimized latest driving behavior information to the robot car (S12: driving behavior information transmission step).
  • The robot car 100A that has received the optimized driving behavior information from the computing system 200 can learn the driving behavior of the human driver driving the non-robot car 100B based on the optimized driving behavior information.
  • Examples of the optimized driving behavior information include: driving behavior information optimized according to the vehicle attributes of the robot car 100A to which the driving behavior information is provided; driving behavior information optimized to minimize the possibility that the robot car 100A to which the driving behavior information is provided comes into contact with obstacles; driving behavior information optimized to minimize the energy consumption of the robot car 100A to which the driving behavior information is provided; driving behavior information optimized to maximize the regenerative energy of the robot car 100A to which the driving behavior information is provided; driving behavior information optimized to minimize the number of accelerations or the acceleration time in a predetermined traveling distance or a predetermined traveling time; driving behavior information optimized to minimize or maximize the number of braking operations or the braking time in a predetermined traveling distance or a predetermined traveling time; driving behavior information optimized to minimize the travel distance from the departure point to the arrival point; and driving behavior information optimized to minimize the travel time from the departure point to the arrival point.
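  • As an illustration of the optimization information generating and updating steps (S13, S14), the following sketch keeps, for each point on the route, the driving behavior record with the lowest estimated energy consumption; the record fields and the energy proxy are assumptions chosen to illustrate the energy-minimizing kind of optimization listed above.

```python
# Optimization information generating step (S13): for each route point, keep the
# record that minimizes an energy proxy (here, |acceleration| * speed), and update
# the optimized set whenever better information arrives (S14).
received = [
    {"point": "P1", "speed": 40.0, "accel": 1.2, "brake": 0.0},
    {"point": "P1", "speed": 38.0, "accel": 0.4, "brake": 0.0},
    {"point": "P2", "speed": 30.0, "accel": -0.8, "brake": 0.3},
]

def energy_proxy(rec):
    return abs(rec["accel"]) * rec["speed"]

optimized = {}                      # latest optimized driving behavior information
def update_optimized(records):      # S13 + S14
    for rec in records:
        best = optimized.get(rec["point"])
        if best is None or energy_proxy(rec) < energy_proxy(best):
            optimized[rec["point"]] = rec

update_optimized(received)
print(optimized)                    # transmitted to robot car 100A in S12
```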
  • The configuration of the automatic driving control unit 100Ac illustrated in FIGS. 22A and 23A is also applicable to the automatic driving control unit 100Ac of the robot car 100A in the system of FIG.
  • The contents of the learning step S26 in that case are the same as those shown in FIGS.
  [Other embodiments, etc.]
  • The robot car training system of the present invention is included in the road traffic system of the present invention. Therefore, the description of the embodiments of the robot car training system of the present invention given with reference to FIGS. 24 to 33 is also a description of embodiments of the road traffic system of the present invention. FIGS. 12 to 14 are also explanatory views of embodiments of the robot car training system according to the present invention, and the description of the embodiments of the road traffic system of the present invention given with reference to FIGS. 12 to 14 is also a description of embodiments of the robot car training system of the present invention.
  • In the road traffic system of the present invention, because the non-robot car 100B learns the driving behavior of the human driver of its host vehicle day by day and thereby improves its driving support performance day by day, the automatic driving performance of the robot car 100A can also be improved day by day. That is, in a situation where the robot car 100A and the non-robot car 100B coexist, the robot car 100A learns the driving techniques of the human driver who drives the non-robot car 100B, and the automatic driving performance of the robot car 100A can be improved with high efficiency. As the automatic driving performance of the robot car 100A improves, the safety and reliability of the entire road traffic system can be improved.
  • Driving behavior information obtained when a professional driver such as a taxi driver or a bus driver drives the non-robot car 100B can be used to train the robot car 100A. By establishing a scheme in which a professional driver who has provided driving behavior information useful for improving the automatic driving performance of the robot car 100A receives compensation, incentives can be given to professional drivers to encourage them to provide driving behavior information that makes use of their advanced driving techniques.
  • This is a convenient arrangement for both the robot car training system side, which receives the driving behavior information, and the taxi drivers, bus drivers, and others who provide it.
  • In the above description, the training of the robot car 100A is performed by having the robot car 100A and the non-robot car 100B travel on the same route R, but it need not necessarily be performed under the same traveling situation.
  • the teaching of the robot car 100A can also be performed in a real city.
  • In the road traffic system in which the robot car 100A and the non-robot car 100B are mixed, if an environment in which the robot car 100A can be trained is realized, the huge road traffic system formed on the earth itself becomes a robot car training system.
  • It is desirable that the learning processing unit 104a of the automatic driving control unit 100Ac perform learning processing (learning processing by reinforcement learning) in which a positive reward is given when the robot car takes a driving behavior closer to the driving behavior of the non-robot car 100B grasped from the driving behavior information of the non-robot car 100B, a negative reward (penalty) is given when it takes a driving behavior farther from the driving behavior of the non-robot car 100B, and the driving behavior expected to obtain the most reward is taken. According to this configuration, it is possible to make the robot car 100A learn the driving behavior of the human driver driving the non-robot car 100B by reinforcement learning based on the driving behavior information of the non-robot car 100B.
  • This reinforcement learning based on the driving behavior information of the non-robot car 100B includes learning that adjusts the parameters of the driving operation determination function used in the driving operation determination unit 103a so that a larger positive reward is given when a driving behavior closer to the driving behavior of the non-robot car 100B obtained from its driving behavior information is taken, a larger negative reward (penalty) is given when a driving behavior farther from the driving behavior of the non-robot car 100B is taken, and the driving behavior expected to obtain the most reward is taken.
  • In conventional reinforcement learning, learning proceeds merely by trial and error (for example, the vehicle collides with a surrounding object if the brake is not applied at a certain timing, or if the steering angle is too small for the curve of the route), so collisions with peripheral objects are repeated many times. That is, conventional reinforcement learning gives a positive reward when the vehicle travels along a determined route at a higher speed, and gives a negative reward (penalty) when it collides with peripheral objects such as guardrails and other vehicles or deviates from the determined route.
  • Compared with such conventional reinforcement learning, reinforcement learning based on the driving behavior information of the non-robot car 100B can improve the automatic driving performance of the robot car 100A with much higher efficiency, and the automatic driving performance of the robot car can be brought up to the level of the driving skill of a human driver in a short time.
  • It is desirable that the robot car 100A perform reinforcement learning (imitation learning) based on the driving behavior information of the non-robot car 100B while also performing conventional reinforcement learning (or unsupervised learning), in which learning is performed based only on the correctness of the driving behavior of the host vehicle regardless of the driving behavior of a human driver.
  • In this way, driving behavior that is not learned by reinforcement learning (imitation learning) based on the driving behavior information of the non-robot car 100B can be supplemented by driving behavior learned, as in the conventional case, through reinforcement learning (or unsupervised learning).
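  • The combination described above can be pictured as a reward function with two terms, sketched below under illustrative assumptions (scalar driving operations and hand-picked weights): an imitation term rewards closeness to the non-robot car's operation where teacher data exists, while a conventional term penalizes collisions and route deviation so that situations not covered by the teacher data still produce a learning signal.

```python
def imitation_reward(own_op, teacher_op):
    # Larger (less negative) when the robot car's operation is closer to the
    # human driver's operation grasped from the driving behavior information.
    return -abs(own_op - teacher_op)

def conventional_reward(collided, off_route, speed):
    # Conventional reinforcement learning term: reward progress, punish
    # collisions and deviation from the determined route.
    return speed * 0.01 - (10.0 if collided else 0.0) - (5.0 if off_route else 0.0)

def total_reward(own_op, teacher_op, collided, off_route, speed,
                 w_imitation=1.0, w_conventional=0.5):
    r = w_conventional * conventional_reward(collided, off_route, speed)
    if teacher_op is not None:                 # teacher data available for this situation
        r += w_imitation * imitation_reward(own_op, teacher_op)
    return r

# Example: braking slightly less than the human driver, no collision, on route.
print(total_reward(own_op=0.3, teacher_op=0.4, collided=False, off_route=False, speed=35.0))
# Example: a situation the non-robot car never experienced (no teacher label).
print(total_reward(own_op=0.0, teacher_op=None, collided=True, off_route=False, speed=20.0))
```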
  • the learning processing unit 104a of the automatic operation control unit 100Ac be formed of a multilayer neural network (deep neural network).
  • This configuration is realized by installing a multilayer neural network program in the automatic driving control unit 100Ac and executing learning processing by the multilayer neural network program.
  • With this configuration, the robot car 100A is provided with a deep learning function realized by the multilayer neural network program, and the robot car 100A can, like a human, extract by itself the characteristics of the driving behavior (recognition, judgment, planning, operation) of the human driver who drives the non-robot car 100B and carry out learning.
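  • As a deliberately tiny illustration of learning processing by a multilayer neural network, the following numpy sketch trains a two-layer network to map a numeric traveling-situation vector to a brake amount, using the non-robot car's operations as targets; the network size, features, and training details are all assumptions, not the patent's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learning data set: traveling-situation features -> brake amount by the human driver.
X = np.array([[1.0, 0.0], [0.5, 1.0], [0.2, 1.0], [0.8, 0.0]])   # assumed features
y = np.array([[0.0], [0.4], [0.7], [0.1]])                        # teacher operations

# Two-layer network (a multilayer neural network in miniature).
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for step in range(2000):
    h, pred = forward(X)
    err = pred - y                       # difference from the human driver's operation
    # Backpropagation of the mean squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
print(np.round(pred, 2).ravel(), "vs teacher", y.ravel())
```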
  • a robot car 100A having general-purpose driving knowledge and driving ability (strong AI) like human beings can be realized in the future.
  • the learning processing unit 104a of the automatic driving control unit 100Ac be formed of a neuromorphic chip.
  • With this configuration, the robot car 100A is provided with a deep learning function realized by the neuromorphic chip, and the robot car 100A can, like a human, extract by itself the characteristics of the driving behavior (recognition, judgment, planning, operation) of the human driver who drives the non-robot car 100B and carry out learning (self-organization).
  • a robot car 100A having general-purpose driving knowledge and driving ability (strong AI) like human beings can be realized in the future.
  • In this case, the robot car 100A is provided with a learning function that imitates a real brain, realized by a spiking neural network, and the robot car 100A can, like a human, extract by itself the characteristics of the driving behavior (recognition, judgment, planning, operation) of the human driver who drives the non-robot car 100B and carry out learning (self-organization).
  • It is desirable that the robot car 100A have a function of correcting the driving behavior information provided by the non-robot car 100B to optimal values according to the vehicle attributes of the host vehicle, and of performing learning processing and automatic driving control based on the corrected driving behavior information. For example, when the vehicle size and the inner/outer wheel turning difference of the host vehicle differ from those of the non-robot car 100B, the robot car 100A corrects the steering operation amounts and the brake operation timings included in the driving behavior information provided by the non-robot car 100B, and performs learning processing and automatic driving control based on the driving behavior information that includes the corrected steering operation amounts and brake operation timings.
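  • A rough illustration of this attribute-based correction is sketched below; the scaling rules (steering scaled by wheelbase ratio, brake timing advanced for a longer vehicle) are simple assumptions for illustration, not the patent's correction method.

```python
# Illustrative correction of provided driving behavior information according to
# the host vehicle's attributes (scaling rules are assumptions for illustration).
def correct_record(record, teacher_attr, own_attr):
    corrected = dict(record)
    # Larger wheelbase -> larger inner/outer wheel difference -> more steering needed.
    steer_scale = own_attr["wheelbase_m"] / teacher_attr["wheelbase_m"]
    corrected["steering_deg"] = record["steering_deg"] * steer_scale
    # Longer vehicle -> start braking earlier (advance the timing).
    length_excess = own_attr["length_m"] - teacher_attr["length_m"]
    corrected["brake_time_s"] = record["brake_time_s"] - 0.1 * max(0.0, length_excess)
    return corrected

teacher_attr = {"wheelbase_m": 2.6, "length_m": 4.4}   # non-robot car 100B
own_attr     = {"wheelbase_m": 3.2, "length_m": 5.2}   # robot car 100A

provided = {"point": "P3", "steering_deg": -5.0, "brake_time_s": 12.4}
print(correct_record(provided, teacher_attr, own_attr))
```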

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The purpose of the invention is to provide a road traffic system that makes it possible to improve the automatic driving performance of robot cars by using the experience of other vehicles. The invention relates to a computing system 200 that has a driving behavior information receiving function 210a for receiving driving behavior information from a non-robot car 100B and a driving behavior information transmitting function 210b for transmitting the driving behavior information to a robot car 100A. The robot car 100A has a traveling situation recognition function 100Aa for recognizing the traveling circumstances of the host vehicle, a driving behavior information receiving function 100Ab for receiving the driving behavior information of the non-robot car 100B from the computing system 200, and an automatic driving control function 100Ac for performing automatic driving control corresponding to the traveling circumstances of the host vehicle recognized by the traveling situation recognition function 100Aa while referring to the driving behavior information received by the driving behavior information receiving function 100Ab.
PCT/JP2016/078747 2015-10-01 2016-09-29 Voiture non robot, voiture robot, système de circulation routière, système de partage de véhicule, système d'apprentissage de voiture robot et procédé d'apprentissage de voiture robot WO2017057528A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017543533A JPWO2017057528A1 (ja) 2015-10-01 2016-09-29 非ロボットカー、ロボットカー、道路交通システム、車両共用システム、ロボットカー教習システム及びロボットカー教習方法

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2015196299 2015-10-01
JP2015-196298 2015-10-01
JP2015-196299 2015-10-01
JP2015196298 2015-10-01
JP2015-205132 2015-10-17
JP2015205132 2015-10-17

Publications (1)

Publication Number Publication Date
WO2017057528A1 true WO2017057528A1 (fr) 2017-04-06

Family

ID=58427570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/078747 WO2017057528A1 (fr) 2015-10-01 2016-09-29 Voiture non robot, voiture robot, système de circulation routière, système de partage de véhicule, système d'apprentissage de voiture robot et procédé d'apprentissage de voiture robot

Country Status (2)

Country Link
JP (1) JPWO2017057528A1 (fr)
WO (1) WO2017057528A1 (fr)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018198824A1 (fr) * 2017-04-26 2018-11-01 日立オートモティブシステムズ株式会社 Dispositif de commande de véhicule et système d'aide à la conduite
WO2018220829A1 (fr) * 2017-06-02 2018-12-06 本田技研工業株式会社 Véhicule et dispositif de génération de politique
WO2019049141A1 (fr) * 2017-09-06 2019-03-14 Osr Enterprises Ag Système et procédé d'utilisation de connaissances collectées par un véhicule
WO2019077685A1 (fr) * 2017-10-17 2019-04-25 本田技研工業株式会社 Système de génération de modèle de fonctionnement, véhicule dans un système de génération de modèle de fonctionnement, procédé de traitement et programme
CN109711946A (zh) * 2018-12-28 2019-05-03 深圳市元征科技股份有限公司 一种车辆共享的方法及车辆共享服务器
CN109726795A (zh) * 2017-10-30 2019-05-07 罗伯特·博世有限公司 用于训练中央人工智能模块的方法
JP2019106674A (ja) * 2017-12-14 2019-06-27 Adiva株式会社 自動運転制御システム、自動運転制御方法、及び車両
EP3359439A4 (fr) * 2015-10-05 2019-07-10 Aptiv Technologies Limited Modèle de direction humanisé pour véhicules automatisés
US10860028B2 (en) * 2017-08-14 2020-12-08 Honda Motor Co., Ltd. Vehicle control apparatus, vehicle control method, and program
JP6818118B1 (ja) * 2019-11-27 2021-01-20 株式会社日立製作所 演算装置、車載装置、自動運転システム
US10981564B2 (en) 2018-08-17 2021-04-20 Ford Global Technologies, Llc Vehicle path planning
JP2021086638A (ja) * 2019-11-27 2021-06-03 株式会社日立製作所 演算装置、車載装置、自動運転システム
US11037063B2 (en) 2017-08-18 2021-06-15 Diveplane Corporation Detecting and correcting anomalies in computer-based reasoning systems
JP2021109508A (ja) * 2020-01-09 2021-08-02 トヨタ自動車株式会社 車両用制御装置、車両制御方法及び車両用制御プログラム
US11092962B1 (en) * 2017-11-20 2021-08-17 Diveplane Corporation Computer-based reasoning system for operational situation vehicle control
JP2021520541A (ja) * 2018-04-04 2021-08-19 トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド 周囲のビークルの観察を使用して交通フローを判定するためのシステム及び方法
US11176465B2 (en) 2018-11-13 2021-11-16 Diveplane Corporation Explainable and automated decisions in computer-based reasoning systems
US11205126B1 (en) 2017-10-04 2021-12-21 Diveplane Corporation Evolutionary programming techniques utilizing context indications
KR20220019204A (ko) * 2020-08-07 2022-02-16 한국전자통신연구원 자율주행 차량에서의 주행 경로 생성 및 제어 시스템 및 방법
US11385633B2 (en) 2018-04-09 2022-07-12 Diveplane Corporation Model reduction and training efficiency in computer-based reasoning and artificial intelligence systems
US11454939B2 (en) 2018-04-09 2022-09-27 Diveplane Corporation Entropy-based techniques for creation of well-balanced computer based reasoning systems
US11494669B2 (en) 2018-10-30 2022-11-08 Diveplane Corporation Clustering, explainability, and automated decisions in computer-based reasoning systems
US11625625B2 (en) 2018-12-13 2023-04-11 Diveplane Corporation Synthetic data generation in computer-based reasoning systems
US11640561B2 (en) 2018-12-13 2023-05-02 Diveplane Corporation Dataset quality for synthetic data generation in computer-based reasoning systems
US11657294B1 (en) 2017-09-01 2023-05-23 Diveplane Corporation Evolutionary techniques for computer-based optimization and artificial intelligence systems
US11669769B2 (en) 2018-12-13 2023-06-06 Diveplane Corporation Conditioned synthetic data generation in computer-based reasoning systems
US11676069B2 (en) 2018-12-13 2023-06-13 Diveplane Corporation Synthetic data generation using anonymity preservation in computer-based reasoning systems
US11727286B2 (en) 2018-12-13 2023-08-15 Diveplane Corporation Identifier contribution allocation in synthetic data generation in computer-based reasoning systems
US11763176B1 (en) 2019-05-16 2023-09-19 Diveplane Corporation Search and query in computer-based reasoning systems
US11823080B2 (en) 2018-10-30 2023-11-21 Diveplane Corporation Clustering, explainability, and automated decisions in computer-based reasoning systems
KR102606632B1 (ko) * 2022-11-08 2023-11-30 주식회사 라이드플럭스 인공지능 기반 자율주행 차량의 주행 경로 보정방법, 장치 및 컴퓨터프로그램
US11880775B1 (en) 2018-06-05 2024-01-23 Diveplane Corporation Entropy-based techniques for improved automated selection in computer-based reasoning systems
US11941542B2 (en) 2017-11-20 2024-03-26 Diveplane Corporation Computer-based reasoning system for operational situation control of controllable systems

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1186183A (ja) * 1997-09-11 1999-03-30 Hitachi Ltd 交通流計測装置、及びこれを利用する装置
JP2004030132A (ja) * 2002-06-25 2004-01-29 Mitsubishi Heavy Ind Ltd 移動体制御装置及び方法、遠隔制御装置、移動体制御システム、コンピュータプログラム
JP2009137410A (ja) * 2007-12-05 2009-06-25 Toyota Motor Corp 走行軌跡生成方法及び走行軌跡生成装置
JP2015067154A (ja) * 2013-09-30 2015-04-13 トヨタ自動車株式会社 運転支援装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2995970B2 (ja) * 1991-12-18 1999-12-27 トヨタ自動車株式会社 車両用走行制御装置
JP3143063B2 (ja) * 1996-06-07 2001-03-07 株式会社日立製作所 移動体の走行制御装置
JP4480995B2 (ja) * 2003-12-18 2010-06-16 富士重工業株式会社 車両用運転支援装置
JP5003465B2 (ja) * 2007-12-25 2012-08-15 住友電気工業株式会社 運転支援システム、路上通信装置、および、情報提供装置
JP5287736B2 (ja) * 2010-01-12 2013-09-11 トヨタ自動車株式会社 車両制御装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1186183A (ja) * 1997-09-11 1999-03-30 Hitachi Ltd 交通流計測装置、及びこれを利用する装置
JP2004030132A (ja) * 2002-06-25 2004-01-29 Mitsubishi Heavy Ind Ltd 移動体制御装置及び方法、遠隔制御装置、移動体制御システム、コンピュータプログラム
JP2009137410A (ja) * 2007-12-05 2009-06-25 Toyota Motor Corp 走行軌跡生成方法及び走行軌跡生成装置
JP2015067154A (ja) * 2013-09-30 2015-04-13 トヨタ自動車株式会社 運転支援装置

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3359439A4 (fr) * 2015-10-05 2019-07-10 Aptiv Technologies Limited Modèle de direction humanisé pour véhicules automatisés
EP4105105A1 (fr) * 2015-10-05 2022-12-21 Aptiv Technologies Limited Modèle de direction humanisé pour véhicules automatisés
JP2018185669A (ja) * 2017-04-26 2018-11-22 日立オートモティブシステムズ株式会社 車両制御装置および運転支援システム
WO2018198824A1 (fr) * 2017-04-26 2018-11-01 日立オートモティブシステムズ株式会社 Dispositif de commande de véhicule et système d'aide à la conduite
WO2018220829A1 (fr) * 2017-06-02 2018-12-06 本田技研工業株式会社 Véhicule et dispositif de génération de politique
JPWO2018220829A1 (ja) * 2017-06-02 2020-04-16 本田技研工業株式会社 ポリシー生成装置及び車両
US10860028B2 (en) * 2017-08-14 2020-12-08 Honda Motor Co., Ltd. Vehicle control apparatus, vehicle control method, and program
US11037063B2 (en) 2017-08-18 2021-06-15 Diveplane Corporation Detecting and correcting anomalies in computer-based reasoning systems
US11748635B2 (en) 2017-08-18 2023-09-05 Diveplane Corporation Detecting and correcting anomalies in computer-based reasoning systems
US11657294B1 (en) 2017-09-01 2023-05-23 Diveplane Corporation Evolutionary techniques for computer-based optimization and artificial intelligence systems
WO2019049141A1 (fr) * 2017-09-06 2019-03-14 Osr Enterprises Ag Système et procédé d'utilisation de connaissances collectées par un véhicule
US11205126B1 (en) 2017-10-04 2021-12-21 Diveplane Corporation Evolutionary programming techniques utilizing context indications
US11853900B1 (en) 2017-10-04 2023-12-26 Diveplane Corporation Evolutionary programming techniques utilizing context indications
US11586934B1 (en) 2017-10-04 2023-02-21 Diveplane Corporation Evolutionary programming techniques utilizing context indications
CN111201554A (zh) * 2017-10-17 2020-05-26 本田技研工业株式会社 行驶模型生成系统、行驶模型生成系统中的车辆、处理方法以及程序
JPWO2019077685A1 (ja) * 2017-10-17 2020-11-05 本田技研工業株式会社 走行モデル生成システム、走行モデル生成システムにおける車両、処理方法およびプログラム
CN111201554B (zh) * 2017-10-17 2022-04-08 本田技研工业株式会社 行驶模型生成系统、行驶模型生成系统中的车辆、处理方法以及存储介质
WO2019077685A1 (fr) * 2017-10-17 2019-04-25 本田技研工業株式会社 Système de génération de modèle de fonctionnement, véhicule dans un système de génération de modèle de fonctionnement, procédé de traitement et programme
CN109726795A (zh) * 2017-10-30 2019-05-07 罗伯特·博世有限公司 用于训练中央人工智能模块的方法
US11941542B2 (en) 2017-11-20 2024-03-26 Diveplane Corporation Computer-based reasoning system for operational situation control of controllable systems
US11092962B1 (en) * 2017-11-20 2021-08-17 Diveplane Corporation Computer-based reasoning system for operational situation vehicle control
JP2019106674A (ja) * 2017-12-14 2019-06-27 Adiva株式会社 自動運転制御システム、自動運転制御方法、及び車両
JP7043241B2 (ja) 2017-12-14 2022-03-29 aidea株式会社 自動運転制御システム、自動運転制御方法、及び車両
JP7179866B2 (ja) 2018-04-04 2022-11-29 トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド 周囲のビークルの観察を使用して交通フローを判定するためのシステム及び方法
JP2021520541A (ja) * 2018-04-04 2021-08-19 トヨタ モーター エンジニアリング アンド マニュファクチャリング ノース アメリカ,インコーポレイティド 周囲のビークルの観察を使用して交通フローを判定するためのシステム及び方法
US11454939B2 (en) 2018-04-09 2022-09-27 Diveplane Corporation Entropy-based techniques for creation of well-balanced computer based reasoning systems
US11385633B2 (en) 2018-04-09 2022-07-12 Diveplane Corporation Model reduction and training efficiency in computer-based reasoning and artificial intelligence systems
US12001177B2 (en) 2018-04-09 2024-06-04 Howso Incorporated Entropy-based techniques for creation of well-balanced computer based reasoning systems
US11880775B1 (en) 2018-06-05 2024-01-23 Diveplane Corporation Entropy-based techniques for improved automated selection in computer-based reasoning systems
US10981564B2 (en) 2018-08-17 2021-04-20 Ford Global Technologies, Llc Vehicle path planning
US11494669B2 (en) 2018-10-30 2022-11-08 Diveplane Corporation Clustering, explainability, and automated decisions in computer-based reasoning systems
US11823080B2 (en) 2018-10-30 2023-11-21 Diveplane Corporation Clustering, explainability, and automated decisions in computer-based reasoning systems
US11361232B2 (en) 2018-11-13 2022-06-14 Diveplane Corporation Explainable and automated decisions in computer-based reasoning systems
US11361231B2 (en) 2018-11-13 2022-06-14 Diveplane Corporation Explainable and automated decisions in computer-based reasoning systems
US11176465B2 (en) 2018-11-13 2021-11-16 Diveplane Corporation Explainable and automated decisions in computer-based reasoning systems
US11741382B1 (en) 2018-11-13 2023-08-29 Diveplane Corporation Explainable and automated decisions in computer-based reasoning systems
US11676069B2 (en) 2018-12-13 2023-06-13 Diveplane Corporation Synthetic data generation using anonymity preservation in computer-based reasoning systems
US11625625B2 (en) 2018-12-13 2023-04-11 Diveplane Corporation Synthetic data generation in computer-based reasoning systems
US12008446B2 (en) 2018-12-13 2024-06-11 Howso Incorporated Conditioned synthetic data generation in computer-based reasoning systems
US11640561B2 (en) 2018-12-13 2023-05-02 Diveplane Corporation Dataset quality for synthetic data generation in computer-based reasoning systems
US11669769B2 (en) 2018-12-13 2023-06-06 Diveplane Corporation Conditioned synthetic data generation in computer-based reasoning systems
US11783211B2 (en) 2018-12-13 2023-10-10 Diveplane Corporation Synthetic data generation in computer-based reasoning systems
US11727286B2 (en) 2018-12-13 2023-08-15 Diveplane Corporation Identifier contribution allocation in synthetic data generation in computer-based reasoning systems
CN109711946A (zh) * 2018-12-28 2019-05-03 深圳市元征科技股份有限公司 一种车辆共享的方法及车辆共享服务器
US11763176B1 (en) 2019-05-16 2023-09-19 Diveplane Corporation Search and query in computer-based reasoning systems
CN113179635A (zh) * 2019-11-27 2021-07-27 株式会社日立制作所 运算装置、车载装置及自动驾驶系统
JP2021086638A (ja) * 2019-11-27 2021-06-03 株式会社日立製作所 演算装置、車載装置、自動運転システム
WO2021106295A1 (fr) * 2019-11-27 2021-06-03 株式会社日立製作所 Dispositif de calcul, dispositif monté sur véhicule et système de conduite autonome
JP2021084527A (ja) * 2019-11-27 2021-06-03 株式会社日立製作所 演算装置、車載装置、自動運転システム
JP6818118B1 (ja) * 2019-11-27 2021-01-20 株式会社日立製作所 演算装置、車載装置、自動運転システム
JP2021109508A (ja) * 2020-01-09 2021-08-02 トヨタ自動車株式会社 車両用制御装置、車両制御方法及び車両用制御プログラム
JP7211375B2 (ja) 2020-01-09 2023-01-24 トヨタ自動車株式会社 車両用制御装置
KR20220019204A (ko) * 2020-08-07 2022-02-16 한국전자통신연구원 자율주행 차량에서의 주행 경로 생성 및 제어 시스템 및 방법
US11866067B2 (en) 2020-08-07 2024-01-09 Electronics And Telecommunications Research Institute System and method for generating and controlling driving paths in autonomous vehicle
KR102525191B1 (ko) 2020-08-07 2023-04-26 한국전자통신연구원 자율주행 차량에서의 주행 경로 생성 및 제어 시스템 및 방법
KR102606632B1 (ko) * 2022-11-08 2023-11-30 주식회사 라이드플럭스 인공지능 기반 자율주행 차량의 주행 경로 보정방법, 장치 및 컴퓨터프로그램

Also Published As

Publication number Publication date
JPWO2017057528A1 (ja) 2018-08-30

Similar Documents

Publication Publication Date Title
WO2017057528A1 (fr) Voiture non robot, voiture robot, système de circulation routière, système de partage de véhicule, système d'apprentissage de voiture robot et procédé d'apprentissage de voiture robot
EP3795457B1 (fr) Préparation de véhicules autonomes pour virages
EP3428028B1 (fr) Dispositif de commande de véhicule embarqué et procédé de commande de véhicule
CN110325928B (zh) 自主车辆运行管理
RU2660158C1 (ru) Устройство управления движением и способ управления движением
RU2657656C1 (ru) Устройство управления движением и способ управления движением
RU2659670C1 (ru) Устройство и способ управления движением для транспортного средства
CN110356402B (zh) 车辆控制装置、车辆控制方法及存储介质
JP6201102B2 (ja) 自動車とコンピューティングシステム
EP3626569B1 (fr) Dispositif d'aide à la conduite et procédé d'aide à la conduite
RU2671457C1 (ru) Устройство управления движением и способ управления движением
US10967861B2 (en) Using discomfort for speed planning in responding to tailgating vehicles for autonomous vehicles
JP2019159426A (ja) 車両制御装置、車両制御方法、およびプログラム
US11945433B1 (en) Risk mitigation in speed planning
US20230168095A1 (en) Route providing device and route providing method therefor
CA3094795C (fr) Utilisation d'inconfort pour une programmation de vitesse de vehicules autonomes
JP2019137189A (ja) 車両制御システム、車両制御方法、およびプログラム
EP3995379B1 (fr) Prédiction de comportement d'agents ferroviaires pour système de conduite autonome
JP2019156269A (ja) 車両制御装置、車両制御方法、及びプログラム
KR20210070387A (ko) 자율 주행 차량들에 대한 폴백 거동들을 구현하기 위한 시스템
CN112977473A (zh) 用于预测移动障碍物驶出十字路口的方法及系统
An et al. Automatic valet parking system incorporating a nomadic device and parking servers
CN115593429A (zh) 自动驾驶车辆对紧急车辆的响应
US20220176987A1 (en) Trajectory limiting for autonomous vehicles
US12017681B2 (en) Obstacle prediction system for autonomous driving vehicles

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16851699

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017543533

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16851699

Country of ref document: EP

Kind code of ref document: A1