JP2019096012A - Method and device for controlling mobile body

Method and device for controlling mobile body

Info

Publication number
JP2019096012A
Authority
JP
Japan
Prior art keywords
distance
moving object
control
stop
mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2017224130A
Other languages
Japanese (ja)
Other versions
JP6839067B2 (en)
Inventor
水谷 后宏 (Kimihiro Mizutani)
吉田 学 (Manabu Yoshida)
秦 崇洋 (Takahiro Hata)
社家 一平 (Ippei Shake)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp
Priority to JP2017224130A
Publication of JP2019096012A
Application granted
Publication of JP6839067B2
Legal status: Active

Landscapes

  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

To provide a method and a device for controlling a mobile body that consider the signal state and the situation of obstacles simultaneously and can reduce, for example, the time to a destination.

SOLUTION: The mobile body control method according to the present invention grasps the state of the mobile body, for example its position recognized by a camera, the distances to the vehicles in front of and behind it, the distances to the vehicles in front and behind in the adjacent lanes, and the lamp cycles of the traffic signals obtained from optical beacons; calculates from that state a feature vector used in reinforcement learning; and calculates a control guideline by reinforcement learning, using the reward value obtained from the current feature vector and control guideline.

SELECTED DRAWING: Figure 1

Description

The present disclosure relates to a technique for controlling the operation of a mobile body.

As techniques for controlling the movement of a mobile body, research has been conducted on steering the mobile body so as to avoid obstacles.

Non-Patent Document 1: UTMS Society of Japan, "AMIS: Advanced Mobile Information Systems", http://www.utms.or.jp/english/system/amis.html.
Non-Patent Document 2: VICS, "Beacon and FM broadcasting", https://www.vics.or.jp/en/vics/beacon.html.
Non-Patent Document 3: HONDA, "Honda SENSING technology", http://www.honda.co.jp/hondasensing/.
Non-Patent Document 4: W. Liu, J. Liu, J. Peng, and Z. Zhu, "Cooperative multi-agent traffic signal control system using fast gradient-descent function approximation for V2I networks", in Proc. IEEE International Conference on Communications (ICC), 2014, pp. 2562-2567.
Non-Patent Document 5: W. Lu, Y. Zhang, and Y. Xie, "A multi-agent adaptive traffic signal control system using swarm intelligence and neuro-fuzzy reinforcement learning", in Proc. IEEE Forum on Integrated and Sustainable Transportation Systems (FISTS), 2011, pp. 233-238.
Non-Patent Document 6: TOYOTA, "Toyota to boost investment in artificial intelligence by strengthening relationship with Preferred Networks Inc.", http://newsroom.toyota.co.jp/en/detail/10679722/.
Non-Patent Document 7: R. S. Sutton and A. G. Barto, Introduction to Reinforcement Learning. MIT Press, Cambridge, 1998, vol. 135.
Non-Patent Document 8: V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., "Human-level control through deep reinforcement learning", Nature, vol. 518, no. 7540, pp. 529-533, 2015.
Non-Patent Document 9: H. van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning", in AAAI, 2016, pp. 2094-2100.
Non-Patent Document 10: Masakazu Mukai, Hiroshi Aoki, and Taketoshi Kawabe, "Model predictive fuel-saving driving control by mixed integer programming using traffic signal information", Transactions of the Society of Instrument and Control Engineers, vol. 51, no. 12, pp. 866-872, 2015.
Non-Patent Document 11: V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines", in Proc. International Conference on Machine Learning (ICML), 2010, pp. 807-814.
Non-Patent Document 12: S. Adam, L. Busoniu, and R. Babuska, "Experience replay for real-time reinforcement learning control", IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 2, pp. 201-212, 2012.

In recent years, technological advances have made it possible to provide vehicles with route signal information from the traffic control center (such as the distance to the signal ahead in the direction of travel and the remaining time of the red light at an intersection) through advanced optical beacons installed at the roadside, which has rapidly increased the feasibility of automated driving assistance (see, for example, Non-Patent Documents 1 and 2). When driving assistance on general roads is considered on the basis of these technologies, a technique is needed that, after acquiring the signal states, takes both those states and the behavior of surrounding vehicles into account so that the vehicle can reach its destination quickly.

As driving assistance that takes signal states into account, techniques aimed at avoiding red lights and reducing waiting time have been proposed: a method that adjusts acceleration and deceleration to avoid stopping at a red light based on the state of the signal one ahead (see, for example, Non-Patent Document 3), and methods in which the signals themselves adjust their lamp intervals (see, for example, Non-Patent Documents 4 and 5). Many techniques also exist for driving safely based on the behavior of other vehicles; in recent years, techniques have been proposed that use deep learning to detect obstacles and the situation of other vehicles and to perform avoidance maneuvers automatically (see, for example, Non-Patent Document 6).

Driving assistance that takes signal states into account considers only the nearest signal, not the signals beyond it, and therefore cannot shorten the time spent stopped at red lights on the way to the destination or the total travel time. Existing driving assistance that accounts for changes in the surrounding environment can take avoidance action against obstacles (the movement of other vehicles), but no driving assistance technique exists that simultaneously considers the signal states as well.

The object of the present invention is therefore to provide a mobile body control method and a mobile body control device that consider the signal states and the situation of obstacles simultaneously and can shorten, among other things, the time to the destination.

To achieve this object, the mobile body control method according to the present invention grasps the state of the mobile body, such as its position recognized by a camera or the like, the distances to the vehicles in front of and behind it, the distances to the vehicles in front and behind in the adjacent lanes, and the lamp cycles of the signals obtained from optical beacons; calculates from that state the feature vector used in reinforcement learning; and calculates a control guideline by reinforcement learning using the reward value obtained from the current feature vector and control guideline.

Specifically, the mobile body control method according to the present invention performs:
a state grasping procedure of acquiring the position of a mobile body, the alert times until each of a plurality of stop commands is issued to the mobile body, the distances from the mobile body to each of the stop commands, and the distances to other mobile bodies;
a feature extraction procedure of calculating the current speed of the mobile body from its position and, from the alert times and the distances to the stop commands, as many spatio-temporal distances as there are stop commands, and acquiring a feature vector composed of the current speed, the spatio-temporal distances, and the distances to the other mobile bodies; and
a learning control procedure of performing reinforcement learning using a reward value, representing avoidance of the stop commands and avoidance of contact with the other mobile bodies, that is obtained as a result of applying to the feature vector a control guideline causing the mobile body to perform at least one of acceleration/deceleration and a change of direction, and calculating a new control guideline to control the mobile body.

The mobile body control device according to the present invention comprises:
a state grasping unit that acquires the position of a mobile body, the alert times until each of a plurality of stop commands is issued to the mobile body, the distances from the mobile body to each of the stop commands, and the distances to other mobile bodies;
a feature extraction unit that calculates the current speed of the mobile body from its position and, from the alert times and the distances to the stop commands, as many spatio-temporal distances as there are stop commands, and acquires a feature vector composed of the current speed, the spatio-temporal distances, and the distances to the other mobile bodies; and
a learning control unit that performs reinforcement learning using a reward value, representing avoidance of the stop commands and avoidance of contact with the other mobile bodies, that is obtained as a result of applying to the feature vector a control guideline causing the mobile body to perform at least one of acceleration/deceleration and a change of direction, and calculates a new control guideline to control the mobile body.

Because the distances to surrounding vehicles and the state of the signals on the way to the destination (the timing of the red lights) are included in the feature vector, and a control guideline for accelerating and decelerating the own vehicle is derived from it, the speed of the own vehicle can be adjusted to shorten the time spent stopped. The present invention can therefore provide a mobile body control method and a mobile body control device that consider the signal states and the situation of obstacles simultaneously and can shorten, among other things, the time to the destination.

The present invention can provide a mobile body control method and a mobile body control device that consider the signal states and the situation of obstacles simultaneously and can shorten, among other things, the time to the destination.

FIG. 1 is a flowchart explaining the mobile body control method according to the present invention.
FIG. 2 is a diagram explaining the feature vector of the mobile body used in the mobile body control method according to the present invention.
FIG. 3 is a diagram explaining, among the terms of the reward function evaluated in the mobile body control method according to the present invention, the result of accelerating the vehicle with respect to a signal state.
FIG. 4 is a diagram explaining the mobile body control device according to the present invention.

Embodiments of the present invention will be described with reference to the accompanying drawings. The embodiments described below are examples of the present invention, and the present invention is not limited to them. In this specification and the drawings, components with the same reference numerals denote the same elements.

Reinforcement learning is a method that sets the values of states, actions, and rewards according to the environment and computes, for every defined state, the action that maximizes the cumulative sum of rewards; it has been applied to techniques such as obstacle avoidance. The driving assistance addressed in this application, which takes both signal information and the behavior of other vehicles into account, is achieved with three modules: a state grasping unit 11, a feature extraction unit 12, and a learning control unit 13 (see FIG. 1).

The mobile body control device 301 of FIG. 1 comprises:
a state grasping unit 11 that acquires the position of a mobile body, the alert times until each of a plurality of stop commands is issued to the mobile body, the distances from the mobile body to each of the stop commands, and the distances to other mobile bodies;
a feature extraction unit 12 that calculates the current speed of the mobile body from its position and, from the alert times and the distances to the stop commands, as many spatio-temporal distances as there are stop commands, and acquires a feature vector composed of the current speed, the spatio-temporal distances, and the distances to the other mobile bodies; and
a learning control unit 13 that performs reinforcement learning using a reward value, representing avoidance of the stop commands and avoidance of contact with the other mobile bodies, that is obtained as a result of applying to the feature vector a control guideline causing the mobile body to perform at least one of acceleration/deceleration and a change of direction, and calculates a new control guideline to control the mobile body.

FIG. 1 is a flowchart explaining the mobile body control method of this embodiment. In this method:
in a state grasping procedure S11, the state grasping unit 11 acquires the position of the mobile body, the alert times until each of a plurality of stop commands is issued to the mobile body, the distances from the mobile body to each of the stop commands, and the distances to other mobile bodies;
in a feature extraction procedure S12, the feature extraction unit 12 calculates the current speed of the mobile body from its position and, from the alert times and the distances to the stop commands, as many spatio-temporal distances as there are stop commands, and acquires a feature vector composed of the current speed, the spatio-temporal distances, and the distances to the other mobile bodies; and
in a learning control procedure S13, the learning control unit 13 performs reinforcement learning using a reward value, representing avoidance of the stop commands and avoidance of contact with the other mobile bodies, that is obtained as a result of applying to the feature vector a control guideline causing the mobile body to perform at least one of acceleration/deceleration and a change of direction, and calculates a new control guideline to control the mobile body.

[State grasping unit]
The state grasping unit 11 acquires the current position of the mobile body, the lamp cycles of the signals obtained from optical beacons, the distances to the vehicles in front of and behind the mobile body, and the distances to the vehicles in front and behind in both adjacent lanes. These can be acquired with on-board sensors, cameras, and the like.
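For illustration, the quantities acquired in S11 could be bundled as in the following minimal Python sketch; all field names are hypothetical, since the patent specifies what is observed but not how it is represented:

```python
# Hypothetical container for the quantities acquired by the state grasping unit 11.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObservedState:
    position: float                              # own position along the route [m]
    signal_info: List[Tuple[float, float]]       # per signal: (distance [m], time until red [s]) from the optical beacon
    front_distances: Tuple[float, float, float]  # vehicle ahead: own lane, left lane, right lane [m]
    back_distances: Tuple[float, float, float]   # vehicle behind: own lane, left lane, right lane [m]
    speed_history: List[float]                   # recent speeds [m/s], derivable from successive positions
```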

[Feature extraction unit]
The feature extraction unit 12 creates, from the information obtained from the state grasping unit 11, the feature vector used in reinforcement learning and passes it to the learning control unit 13. Equation 1 is an example of the feature vector s_t:

s_t = (v, dt_1, dt_2, ..., dt_n, df_1, df_2, df_3, db_1, db_2, db_3)    (Equation 1)

Here, v is the current speed (history) of the mobile body; (dt_1, dt_2, ..., dt_n) are n spatio-temporal distances, obtained from the acquired signal information, that combine the time until each signal turns red (a stop command) with the distance to that red light; (df_1, df_2, df_3) are the distances to the vehicle ahead in the current lane and the two adjacent lanes; and (db_1, db_2, db_3) are the distances to the vehicle behind in the current lane and the two adjacent lanes. Each distance in the feature vector is normalized to [0, 1] by dividing by an arbitrary constant; quotients greater than 1 are treated as 1.
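A concrete sketch of this construction follows; the normalization constants are hypothetical (the text says only "an arbitrary constant"), and normalizing the speed by v_max is a further assumption:

```python
import numpy as np

D_MAX = 200.0  # hypothetical normalization constant for distances [m]
T_MAX = 120.0  # hypothetical normalization constant for spatio-temporal distances

def clamp01(x: float) -> float:
    # Quotients above 1 are treated as 1, as stated in the text.
    return min(x, 1.0)

def feature_vector(v, dts, dfs, dbs, v_max=30.0):
    """Build s_t = (v, dt_1..dt_n, df_1, df_2, df_3, db_1, db_2, db_3) of Equation 1.

    v   : current speed [m/s] (v_max is an assumed normalizer)
    dts : n spatio-temporal distances to upcoming red lights
    dfs : distances to the vehicle ahead in the own/left/right lanes [m]
    dbs : distances to the vehicle behind in the own/left/right lanes [m]
    """
    return np.array(
        [clamp01(v / v_max)]
        + [clamp01(dt / T_MAX) for dt in dts]
        + [clamp01(df / D_MAX) for df in dfs]
        + [clamp01(db / D_MAX) for db in dbs]
    )
```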

FIG. 2 illustrates the idea of the spatio-temporal distance. The horizontal axis represents time and the vertical axis the direction of travel toward the destination. The position of each signal and the timing of its red phases are plotted there, and the vector containing the distance and time from the own vehicle to each red light constitutes the spatio-temporal distances. In FIG. 2, the broken line shows an ideal path (the path of the controlled mobile body) that travels while avoiding red lights.

Experiments have shown that using such spatio-temporal distances makes it possible to aim at avoiding not only the nearest red light but also several red lights ahead. For the distances to other vehicles, the transition history of those distances can also be used. When the number of lanes is two or fewer, the distance to the vehicles in front and behind in the nonexistent lane is set to 0.
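The patent does not disclose the exact formula that combines time-until-red and distance into a single scalar dt_i; the following sketch shows one plausible choice, assuming fixed lamp cycles reported by the beacon and a hypothetical reference speed v_ref:

```python
def time_until_red(t_now: float, period: float, red_start: float, red_len: float) -> float:
    # Time from t_now until a signal with a fixed lamp cycle next shows red;
    # returns 0.0 while the light is already red. Fixed-cycle timing is an assumption.
    phase = (t_now - red_start) % period
    return 0.0 if phase < red_len else period - phase

def spatio_temporal_distance(dist_to_signal: float, t_red: float, v_ref: float = 15.0) -> float:
    # One hypothetical scalar dt_i: the mismatch, at reference speed v_ref, between
    # the arrival time at the signal and the moment it turns red. Small values mean
    # the vehicle is currently headed into a red light.
    return abs(dist_to_signal / v_ref - t_red)
```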

[Learning control unit]
For the obtained feature vector, the learning control unit 13 determines and executes the optimal control guideline (for example, the degree of acceleration or deceleration, a lane change, and so on) for avoiding the red-light intervals shown in FIG. 2 while also avoiding collisions with other vehicles. This control guideline achieves driving assistance that takes both the signal states and the behavior of other vehicles into account. The learning control unit 13 uses reinforcement learning. In reinforcement learning, using the reward value r_{t+1} of Equation 2 obtained when the control guideline a_t is executed for the currently observed (time t) feature vector s_t, the value Q(s_t, a_t) of the control guideline a_t in s_t is updated as in Equation 3:

Q(s_t, a_t) <- (1 - α) Q(s_t, a_t) + α (r_{t+1} + γ max_a Q(s_{t+1}, a))    (Equation 3)

α (0 ≤ α ≤ 1) denotes the learning rate and γ (0 ≤ γ ≤ 1) the discount rate. When α is large, the latest reward is emphasized; when α is 1, past rewards are not considered at all. γ expresses the influence that the control evaluation value of the transition-destination state has on the current control evaluation value; when γ is 0, the control evaluation value of the current state s_t does not depend on that of the transition-destination state s_{t+1}.

This update rule is called Q-learning (see, for example, Non-Patent Document 7); by performing the update recursively, the control evaluation value Q(s, a) that yields the largest reward value can, in theory, be maximized.
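A minimal tabular sketch of this update with an epsilon-greedy choice of guideline follows; the action set, hyperparameter values, and exploration scheme are all assumptions, as the patent fixes none of them:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # illustrative values only
ACTIONS = ("accelerate", "decelerate", "keep", "lane_left", "lane_right")  # hypothetical guideline set

Q = defaultdict(float)  # Q[(s, a)] -> value; states must be hashable, e.g. tuple(feature_vector(...))

def select_action(s):
    # Epsilon-greedy choice of the control guideline a_t for state s_t.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def q_update(s, a, r, s_next):
    # Equation 3: Q(s,a) <- (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s',a')).
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * (r + GAMMA * best_next)
```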

Next, the reward function for avoiding red lights and contact with other vehicles is expressed as in Equation 4, using the result B(a_t) of accelerating the vehicle with respect to the signal states, the sum T of the spatio-temporal distances in the current state s_t, the judgment C(a_t) of collision with another vehicle caused by the acceleration, and the sum D of the distances to the vehicles in front and behind in the current lane and the two adjacent lanes:

(Equation 4: the reward function combining B(a_t), T, C(a_t), and D; its exact form is given in the original drawing)

The parameters are as follows.
The sum T is given by Equation 5:

T = Σ_{i=1}^{n} dt_i    (Equation 5)

The result B(a_t) takes one of three value ranges, given in Equation 6; FIG. 3 outlines them.

(Equation 6: the three value ranges of B(a_t); their exact definition is given in the original drawing)

The collision judgment C(a_t) takes one of the two values of Equation 7.

(Equation 7: the two values of C(a_t); their exact definition is given in the original drawing)

The sum D is given by Equation 8:

D = Σ_{i=1}^{3} (df_i + db_i)    (Equation 8)
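Since the exact expressions of Equations 4, 6, and 7 survive only as drawings, the following sketch assumes a simple weighted form: it rewards the signal-related acceleration result B, the slack to upcoming red lights T, and the headway D, and penalizes a positive collision judgment C. The weights and signs are assumptions, not the patent's definition:

```python
def reward(b: float, t_sum: float, c: float, d_sum: float,
           weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    # Assumed form of Equation 4: reward = wb*B + wt*T - wc*C + wd*D,
    # where T and D come from Equations 5 and 8 and c > 0 flags a collision.
    wb, wt, wc, wd = weights
    return wb * b + wt * t_sum - wc * c + wd * d_sum
```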

It was confirmed by experiment that reinforcement learning using the feature vector and reward function defined above avoids red lights and other vehicles while driving at high speed. Note that Q(s_t, a_t) can become enormous depending on the number of features and their value ranges; in that case, the computation time can be shortened (see, for example, Non-Patent Documents 10-12) by using deep reinforcement learning (see, for example, Non-Patent Documents 8 and 9).

11: state grasping unit
12: feature extraction unit
13: learning control unit

Claims (2)

1. A mobile body control method comprising:
a state grasping procedure of acquiring the position of a mobile body, alert times until each of a plurality of stop commands is issued to the mobile body, distances from the mobile body to each of the stop commands, and distances to other mobile bodies;
a feature extraction procedure of calculating the current speed of the mobile body from its position and, from the alert times and the distances to the stop commands, as many spatio-temporal distances as there are stop commands, and acquiring a feature vector composed of the current speed, the spatio-temporal distances, and the distances to the other mobile bodies; and
a learning control procedure of performing reinforcement learning using a reward value, representing avoidance of the stop commands and avoidance of contact with the other mobile bodies, that is obtained as a result of applying to the feature vector a control guideline causing the mobile body to perform at least one of acceleration/deceleration and a change of direction, and calculating a new control guideline to control the mobile body.
2. A mobile body control device comprising:
a state grasping unit that acquires the position of a mobile body, alert times until each of a plurality of stop commands is issued to the mobile body, distances from the mobile body to each of the stop commands, and distances to other mobile bodies;
a feature extraction unit that calculates the current speed of the mobile body from its position and, from the alert times and the distances to the stop commands, as many spatio-temporal distances as there are stop commands, and acquires a feature vector composed of the current speed, the spatio-temporal distances, and the distances to the other mobile bodies; and
a learning control unit that performs reinforcement learning using a reward value, representing avoidance of the stop commands and avoidance of contact with the other mobile bodies, that is obtained as a result of applying to the feature vector a control guideline causing the mobile body to perform at least one of acceleration/deceleration and a change of direction, and calculates a new control guideline to control the mobile body.
JP2017224130A 2017-11-22 2017-11-22 Mobile control method and mobile control device Active JP6839067B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017224130A JP6839067B2 (en) 2017-11-22 2017-11-22 Mobile control method and mobile control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2017224130A JP6839067B2 (en) 2017-11-22 2017-11-22 Mobile control method and mobile control device

Publications (2)

Publication Number Publication Date
JP2019096012A true JP2019096012A (en) 2019-06-20
JP6839067B2 JP6839067B2 (en) 2021-03-03

Family

ID=66971762

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2017224130A Active JP6839067B2 (en) 2017-11-22 2017-11-22 Mobile control method and mobile control device

Country Status (1)

Country Link
JP (1) JP6839067B2 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10254505A (en) * 1997-03-14 1998-09-25 Toyota Motor Corp Automatic controller
JP2009181187A (en) * 2008-01-29 2009-08-13 Toyota Central R&D Labs Inc Behavioral model creation device and program
US8478500B1 (en) * 2009-09-01 2013-07-02 Clemson University System and method for utilizing traffic signal information for improving fuel economy and reducing trip time
JP2012022565A (en) * 2010-07-15 2012-02-02 Denso Corp On-vehicle drive support device and road-to-vehicle communication system
JP2013213780A (en) * 2012-04-04 2013-10-17 Mic Ware:Kk Navigation device, navigation method, and program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110554707A (en) * 2019-10-17 2019-12-10 陕西师范大学 Q learning automatic parameter adjusting method for aircraft attitude control loop
CN110554707B (en) * 2019-10-17 2022-09-30 陕西师范大学 Q learning automatic parameter adjusting method for aircraft attitude control loop
JP2021077286A (en) * 2019-11-13 2021-05-20 オムロン株式会社 Robot control model learning method, robot control model learning apparatus, robot control model learning program, robot control method, robot control apparatus, robot control program, and robot
WO2021095464A1 (en) * 2019-11-13 2021-05-20 オムロン株式会社 Robot control model learning method, robot control model learning device, robot control model learning program, robot control method, robot control device, robot control program, and robot
JP7400371B2 (en) 2019-11-13 2023-12-19 オムロン株式会社 Robot control model learning method, robot control model learning device, robot control model learning program, robot control method, robot control device, robot control program, and robot
WO2023132092A1 (en) * 2022-01-05 2023-07-13 日立Astemo株式会社 Vehicle control system

Also Published As

Publication number Publication date
JP6839067B2 (en) 2021-03-03

Similar Documents

Publication Publication Date Title
US11987238B2 (en) Driving support apparatus
CN107851392B (en) Route generation device, route generation method, and medium storing route generation program
US9415775B2 (en) Drive assist apparatus, and drive assist method
US20150291216A1 (en) Drive assist device, and drive assist method
CN112638749A (en) Vehicle travel control method and travel control device
US11247677B2 (en) Vehicle control device for maintaining inter-vehicle spacing including during merging
JPWO2016098238A1 (en) Travel control device
KR102041080B1 (en) Parking Control Method And Parking Control Device
WO2020157533A1 (en) Travel control method and travel control device for vehicle
US20190047468A1 (en) Driving assistance device, and storage medium
JP2013224094A (en) Vehicle traveling control device
CN112601690A (en) Vehicle travel control method and travel control device
JP2019096012A (en) Method and device for controlling mobile body
JP5846106B2 (en) Driving support device and driving support method
JP2019191839A (en) Collision avoidance device
CN111788616A (en) Method for operating at least one automated vehicle
CN110799402B (en) Vehicle control device
CN112230646A (en) Vehicle fleet implementation under autonomous driving system designed for single-vehicle operation
JP2021041851A (en) Operation support method and operation support apparatus
CN112985435B (en) Method and system for operating an autonomously driven vehicle
JP6253175B2 (en) Vehicle external environment recognition device
CN116265309A (en) Method for controlling a distance-dependent speed control device
JP7393260B2 (en) estimation device
EP3857327B1 (en) Implementation of dynamic cost function of self-driving vehicles
JP2018024393A (en) Drive support method and drive support apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20191211

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20200923

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20201006

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20201126

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20210209

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20210212

R150 Certificate of patent or registration of utility model

Ref document number: 6839067

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150