WO2017159126A1 - Direct inverse reinforcement learning with density ratio estimation - Google Patents

Direct inverse reinforcement learning with density ratio estimation

Info

Publication number
WO2017159126A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2017/004463
Other languages
French (fr)
Inventor
Eiji UCHIBE
Kenji Doya
Original Assignee
Okinawa Institute Of Science And Technology School Corporation
Application filed by Okinawa Institute Of Science And Technology School Corporation filed Critical Okinawa Institute Of Science And Technology School Corporation
Priority to KR1020187026764A (KR102198733B1)
Priority to CN201780017406.2A (CN108885721B)
Priority to JP2018546050A (JP6910074B2)
Priority to EP17766134.5A (EP3430578A4)
Publication of WO2017159126A1

Classifications

    • G — PHYSICS > G06 — COMPUTING; CALCULATING OR COUNTING > G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 20/00 — Machine learning
    • G — PHYSICS > G06 — COMPUTING; CALCULATING OR COUNTING > G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 7/00 — Computing arrangements based on specific mathematical models > G06N 7/01 — Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • The present invention relates to inverse reinforcement learning, and more particularly, to a system and method of inverse reinforcement learning.
  • RL Reinforcement Learning
  • RL is a computational framework for investigating decision-making processes of both biological and artificial systems that can learn an optimal policy by interacting with an environment.
  • LMDP Linearly solvable Markov Decision Process
  • Model-free IRL algorithms based on the path-integral method were proposed by Aghasadeghi & Bretl (2011) (NPL 2) and Kalakrishnan et al. (2013) (NPL 8). Since the likelihood of the optimal trajectory is parameterized by the cost function, the parameters of the cost can be optimized by maximizing the likelihood. However, their methods require entire trajectory data.
  • A model-based IRL method was proposed by Dvijotham and Todorov (2010) (NPL 6) based on the framework of LMDP, in which the likelihood of the optimal state transition is represented by the value function. As opposed to path-integral approaches to IRL, it can be optimized from any dataset of state transitions.
  • A major drawback is the need to evaluate an integral that cannot be solved analytically. In practice, they discretized the state space to replace the integral with a sum, but this is not feasible in high-dimensional continuous problems.
  • U.S. Patent No. 8,756,177 Methods and systems for estimating subject intent from surveillance.
  • U.S. Patent No. 7,672,739. System for multiresolution analysis assisted reinforcement learning approach to run-by-run control.
  • Japanese Patent No. 5815458 Reward function estimating device, method and program.
  • Inverse reinforcement learning is a framework to solve the above problems, but as mentioned above, the existing methods have the following drawbacks: (1) they are intractable when the state is continuous, (2) the computational cost is expensive, and (3) entire trajectories of states are required for estimation. The methods disclosed in this disclosure solve these drawbacks.
  • The previous method proposed in NPL 14 does not work well, as many previous studies have reported.
  • The method proposed in NPL 6 cannot solve continuous problems in practice because its algorithm involves a complicated evaluation of integrals.
  • the present invention is directed to system and method for inverse reinforcement learning.
  • An object of the present invention is to provide a new and improved inverse reinforcement learning system and method so as to obviate one or more of the problems of the existing art.
  • The present invention provides a method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject, comprising: acquiring data representing changes in state variables that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (1) to the acquired data, where r(x) and V(x) denote a reward function and a value function, respectively, at state x, γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2); estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and outputting the estimated r(x) and V(x).
  • The present invention provides a method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject, comprising: acquiring data representing state transitions with actions that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (3) to the acquired data, where r(x) and V(x) denote a reward function and a value function, respectively, at state x, γ represents a discount factor, and b(u | x) and π(u | x) denote, respectively, stochastic policies before and after learning that represent a probability to select action u at state x; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (3); estimating r(x) and V(x) in Eq. (4) from the result of estimating a log of the density ratio π(x, u)/b(x, u); and outputting the estimated r(x) and V(x).
  • The present invention provides a non-transitory storage medium storing instructions to cause a processor to perform an algorithm for inverse reinforcement learning for estimating cost and value functions of behaviors of a subject, said instructions causing the processor to perform the following steps: acquiring data representing changes in state variables that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (1) to the acquired data, where r(x) and V(x) denote a reward function and a value function, respectively, at state x, γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2); estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and outputting the estimated r(x) and V(x).
  • The present invention provides a system for inverse reinforcement learning for estimating cost and value functions of behaviors of a subject, comprising: a data acquisition unit to acquire data representing changes in state variables that define the behaviors of the subject; a processor with a memory, the processor and the memory being configured to: apply a modified Bellman equation given by Eq. (1) to the acquired data, where r(x) and V(x) denote a reward function and a value function, respectively, at state x, γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimate a logarithm of the density ratio π(x)/b(x) in Eq. (2); and estimate r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and an output interface that outputs the estimated r(x) and V(x).
  • the present invention provides a system for predicting a preference in topic of articles that a user is likely to read from a series of articles the user selected in an Internet web surfing, comprising: the system for inverse reinforcement learning as set forth in claim 8, implemented in a computer connected to the Internet, wherein said subject is the user, and said state variables that define the behaviors of the subject include topics of articles selected by the user while browsing each webpage, and wherein the processor causes an interface through which the user is browsing Internet websites to display a recommended article for the user to read in accordance with the estimated cost and value functions.
  • The present invention provides a method for programming a robot to perform complex tasks, comprising: controlling a first robot to accomplish a task so as to record a sequence of states and actions; estimating the reward and value functions using the system for inverse reinforcement learning as set forth in claim 8 based on the recorded sequence of the states and actions; and providing the estimated reward and value functions to a forward reinforcement learning controller of a second robot to program the second robot with the estimated reward and value functions.
  • Fig. 1 shows normalized squared errors for the results of the swing-up inverted pendulum experiments to which embodiments of the present invention were applied, for each of the following density ratio estimation methods: (1) LSCDE-IRL, (2) uLSIF-IRL, (3) LogReg-IRL, (4) Gauss-IRL, (5) LSCDE-OptV, and (6) Gauss-OptV. As indicated in the drawing, (a)-(d) differ from each other in terms of sampling methods and other parameters.
  • Fig. 2 is a graph showing cross-validation errors in the swing-up inverted pendulum experiments for various density ratio estimation methods.
  • FIG. 3 shows an experimental setup for the pole balancing task for the long pole; left: the start position, middle: the goal position, and right: state variables.
  • Fig. 4 shows learning curves in the pole balancing task experiment with respect to various subjects according to an embodiment of the present invention; solid line: long pole, dotted line: short pole.
  • Fig. 5 shows estimated cost functions derived for the pole balancing task experiment according to the embodiment of the present invention for Subject Nos. 4, 5, and 7, projected to the defined subspace.
  • Fig. 6 shows negative log likelihood values for the test datasets in the pole balancing task experiment for Subject Nos. 4 and 7, evaluating the estimated cost functions.
  • FIG. 7 schematically shows a framework of inverse reinforcement learning according to Embodiment 1 of the present invention that can infer an objective function from observed state transitions generated by demonstrators.
  • Fig. 8 is a schematic block diagram showing an example of implementation of the inverse reinforcement learning of the present invention in imitation learning of robot behaviors.
  • Fig. 9 is a schematic block diagram showing an example of implementation of the inverse reinforcement learning of the present invention in interpreting human behaviors.
  • Fig. 10 schematically shows a series of clicking actions by a web-visitor, showing the visitor’s preference in topic in web surfing.
  • Fig. 11 schematically shows an example of an inverse reinforcement learning system according to an embodiment of the present invention.
  • Fig. 12 schematically shows differences between Embodiment 1 and Embodiment 2 of the present invention.
  • Fig. 13 schematically explains the computational scheme of the second DRE for step (2) in Embodiment 2.
  • Fig. 14 shows the experimental results of the swing-up inverted pendulum problem comparing Embodiment 2 with Embodiment 1 and other methods.
  • Fig. 15 shows the experimental results of the robot navigation task using Embodiments 1 and 2 and RelEnt-IRL.
  • the present disclosure provides a novel inverse reinforcement learning method and system based on density ratio estimation under the framework of Linearly solvable Markov Decision Process (LMDP).
  • the logarithm of the ratio between the controlled and uncontrolled state transition densities is represented by the state-dependent cost and value functions.
  • Previously, the present inventors devised a novel inverse reinforcement learning method and system, as described in PCT International Application No. PCT/JP2015/004001, in which density ratio estimation methods are used to estimate the transition density ratio, and the least squares method with regularization is used to estimate the state-dependent cost and value functions that satisfy the relation. That method avoids computing integrals such as the partition function.
  • The present disclosure includes the descriptions of the invention described in PCT/JP2015/004001 as Embodiment 1 below, and further describes a new embodiment, Embodiment 2, that has improved characteristics in several aspects compared with Embodiment 1.
  • the subject matter described and/or claimed in PCT/JP2015/004001 may or may not be prior art against Embodiment 2.
  • In Embodiment 1, a simple numerical simulation of a pendulum swing-up was performed, and its superiority over conventional methods was demonstrated.
  • The present inventors further applied the method to human behaviors in performing a pole balancing task and show that the estimated cost functions can predict the performance of the subjects in new trials or environments in a satisfactory manner.
  • One aspect of the present invention is based on the framework of linearly solvable Markov decision processes like the OptV algorithm.
  • In Embodiment 1, the present inventors have derived a novel Bellman equation in which q(x) and V(x) denote the cost and value function at state x, γ represents a discount factor, and p(y | x) and π(y | x) denote the state transition probabilities before and after learning, respectively.
  • The density ratio, the left-hand side of the above equation, is efficiently computed from observed behaviors by density ratio estimation methods. Once the density ratio is estimated, the cost and value function can be estimated by a regularized least-squares method. An important feature is that this method can avoid computing integrals, which are usually expensive to evaluate.
  • The present inventors have applied this method to human behaviors in performing a pole balancing task and show that the estimated cost functions can predict the performance of the subjects in new trials or environments, verifying the broad applicability and effectiveness of this new computational technique in inverse reinforcement learning, which has well-recognized wide applicability in control systems, machine learning, operations research, information theory, etc.
  • the present disclosure provides a brief introduction of Markov Decision Process and its simplification for a discrete-time continuous-space domain.
  • At a time step t, a learning agent observes the environmental current state x t ∈ X and executes action u t ∈ U sampled from a stochastic policy π(u t | x t ).
  • Consequently, an immediate cost c(x t , u t ) is given from the environment, and the environment makes a state transition according to a state transition probability P T (y | x t , u t ) from x t to y ∈ X under the action u t .
  • The goal of reinforcement learning is to construct an optimal policy π(u | x) which minimizes the given objective function.
  • γ ∈ (0, 1) is called the discount factor. It is known that the optimal value function satisfies the Bellman equation given by Eq. (2), which is a nonlinear equation due to the min operator.
  • Linearly solvable Markov Decision Process simplifies Eq. (2) under some assumptions (Todorov, 2007; 2009a, NPLs 23-24).
  • the key trick of LMDP is to optimize the state transition probability directly instead of optimizing the policy. More specifically, two conditional probability density functions are introduced.
  • One is the uncontrolled probability denoted by p(y | x), which can be regarded as an innate state transition; p(y | x) is arbitrary, and it can be constructed by p(y | x) = ∫ P T (y | x, u) π 0 (u | x) du, where π 0 (u | x) is a random policy. The other is the controlled probability denoted by π(y | x), which can be interpreted as an optimal state transition.
  • the weight vector w V can be optimized by maximizing the likelihood.
  • N π denotes the number of data points from the controlled probability.
  • The log-likelihood and its derivative are given in terms of the controlled probability π(y | x) parameterized by the weight vector w V .
  • The simplified Bellman equation (4) can be used to retrieve the cost function. It means that the cost function q(x) is uniquely determined when the estimated value function and γ are given, and q(x) is expressed by the basis functions used in the value function. While the representation of the cost function is not important in the case of imitation learning, a simpler representation of the cost is desirable for analysis. Therefore, the present inventors introduce a linear approximator for q(x), in which w q denotes the learning weights and a corresponding basis function vector is used.
  • The objective function with L1 regularization to optimize w q is given by J(w q ), where λ q is a regularization constant. A simple gradient descent algorithm is adopted, and J(w q ) is evaluated at the observed states.
  • the most significant problem of Dvijotham and Todorov (2010) is the integral in Eqs. (8) and (10) which cannot be solved analytically, and they discretized the state space and replaced the integral with a sum. However, as they suggested, it is infeasible in high-dimensional problems.
  • The Metropolis-Hastings algorithm is applied to evaluate the gradient of the log-likelihood, in which the uncontrolled probability p(y | x) is not necessarily Gaussian.
  • Eq. (11) plays an important role in the IRL algorithms according to embodiments of the present invention. Similar equations can be derived for first-exit, average cost, and finite horizon problems. It should be noted that the left hand side of Eq. (11) is not a temporal difference error because q(x) is the state-dependent part of the cost function shown in Eq. (3). Our IRL is still an ill-posed problem and the cost function is not uniquely determined although the form of the cost function is constrained by Eq. (3) under LMDP.
  • the disclosed IRL method consists of two parts. One is to estimate the density ratio of the right hand side of Eq. (11) described below. The other is to estimate q(x) and V(x) by the least squares method with regularization as shown below.
  • Density Ratio Estimation for IRL> Estimating the ratio of controlled and uncontrolled transition probability densities can be regarded as a problem of density ratio estimation (Sugiyama et al., 2012, NPL 20). According to the setting of the problem, the present disclosure considers the following formulation.
  • the first decomposition (14) shows the difference of logarithms of conditional probability densities.
  • The first one is LSCDE-IRL, which adopts Least Squares Conditional Density Estimation (LSCDE) (Sugiyama et al., 2010) to estimate the conditional densities π(y | x) and p(y | x) in Eq. (14).
  • LSCDE Least Squares Conditional Density Estimation
  • the other is Gauss-IRL which uses a Gaussian process (Rasmussen & Williams, 2006, NPL 15) to estimate the conditional densities in Eq. (14).
  • the second decomposition shows the difference of logarithms of density ratio.
  • two methods are implemented to estimate ⁇ (x)/p(x) and ⁇ (x, y)/p(x, y).
  • One is uLSIF-IRL, which uses unconstrained Least Squares Importance Fitting (uLSIF) (Kanamori et al., 2009, NPL 9).
  • the other is LogReg, which utilizes a logistic regression in a different way. Section 2.3 below describes their implementation.
  • LSCDE> LSCDE (Sugiyama et al., 2010, NPL 19) is regarded as a special case of uLSIF to estimate a conditional probability density function.
  • The computation of H and h in LSCDE differs slightly from that in uLSIF. Since the Gaussian basis function shown in Eq. (18) is used, the integral involved in their computation can be evaluated analytically.
  • the estimated weight of LSCDE is given by Eq. (17). In order to assure that the estimated ratio is a conditional density, the solution should be normalized when it is used to estimate the cost and value function.
  • In the LogReg formulation, the indicator of which dataset a sample is drawn from can be regarded as a label.
  • a logarithm of the density ratio is given by a linear model in the case of LogReg:
  • The second term ln(N p /N π ) can be ignored in our IRL formulation shown in Eq. (15).
  • The objective function is derived from the negative regularized log-likelihood. A closed-form solution is not available, but the objective can be minimized efficiently by standard nonlinear optimization methods since it is convex.
  • L1 regularization is used for w q to yield sparse models that are more easily interpreted by the experimenters. It is possible to use L2 regularization for w q if sparseness is not important.
  • Non-negativity constraints on w q and w V are not introduced, because Eq. (12) can be used to satisfy the non-negativity of the cost function efficiently.
  • A Gaussian function, shown in Eq. (18), is used for simplicity, where σ is a width parameter. The center positions are randomly selected from D π .
  • The controlled probability π(y | x) is constructed by using a stochastic policy π(u | x).
  • In the uniform sampling method, x is sampled from a uniform distribution defined over the entire state space. In other words, p(x) and π(x) are regarded as a uniform distribution. Then, y is sampled from the uncontrolled and the controlled probability to construct D p and D π , respectively.
  • In the trajectory-based sampling method, the datasets are constructed from trajectories of states generated by the uncontrolled probability p(y | x) and the controlled probability π(y | x), respectively.
  • the corresponding value function is calculated by solving Eq. (4) and the corresponding optimal controlled probability is evaluated by Eq. (5).
  • exp(-V(x)) is represented by a linear model, but this is difficult under the objective function (1) because the discount factor γ makes the linear model complicated. Therefore, the value function is approximated by the linear model shown in Eq. (6) and the Metropolis-Hastings algorithm is used to evaluate the integral.
  • The methods of Embodiment 1 can be compared with OptV because the assumptions of OptV are identical to those of these methods.
  • For the density ratio estimation, there exist several variants as described above. More specifically, the following six algorithms are considered: (1) LSCDE-IRL, (2) uLSIF-IRL, (3) LogReg-IRL, (4) Gauss-IRL, (5) LSCDE-OptV, which is the OptV method where p(y | x) is estimated by LSCDE, and (6) Gauss-OptV.
  • Fig. 2 shows the cross-validation error with respect to the discount factor γ, where the other parameters such as λ q , λ V , and σ are set to the optimal values.
  • the cross validation error was minimum at the true discount factor in all the methods.
  • The embodiments of the present invention have been proven to have sufficiently small errors, confirming the effectiveness of the present invention.
  • Fig. 3 shows the experimental setup. A subject can move the base left, right, top and bottom to swing the pole several times and decelerate the pole to balance it at the upright position.
  • The dynamics is described by the six-dimensional state vector, where θ and dθ/dt are the angle and angular velocity of the pole, x and y are the horizontal and vertical positions of the base, and dx/dt and dy/dt are their time derivatives, respectively.
  • the task was performed under two conditions: long pole (73 cm) and short pole (29 cm). Each subject had 15 trials to balance the pole in each condition. Each trial ended when the subject could keep the pole upright for 3 seconds or 40 seconds elapsed.
  • Fig. 4 shows the learning curves of seven subjects, which indicate that the learning processes were quite different among subjects. Subject Nos. 1 and 3 could not accomplish the task. Since a set of successful trajectories should be used by the IRL algorithms, the data from the five subjects Nos. 2 and 4-7 were used.
  • Fig. 5 shows the estimated cost functions of subjects 4, 5, and 7 projected to the defined subspace, with the remaining state variables set to zero for visualization.
  • the cost function of the long pole condition was not so different from that of the short pole condition while there was a significant difference in those of the subject 5, who did not perform well in the short pole condition as shown in Fig. 4.
  • Fig. 6 shows the results.
  • In the left panel (a), the test dataset of the subject in the long pole condition was used.
  • The minimum negative log-likelihood was achieved by the cost function estimated from the training dataset of the same subject and the same condition.
  • the right panel (b) of Fig. 6 shows that the test data of the subject 7 in both the long and short pole conditions were best predicted by the cost function estimated from the training dataset of the same subject 7 only in the long pole condition.
  • The present disclosure presented a novel inverse reinforcement learning method under the framework of LMDP.
  • One of the features of the present invention is to show Eq. (11), which means that the temporal difference error is zero for the optimal value function with the corresponding cost function. Since the right-hand side of Eq. (11) can be estimated from samples by efficient methods of density ratio estimation, the IRL of the present invention results in a simple least-squares method with regularization.
  • The method according to the embodiments of the present invention in Embodiment 1 does not need to compute the integral, which is usually intractable in high-dimensional continuous problems. As a result, the disclosed method is computationally less expensive than OptV.
  • LMDP and path integral methods have been receiving attention recently in the fields of robotics and machine learning (Theodorou & Todorov, 2012, NPL 22) because there exist a number of interesting properties in the linearized Bellman equation (Todorov, 2009a, NPL 24). They have been successfully applied to learning of stochastic policies for robots with large degrees of freedom (Kinjo et al., 2013, NPL 11; Stulp & Sigaud, 2012, NPL 17; Sugimoto and Morimoto, 2011, NPL 18; Theodorou et al., 2010, NPL 21).
  • the IRL methods according to the embodiments of the present invention may be integrated with the existing forward reinforcement learning methods to design complicated controllers.
  • the present disclosure provides a computational algorithm that can infer the reward/cost function from observed behaviors effectively.
  • the algorithm of the embodiments of the present invention can be implemented in general-purpose computer systems with appropriate hardware and software as well as specifically designed proprietary hardware/software.
  • Various advantages according to at least some embodiments of the present invention include: A) Model-free method/system: the method and system according to the embodiments of the present invention do not need to know the environmental dynamics in advance; i.e., the method/system is regarded as a model-free method--it is not necessary to model the target dynamics explicitly although some prior art approaches assume that the environmental dynamics is known in advance.
  • the present invention provides inverse reinforcement learning that can infer the objective function from observed state transitions generated by demonstrators.
  • Fig. 7 schematically shows a framework of the method according to Embodiment 1 of the present invention.
  • An embodiment of the inverse reinforcement learning according to Embodiment 1 of the present invention includes two components: (1) learning the ratio of state transition probabilities with and without control by density ratio estimation and (2) estimation of the cost and value functions that are compatible with the ratio of transition probabilities by a regularized least squares method.
  • Fig. 8 schematically shows such an implementation of the present invention.
  • the demonstrator controls a robot to accomplish a task and the sequence of states and actions is recorded.
  • an inverse reinforcement learning component according to an embodiment of the present invention estimates the cost and value functions, which are then given to forward reinforcement learning controllers for different robots.
  • a behavior is represented by a sequence of states, which are extracted by the motion tracking system.
  • the cost function estimated by the inverse reinforcement learning method/system according to an embodiment of the present invention can be regarded as a compact representation to explain the given behavioral dataset. Through pattern classification of the estimated cost functions, it becomes possible to estimate the user’s expertise or preference.
  • Fig. 9 schematically shows this implementation according to an embodiment of the present invention.
  • ⁇ Analysis of the web experience> In order to increase the likelihood for visitors to read articles that are presented to the visitors, the designers of online news websites, for example, should investigate the web experiences of visitors from a viewpoint of decision making. In particular, recommendation systems are receiving attention as an important business application for personalized services. However, previous methods such as collaborative filtering do not consider the sequences of decision making explicitly.
  • Embodiments of the present invention can provide a different and effective way to model the behaviors of visitors during net surfing.
  • Fig. 10 shows an example of a series of clicking actions by a user, indicating what topics were accessed in what order by the user. The topic that the visitor is reading is regarded as the state and clicking the link is considered as the action.
  • inverse reinforcement learning according to an embodiment of the present invention can analyze the decision-making in the user’s net surfing. Since the estimated cost function represents the preference of the visitor, it becomes possible to recommend a list of articles for the user.
  • Fig. 11 shows an example of the implementation using a general computer system and a sensor system.
  • the methods explained above with mathematical equations can be implemented in such a general computer system, for example.
  • the system of this example includes a sensor system 111 (an example of a data acquisition unit) to receive information about state transitions--i.e., observed behavior--from the object being observed.
  • the sensor system 111 may include one or more of an image capturing device with image processing software/hardware, displacement sensors, velocity sensors, acceleration sensors, microphone, keyboards, and any other input devices.
  • the sensor system 111 is connected to a computer 112 having a processor 113 with an appropriate memory 114 so that the received data can be analyzed according to embodiments of the present invention.
  • the result of the analysis is outputted to any output system 115, such as a display monitor, controllers, drivers, etc. (examples of an output interface), or, an object to be controlled in the case of utilizing the results for control.
  • the result can be used to program or transferred to another system, such as another robot or computer, or website software that responds to user’s interaction, as described above.
  • the implemented system may include a system for inverse reinforcement learning as described in any one of the embodiments above, implemented in a computer connected to the Internet.
  • the state variables that define the behaviors of the user include topics of articles selected by the user while browsing each webpage. Then, the result of the inverse reinforcement learning is used to cause an interface through which the user is browsing Internet websites, such as portable smartphone, personal computer, etc., to display a recommended article for the user.
  • Embodiment 2, which has characteristics superior to those of Embodiment 1 in some aspects, will be described below.
  • Fig. 12 schematically shows differences between Embodiment 1 and Embodiment 2. As described above and shown in (a) in Fig. 12, Embodiment 1 used the density ratio estimation algorithm twice and the regularized least squares method.
  • Embodiment 2 of the present invention a logarithm of the density ratio ⁇ (x)/b(x) is estimated using a standard density ratio estimation (DRE) algorithm, and r(x) and V(x), which are a reward function and a value function, respectively, are computed through the estimation of a log of the density ratio ⁇ (x, y)/b(x, y) with the Bellman equation.
  • DRE standard density ratio estimation
  • Embodiment 1 the following three steps were needed: (1) estimate ⁇ (x)/b(x) by a standard DRE algorithm; (2) estimate ⁇ (x, y)/b(x, y) by a standard DRE algorithm, and (3) compute r(x) and V(x) by a regularized least squares method with the Bellman equation.
  • Embodiment 2 uses only two-step optimization: (1) estimate ln ⁇ (x)/b(x) by a standard density ratio estimation (DRE) algorithm, and (2) compute r(x) and V(x) through a DRE (second time) of ln ⁇ (x, y)/b(x, y) with the Bellman equation.
  • Fig. 13 schematically explains the computational scheme of the second DRE for step (2) in Embodiment 2.
  • The second DRE of ln π(x, y)/b(x, y) leads to an estimation of r(x) + γV(y) - V(x) through the relation ln π(x, y)/b(x, y) = ln π(x)/b(x) + r(x) + γV(y) - V(x), because the first DRE estimates ln π(x)/b(x) (an illustrative sketch of this second step appears after this list).
  • These relations are, in essence, the same as Equations (11) and (15) described above for Embodiment 1.
  • Embodiment 2 in order to execute the second step (2) of computing r(x) and V(x) through a DRE (second time) of ln ⁇ (x, y)/b(x, y) with the Bellman equation, the basis functions are designed in the state space, which reduces the number of parameters to be optimized.
  • In Embodiment 1, by contrast, in step (2) of estimating π(x, y)/b(x, y) by a standard DRE algorithm, the basis functions need to be designed on the product of the state spaces, which requires a relatively large number of parameters to be optimized.
  • Embodiment 2 requires a relatively low memory usage as compared with Embodiment 1.
  • Embodiment 2 has these various significant advantages over Embodiment 1.
  • Other features and setups of Embodiment 2 are same as various methodologies and schemes described above for Embodiment 1 unless otherwise specifically explained below.
  • Table 1 below shows a general comparison of Embodiment 2 versus various conventional methods. Specifically, various features are compared for Embodiment 2 with respect to the above-described OptV, maximum entropy IRL (MaxEnt-IRL), and relative entropy IRL (RelEnt-IRL). As shown in Table 1, Embodiment 2 of the present invention has various advantages over the conventional methods.
  • FIG. 14 shows the results of the experiment comparing Embodiment 2 with Embodiment 1, MaxEnt-IRL, RelEnt-IRL and OptV.
  • Embodiment 2 is indicated as “New Invention” and Embodiment 1 is indicated as “PCT/JP2015/004001” in the figure.
  • Embodiment 2 successfully recovered the observed policies better than the other methods, including Embodiment 1, even though the number of samples is small.
  • A robot navigation task was studied for Embodiment 2, Embodiment 1, and RelEnt-IRL.
  • Three target objects of red (r), green (g), and blue (b) were placed in front of a programmable robot with camera eyes. The goal was to reach the green (g) target among the three targets.
  • Five predetermined starting positions A-E were lined up in front of the three targets. Training data were collected from the starting positions A-C and E, and test data was taken using the starting position D.
  • The basis function for V(x) was defined in terms of center positions c i selected from the data set.
  • The basis function for r(x) was defined in terms of f g and f s , where f g is a Gaussian function and f s is a sigmoid function.
  • Fig. 15 shows the results of the experiment.
  • Embodiment 2 is indicated as “New Invention,” and Embodiment 1 is indicated as “PCT/JP2015/004001.”
  • the results are compared with the result of RelEnt-IRL, described above.
  • Embodiment 2 yielded a significantly better result. This also indicates that the estimated value function according to Embodiment 2 may be used as a potential function for shaping rewards.
  • Computing times (in minutes) in the inverted pendulum task discussed above were evaluated for Embodiment 2.
  • LogReg-IRL and KLIEP-IRL in Embodiment 2 required only about 2.5 minutes of computation.
  • uLSIF-IRL, LSCDE-IRL, and LogReg-IRL in Embodiment 1 required between about 4 minutes and 9.5 minutes.
  • Embodiment 2 thus required significantly less computing time than the various versions of Embodiment 1 discussed above.
  • Applications of Embodiment 2 are essentially the same as the various applications for Embodiment 1 discussed above.
  • various versions of Embodiment 2 will be applicable to, among other things: interpretation of human behaviors, analysis of the web experience, and design of robot controllers by imitation in which by showing some ideal behaviors, the corresponding objective function is estimated as an immediate reward.
  • a robot can use the estimated reward with forward reinforcement learning to generalize behaviors for unexperienced situations.
  • Highly economical and reliable systems and methodology can be constructed in accordance with Embodiment 2 of the present invention.
  • Embodiment 2 can recover observed policies with a small number of observations better than other methods. This is a significant advantage.
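The following is a minimal sketch, not the patent's reference implementation, of the Embodiment 2 computation referred to above: r(x) and V(x) are obtained directly from a logistic-regression density ratio estimation of ln π(x, y)/b(x, y) whose log-odds is structured by the modified Bellman equation, with the first-stage estimate of ln π(x)/b(x) entering as a fixed offset. The basis-function design, the L-BFGS optimizer, and the hyper-parameter values (gamma, lam) are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np
from scipy.optimize import minimize

def fit_r_v_by_second_dre(trans_b, trans_pi, g_x, phi, gamma=0.9, lam=1e-2):
    """trans_b, trans_pi: (x, y) transition arrays sampled from b and pi.
    g_x(x): first-stage estimate of ln pi(x)/b(x).
    phi(X): basis matrix of shape (N, K) for the chosen state basis functions."""
    (xb, yb), (xp, yp) = trans_b, trans_pi
    X = np.vstack([xb, xp])
    Y = np.vstack([yb, yp])
    labels = np.concatenate([np.zeros(len(xb)), np.ones(len(xp))])  # 1 = controlled data
    Phi_x, Phi_y = phi(X), phi(Y)
    offset = g_x(X)                      # fixed contribution ln pi(x)/b(x)
    K = Phi_x.shape[1]
    signs = 2.0 * labels - 1.0

    def objective(w):
        w_r, w_v = w[:K], w[K:]
        # log-odds modeled as ln pi(x)/b(x) + r(x) + gamma*V(y) - V(x)
        f = offset + Phi_x @ w_r + gamma * (Phi_y @ w_v) - Phi_x @ w_v
        nll = np.mean(np.logaddexp(0.0, -signs * f))   # logistic loss
        return nll + lam * (w @ w)

    w_opt = minimize(objective, np.zeros(2 * K), method="L-BFGS-B").x
    r = lambda x: phi(x) @ w_opt[:K]
    V = lambda x: phi(x) @ w_opt[K:]
    return r, V
```

Because the reward and value functions are parameterized with basis functions defined on the state space only, the number of parameters here is 2K, rather than a number that grows with the product of the state spaces as in the Embodiment 1 step (2) described above.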


Abstract

A method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject includes: acquiring data representing changes in state variables that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (1) to the acquired data: where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2); estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and outputting the estimated r(x) and V(x).

Description

DIRECT INVERSE REINFORCEMENT LEARNING WITH DENSITY RATIO ESTIMATION
The present invention relates to inverse reinforcement learning, and more particularly, to a system and method of inverse reinforcement learning. This application claims the benefit of and hereby incorporates by reference: United States Provisional Application No. 62/308,722, filed March 15, 2016.
Understanding human behavior from observation is crucial for developing artificial systems that can interact with human beings. Since our decision-making processes are influenced by rewards/costs associated with selected actions, the problem can be formulated as estimation of the rewards/costs from observed behaviors.
The idea of inverse reinforcement learning was originally proposed by Ng and Russell (2000) (NPL 14). The OptV algorithm proposed by Dvijotham and Todorov (2010) (NPL 6) is a related prior work; it shows that the policy of the demonstrator is approximated by the value function, which is a solution of the linearized Bellman equation.
Generally speaking, Reinforcement Learning (RL) is a computational framework for investigating decision-making processes of both biological and artificial systems that can learn an optimal policy by interacting with an environment. There exist several open questions in RL, and one of the critical problems is how to design and prepare an appropriate reward/cost function. It is easy to design a sparse reward function, which gives a positive reward when the task is accomplished and zero otherwise, but that makes it hard to find an optimal policy.
In some situations, it is easier to prepare examples of a desired behavior than to handcraft an appropriate reward/cost function. Recently, several methods of Inverse Reinforcement Learning (IRL) (Ng & Russell, 2000, NPL 14) and apprenticeship learning (Abbeel & Ng, 2004, NPL 1) have been proposed in order to derive a reward/cost function from demonstrator's performance and to implement imitation learning. However, most of the existing studies (Abbeel & Ng, 2004, NPL 1; Ratliff et al., 2009, NPL 16; Ziebart et al., 2008, NPL 26) require a routine to solve forward reinforcement learning problems with estimated reward/cost functions. This process is usually very time-consuming even when the model of the environment is available.
Recently, the concept of Linearly solvable Markov Decision Process (LMDP) (Todorov, 2007; 2009, NPLs 23-24) was introduced; it is a sub-class of Markov Decision Process obtained by restricting the form of the cost function. This restriction plays an important role in IRL. LMDP is also known as KL-control and path-integral approaches (Kappen et al., 2012, NPL 10; Theodorou et al., 2010, NPL 21), and similar ideas have been proposed in the field of control theory (Fleming and Soner, 2006, NPL 7). Model-free IRL algorithms based on the path-integral method were proposed by Aghasadeghi & Bretl (2011) (NPL 2) and Kalakrishnan et al. (2013) (NPL 8). Since the likelihood of the optimal trajectory is parameterized by the cost function, the parameters of the cost can be optimized by maximizing the likelihood. However, their methods require entire trajectory data. A model-based IRL method was proposed by Dvijotham and Todorov (2010) (NPL 6) based on the framework of LMDP, in which the likelihood of the optimal state transition is represented by the value function. As opposed to path-integral approaches to IRL, it can be optimized from any dataset of state transitions. A major drawback is the need to evaluate an integral that cannot be solved analytically. In practice, they discretized the state space to replace the integral with a sum, but this is not feasible in high-dimensional continuous problems.
U.S. Patent No. 8,756,177, Methods and systems for estimating subject intent from surveillance. U.S. Patent No. 7,672,739. System for multiresolution analysis assisted reinforcement learning approach to run-by-run control. Japanese Patent No. 5815458. Reward function estimating device, method and program.
NPL 1: Abbeel, P. and Ng, A.Y. Apprenticeship learning via inverse reinforcement learning. In Proc. of the 21st International Conference on Machine Learning, 2004.
NPL 2: Aghasadeghi, N. and Bretl, T. Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals. In Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1561-1566, 2011.
NPL 3: Boularias, A., Kober, J., and Peters, J. Relative entropy inverse reinforcement learning. In Proc. of the 14th International Conference on Artificial Intelligence and Statistics, 2011.
NPL 4: Deisenroth, M.P., Rasmussen, C.E., and Peters, J. Gaussian process dynamic programming. Neurocomputing, 72(7-9):1508-1524, 2009.
NPL 5: Doya, K. Reinforcement learning in continuous time and space. Neural Computation, 12:219-245, 2000.
NPL 6: Dvijotham, K. and Todorov, E. Inverse optimal control with linearly solvable MDPs. In Proc. of the 27th International Conference on Machine Learning, 2010.
NPL 7: Fleming, W.H. and Soner, H.M. Controlled Markov Processes and Viscosity Solutions. Springer, second edition, 2006.
NPL 8: Kalakrishnan, M., Pastor, P., Righetti, L., and Schaal, S. Learning objective functions for manipulation. In Proc. of IEEE International Conference on Robotics and Automation, pp. 1331-1336, 2013.
NPL 9: Kanamori, T., Hido, S., and Sugiyama, M. A least-squares approach to direct importance estimation. Journal of Machine Learning Research, 10:1391-1445, 2009.
NPL 10: Kappen, H.J., Gomez, V., and Opper, M. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159-182, 2012.
NPL 11: Kinjo, K., Uchibe, E., and Doya, K. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task. Frontiers in Neurorobotics, 7(7), 2013.
NPL 12: Levine, S. and Koltun, V. Continuous inverse optimal control with locally optimal examples. In Proc. of the 27th International Conference on Machine Learning, 2012.
NPL 13: Levine, S., Popovic, Z., and Koltun, V. Nonlinear inverse reinforcement learning with Gaussian processes. Advances in Neural Information Processing Systems 24, pp. 19-27, 2011.
NPL 14: Ng, A.Y. and Russell, S. Algorithms for inverse reinforcement learning. In Proc. of the 17th International Conference on Machine Learning, 2000.
NPL 15: Rasmussen, C.E. and Williams, C.K.I. Gaussian Processes for Machine Learning. MIT Press, 2006.
NPL 16: Ratliff, N.D., Silver, D., and Bagnell, J.A. Learning to search: Functional gradient techniques for imitation learning. Autonomous Robots, 27(1):25-53, 2009.
NPL 17: Stulp, F. and Sigaud, O. Path integral policy improvement with covariance matrix adaptation. In Proc. of the 10th European Workshop on Reinforcement Learning, 2012.
NPL 18: Sugimoto, N. and Morimoto, J. Phase-dependent trajectory optimization for periodic movement using path integral reinforcement learning. In Proc. of the 21st Annual Conference of the Japanese Neural Network Society, 2011.
NPL 19: Sugiyama, M., Takeuchi, I., Suzuki, T., Kanamori, T., Hachiya, H., and Okanohara, D. Least-squares conditional density estimation. IEICE Transactions on Information and Systems, E93-D(3):583-594, 2010.
NPL 20: Sugiyama, M., Suzuki, T., and Kanamori, T. Density Ratio Estimation in Machine Learning. Cambridge University Press, 2012.
NPL 21: Theodorou, E., Buchli, J., and Schaal, S. A generalized path integral control approach to reinforcement learning. Journal of Machine Learning Research, 11:3137-3181, 2010.
NPL 22: Theodorou, E.A. and Todorov, E. Relative entropy and free energy dualities: Connections to path integral and KL control. In Proc. of the 51st IEEE Conference on Decision and Control, pp. 1466-1473, 2012.
NPL 23: Todorov, E. Linearly-solvable Markov decision problems. Advances in Neural Information Processing Systems 19, pp. 1369-1376. MIT Press, 2007.
NPL 24: Todorov, E. Efficient computation of optimal actions. Proceedings of the National Academy of Sciences of the United States of America, 106(28):11478-11483, 2009.
NPL 25: Todorov, E. Eigenfunction approximation methods for linearly-solvable optimal control problems. In Proc. of the 2nd IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 161-168, 2009.
NPL 26: Ziebart, B.D., Maas, A., Bagnell, J.A., and Dey, A.K. Maximum entropy inverse reinforcement learning. In Proc. of the 23rd AAAI Conference on Artificial Intelligence, 2008.
Additional references: Vroman, M. Maximum likelihood inverse reinforcement learning. PhD Thesis, Rutgers University, 2014. Raita, H. On the performance of maximum likelihood inverse reinforcement learning. arXiv preprint, 2012. Choi, J. and Kim, K. Nonparametric Bayesian inverse reinforcement learning for multiple reward functions, 2012. Choi, J. and Kim, J. Inverse reinforcement learning in partially observable environments. Journal of Machine Learning Research, 2011. Neu, G. and Szepesvari, C. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proc. of UAI, 2007. Mahadevan, S. Proto-value functions: developmental reinforcement learning. In Proc. of the 22nd ICML, 2005.
Inverse reinforcement learning is a framework to solve the above problems, but as mentioned above, the existing methods have the following drawbacks: (1) they are intractable when the state is continuous, (2) the computational cost is expensive, and (3) entire trajectories of states are required for estimation. The methods disclosed in this disclosure solve these drawbacks. In particular, the previous method proposed in NPL 14 does not work well, as many previous studies have reported. Moreover, the method proposed in NPL 6 cannot solve continuous problems in practice because its algorithm involves a complicated evaluation of integrals.
The present invention is directed to system and method for inverse reinforcement learning.
An object of the present invention is to provide a new and improved inverse reinforcement learning system and method so as to obviate one or more of the problems of the existing art.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, in one aspect, the present invention provides a method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject, comprising: acquiring data representing changes in state variables that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (1) to the acquired data:
r(x) + γV(y) - V(x) = ln{π(y | x)/b(y | x)}    ...(1)
r(x) + γV(y) - V(x) = ln{π(x, y)/b(x, y)} - ln{π(x)/b(x)}    ...(2)
where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2); estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and outputting the estimated r(x) and V(x).
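Purely as an illustration of how the claimed steps could be realized in software (a sketch under stated assumptions, not the patent's implementation), the following example estimates the two log density ratios with logistic-regression classifiers and then fits r(x) and V(x) by a regularized least-squares fit of Eq. (2). The Gaussian basis functions, the scikit-learn classifier, and the hyper-parameters (gamma, width, lam) are assumptions introduced for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def log_ratio_estimator(samples_b, samples_pi):
    """Fit ln(pi(z)/b(z)) by logistic regression (label 1 = data after learning)."""
    Z = np.vstack([samples_b, samples_pi])
    labels = np.concatenate([np.zeros(len(samples_b)), np.ones(len(samples_pi))])
    clf = LogisticRegression(C=1.0, max_iter=1000).fit(Z, labels)
    bias = np.log(len(samples_pi) / len(samples_b))   # correct for unequal sample sizes
    return lambda z: clf.decision_function(z) - bias

def gaussian_basis(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def estimate_reward_and_value(trans_b, trans_pi, centers, gamma=0.9, width=1.0, lam=1e-2):
    """trans_b, trans_pi: tuples (x, y) of transition arrays observed before/after learning."""
    (xb, yb), (xp, yp) = trans_b, trans_pi
    # Step 1: ln pi(x)/b(x) from state samples.
    g_state = log_ratio_estimator(xb, xp)
    # Step 2: ln pi(x, y)/b(x, y) from joint state-transition samples.
    g_joint = log_ratio_estimator(np.hstack([xb, yb]), np.hstack([xp, yp]))
    # Eq. (2): r(x) + gamma*V(y) - V(x) = ln pi(x,y)/b(x,y) - ln pi(x)/b(x).
    target = g_joint(np.hstack([xp, yp])) - g_state(xp)
    Phi_x = gaussian_basis(xp, centers, width)
    Phi_y = gaussian_basis(yp, centers, width)
    A = np.hstack([Phi_x, gamma * Phi_y - Phi_x])        # columns: [w_r | w_V]
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ target)
    K = centers.shape[0]
    r = lambda x: gaussian_basis(x, centers, width) @ w[:K]
    V = lambda x: gaussian_basis(x, centers, width) @ w[K:]
    return r, V
```

In this formulation the final step reduces to solving a single regularized linear system, which is one way the approach avoids the costly integral evaluations discussed above.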
In another aspect, the present invention provides a method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject, comprising: acquiring data representing state transition with action that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (3) to the acquired data:
Figure JPOXMLDOC01-appb-I000006
where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(u | x) and π(u | x) denote, respectively, stochastic policies before and after learning that represent a probability to select action u at state x; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (3); estimating r(x) and V(x) in Eq. (4) from the result of estimating a log of the density ratio π(x, u)/b(x, u); and outputting the estimated r(x) and V(x).
In another aspect, the present invention provides a non-transitory storage medium storing instructions to cause a processor to perform an algorithm for inverse reinforcement learning for estimating cost and value functions of behaviors of a subject, said instructions causing the processor to perform the following steps: acquiring data representing changes in state variables that define the behaviors of the subject; applying a modified Bellman equation given by Eq. (1) to the acquired data:
r(x) + γV(y) - V(x) = ln{π(y | x)/b(y | x)}    ...(1)
r(x) + γV(y) - V(x) = ln{π(x, y)/b(x, y)} - ln{π(x)/b(x)}    ...(2)
where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2); estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and outputting the estimated r(x) and V(x).
In another aspect, the present invention provides a system for inverse reinforcement learning for estimating cost and value functions of behaviors of a subject, comprising: a data acquisition unit to acquire data representing changes in state variables that define the behaviors of the subject; a processor with a memory, the processor and the memory are configured to: applying a modified Bellman equation given by Eq. (1) to the acquired data:
r(x) + γV(y) - V(x) = ln{π(y | x)/b(y | x)}    ...(1)
r(x) + γV(y) - V(x) = ln{π(x, y)/b(x, y)} - ln{π(x)/b(x)}    ...(2)
where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively; estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2); estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and an output interface that outputs the estimated r(x) and V(x).
In another aspect, the present invention provides a system for predicting a preference in topic of articles that a user is likely to read from a series of articles the user selected in an Internet web surfing, comprising: the system for inverse reinforcement learning as set forth in claim 8, implemented in a computer connected to the Internet, wherein said subject is the user, and said state variables that define the behaviors of the subject include topics of articles selected by the user while browsing each webpage, and wherein the processor causes an interface through which the user is browsing Internet websites to display a recommended article for the user to read in accordance with the estimated cost and value functions.
In another aspect, the present invention provides a method for programming a robot to perform complex tasks, comprising: controlling a first robot to accomplish a task so as to record a sequence of states and actions; estimating the reward and value functions using the system for inverse reinforcement learning as set forth in claim 8 based on the recorded sequence of the states and actions; and providing the estimated reward and value functions to a forward reinforcement learning controller of a second robot to program the second robot with the estimated reward and value functions.
According to one or more aspects of the present invention, it becomes possible to perform inverse reinforcement learning effectively and efficiently. In some embodiments, there is no need to know the environmental dynamics in advance and there is no need to execute integration.
Additional or separate features and advantages of the invention will be set forth in the descriptions that follow and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed.
Fig. 1 shows normalized squared errors for the results of the swing-up inverted pendulum experiments to which embodiments of the present invention were applied, for each of the following density ratio estimation methods: (1) LSCDE-IRL, (2) uLSIF-IRL, (3) LogReg-IRL, (4) Gauss-IRL, (5) LSCDE-OptV, and (6) Gauss-OptV. As indicated in the drawing, (a)-(d) differ from each other in terms of sampling methods and other parameters.
Fig. 2 is a graph showing cross-validation errors in the swing-up inverted pendulum experiments for various density ratio estimation methods.
Fig. 3 shows an experimental setup for the pole balancing task for the long pole; left: the start position, middle: the goal position, and right: state variables.
Fig. 4 shows learning curves in the pole balancing task experiment with respect to various subjects according to an embodiment of the present invention; solid line: long pole, dotted line: short pole.
Fig. 5 shows estimated cost functions derived for the pole balancing task experiment according to the embodiment of the present invention for Subject Nos. 4, 5, and 7, projected to the defined subspace.
Fig. 6 shows negative log likelihood values for the test datasets in the pole balancing task experiment for Subject Nos. 4 and 7, evaluating the estimated cost functions.
Fig. 7 schematically shows a framework of inverse reinforcement learning according to Embodiment 1 of the present invention that can infer an objective function from observed state transitions generated by demonstrators.
Fig. 8 is a schematic block diagram showing an example of implementation of the inverse reinforcement learning of the present invention in imitation learning of robot behaviors.
Fig. 9 is a schematic block diagram showing an example of implementation of the inverse reinforcement learning of the present invention in interpreting human behaviors.
Fig. 10 schematically shows a series of clicking actions by a web-visitor, showing the visitor's preference in topic in web surfing.
Fig. 11 schematically shows an example of an inverse reinforcement learning system according to an embodiment of the present invention.
Fig. 12 schematically shows differences between Embodiment 1 and Embodiment 2 of the present invention.
Fig. 13 schematically explains the computational scheme of the second DRE for step (2) in Embodiment 2.
Fig. 14 shows the experimental results of the swing-up inverted pendulum problem comparing Embodiment 2 with Embodiment 1 and other methods.
Fig. 15 shows the experimental results of the robot navigation task using Embodiments 1 and 2 and RelEnt-IRL.
The present disclosure provides a novel inverse reinforcement learning method and system based on density ratio estimation under the framework of Linearly solvable Markov Decision Process (LMDP). In LMDP, the logarithm of the ratio between the controlled and uncontrolled state transition densities is represented by the state-dependent cost and value functions. Previously, the present inventors devised a novel inverse reinforcement learning method and system, as described in PCT International Application No. PCT/JP2015/004001, in which density ratio estimation methods are used to estimate the transition density ratio, and the least squares method with regularization is used to estimate the state-dependent cost and value functions that satisfy the relation. That method avoids computing integrals such as the partition function. The present disclosure includes the descriptions of the invention described in PCT/JP2015/004001 as Embodiment 1 below, and further describes a new embodiment, Embodiment 2, that has improved characteristics in several aspects compared with Embodiment 1. Depending on regional country laws, the subject matter described and/or claimed in PCT/JP2015/004001 may or may not be prior art against Embodiment 2. As described below, in Embodiment 1, a simple numerical simulation of a pendulum swing-up was performed, and its superiority over conventional methods was demonstrated. The present inventors further applied the method to human behaviors in performing a pole balancing task and show that the estimated cost functions can predict the performance of the subjects in new trials or environments in a satisfactory manner.
One aspect of the present invention is based on the framework of linearly solvable Markov decision processes like the OptV algorithm. In Embodiment 1, the present inventors have derived a novel Bellman equation given by:
ln [ π(y | x) / p(y | x) ] = -q(x) + V(x) - γV(y)
where q(x) and V(x) denote the cost and value function at state x, and γ represents a discount factor. p(y | x) and π(y | x) denote the state transition probabilities before and after learning, respectively. The density ratio, the left hand side of the above equation, is efficiently computed from observed behaviors by density ratio estimation methods. Once the density ratio is estimated, the cost and value function can be estimated by a regularized least-squares method. An important feature is that our method can avoid computing integrals, which are usually evaluated at high computational cost. The present inventors have applied this method to human behaviors in performing a pole balancing task and show that the estimated cost functions can predict the performance of the subjects in new trials or environments, verifying the broad applicability and effectiveness of this new computational technique in inverse reinforcement learning, which has well-recognized wide applicability in control systems, machine learning, operations research, information theory, etc.
<I. Embodiment 1>
<1. Linearly Solvable Markov Decision Process>
<1.1. Forward Reinforcement Learning>
The present disclosure provides a brief introduction of the Markov Decision Process and its simplification for a discrete-time continuous-space domain. Let X and U be the continuous state and continuous action spaces, respectively. At a time step t, a learning agent observes the current state xt∈X of the environment and executes action ut∈U sampled from a stochastic policy π(ut | xt). Consequently, an immediate cost c(xt, ut) is given from the environment and the environment makes a state transition according to a state transition probability PT(y | xt, ut) from xt to y∈X under the action ut. The goal of reinforcement learning is to construct an optimal policy π(u | x) which minimizes the given objective function. There exist several objective functions, and the most widely used one is a discounted sum of costs given by:
V(x) = E[ Σ_{t=0}^∞ γ^t c(xt, ut) | x0 = x ]    (1)
where γ∈ (0, 1) is called the discount factor. It is known that the optimal value function satisfies the following Bellman equation:
V(x) = min_u [ c(x, u) + γ ∫ PT(y | x, u) V(y) dy ]    (2)
Eq. (2) is a nonlinear equation due to the min operator.
Linearly solvable Markov Decision Process (LMDP) simplifies Eq. (2) under some assumptions (Todorov, 2007; 2009a, NPLs 23-24). The key trick of LMDP is to optimize the state transition probability directly instead of optimizing the policy. More specifically, two conditional probability density functions are introduced. One is the uncontrolled probability denoted by p(y | x) which can be regarded as an innate state transition. p(y | x) is arbitrary and it can be constructed by p(y | x)=∫PT(y | x, u)π0(u | x)du, where π0(u | x) is a random policy. The other is the controlled probability denoted by π(y | x) which can be interpreted as an optimal state transition. Then, the cost function is restricted to the following form:
c(x, u) = q(x) + KL( π(· | x) || p(· | x) )    (3)
where q(x) and
KL( π(· | x) || p(· | x) )
denote the state dependent cost function and Kullback Leibler divergence between the controlled and uncontrolled state transition densities, respectively. In this case, the Bellman equation (2) is simplified to the following equation:
exp( -V(x) ) = exp( -q(x) ) ∫ p(y | x) exp( -γV(y) ) dy    (4)
The optimal controlled probability is given by:
π(y | x) = p(y | x) exp( -γV(y) ) / ∫ p(y' | x) exp( -γV(y') ) dy'    (5)
It should be noted that Eq. (4) is still nonlinear even though the desirability function Z(x) = exp (-V(x)) is introduced because of the existence of the discount factor γ. In the forward reinforcement learning under the framework of LMDP, V(x) is computed by solving Eq. (4), then π(y | x) is computed (Todorov, 2009, NPL 25).
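For concreteness, the following is a minimal sketch of how Eq. (4) and Eq. (5) can be evaluated when the state space is discretized, so that the integral becomes a finite sum; the three-state cost vector and uncontrolled transition matrix are illustrative placeholders, not values used in the embodiments.

```python
import numpy as np

def solve_linearized_bellman(q, P, gamma, n_iter=1000, tol=1e-10):
    """Fixed-point iteration for the discounted linearized Bellman equation
    exp(-V(x)) = exp(-q(x)) * sum_y p(y|x) exp(-gamma * V(y))  (cf. Eq. (4))
    on a discretized state set.  q: (N,) state-dependent costs,
    P: (N, N) uncontrolled transition matrix with rows summing to one."""
    V = np.zeros_like(q, dtype=float)
    for _ in range(n_iter):
        V_new = q - np.log(P @ np.exp(-gamma * V))
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Optimal controlled transition, cf. Eq. (5): pi(y|x) is proportional to
    # p(y|x) * exp(-gamma * V(y)), normalized over y.
    Pi = P * np.exp(-gamma * V)[None, :]
    Pi /= Pi.sum(axis=1, keepdims=True)
    return V, Pi

# Illustrative three-state example (all numbers are placeholders).
q = np.array([1.0, 0.2, 0.0])
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
V, Pi = solve_linearized_bellman(q, P, gamma=0.95)
```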
<1.2. Inverse Reinforcement Learning>
The inverse reinforcement learning (IRL) algorithm under LMDP was proposed by Dvijotham and Todorov (2010) (NPL 6). In particular, OptV is quite efficient for discrete state problems. The advantage of OptV is that the optimal state transition is explicitly represented by the value function so that the maximum likelihood method can be applied to estimate the value function. Suppose that the observed trajectories are generated by the optimal state transition density (5). The value function is approximated by the following linear model:
V(x; wV) = wV^T ΨV(x)    (6)
where wV and ΨV(x) denote the learning weights and basis function vector, respectively.
Since the controlled probability is given by Eq. (5), the weight vector wV can be optimized by maximizing the likelihood. Suppose that we have a dataset of state transitions:
Dπ = { (xj, yj) }, j = 1, ..., Nπ    (7)
where Nπ denotes the number of data points from the controlled probability. Then, the log-likelihood and its derivative are given by:
L(wV) = Σj ln π(yj | xj; wV),
∂L/∂wV = γ Σj [ ∫ π(y | xj; wV) ΨV(y) dy - ΨV(yj) ]    (8)
where π(y | x;wV) is the controlled policy in which the value function is parameterized by Eq. (6). Once the gradient is evaluated, the weight vector wV is updated according to the gradient ascent method.
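As an illustration of this maximum-likelihood step, the sketch below assumes a discrete state set so that the expectation over next states appearing in the gradient is a finite sum; all names and constants are assumptions made for the example.

```python
import numpy as np

def optv_value_weights(transitions, P, Psi, gamma, lr=0.1, n_iter=2000):
    """Gradient ascent on the log-likelihood of observed transitions under the
    controlled probability of Eq. (5), for the linear model V(x) = w . Psi[x].

    transitions: list of (x, y) state-index pairs drawn from pi(y | x)
    P:           (N, N) uncontrolled transition matrix p(y | x)
    Psi:         (N, K) matrix whose row i contains the basis vector at state i
    """
    w = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        V = Psi @ w
        # Controlled probability pi(y | x; w) from Eq. (5).
        Pi = P * np.exp(-gamma * V)[None, :]
        Pi /= Pi.sum(axis=1, keepdims=True)
        grad = np.zeros_like(w)
        for x, y in transitions:
            # d/dw ln pi(y|x; w) = gamma * (E_pi[Psi(y')] - Psi(y))
            grad += gamma * (Pi[x] @ Psi - Psi[y])
        w += lr * grad / len(transitions)
    return w

# Illustrative usage with random placeholder inputs.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(5), size=5)           # 5-state uncontrolled dynamics
Psi = rng.standard_normal((5, 3))               # 3 basis functions
transitions = [(0, 1), (1, 2), (2, 2), (3, 4)]  # placeholder observed transitions
w_V = optv_value_weights(transitions, P, Psi, gamma=0.9)
```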
After the value function is estimated, the simplified Bellman equation (4) can be used to retrieve the cost function. It means that the cost function q(x) is uniquely determined when
Figure JPOXMLDOC01-appb-I000019
and γ are given, and q(x) is expressed by the basis functions used in the value function. While the representation of the cost function is not important in the case of imitation learning, we want to find a simpler representation of the cost for analysis. Therefore, the present inventors introduce an approximator:
q(x; wq) = wq^T ψq(x)    (9)
where wq and
ψq(x)
denote the learning weights and basis function vector, respectively. The objective function with L1 regularization to optimize wq is given by:
Figure JPOXMLDOC01-appb-I000022
where λq is a regularization constant. A simple gradient descent algorithm is adopted, and J(wq) is evaluated at the observed states.
The most significant problem of Dvijotham and Todorov (2010) (NPL 6) is the integral in Eqs. (8) and (10), which cannot be solved analytically; they discretized the state space and replaced the integral with a sum. However, as they suggested, this is infeasible in high-dimensional problems. In addition, the uncontrolled probability p(y | x) is not necessarily Gaussian. In at least some embodiments of the present invention, the Metropolis Hastings algorithm is applied to evaluate the gradient of the log-likelihood, in which the uncontrolled probability p(y | x) is used as a proposal density.
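A minimal sketch of such an independence Metropolis-Hastings step is shown below; because the target π(y | x; wV) of Eq. (5) is proportional to p(y | x) exp(-γV(y)) and the proposal is p(y | x) itself, the acceptance ratio depends only on the value function. The callables and constants are assumptions made for illustration.

```python
import numpy as np

def mh_expectation(x, V, psi, sample_p, gamma, n_samples=5000, burn_in=500, seed=0):
    """Approximate E_{pi(y|x)}[psi(y)] with an independence Metropolis-Hastings
    sampler whose proposal is the uncontrolled probability p(y | x).

    Since the unnormalized target is p(y|x) * exp(-gamma * V(y)) (Eq. (5)),
    the acceptance ratio reduces to exp(-gamma * (V(y_new) - V(y_old))).

    V:        callable, current value-function estimate V(y)
    psi:      callable, basis-function vector evaluated at y
    sample_p: callable, draws one sample y ~ p(. | x)
    """
    rng = np.random.default_rng(seed)
    y = sample_p(x)
    total, count = 0.0, 0
    for t in range(n_samples + burn_in):
        y_prop = sample_p(x)
        log_accept = -gamma * (V(y_prop) - V(y))
        if np.log(rng.uniform()) < log_accept:
            y = y_prop
        if t >= burn_in:
            total = total + psi(y)
            count += 1
    return total / count

# Illustrative usage with a Gaussian uncontrolled probability (a placeholder choice).
rng = np.random.default_rng(1)
sample_p = lambda x: x + 0.1 * rng.standard_normal(x.shape)
V = lambda y: 0.5 * np.sum(y ** 2)
psi = lambda y: np.array([np.exp(-np.sum((y - c) ** 2)) for c in (0.0, 1.0)])
estimate = mh_expectation(np.zeros(2), V, psi, sample_p, gamma=0.95)
```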
<2. Inverse Reinforcement Learning by Density Ratio Estimation>
<2.1. Bellman Equation for IRL>
From Equations (4) and (5), the present inventors have derived the following important relation for the discounted-cost problems:
q(x) + γV(y) - V(x) = -ln [ π(y | x) / p(y | x) ]    (11)
Eq. (11) plays an important role in the IRL algorithms according to embodiments of the present invention. Similar equations can be derived for first-exit, average cost, and finite horizon problems. It should be noted that the left hand side of Eq. (11) is not a temporal difference error because q(x) is the state-dependent part of the cost function shown in Eq. (3). Our IRL is still an ill-posed problem and the cost function is not uniquely determined although the form of the cost function is constrained by Eq. (3) under LMDP. More specifically, if the state-dependent cost function is modified by:
q'(x) = q(x) + C    (12)
the corresponding value function is changed to:
V'(x) = V(x) + C / (1 - γ)    (13)
where C is a constant value. Then, the controlled probability derived from V(x) is identical to that from V'(x). This property is useful when estimating the cost function as described below. In one aspect of the present invention, the disclosed IRL method consists of two parts. One is to estimate the density ratio of the right hand side of Eq. (11) described below. The other is to estimate q(x) and V(x) by the least squares method with regularization as shown below.
<2.2. Density Ratio Estimation for IRL>
Estimating the ratio of controlled and uncontrolled transition probability densities can be regarded as a problem of density ratio estimation (Sugiyama et al., 2012, NPL 20). According to the setting of the problem, the present disclosure considers the following formulation.
<2.2.1. General Case>
First, a general setting is considered. Suppose that we have two datasets of state transitions: One is Dπ shown in Eq. (7) and the other is a dataset from the uncontrolled probability:
Dp = { (xj, yj) }, j = 1, ..., Np
where Np denotes the number of data points. Then, we are interested in estimating the ratio π(y | x)/p(y | x) from Dp and Dπ.
From Eq. (11), we can consider the following two decompositions:
ln [ π(y | x) / p(y | x) ] = ln π(y | x) - ln p(y | x)    (14)
ln [ π(y | x) / p(y | x) ] = ln [ π(x, y) / p(x, y) ] - ln [ π(x) / p(x) ]    (15)
The first decomposition (14) shows the difference of logarithms of conditional probability densities. In order to estimate Eq. (14), the present disclosure considers two implementations. The first one is LSCDE-IRL which adopts Least Squares Conditional Density Estimation (LSCDE) (Sugiyama et al., 2010) to estimate π(y | x) and p(y | x). The other is Gauss-IRL which uses a Gaussian process (Rasmussen & Williams, 2006, NPL 15) to estimate the conditional densities in Eq. (14).
The second decomposition (15) shows the difference of logarithms of density ratios. The advantage of the second decomposition is that ln π(x)/p(x) can be neglected if π(x) = p(x). This condition may be satisfied according to the setup. Currently, two methods are implemented to estimate π(x)/p(x) and π(x, y)/p(x, y). One is uLSIF-IRL, using the unconstrained Least Squares Importance Fitting (uLSIF) (Kanamori et al., 2009, NPL 9). The other is LogReg, which estimates the density ratios by logistic regression. Section 2.3 below describes their implementation.
<2.2.2. When p(y | x) is Unknown>
The state transition probability PT(y | x, u) is assumed to be known in advance in the case of standard IRL problems, and this corresponds to the assumption that the uncontrolled probability p(y | x) is given in the case of LMDP. This can be regarded as a model-based IRL. In this case, Eq. (14) is appropriate and it is enough to estimate the controlled probability π(y | x) from the dataset Dπ.
In some situations, we have neither an analytical model nor a dataset from the uncontrolled probability density. Then, p(y | x) is replaced by a uniform distribution, which is an improper distribution for unbounded variables. Without loss of generality, p(y | x) is set to 1 since it can be compensated by shifting the cost and value function by Eqs. (12) and (13).
<2.3. Density Ratio Estimation Algorithms>
This section describes density ratio estimation algorithms appropriate for the IRL method disclosed in this disclosure.
<2.3.1. uLSIF>
uLSIF (Kanamori et al., 2009, NPL 9) is a least-squares method for direct density ratio estimation. The goal of uLSIF is to estimate the ratios of two pairs of densities, π(x)/p(x) and π(x, y)/p(x, y). Hereafter, the present disclosure explains how to estimate r(z) = π(z)/p(z) from Dp and Dπ, where z = (x, y) for simplicity. Let us approximate the ratio by the linear model:
r̂(z) = α^T φ(z)
where
φ(z) and α
denote the basis function vector and the parameters to be learned, respectively. The objective function is given by:
J(α) = (1/2) α^T H α - h^T α + (λ/2) α^T α    (16)
where λ is a regularization constant and
H = (1/Np) Σ φ(z) φ(z)^T (sum over the samples z in Dp),    h = (1/Nπ) Σ φ(z) (sum over the samples z in Dπ)
It should be noted that H is estimated from Dp while h is estimated from Dπ. Eq. (16) can be analytically minimized as
α̃ = (H + λI)^(-1) h
but this minimizer ignores the non-negativity constraint of the density ratio. To compensate for this problem, uLSIF modifies the solution by:
α̂ = max(0, α̃)    (17)
where the max operator above is applied in the element-wise manner. As recommended by Kanamori et al. (2009) (NPL 9), a Gaussian function centered at the states of Dπ is used as a basis function described by:
φi(z) = exp( -|| z - zi ||^2 / (2σ^2) )    (18)
where σ is a width parameter.
zi
is the state which is randomly selected from Dπ. The parameters λ and σ are selected by leave-one-out cross-validation.
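A minimal numerical sketch of uLSIF with the Gaussian basis of Eq. (18) follows; the kernel centers, width, regularization constant, and sample data are placeholders, and the leave-one-out cross-validation step is omitted for brevity.

```python
import numpy as np

def gaussian_basis(Z, centers, sigma):
    """phi_i(z) = exp(-||z - c_i||^2 / (2 sigma^2)); returns (n_samples, n_centers)."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ulsif(Z_p, Z_pi, centers, sigma, lam):
    """Estimate r(z) = pi(z)/p(z) by unconstrained least-squares importance
    fitting: H from the samples of p, h from the samples of pi, solve
    (H + lam I) alpha = h, then clip negative weights element-wise."""
    Phi_p = gaussian_basis(Z_p, centers, sigma)
    Phi_pi = gaussian_basis(Z_pi, centers, sigma)
    H = Phi_p.T @ Phi_p / Z_p.shape[0]
    h = Phi_pi.mean(axis=0)
    alpha = np.linalg.solve(H + lam * np.eye(centers.shape[0]), h)
    alpha = np.maximum(alpha, 0.0)   # non-negativity of the density ratio
    return lambda Z: gaussian_basis(Z, centers, sigma) @ alpha

# Illustrative usage (all numbers are placeholders).
rng = np.random.default_rng(0)
Z_p = rng.normal(0.0, 1.0, size=(300, 2))                 # samples from p(z)
Z_pi = rng.normal(0.5, 1.0, size=(300, 2))                # samples from pi(z)
centers = Z_pi[rng.choice(300, size=50, replace=False)]   # centers drawn from D_pi
ratio = ulsif(Z_p, Z_pi, centers, sigma=1.0, lam=0.1)
r_at_pi_samples = ratio(Z_pi)
```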
<2.3.2. LSCDE>
LSCDE (Sugiyama et al., 2010, NPL 19) is regarded as a special case of uLSIF to estimate a conditional probability density function. For example, the objective function to estimate π(y | x)=π(x, y)/π(x) from Dπ is given by:
Figure JPOXMLDOC01-appb-I000036
where
Figure JPOXMLDOC01-appb-I000037
is a linear model and λ is a regularization constant. H and h in LSCDE are computed slightly differently from those in uLSIF, as follows:
Figure JPOXMLDOC01-appb-I000038
where
Φ(x)
is defined as:
Φ(x) = ∫ φ(x, y) φ(x, y)^T dy
Since the basis function shown in Eq. (18) is used, this integral can be computed analytically. The estimated weight of LSCDE is given by Eq. (17). In order to assure that the estimated ratio is a conditional density, the solution should be normalized when it is used to estimate the cost and value function.
<2.3.3. LogReg>
LogReg is a method of density ratio estimation using logistic regression. Let us assign a selector variable η = -1 to samples from the uncontrolled probability and η = 1 to samples from the controlled probability:
p(z) = Pr(z | η = -1),    π(z) = Pr(z | η = 1)
The density ratio can be represented by applying the Bayes rule as follows:
π(z) / p(z) = [ Pr(η = -1) / Pr(η = 1) ] · [ Pr(η = 1 | z) / Pr(η = -1 | z) ]
The first ratio Pr(η = -1)/Pr(η = 1) is estimated by Np/Nπ, and the second ratio is computed after estimating the conditional probability π(η | z) by a logistic regression classifier:
π(η | z) = 1 / ( 1 + exp( -η w^T φ(z) ) )
where η can be regarded as a label. It should be noted that a logarithm of the density ratio is given by a linear model in the case of LogReg:
ln [ π(z) / p(z) ] = w^T φ(z) + ln( Np / Nπ )
The second term ln(Np/Nπ) can be ignored in our IRL formulation shown in Eq. (15).
The objective function is derived from the negative regularized log-likelihood expressed by:
J(w) = Σj ln( 1 + exp( -ηj w^T φ(zj) ) ) + λ w^T w, where the sum runs over the samples of both Dp and Dπ
A closed-form solution cannot be derived, but the objective can be minimized efficiently by standard nonlinear optimization methods since it is convex.
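As a concrete sketch of this step, an off-the-shelf logistic-regression classifier can be trained to separate samples of the controlled probability (label +1) from samples of the uncontrolled probability (label -1); its log-odds then estimate ln π(z)/p(z) up to the constant ln(Np/Nπ). The use of scikit-learn, the raw linear features, and the placeholder data are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logreg_log_ratio(Z_p, Z_pi, C=1.0):
    """Estimate ln(pi(z)/p(z)) by logistic regression.

    Samples from the uncontrolled density get label -1, samples from the
    controlled density get label +1.  By Bayes' rule,
        ln pi(z)/p(z) = ln Pr(eta=+1|z)/Pr(eta=-1|z) + ln(Np/Npi),
    and the first term is the classifier's decision function.  The constant
    ln(Np/Npi) can be dropped in the IRL formulation of Eq. (15).
    """
    Z = np.vstack([Z_p, Z_pi])
    eta = np.concatenate([-np.ones(len(Z_p)), np.ones(len(Z_pi))])
    clf = LogisticRegression(C=C, max_iter=1000).fit(Z, eta)
    offset = np.log(len(Z_p) / len(Z_pi))
    return lambda Z_new: clf.decision_function(Z_new) + offset

# Illustrative usage (placeholder data).
rng = np.random.default_rng(0)
Z_p = rng.normal(0.0, 1.0, size=(300, 2))
Z_pi = rng.normal(0.5, 1.0, size=(300, 2))
log_ratio = logreg_log_ratio(Z_p, Z_pi)
```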
<2.4. Estimating the Cost and Value Functions>
Once the density ratio π(y | x)/p(y | x) is estimated, the least squares method with regularization is applied to estimate the state-dependent cost function q(x) and value function V(x). Suppose that
ĝ(x, y)
is an approximation of a negative log ratio;
ĝ(x, y) ≈ -ln [ π(y | x) / p(y | x) ]
and consider linear approximators of q(x) and V(x) as defined in Eqs. (6) and (9), respectively. The objective function is given by:
J(wq, wV) = Σj [ ĝ(xj, yj) - q(xj; wq) - γV(yj; wV) + V(xj; wV) ]^2 + λq ||wq||_1 + λV ||wV||_2^2
where λq and λV are regularization constants. L2 regularization is used for wV because it is an effective means of achieving numerical stability, while L1 regularization is used for wq to yield sparse models that are more easily interpreted by the experimenters. It is possible to use L2 regularization for wq if sparseness is not important. In addition, non-negativity constraints on wq and wV are not introduced because Eq. (12) can be used by setting
Figure JPOXMLDOC01-appb-I000049
to satisfy the non-negativity of the cost function efficiently.
Theoretically, we can choose arbitrary basis functions. In one embodiment of the present invention, a Gaussian function shown in Eq. (18) is used for simplicity:
exp( -|| x - ci ||^2 / (2σ^2) )
where σ is a width parameter. The center position
ci
is randomly selected from Dπ.
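A minimal sketch of this estimation step is given below; it assumes that the negative log ratio ĝ has already been evaluated at the observed transitions, uses Gaussian basis functions of the form of Eq. (18) for both q(x) and V(x), and minimizes the regularized squared error by proximal gradient (soft-thresholding) updates. All names and constants are placeholders.

```python
import numpy as np

def gaussian_basis(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def estimate_cost_and_value(X, Y, g_hat, centers_q, centers_V, sigma, gamma,
                            lam_q=1e-2, lam_V=1e-2, n_iter=5000):
    """Fit q(x) = w_q . phi_q(x) and V(x) = w_V . psi_V(x) so that
    q(x) + gamma*V(y) - V(x) matches the estimated negative log ratio g_hat,
    with an L1 penalty on w_q (soft-thresholding) and an L2 penalty on w_V.
    X, Y: observed state transitions; g_hat: negative log ratio at (X, Y)."""
    Phi_q = gaussian_basis(X, centers_q, sigma)
    A = gamma * gaussian_basis(Y, centers_V, sigma) - gaussian_basis(X, centers_V, sigma)
    # Step size from the Lipschitz constant of the smooth part of the objective.
    lr = 1.0 / (2.0 * np.linalg.norm(np.hstack([Phi_q, A]), 2) ** 2 + 2.0 * lam_V)
    w_q = np.zeros(centers_q.shape[0])
    w_V = np.zeros(centers_V.shape[0])
    for _ in range(n_iter):
        resid = g_hat - (Phi_q @ w_q + A @ w_V)
        w_V -= lr * (-2.0 * A.T @ resid + 2.0 * lam_V * w_V)
        w_q -= lr * (-2.0 * Phi_q.T @ resid)
        # proximal (soft-thresholding) step for the L1 penalty on w_q
        w_q = np.sign(w_q) * np.maximum(np.abs(w_q) - lr * lam_q, 0.0)
    return w_q, w_V

# Illustrative usage with placeholder data and centers drawn from the observed states.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
Y = X + 0.1 * rng.normal(size=(300, 2))
g_hat = rng.normal(size=300)                        # stands in for -ln(pi/p)
centers = X[rng.choice(300, size=30, replace=False)]
w_q, w_V = estimate_cost_and_value(X, Y, g_hat, centers, centers, sigma=1.0, gamma=0.95)
```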
<3. Experiments>
<3.1. Swing-up Inverted Pendulum>
<3.1.1. Task Description>
To demonstrate and confirm the effectiveness of the above-described embodiments belonging to Embodiment 1 of the present invention, the present inventors have studied a swing-up inverted pendulum problem in which the state vector is given by a two dimensional vector x=[θ, ω]T, where θ and ω denote the angle and the angular velocity of the pole, respectively. The equation of motion is given by the following stochastic differential equation:
Figure JPOXMLDOC01-appb-I000052
where l, m, g, κ, σe, and ω denote the length of the pole, mass, gravitational acceleration, coefficient of friction, scaling parameter for the noise, and Brownian noise, respectively. As opposed to the previous studies (Deisenroth et al., 2009, NPL 4; Doya, 2000, NPL 5), the applied torque u is not restricted, and it is possible to swing up directly. By discretizing the time axis with step h, the corresponding state transition probability PT(y | x, u), which is represented by a Gaussian distribution, is obtained. In this simulation, the parameters are given as follows:
Figure JPOXMLDOC01-appb-I000053
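For orientation only, a sketch of an Euler-Maruyama discretization of a pendulum of this general form is shown below; it is not the simulator used in the embodiments, and every numerical constant in it is an illustrative placeholder rather than one of the parameter values given above.

```python
import numpy as np

def pendulum_step(x, u, h=0.01, l=1.0, m=1.0, g=9.8, kappa=0.05, sigma_e=0.5, rng=None):
    """One Euler-Maruyama step for a swing-up pendulum of the general form
    m*l^2*d(omega) = (-kappa*omega + m*g*l*sin(theta) + u) dt + sigma_e dW.
    All parameter values are placeholders.  The mean and variance of the
    returned next state define the Gaussian transition P_T(y | x, u)."""
    rng = rng or np.random.default_rng()
    theta, omega = x
    drift = (-kappa * omega + m * g * l * np.sin(theta) + u) / (m * l ** 2)
    noise = sigma_e * np.sqrt(h) * rng.standard_normal() / (m * l ** 2)
    return np.array([theta + h * omega, omega + h * drift + noise])

# Rolling out a trajectory under a random (uncontrolled) policy, as used to build D_p.
rng = np.random.default_rng(0)
x = np.array([np.pi, 0.0])            # pole hanging down
for _ in range(100):
    u = rng.normal(0.0, 1.0)          # placeholder random policy pi_0(u | x)
    x = pendulum_step(x, u, rng=rng)
```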
The present inventors have conducted a series of experiments by changing (1) the state dependent cost function q(x), (2) the uncontrolled probability p(y | x), and (3) the datasets Dp and Dπ as follows.
<Cost function>
The goal is to keep the pole upright and the following three cost functions are prepared:
Figure JPOXMLDOC01-appb-I000054
where Q = diag[1, 0.2]. qcost(x) is used by Doya (2000) (NPL 5), while qexp(x) is used by Deisenroth et al. (2009) (NPL 4).
<Uncontrolled probability>
Two densities pG(y | x) and pM(y | x) are considered. pG(y | x) is constructed by using a stochastic policy π(u | x) represented by a Gaussian distribution. Since the equation of motion in discrete time is given by the Gaussian, pG(y | x) is also Gaussian. In the case of pM(y | x), a mixture of Gaussian distributions is used as a stochastic policy.
<Preparation of the datasets>
Two sampling methods are considered. One is the uniform sampling and the other is the trajectory-based sampling. In the uniform sampling method, x is sampled from a uniform distribution defined over the entire state space. In other words, p(x) and π(x) are regarded as a uniform distribution. Then, y is sampled from the uncontrolled and the controlled probability to construct Dp and Dπ, respectively. In the trajectory-based sampling method, p(y | x) and π(y | x) are used to generate trajectories of states from the same start state x0. Then, a pair of state transitions are randomly selected from the trajectories to construct Dp and Dπ. It is expected that p(x) is different from π(x).
For each cost function, the corresponding value function is calculated by solving Eq. (4), and the corresponding optimal controlled probability is evaluated by Eq. (5). In the previous method (Todorov, 2009b, NPL 25), exp(-V(x)) is represented by a linear model, but this is difficult under the objective function (1) because the discount factor γ makes Eq. (4) nonlinear in exp(-V(x)). Therefore, the value function is approximated by the linear model shown in Eq. (6), and the Metropolis Hastings algorithm is used to evaluate the integral.
The methods according to the embodiments of the present invention in Embodiment 1 can be compared with OptV because the assumptions of OptV are identical to those of our methods. According to the choice of the density ratio estimation method, there exist several variants as described above. More specifically, the following six algorithms are considered: (1) LSCDE-IRL, (2) uLSIF-IRL, (3) LogReg-IRL, (4) Gauss-IRL, (5) LSCDE-OptV, which is the OptV method where p(y | x) is estimated by LSCDE, and (6) Gauss-OptV, where the Gaussian process method is used to estimate p(y | x).
We set the number of samples of Dp and Dπ at Np=Nπ=300. The parameters
λq, λV, σ, and γ are optimized by cross-validation from the following regions: logλq, logλV∈linspace(-3,1,9), log σ∈linspace(-1.5,1.5,9), and log γ∈linspace(-0.2,0,9), where linspace(xmin,xmax,n) generates a set of n points which is equally spaced between xmin and xmax.
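The hyperparameter search described above can be organized as in the following sketch; base-10 logarithms and the dummy scoring function are assumptions made for illustration.

```python
import numpy as np
from itertools import product

def cv_error(params, D_p, D_pi):
    """Placeholder for the cross-validation error of the IRL fit under the
    given hyperparameters; the actual scoring is problem-specific."""
    return np.random.uniform()

# Log-spaced candidate grids as described above (base-10 logarithms assumed here).
log_lam = np.linspace(-3.0, 1.0, 9)      # candidates for log lambda_q and log lambda_V
log_sigma = np.linspace(-1.5, 1.5, 9)    # candidates for log sigma
log_gamma = np.linspace(-0.2, 0.0, 9)    # candidates for log gamma

D_p, D_pi = None, None                   # placeholders for the two datasets
best_score, best_params = np.inf, None
for lq, lv, ls, lg in product(log_lam, log_lam, log_sigma, log_gamma):
    params = dict(lam_q=10**lq, lam_V=10**lv, sigma=10**ls, gamma=10**lg)
    score = cv_error(params, D_p, D_pi)
    if score < best_score:
        best_score, best_params = score, params
```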
<3.1.2. Experimental Results>
The accuracy of the estimated cost functions is measured by the normalized squared error for the test samples:
Figure JPOXMLDOC01-appb-I000055
where q(xj) is one of the true cost functions shown in Eq. (19) evaluated at state xj, while
Figure JPOXMLDOC01-appb-I000056
is the estimated cost function, respectively. Fig. 1(a)-(d) compare the accuracy of the IRL methods of the present embodiments; it is shown that our methods (1)-(4) performed better than the OptV methods (5)-(6) in all settings. More specifically, LogReg-IRL showed the best performance, but there were no significant differences among our methods (1)-(3). The error of the cost estimated by Gauss-IRL increased significantly if the stochastic policy π(u | x) was given by the mixture of Gaussians, because the standard Gaussian process cannot represent a mixture of Gaussians.
Fig. 2 shows the cross-validation error with respect to the discount factor γ, where the other parameters such as λq, λV, and σ are set to the optimal values. In this simulation, the cross-validation error was minimum at the true discount factor
Figure JPOXMLDOC01-appb-I000057
in all the methods. As shown in Fig. 2, and also as explained for Fig. 1 above, the embodiments of the present invention have been proven to have sufficiently small errors, confirming the effectiveness of the present invention.
<3.2. Human Behavior Analysis>
<3.2.1. Task Description>
In order to evaluate our IRL algorithm in a realistic situation, the present inventors have conducted a dynamic motor control experiment, a pole-balancing task. Fig. 3 shows the experimental setup. A subject can move the base left, right, up, and down to swing the pole several times and decelerate the pole to balance it at the upright position. The dynamics is described by the six-dimensional state vector
Figure JPOXMLDOC01-appb-I000058
where θ and dθ/dt are the angle and angular velocity of the pole, x and y are the horizontal and vertical positions of the base, and dx/dt and dy/dt are their time derivatives, respectively.
The task was performed under two conditions: long pole (73 cm) and short pole (29 cm). Each subject had 15 trials to balance the pole in each condition. Each trial ended when the subject could keep the pole upright for 3 seconds or 40 seconds elapsed. We collected the data from 7 subjects (5 right-handed and 2 left-handed) and the trajectory-based sampling method was used to construct the following two datasets of controlled probability:
Figure JPOXMLDOC01-appb-I000062
for training and
Figure JPOXMLDOC01-appb-I000063
for testing of the i-th subject. It is assumed that all subjects share a common uncontrolled probability p(y | x), which was generated by a random policy. This means that the datasets
Figure JPOXMLDOC01-appb-I000064
for training and
Figure JPOXMLDOC01-appb-I000065
for testing are shared among subjects. The number of samples in the datasets was 300.
<3.2.2. Experimental Results>
Fig. 4 shows the learning curves of the seven subjects, which indicate that the learning processes were quite different among subjects. Two subjects, Nos. 1 and 3, could not accomplish the task. Since a set of successful trajectories should be used by the IRL algorithms, we used the data from the five subjects Nos. 2 and 4-7.
The experimental results in the case of using LogReg-IRL will be described below (LSCDE-IRL and uLSIF-IRL showed similar results). Fig. 5 shows the estimated cost function of the subjects 4, 5, and 7 projected to the subspace
Figure JPOXMLDOC01-appb-I000066
while
Figure JPOXMLDOC01-appb-I000067
are set to zero for visualization. In the case of subject 7, the cost function for the long pole condition was not so different from that for the short pole condition, while there was a significant difference between the two for subject 5, who did not perform well in the short pole condition, as shown in Fig. 4.
In order to evaluate the cost functions estimated from the training data sets, the present inventors applied the forward reinforcement learning to find the optimal controlled transition probability for the estimated cost function and then computed the negative log-likelihood for the test datasets:
Figure JPOXMLDOC01-appb-I000068
where
Figure JPOXMLDOC01-appb-I000069
is the number of samples in
Figure JPOXMLDOC01-appb-I000070
Fig. 6 shows the results. In the left figure (a), we used the test dataset of the subject
Figure JPOXMLDOC01-appb-I000071
in the long pole condition. The minimum negative log-likelihood was achieved by the cost function estimated from the training datasets
Figure JPOXMLDOC01-appb-I000072
and
Figure JPOXMLDOC01-appb-I000073
of the same condition. The right panel (b) of Fig. 6 shows that the test data of subject 7 in both the long and short pole conditions were best predicted by the cost function estimated from the long pole training dataset of the same subject 7. Thus, the effectiveness and usefulness of the embodiments of the present invention have been confirmed and demonstrated by this experiment as well.
The present disclosure has presented a novel inverse reinforcement learning method under the framework of LMDP. One of the features of the present invention is to show Eq. (11), which means that the temporal difference error is zero for the optimal value function with the corresponding cost function. Since the right hand side of Eq. (11) can be estimated from samples by efficient methods of density ratio estimation, the IRL of the present invention results in a simple least-squares method with regularization. In addition, the method according to the embodiments of the present invention in Embodiment 1 does not need to compute the integral, which is usually intractable in high-dimensional continuous problems. As a result, the disclosed method is computationally less expensive than OptV.
LMDP and path integral methods have recently been receiving attention in the fields of robotics and machine learning (Theodorou & Todorov, 2012, NPL 22) because there exist a number of interesting properties in the linearized Bellman equation (Todorov, 2009a, NPL 24). They have been successfully applied to learning of stochastic policies for robots with many degrees of freedom (Kinjo et al., 2013, NPL 11; Stulp & Sigaud, 2012, NPL 17; Sugimoto and Morimoto, 2011, NPL 18; Theodorou et al., 2010, NPL 21). The IRL methods according to the embodiments of the present invention may be integrated with the existing forward reinforcement learning methods to design complicated controllers.
As described above, in at least some aspects of Embodiment 1 of the present invention, the present disclosure provides a computational algorithm that can infer the reward/cost function from observed behaviors effectively. The algorithm of the embodiments of the present invention can be implemented in general-purpose computer systems with appropriate hardware and software as well as specifically designed proprietary hardware/software. Various advantages according to at least some embodiments of the present invention include:
A) Model-free method/system: the method and system according to the embodiments of the present invention do not need to know the environmental dynamics in advance; i.e., the method/system is regarded as a model-free method--it is not necessary to model the target dynamics explicitly although some prior art approaches assume that the environmental dynamics is known in advance.
B) Data efficient: the dataset for the method and system according to the embodiments of the present invention consists of a set of state transitions, while many previous methods require a set of trajectories of states. Thus, with the methods and system according to the embodiments of the present invention, it is easier to collect the data.
C) Computationally efficient (1): the method and system according to the embodiments of the present invention do not need to solve a (forward) reinforcement learning problem. In contrast, some previous methods require solving such a forward reinforcement learning problem many times with the estimated reward/cost function. That computation must be performed for each candidate, and it usually takes a long time to find the optimal solution.
D) Computationally efficient (2): the method and system according to the embodiments of the present invention use two optimization algorithms: (a) density ratio estimation and (b) regularized least squares. In contrast, some previous methods use a stochastic gradient method or a Markov chain Monte Carlo method, which usually take more time to optimize than least-squares methods.
As described above, in one aspect, the present invention provides inverse reinforcement learning that can infer the objective function from observed state transitions generated by demonstrators. Fig. 7 schematically shows a framework of the method according to Embodiment 1 of the present invention. An embodiment of the inverse reinforcement learning according to Embodiment 1 of the present invention includes two components: (1) learning the ratio of state transition probabilities with and without control by density ratio estimation and (2) estimation of the cost and value functions that are compatible with the ratio of transition probabilities by a regularized least squares method. By the use of efficient algorithms for each step, the embodiments of the present invention are more efficient in data and computation than other inverse reinforcement learning methods.
The industrial applicability and usefulness of inverse reinforcement learning have been well understood and recognized. Examples of the systems/configurations to which the embodiments of the present invention can be applied are described below.
<Imitation learning of robot behaviors>
Programming robots to perform complex tasks is difficult with standard methods such as motion planning. In many situations, it is much easier to demonstrate the desired behaviors to the robot. However, a major drawback of classical imitation learning is that the obtained controller cannot cope with new situations because it just reproduces the demonstrated movements. Embodiments of the present invention can estimate the objective function from the demonstrated behaviors, and the estimated objective function can then be used for learning different behaviors for different situations.
Fig. 8 schematically shows such an implementation of the present invention. First, the demonstrator controls a robot to accomplish a task and the sequence of states and actions is recorded. Then an inverse reinforcement learning component according to an embodiment of the present invention estimates the cost and value functions, which are then given to forward reinforcement learning controllers for different robots.
<Interpretation of human behaviors>
Understanding of the human intentions behind behaviors is a basic issue in building a user-friendly support system. In general, a behavior is represented by a sequence of states, which are extracted by the motion tracking system. The cost function estimated by the inverse reinforcement learning method/system according to an embodiment of the present invention can be regarded as a compact representation to explain the given behavioral dataset. Through pattern classification of the estimated cost functions, it becomes possible to estimate the user’s expertise or preference. Fig. 9 schematically shows this implementation according to an embodiment of the present invention.
<Analysis of the web experience>
In order to increase the likelihood for visitors to read articles that are presented to the visitors, the designers of online news websites, for example, should investigate the web experiences of visitors from a viewpoint of decision making. In particular, recommendation systems are receiving attention as an important business application for personalized services. However, previous methods such as collaborative filtering do not consider the sequences of decision making explicitly. Embodiments of the present invention can provide a different and effective way to model the behaviors of visitors during net surfing. Fig. 10 shows an example of a series of clicking actions by a user, indicating what topics were accessed in what order by the user. The topic that the visitor is reading is regarded as the state and clicking the link is considered as the action. Then, inverse reinforcement learning according to an embodiment of the present invention can analyze the decision-making in the user’s net surfing. Since the estimated cost function represents the preference of the visitor, it becomes possible to recommend a list of articles for the user.
As described above, the inverse reinforcement learning schemes according to embodiments in Embodiment 1 of the present invention are applicable to a wide variety of industrial and/or commercial systems. Fig. 11 shows an example of the implementation using a general computer system and a sensor system. The methods explained above with mathematical equations can be implemented in such a general computer system, for example. As shown in the figure, the system of this example includes a sensor system 111 (an example of a data acquisition unit) to receive information about state transitions--i.e., observed behavior--from the object being observed. The sensor system 111 may include one or more of an image capturing device with image processing software/hardware, displacement sensors, velocity sensors, acceleration sensors, microphone, keyboards, and any other input devices. The sensor system 111 is connected to a computer 112 having a processor 113 with an appropriate memory 114 so that the received data can be analyzed according to embodiments of the present invention. The result of the analysis is outputted to any output system 115, such as a display monitor, controllers, drivers, etc. (examples of an output interface), or, an object to be controlled in the case of utilizing the results for control. The result can be used to program or transferred to another system, such as another robot or computer, or website software that responds to user’s interaction, as described above.
In the case of predicting the user’s web article preference described above, the implemented system may include a system for inverse reinforcement learning as described in any one of the embodiments above, implemented in a computer connected to the Internet. Here, the state variables that define the behaviors of the user include topics of articles selected by the user while browsing each webpage. Then, the result of the inverse reinforcement learning is used to cause an interface through which the user is browsing Internet websites, such as portable smartphone, personal computer, etc., to display a recommended article for the user.
<II. Embodiment 2>
Embodiment 2, which has characteristics superior to Embodiment 1 in some aspects, will be described below. Fig. 12 schematically shows differences between Embodiment 1 and Embodiment 2. As described above and shown in (a) in Fig. 12, Embodiment 1 used the density ratio estimation algorithm twice and the regularized least squares method. In contrast, in Embodiment 2 of the present invention, a logarithm of the density ratio π(x)/b(x) is estimated using a standard density ratio estimation (DRE) algorithm, and r(x) and V(x), which are a reward function and a value function, respectively, are computed through the estimation of a logarithm of the density ratio π(x, y)/b(x, y) with the Bellman equation. In more detail, Embodiment 1 needed the following three steps: (1) estimate π(x)/b(x) by a standard DRE algorithm; (2) estimate π(x, y)/b(x, y) by a standard DRE algorithm; and (3) compute r(x) and V(x) by a regularized least squares method with the Bellman equation. In contrast, Embodiment 2 uses only a two-step optimization: (1) estimate ln π(x)/b(x) by a standard density ratio estimation (DRE) algorithm, and (2) compute r(x) and V(x) through a DRE (second time) of ln π(x, y)/b(x, y) with the Bellman equation.
Fig. 13 schematically explains the computational scheme of the second DRE for step (2) in Embodiment 2. As shown in Fig. 13, the second DRE of ln π(x, y)/b(x, y) leads to an estimation of r(x) + γV(y) - V(x) using the following equations, because the first DRE estimates ln π(x)/b(x).
ln [ π(x, y) / b(x, y) ] = ln [ π(x) / b(x) ] + ln [ π(y | x) / b(y | x) ]
ln [ π(y | x) / b(y | x) ] = r(x) + γV(y) - V(x)
These equations are, in essence, the same as Equations (11) and (15) described above. Thus, in Embodiment 2, there is no need to compute the third step (3) of Embodiment 1 by a regularized least squares method, and the computational cost can be significantly reduced as compared with Embodiment 1. In Embodiment 2, in order to execute the second step (2) of computing r(x) and V(x) through a DRE (second time) of ln π(x, y)/b(x, y) with the Bellman equation, the basis functions are designed in the state space, which reduces the number of parameters to be optimized. In contrast, in Embodiment 1, in step (2) of estimating π(x, y)/b(x, y) by a standard DRE algorithm, the basis functions need to be designed in the product of the state spaces, which requires a relatively large number of parameters to be optimized. Embodiment 2 therefore requires relatively low memory usage as compared with Embodiment 1 and has these various significant advantages over Embodiment 1. Other features and setups of Embodiment 2 are the same as the various methodologies and schemes described above for Embodiment 1 unless otherwise specifically explained below.
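A minimal sketch of this two-step scheme, using logistic regression for both density-ratio estimations, is given below; the second-step classifier's logit is parameterized directly as r(x) + γV(y) - V(x) plus the fixed output of the first step, so that fitting it yields the reward and value weights without a separate least-squares stage. The basis functions, optimizer, and constants are illustrative assumptions, not the exact implementation of the embodiments.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

def gaussian_basis(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def direct_irl(Xb, Yb, Xpi, Ypi, centers_r, centers_V, sigma, gamma, lam=1e-3):
    """Two-step density-ratio IRL (a sketch).

    Step (1): estimate ln pi(x)/b(x) by logistic regression on states only.
    Step (2): discriminate controlled transitions (+1) from uncontrolled ones (-1)
    with the logit parameterized as r(x) + gamma*V(y) - V(x) + [step-(1) estimate],
    so fitting the classifier directly yields the reward and value weights."""
    # --- Step (1): state-marginal log ratio ln pi(x)/b(x) --------------------
    S = np.vstack([Xb, Xpi])
    lab = np.concatenate([-np.ones(len(Xb)), np.ones(len(Xpi))])
    clf = LogisticRegression(C=1.0, max_iter=1000).fit(S, lab)
    log_ratio_x = lambda X: clf.decision_function(X) + np.log(len(Xb) / len(Xpi))

    # --- Step (2): fit w_r and w_V by the logistic loss over transitions -----
    X, Y = np.vstack([Xb, Xpi]), np.vstack([Yb, Ypi])
    t = np.concatenate([-np.ones(len(Xb)), np.ones(len(Xpi))])
    Phi_r = gaussian_basis(X, centers_r, sigma)
    A = gamma * gaussian_basis(Y, centers_V, sigma) - gaussian_basis(X, centers_V, sigma)
    offset = log_ratio_x(X)
    Kr, Kv = centers_r.shape[0], centers_V.shape[0]

    def loss(w):
        w_r, w_V, bias = w[:Kr], w[Kr:Kr + Kv], w[-1]
        # bias absorbs the ln(Nb/Npi) constant and the constant-shift ambiguity of r
        logit = Phi_r @ w_r + A @ w_V + offset + bias
        return np.mean(np.logaddexp(0.0, -t * logit)) + lam * np.sum(w ** 2)

    res = minimize(loss, np.zeros(Kr + Kv + 1), method="L-BFGS-B")
    w_r, w_V = res.x[:Kr], res.x[Kr:Kr + Kv]
    reward = lambda X: gaussian_basis(X, centers_r, sigma) @ w_r
    value = lambda X: gaussian_basis(X, centers_V, sigma) @ w_V
    return reward, value
```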
Table 1 below shows a general comparison of Embodiment 2 versus various conventional methods. Specifically, various features are compared for Embodiment 2 with respect to the above-described OptV, maximum entropy IRL (MaxEnt-IRL), and relative entropy IRL (RelEnt-IRL). As shown in Table 1, Embodiment 2 of the present invention has various advantages over the conventional methods.
Figure JPOXMLDOC01-appb-T000075
To demonstrate and confirm the effectiveness of Embodiment 2 of the present invention, the above-described swing-up inverted pendulum problem was studied. Fig. 14 shows the results of the experiment comparing Embodiment 2 with Embodiment 1, MaxEnt-IRL, RelEnt-IRL, and OptV. Embodiment 2 is indicated as “New Invention” and Embodiment 1 is indicated as “PCT/JP2015/004001” in the figure. As shown in Fig. 14, Embodiment 2 successfully recovered the observed policies better than the other methods, including Embodiment 1, even though the number of samples is small.
<Robot Navigation Task Experiment>
To further demonstrate and confirm the effectiveness of Embodiment 2 of the present invention, a robot navigation task was studied for Embodiment 2, Embodiment 1, and RelEnt-IRL. Three target objects of red (r), green (g), and blue (b) were placed in front of a programmable robot with camera eyes. The goal was to reach the green (g) target among the three targets. Five predetermined starting positions A-E were lined up in front of the three targets. Training data were collected from the starting positions A-C and E, and test data were taken using the starting position D. The state vector was as follows: x=[θr, Nr, θg, Ng, θb, Nb, θpan, θtilt]T, where θi (i=r, g, b) is the angle to the target, Ni (i=r, g, b) is the blob size, and θpan and θtilt are the angles of the camera of the robot. The basis function for V(x) was given as follows:
Figure JPOXMLDOC01-appb-I000076
where ci is the center position selected from the dataset. The basis function for r(x) is given as:
Figure JPOXMLDOC01-appb-I000077
where fg is a Gaussian function and fs is a sigmoid function. In this experiment, π and b were given by the experimenters, and for every starting point, 10 trajectories were collected to create the datasets. Fig. 15 shows the results of the experiment. In the figure, Embodiment 2 is indicated as “New Invention,” and Embodiment 1 is indicated as “PCT/JP2015/004001.” The results are compared with the result of RelEnt-IRL, described above. As shown in Fig. 15, Embodiment 2 yielded a significantly better result. This also indicates that the estimated value function according to Embodiment 2 may be used as a potential function for shaping rewards.
Computing times (in minutes) in the inverted pendulum task discussed above were also evaluated. LogReg-IRL and KLIEP-IRL in Embodiment 2 required only about 2.5 minutes of calculation, whereas uLSIF-IRL, LSCDE-IRL, and LogReg-IRL in Embodiment 1 required about 4 to 9.5 minutes. Thus, Embodiment 2 required significantly less computing time than the various versions of Embodiment 1 discussed above.
As is readily understandable, applications of Embodiment 2 are essentially the same as the various applications for Embodiment 1 discussed above. In particular, as discussed above, various versions of Embodiment 2 are applicable to, among other things: interpretation of human behaviors, analysis of the web experience, and design of robot controllers by imitation, in which, by showing some ideal behaviors, the corresponding objective function is estimated as an immediate reward. A robot can use the estimated reward with forward reinforcement learning to generalize behaviors to unexperienced situations. Thus, highly economical and reliable systems and methodologies can be constructed in accordance with Embodiment 2 of the present invention. In particular, as described above, Embodiment 2 can recover observed policies from a small number of observations better than other methods. This is a significant advantage.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover modifications and variations that come within the scope of the appended claims and their equivalents. In particular, it is explicitly contemplated that any part or whole of any two or more of the embodiments and their modifications described above can be combined and regarded as within the scope of the present invention.

Claims (10)

  1. A method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject, comprising:
    acquiring data representing changes in state variables that define the behaviors of the subject;
    applying a modified Bellman equation given by Eq. (1) to the acquired data:
    Figure JPOXMLDOC01-appb-I000001
    where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively;
    estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2);
    estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and
    outputting the estimated r(x) and V(x).
  2. The method according to claim 1, wherein the step of estimating the logarithm of the ratio π(x)/b(x) and π(x, y)/b(x, y) includes using Kullback-Leibler Importance Estimation Procedure (KLIEP) with a log-linear model.
  3. The method according to claim 1, wherein the step of estimating the logarithm of the ratio π(x)/b(x) and π(x, y)/b(x, y) includes using a logistic regression.
  4. A method of inverse reinforcement learning for estimating reward and value functions of behaviors of a subject, comprising:
    acquiring data representing state transition with action that define the behaviors of the subject;
    applying a modified Bellman equation given by Eq. (3) to the acquired data:
    Figure JPOXMLDOC01-appb-I000002
    where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(u | x) and π(u | x) denote, respectively, stochastic policies before and after learning that represent a probability to select action u at state x;
    estimating a logarithm of the density ratio π(x)/b(x) in Eq. (3);
    estimating r(x) and V(x) in Eq. (4) from the result of estimating a log of the density ratio π(x, u)/b(x, u); and
    outputting the estimated r(x) and V(x).
  5. The method according to claim 4, wherein the step of estimating the logarithm of the ratio π(x)/b(x) and π(x, u)/b(x, u) includes using Kullback-Leibler Importance Estimation Procedure (KLIEP) with a log-linear model.
  6. The method according to claim 4, wherein the step of estimating the logarithm of the ratio π(x)/b(x) and π(x, u)/b(x, u) includes using a logistic regression.
  7. A non-transitory storage medium storing instructions to cause a processor to perform an algorithm for inverse reinforcement learning for estimating cost and value functions of behaviors of a subject, said instructions causing the processor to perform the following steps:
    acquiring data representing changes in state variables that define the behaviors of the subject;
    applying a modified Bellman equation given by Eq. (1) to the acquired data:
    Figure JPOXMLDOC01-appb-I000003
    where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively;
    estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2);
    estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and
    outputting the estimated r(x) and V(x).
  8. A system for inverse reinforcement learning for estimating cost and value functions of behaviors of a subject, comprising:
    a data acquisition unit to acquire data representing changes in state variables that define the behaviors of the subject;
    a processor with a memory, the processor and the memory are configured to:
    applying a modified Bellman equation given by Eq. (1) to the acquired data:
    Figure JPOXMLDOC01-appb-I000004
    where r(x) and V(x) denote a reward function and a value function, respectively, at state x, and γ represents a discount factor, and b(y | x) and π(y | x) denote state transition probabilities before and after learning, respectively;
    estimating a logarithm of the density ratio π(x)/b(x) in Eq. (2);
    estimating r(x) and V(x) in Eq. (2) from the result of estimating a log of the density ratio π(x, y)/b(x, y); and
    an output interface that outputs the estimated r(x) and V(x).
  9. A system for predicting a preference in topic of articles that a user is likely to read from a series of articles the user selected in an Internet web surfing, comprising:
    the system for inverse reinforcement learning as set forth in claim 8, implemented in a computer connected to the Internet,
    wherein said subject is the user, and said state variables that define the behaviors of the subject include topics of articles selected by the user while browsing each webpage, and
    wherein the processor causes an interface through which the user is browsing Internet websites to display a recommended article for the user to read in accordance with the estimated cost and value functions.
  10. A method for programming a robot to perform complex tasks, comprising:
    controlling a first robot to accomplish a task so as to record a sequence of states and actions;
    estimating the reward and value functions using the system for inverse reinforcement learning as set forth in claim 8 based on the recorded sequence of the states and actions; and
    providing the estimated reward and value functions to a forward reinforcement learning controller of a second robot to program the second robot with the estimated reward and value functions.

PCT/JP2017/004463 2016-03-15 2017-02-07 Direct inverse reinforcement learning with density ratio estimation WO2017159126A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020187026764A KR102198733B1 (en) 2016-03-15 2017-02-07 Direct inverse reinforcement learning using density ratio estimation
CN201780017406.2A CN108885721B (en) 2016-03-15 2017-02-07 Direct inverse reinforcement learning using density ratio estimation
JP2018546050A JP6910074B2 (en) 2016-03-15 2017-02-07 Direct inverse reinforcement learning by density ratio estimation
EP17766134.5A EP3430578A4 (en) 2016-03-15 2017-02-07 Direct inverse reinforcement learning with density ratio estimation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662308722P 2016-03-15 2016-03-15
US62/308,722 2016-03-15

Publications (1)

Publication Number Publication Date
WO2017159126A1 true WO2017159126A1 (en) 2017-09-21

Family

ID=59851115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/004463 WO2017159126A1 (en) 2016-03-15 2017-02-07 Direct inverse reinforcement learning with density ratio estimation

Country Status (5)

Country Link
EP (1) EP3430578A4 (en)
JP (1) JP6910074B2 (en)
KR (1) KR102198733B1 (en)
CN (1) CN108885721B (en)
WO (1) WO2017159126A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021229626A1 (en) * 2020-05-11 2021-11-18 日本電気株式会社 Learning device, learning method, and learning program

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016021210A1 (en) * 2014-08-07 2016-02-11 Okinawa Institute Of Science And Technology School Corporation Inverse reinforcement learning by density ratio estimation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8359226B2 (en) 2006-01-20 2013-01-22 International Business Machines Corporation System and method for marketing mix optimization for brand equity management
US8756177B1 (en) * 2011-04-18 2014-06-17 The Boeing Company Methods and systems for estimating subject intent from surveillance
US9090255B2 (en) * 2012-07-12 2015-07-28 Honda Motor Co., Ltd. Hybrid vehicle fuel efficiency using inverse reinforcement learning
CN104573621A (en) * 2014-09-30 2015-04-29 李文生 Dynamic gesture learning and identifying method based on Chebyshev neural network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016021210A1 (en) * 2014-08-07 2016-02-11 Okinawa Institute Of Science And Technology School Corporation Inverse reinforcement learning by density ratio estimation

Also Published As

Publication number Publication date
EP3430578A4 (en) 2019-11-13
JP6910074B2 (en) 2021-07-28
CN108885721B (en) 2022-05-06
CN108885721A (en) 2018-11-23
KR102198733B1 (en) 2021-01-05
EP3430578A1 (en) 2019-01-23
JP2019508817A (en) 2019-03-28
KR20180113587A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
US10896382B2 (en) Inverse reinforcement learning by density ratio estimation
US10896383B2 (en) Direct inverse reinforcement learning with density ratio estimation
Chen et al. Relational graph learning for crowd navigation
Dulac-Arnold et al. Challenges of real-world reinforcement learning
Hewing et al. Learning-based model predictive control: Toward safe learning in control
Mandlekar et al. Learning to generalize across long-horizon tasks from human demonstrations
Böhmer et al. Autonomous learning of state representations for control: An emerging field aims to autonomously learn state representations for reinforcement learning agents from their real-world sensor observations
Kim et al. Socially adaptive path planning in human environments using inverse reinforcement learning
Chatzis et al. Echo state Gaussian process
Spaan et al. Decision-theoretic planning under uncertainty with information rewards for active cooperative perception
Mavrogiannis et al. Socially competent navigation planning by deep learning of multi-agent path topologies
Boney et al. Regularizing model-based planning with energy-based models
Fang et al. Dynamics learning with cascaded variational inference for multi-step manipulation
Passalis et al. Continuous drone control using deep reinforcement learning for frontal view person shooting
Dai et al. R2-B2: Recursive reasoning-based Bayesian optimization for no-regret learning in games
Huang et al. Approximate maxent inverse optimal control and its application for mental simulation of human interactions
Wang et al. Focused model-learning and planning for non-Gaussian continuous state-action systems
Lee et al. Adaptive state space partitioning for reinforcement learning
Ognibene et al. Proactive intention recognition for joint human-robot search and rescue missions through monte-carlo planning in pomdp environments
Canuto et al. Action anticipation for collaborative environments: The impact of contextual information and uncertainty-based prediction
WO2017159126A1 (en) Direct inverse reinforcement learning with density ratio estimation
Matsumoto et al. Mobile robot navigation using learning-based method based on predictive state representation in a dynamic environment
Angelov et al. From demonstrations to task-space specifications. Using causal analysis to extract rule parameterization from demonstrations
Hazara et al. Active incremental learning of a contextual skill model
Gamarra Utilizing gaze behavior for inferring task transitions using abstract hidden markov models

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018546050

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1020187026764

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2017766134

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017766134

Country of ref document: EP

Effective date: 20181015

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17766134

Country of ref document: EP

Kind code of ref document: A1