GB2350695A - Genetic fuzzy real-time controller - Google Patents

Genetic fuzzy real-time controller Download PDF

Info

Publication number
GB2350695A
Authority
GB
United Kingdom
Prior art keywords
learning
fuzzy
control
behaviour
machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9910539A
Other versions
GB2350695B (en)
GB9910539D0 (en)
Inventor
Victor Callaghan
Hani Hagras
Martin John Colley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Essex Enterprises Ltd
Original Assignee
University of Essex Enterprises Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Essex Enterprises Ltd filed Critical University of Essex Enterprises Ltd
Priority to GB9910539A priority Critical patent/GB2350695B/en
Publication of GB9910539D0 publication Critical patent/GB9910539D0/en
Publication of GB2350695A publication Critical patent/GB2350695A/en
Application granted granted Critical
Publication of GB2350695B publication Critical patent/GB2350695B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)

Abstract

Apparatus and method for the control of a machine to perform a predefined operation, able to learn control solutions when in an indeterminate environment. The apparatus includes a plurality of fuzzy-logic controllers (FLCs), each having a respective behaviour membership function (MF) definition and a respective rule base, each of said behaviour membership function definitions and rule bases being dynamically modifiable during operation as learning progresses. A plurality of sensors each sense a respective parameter of the environment and feed information to the fuzzy-logic controllers. A coordinator receives the outputs of the fuzzy-logic controllers, weights their effect and provides suitable drive signals for driving a multiplicity of actuators which control operation of the machine. An experience bank stores past experiences of previous learning cycles, and a learning focus engine receives information from the coordinator to learn either new behaviour membership function definitions or new rules for the rule bases of the fuzzy-logic controllers. An adaptive genetic mechanism is provided with information from the learning focus engine and is also configured to search the experience bank so as to generate a solution for loading into the fuzzy-logic controllers, thereby influencing the control of the machine in dependence upon information supplied by the sensors. Such a solution takes the form of new behaviour membership functions or new rules for the rule bases thereof.

Description

Genetic Fuzzy Real-Time Controller

This invention relates to apparatus for the control of a machine to perform a pre-defined operation when in an indeterminate environment. The invention further relates to a method of controlling such a machine.
Though applicable to various robotic control situations, this invention is particularly concerned with the real-time control of a vehicle, such as an agricultural vehicle, to perform a pre-determined activity in a non-deterministic environment. To achieve this, a digital electronic real-time adaptive (i.e. capable of learning) control architecture has been developed.
The invention is characterised by a novel control architectural structure and a set of innovative genetic learning mechanisms. The benefit of the invention is that it is capable of solving a class of control problems that, whilst having many commercial applications, has hitherto found no adequate solution.
Such control applications are characterised by systems or machines, tightly coupled to the world via sensors and effectors (actuators), operating in a real time control environment which in great part is dynamic, unstructured, unknown, subject to high levels of sensing uncertainty and tending towards intractable levels of complexity. It is difficult, if not impossible, to derive predictive mathematical models for such situations.
This invention solves the problems caused by the lack of predictive mathematical models by including a set of novel online genetic learning mechanisms, thereby enabling the control architecture to 'teach itself' how to adapt to the changing and unpredictable world.
Fuzzy logic, genetic systems and learning are not new in control, but the present architecture is unique in terms of its macro-architecture and the novel genetic mechanisms which together provide a control architecture with unmatched levels of performance. For example, the apparatus has been tested in a hitherto unsolved control environment: autonomously guiding a farm vehicle on tasks such as crop cutting. In these tests, not only has the apparatus successfully guided the vehicle, but the genetic mechanisms at the heart of this invention have enabled the architecture to learn and optimise its control within 90 seconds, which contrasts with many hours of off-line simulation (rather than on-line real-time learning) in the closest known prior art.
Though tested with autonomous farm vehicles, the control architecture of this invention is equally capable of effecting control in other areas such as factory machinery, telecommunications, medical instrumentation, engines and weapons, and in emerging areas such as intelligent buildings and underwater vehicles.
Overview of Macro Architecture In general terms, the architecture utilises fuzzy logic and genetic system principles, the fundamentals of which are widely known. The high-level operation of the control scheme belongs to a "school" labelled "behaviour-based control architecture", pioneered by Rodney Brooks of MIT in the late 1980s. In this approach a number of concurrent behaviours (each a mechanism to attain a goal or maintain a state) are active (sensing the environment, actuating the machine) to a degree determined by the relationship of the machine and its environment.
At this macro-level, the novelty lies in a unique combination of fuzzy based behaviours and behaviour-integration and a genetic-based Associative Experience Engine (the latter itself containing various novel genetic mechanisms).
Associative Experience Engine Figure 1 is an architectural overview of both the novel macro and micro aspects of the invention. Behaviours are represented by parallel Fuzzy Logic Controllers (FLCs). Each FLC has two modifiable parameters: the Rule Base (RB) of each behaviour and its Membership Functions (MF). The behaviours receive their inputs from sensors. The output of each FLC is then fed to the actuators via the Co-ordinator, which weights its effect. In the case of a farm vehicle controller, four FLCs may be employed, namely Obstacle Avoidance (OA), Left Edge Following (LF), Right Edge Following (RF) and Goal Seeking (GS). When the system fails to produce a desired response (such as deviating widely when following a crop edge, or colliding with an obstacle), the learning cycle begins.
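The coordination step just described can be sketched as a simple weighted blend of the four behaviour outputs. This is an illustrative reconstruction, not the patented coordinator: the behaviour names match the text, but the weighting scheme, function names and all numeric values are assumptions.

```python
# Hypothetical sketch of the coordinator stage: four behaviour FLCs
# (obstacle avoidance, left/right edge following, goal seeking) each
# propose actuator commands, and a coordinator blends them with
# context-dependent weights. All names and numbers are illustrative.

def coordinate(behaviour_outputs, weights):
    """Weighted blending of behaviour outputs into one actuator command.

    behaviour_outputs: dict of behaviour name -> (speed, steering)
    weights: dict of behaviour name -> weight in [0, 1]
    """
    total_w = sum(weights.values())
    speed = sum(weights[b] * behaviour_outputs[b][0] for b in behaviour_outputs) / total_w
    steering = sum(weights[b] * behaviour_outputs[b][1] for b in behaviour_outputs) / total_w
    return speed, steering

# Example: the edge-following behaviour dominates while the obstacle
# avoidance, right-edge and goal-seeking behaviours contribute weakly.
outputs = {"OA": (0.2, -10.0), "LF": (0.8, 5.0), "RF": (0.8, -5.0), "GS": (1.0, 0.0)}
weights = {"OA": 0.1, "LF": 0.7, "RF": 0.1, "GS": 0.1}
cmd = coordinate(outputs, weights)
```

In the patent the coordinator is itself a fuzzy engine whose weights are produced by fuzzy rules; the fixed weight dictionary above merely stands in for that stage.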
The learning depends on the Learning Focus Engine which is supplied by the Coordinator (a fuzzy engine which weights contributions to the outputs).
When the Learning Focus Engine is learning MFs for individual behaviours, each behaviour MF is learnt in isolation. When the Learning Focus Engine is learning an individual rule base of a behaviour, each rule base of each behaviour is learnt alone. When the Learning Focus Engine is adapting the coordinated behaviours online, the algorithm will adapt different rules in the different behaviours in response to the environment.
The system recalls similar experiences by checking the stored experiences in the Experience Bank. The system tests different solutions from the Experience Bank by transferring the most recent or contextually relevant experiences, which are stored in a queue. If these experiences prove successful then they are stored in the FLC, and the generation of a new solution is thereby avoided. The Experience Assessor assigns each experience solution a fitness value. When the Experience Bank is full, some experiences must be deleted. To assist with this, the Experience Survival Evaluator determines which rules are removed, according to their importance (as set by the Experience Assessor).
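The interplay of the Experience Bank, Experience Assessor and Experience Survival Evaluator might be sketched as follows. This is a hypothetical minimal model: the capacity, fitness values and class interface are invented for illustration and are not taken from the patent.

```python
# Illustrative sketch of the Experience Bank with fitness-based
# eviction. Fitness values come from the Experience Assessor; eviction
# of the least important experience mirrors the Survival Evaluator.

class ExperienceBank:
    def __init__(self, capacity):
        self.capacity = capacity
        self.experiences = []  # list of (solution, fitness) pairs

    def store(self, solution, fitness):
        # When full, evict the lowest-fitness (least important) experience.
        if len(self.experiences) >= self.capacity:
            self.experiences.remove(min(self.experiences, key=lambda e: e[1]))
        self.experiences.append((solution, fitness))

    def best(self):
        # Best-fit past experience: the AGM's starting point when no
        # stored experience solves the situation outright.
        return max(self.experiences, key=lambda e: e[1])

bank = ExperienceBank(capacity=3)
for sol, fit in [("rb_A", 0.4), ("rb_B", 0.9), ("rb_C", 0.2), ("rb_D", 0.6)]:
    bank.store(sol, fit)
# "rb_C" (lowest fitness) was evicted when "rb_D" arrived.
```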
When past experiences do not solve the situation, the best-fit experience is used to reduce the search space by pointing to a better starting point. An Adaptive Genetic Mechanism (AGM) is then fired, the AGM using adaptive learning parameters (except when learning behaviour with immediate reinforcement, when an optimum mutation parameter is used) to speed the search for new solutions. The AGM is constrained to produce new solutions within a certain range defined by the Contextual Prompter, which is supplied by the sensors and is defined by the Coordinator according to the Learning Focus Engine, in order to prevent the AGM from searching options where solutions are unlikely to be found. By using these mechanisms the AGM search space is narrowed, massively improving its efficiency. After generating a new solution (either rules or MFs), the system tests the new solution and gives it a fitness, or a value relating to its difficulty of learning, by means of the Solution Evaluator. The AGM provides new options until a satisfactory solution is achieved.
The proposed system can be viewed as a double-hierarchy system, in which both the fuzzy behaviours and the online learning mechanism are hierarchies. In the case of the latter, at the higher level there is a population of queued solutions stored in the Experience Bank. If any of these stored experiences leads to a solution then the search ends; if not, each of these experiences is assigned a fitness. The fittest experience is used as a starting position for the lower-level AGM, which is used to generate a new solution.
Implementation As shown in Figure 1, the system is composed of asynchronous parallel hardware building blocks. It is possible to emulate part of this in a combination of hardware and software. Whilst this is acceptable (and even convenient) for development and experimentation, for most real-time applications the implementation may well need to be wholly in hardware to give sufficient speed for solving the control problem.
Summary of Main Performance Benefits:
The control architecture of this invention has the following advantages:
- solves hitherto unsolved control problems (i.e. complex control problems lacking mathematical models);
- autonomously operates and learns online, thereby reducing programming costs and enabling it to adapt to changes; and
- is an order of magnitude faster than previous learning solutions (i.e. learns in the order of minutes rather than hours).
It will be understood from the foregoing that both the Macro-Architecture Structure and the Micro-Architecture Structure are novel. In particular, the Associative Experience Engine and its structural and logical relationships with the other components is new, as are all the sub-blocks, circuitry and architectural structural aspects shown within the Associative Experience Engine block (Figure 1).
Further details of the control architecture of this invention as described herein and also its application to the control of an agricultural vehicle are set out in the following annexe.
ANNEX 1. Introduction
A casual glance around our world reveals how dependent we are on vehicles and their drivers. As a society, much of our resources are devoted to driving vehicles. A long-cherished dream has been driverless cars, in which we are transported to our destination by an unseen "electronic chauffeur" whilst we indulge in more productive activities. The aircraft and boat industries already routinely use auto-pilots as a means of automatic guidance. One of the most difficult technical challenges in vehicle guidance is presented by the agricultural industry, due to the inconsistency of the terrain, the irregularity of the product and the open nature of the working environment. These situations result in complex problems of identification, sensing error and control. Problems include dealing with the consequences of the robotic tractor being deeply embedded in a dynamic and partly non-deterministic physical world (e.g.
wheel-slip, imprecise sensing and other effects of varying weather and ground conditions on sensors and actuators). Among the most important tasks in a field are those based on crops planted in rows or other geometric patterns, which involve making a vehicle drive in straight lines, turn at row ends and activate machinery at the start and finish of each run. Examples are spraying, ploughing and harvesting. Our work addresses this challenge. We utilise a much-developed form of fuzzy logic, augmented by GA learning, that excels in dealing with the imprecise sensors and varying conditions which characterise these applications.
2. Background
Artificial intelligence techniques, including expert systems and machine vision, have been successfully applied in agriculture. Recently, artificial neural networks and fuzzy theory have been utilised for intelligent automation of farm machinery and facilities, alongside improvements in various sensors. A simulation of a fuzzy unmanned combine harvester operation has been proposed, but it used only on-off touch sensors as fuzzy-system inputs. Thus the advantage of fuzzy systems in dealing with continuous data was lost, so the system could not obtain a smooth response and presented problems when turning around corners. Moreover, that work was simulated, which differs substantially from the real-world farm environment.
Little work has been done on implementing a real robot vehicle using fuzzy logic that can operate in open outdoor agricultural situations. Broadly speaking, the present work situates itself in the recent line of research concentrating on the realisation of artificial agents strongly coupled with the physical world. A first fundamental requirement is that agents must be grounded: they must be able to carry on their activity in the real world in real time. Another important point is that adaptive behaviour cannot be considered a product of an agent in isolation from the world, but can only emerge from strong coupling of the agent and its environment. Although most roboticists regularly use simulations to test their models, the validity of computer simulations for building autonomous robots is criticised and subject to much debate.
3. Overview The aim of the present research was to develop a fuzzy vehicle controller for real farm crop harvesting, using a hierarchical fuzzy logic controller which has many advantages, including reducing the number of rules needed and facilitating better behaviour arbitration. In this paper we describe how we have added GAs to provide rule learning where reinforcement can be given as actions are performed. A modified version of the Fuzzy Classifier System (FCS) is used in this algorithm. The FCS is equipped with a rule-cache, making it possible for learnt expertise to be applied to future situations and allowing GA learning to start the search from the best point found. The system uses sensory information in order to narrow the search space for the GA. This process can be viewed as a hierarchy. The proposed techniques have resulted in rapid convergence suitable for learning individual behaviours online, without need for simulation.
References will hereinafter be made to the control of a robot, in real time, to perform a predefined task.
4. The Fuzzy Logic Controller It has been suggested that one of the reasons humans are better at control than conventional controllers is that they are able to make effective decisions on the basis of imprecise linguistic information. In the following analysis we will use a singleton fuzzifier, triangular membership functions, product inference, max-product composition and height defuzzification. These techniques were selected for their computational simplicity. The equation that maps the system input to output is given by:
$$y = \frac{\sum_{p=1}^{M} y_p \prod_{i=1}^{G} \alpha_{A_i^p}}{\sum_{p=1}^{M} \prod_{i=1}^{G} \alpha_{A_i^p}} \qquad (1)$$

where $M$ is the total number of rules, $y_p$ is the crisp output of each rule, $\prod_{i=1}^{G} \alpha_{A_i^p}$ is the product of the membership function values of each rule's inputs, and $G$ is the total number of inputs. The input Membership Functions (MF) shown in Figure (2) relate to the front and back side distance sensors (sensed by sonar or wands). The output MFs are the wheel speeds (in the case of the indoor robots) and the robot speed and steering angle (in the case of the outdoor robot). More details about the MFs and the fuzzy controller can be found in [3]. The MFs were designed according to human experience. The rule bases were learnt using the proposed online algorithm described below.
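As a concrete illustration of this inference scheme (singleton fuzzifier, triangular MFs, product inference, height defuzzification), the following sketch evaluates a tiny two-input rule base. The MF parameters and rule outputs are invented for the example and do not reproduce the controllers of Figure (2).

```python
# Minimal numeric sketch of equation (1): triangular membership
# functions, product inference over the inputs, and height
# defuzzification over the rules.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flc_output(inputs, rules):
    """rules: list of (MF params per input, crisp rule output y_p)."""
    num = den = 0.0
    for mfs, y_p in rules:
        # product inference over the G inputs
        strength = 1.0
        for x, (a, b, c) in zip(inputs, mfs):
            strength *= tri(x, a, b, c)
        num += y_p * strength   # numerator of equation (1)
        den += strength         # denominator of equation (1)
    return num / den if den else 0.0

# Two-input example (e.g. front and back side distances), two rules:
rules = [([(0, 30, 60), (0, 30, 60)], 0.2),    # "near, near -> slow"
         ([(30, 60, 90), (30, 60, 90)], 1.0)]  # "far, far  -> fast"
y = flc_output([45, 45], rules)  # both rules fire at strength 0.25
```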
5. The Online Algorithm as applied to learning individual behaviour rule bases In a real-time GA it is desirable to achieve a high level of online performance while, at the same time, being capable of reacting rapidly to process changes requiring new actions. Hence it is not necessary to achieve total convergence of the population to a single string, but rather to maintain a limited amount of exploration and diversity in the population. Incidentally, it can be observed that near-convergence can be achieved in terms of fitness with diverse structures. These requirements mean that the population size should be kept sufficiently small that progression towards near-convergence can be achieved within a relatively short time. Similarly, the genetic operators should be used in a way that rapidly achieves high-fitness individuals in the population. Figure (3) introduces a block diagram of the operation of the proposed on-line algorithm. The rule base of the behaviour to be learnt is initialised randomly. In the following sections the various steps of the algorithm are introduced.
5.1 Identifying Poor Rules After the rule base initialisation, the robot starts moving. If the rule base contains poor rules then the robot will begin deviating from its objective (e.g. not maintaining a constant distance from an edge). In this case an on-line algorithm is fired to generate a new set of rules to correct this deviation. The GA population consists of all the rules contributing to an action (usually a small number, as the rule base for each behaviour consists of only 25 rules). As in classifier systems, in order to preserve system performance the GA is allowed to replace only a subset of the classifiers (the rules, in our case). The worst m classifiers are replaced by m new classifiers created by applying the GA to the population. The new rules are tested by the combined action of the performance and apportionment-of-credit algorithms. In the present case, only two rule actions are replaced (those already identified as being predominantly responsible for the error).
5.2 Fitness Determination and Credit Assignment The system fitness is evaluated by how much it reduces the absolute deviation (d) from the nominal value, which is given by:
$$d = \frac{|\text{nominal value} - \text{deviated value}|}{\text{max deviation}} \qquad (2)$$

where the nominal value corresponds to the value giving maximum normal membership (45 cm in the case of wall following, and zero degrees in the case of goal seeking). The deviated value is any value deviating from the nominal value. The maximum deviation corresponds to the largest deviation that can occur (equal to 80 − 45 = 35 cm). The fitness of a solution is then given by d1 − d2, where d2 is the absolute deviation before introducing a new solution and d1 is the absolute deviation following the new solution. The deviation is measured using the robot's physical sensors (the sonar, in the case of wall following), which gives the robot the ability to adapt to the imprecision and noise of real sensors rather than relying on estimates from previous simulations.
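Equation (2) can be illustrated numerically using the wall-following figures given in the text (nominal value 45 cm, maximum deviation 35 cm); the sample sensor readings below are invented, and the change in d before and after a new solution stands in for the text's d1 − d2 fitness term.

```python
# Sketch of the normalised absolute deviation of equation (2), using
# the wall-following constants from the text. The readings 59 cm and
# 52 cm are made-up example values.

def abs_deviation(deviated, nominal=45.0, max_dev=35.0):
    """Normalised absolute deviation d of equation (2)."""
    return abs(nominal - deviated) / max_dev

d_before = abs_deviation(59.0)    # 14/35 = 0.4 before the new solution
d_after = abs_deviation(52.0)     # 7/35  = 0.2 after the new solution
improvement = d_before - d_after  # positive: the deviation was reduced
```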
The fitness of each rule in a given situation is calculated as follows. As we have two output variables (the left and right wheel speeds, or the steering and speed), we have two crisp outputs $y_{t1}$ and $y_{t2}$. The contribution of each rule $p$'s outputs ($y_{p1}$, $y_{p2}$) to the total outputs $y_{t1}$ and $y_{t2}$ is denoted by $S_{r1}$ and $S_{r2}$, where:
$$S_{r1} = \frac{y_{p1} \prod_{i=1}^{G} \alpha_{A_i^p}}{y_{t1}}, \qquad S_{r2} = \frac{y_{p2} \prod_{i=1}^{G} \alpha_{A_i^p}}{y_{t2}} \qquad (3)$$

We then calculate each rule's contribution to the final action as $S_c = (S_{r1} + S_{r2})/2$. The two most effective rules are those with the two greatest values of $S_c$. We use mutation alone to generate new solutions, because of the small population formed by the fired rules.
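The per-rule contribution of equation (3) can be sketched numerically as follows. The firing strengths and rule outputs are made-up illustration values; only the averaging into S_c and the selection of the two largest contributions follow the text.

```python
# Hypothetical fired rules: (y_p1, y_p2, firing strength) per rule.
fired = [(0.2, 0.4, 0.5), (0.8, 0.6, 0.3), (0.5, 0.5, 0.2)]

# Total crisp outputs y_t1, y_t2 (strength-weighted sums over rules).
y_t1 = sum(y1 * s for y1, _, s in fired)
y_t2 = sum(y2 * s for _, y2, s in fired)

def rule_contribution(y_p1, y_p2, strength):
    # S_r1, S_r2: this rule's share of each crisp output (equation (3)).
    s_r1 = (y_p1 * strength) / y_t1
    s_r2 = (y_p2 * strength) / y_t2
    return (s_r1 + s_r2) / 2.0  # S_c: averaged contribution

contribs = [rule_contribution(y1, y2, s) for y1, y2, s in fired]
# The two most effective rules (largest S_c) become mutation candidates.
most_effective = sorted(range(len(fired)), key=lambda i: contribs[i],
                        reverse=True)[:2]
```

Note that the $S_{r1}$ shares sum to one over all fired rules, since $y_{t1}$ is itself the strength-weighted sum of the $y_{p1}$ values.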
5.3 Memory Application After determining the rule actions to be replaced, the robot then matches the current rules against sets of rules stored in a memory containing each rule and its best fitness value to date. The fitness of a rule in a given solution is given by:
$$S_{rt} = \text{Constant} + (d1 - d2)\,S_c \qquad (4)$$

where d1 − d2 is the absolute-deviation improvement or degradation caused by the adjusted rule base produced by the algorithm. If there is an improvement in the deviation, then the rules that contributed most are given more fitness, to boost their actions. If there is a degradation, then the rules that contributed most are punished by reducing their fitness with respect to the other rules, repeating the process for the next most responsible rule. For every rule action to be replaced, the best-fitness rule replaces the current action in the behaviour rule base. If the deviation decreases, the robot keeps the best rules in the behaviour rule base. If the deviation remains the same or increases, the robot fires the GA to produce new solutions by mutating these best rules until the deviation begins decreasing, or until the rule is shown to be ineffective while the robot is moving, indicating that another rule might be more effective. This speeds up the GA search, as it starts from the best point found in the solution space instead of from a random point. The result is then considered a solution for the current situation; the rule fitness is calculated and compared with the maximum-fitness rule. If its fitness is greater than that of the best kept rule then it replaces the best one; otherwise the best one is kept in the memory.
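The rule memory described above can be sketched as a best-so-far table keyed by rule: a newly evaluated action displaces the stored one only if its fitness is higher. The dictionary layout, rule index and action names are hypothetical.

```python
# Illustrative sketch of the per-rule memory: each rule slot keeps the
# best action found so far together with its fitness.

memory = {}  # rule index -> (action, fitness)

def update_memory(rule, action, fitness):
    """Keep the higher-fitness of the stored and the new action."""
    best = memory.get(rule)
    if best is None or fitness > best[1]:
        memory[rule] = (action, fitness)
    return memory[rule]

update_memory(3, "turn_left_slow", 0.4)   # first entry for rule 3
update_memory(3, "turn_left_fast", 0.7)   # replaces: higher fitness
update_memory(3, "go_straight", 0.1)      # kept out: lower fitness
```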
5.4 Using the GA to Produce New Solutions The GA begins its search for new rule actions to replace those identified with poor performance. Mutating the two most effective rules generates new solutions. A mutation rate of 0.5 was chosen after experimenting with different mutation rates from 0 to 1.0 and monitoring the time the robot needs to achieve its purpose (e.g. reaching its goal or following a wall). It was noticed that at mutation rates below 0.3 there is nearly no convergence: as the population size and chromosome size are small, low mutation rates do not introduce enough new genetic material to produce new solutions. The same occurs for high mutation rates (above 0.7): as the mutation rate approaches 1.0, the only genetic material available is the primary chromosome (e.g. 0101) and its inversion (1010), which is not enough to introduce new solutions. A rate of 0.5 thus gave the optimum, finding a solution after, on average, 96 seconds. The robot also uses its sensory information to narrow the search space of the GA, thereby reducing the learning time. For example, if the robot is implementing left wall following and is moving toward the wall, then any action suggesting going left will be a bad action; thus if the front left side sensor senses that the robot is heading toward the wall, the GA solutions are constrained not to go left.
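The constrained mutation step might look like the following sketch. The bit-string encoding, the "steer left" bit position, and the rejection test are assumptions for illustration; only the 0.5 mutation rate comes from the text.

```python
import random

# Sketch of constrained mutation: a rule action encoded as a short bit
# string is mutated at rate 0.5, and sensor context (e.g. "already
# heading toward the left wall") rejects actions known a priori to be
# bad. Here, hypothetically, bit 0 set means "steer left".

def mutate(action_bits, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if rng.random() < rate else b for b in action_bits]

def propose(action_bits, forbid_left, rng, rate=0.5):
    """Generate a mutated action satisfying the sensor constraint."""
    while True:
        child = mutate(action_bits, rate, rng)
        # Contextual constraint: reject "steer left" actions when the
        # sensors say the robot is already moving toward the left wall.
        if not (forbid_left and child[0] == 1):
            return child

rng = random.Random(7)  # seeded for a repeatable example
new_action = propose([0, 1, 0, 1], forbid_left=True, rng=rng)
```

The rejection loop is one simple way to realise the Contextual Prompter's constraint; masking the forbidden bit before mutation would be an equally valid design.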
5.5 The Learning Length Criterion The robot assumes it has learnt the required behaviour if it succeeds in maintaining the nominal value for the behaviour over a distance sufficient to prove that the learnt rule base is adequate. The optimal learning distance has been related to units of the robot's length, so that the algorithm can be applied in an invariant manner to robots of different sizes. To determine the optimal learning distance, we conducted numerous experiments evaluating performance relative to the robot's length (e.g. 1× the robot's length, 2× the robot's length, etc.). We then followed the same track that was used during learning, to determine the absolute deviation at each control cycle from the optimum value (which would be maintaining a constant distance from a wall, in the case of edge following). We then calculated the average and standard deviation of this error and compared different sizes for the learning length criterion (i.e. as short as possible whilst producing a stable rule base). It was found that the average and standard error for wall following stabilise at three times the robot's length, with an average value of 2 cm and a standard deviation of 1 cm. We therefore use three times the robot's length as our learning length criterion.

Claims (11)

1 Apparatus for the control of a machine to perform a pre-defined operation and able to learn control solutions when in an indeterminate environment, which apparatus comprises:
- a plurality of fuzzy-logic controllers each having a respective behaviour membership function definition and a respective rule base, each of said behaviour membership function definitions and rule bases being dynamically modifiable during operation as learning progresses; a plurality of sensors each adapted to sense a respective parameter of the environment and feeding information to the fuzzy-logic controllers; and a coordinator arranged to receive the outputs of the fuzzy logic controllers, to weight the effect thereof and to provide suitable drive signals for the driving of a multiplicity of actuators arranged to control operation of the machine; in which control apparatus there is additionally provided:
an experience bank in which are stored past experiences of previous learning cycles; a learning focus engine which receives information from the coordinator and learns therefrom either new behaviour membership function definitions or new rules for the rule bases of the fuzzy-logic controllers; and - an adaptive genetic mechanism provided with information from the learning focus engine and also being configured to search the experience bank so as thereby to generate a solution for loading into the fuzzy- logic controllers to influence the control of the machine dependent upon information supplied by the sensors, which solution takes the form of new behaviour membership functions or new rules for the rule bases thereof.
2. Apparatus as claimed in claim 1, wherein said coordinator also is a fuzzylogic controller having a rule base, and the learning focus engine is arranged also to determine new rules for the rule base of the coordinator, for loading thereinto.
3. Apparatus as claimed in claim 2, wherein said coordinator further has membership function definitions and the learning focus engine is arranged to determine new membership function definitions for loading thereinto.
4. Apparatus as claimed in any of the preceding claims, wherein there is a contextual prompter associated with the learning focus engine which constrains the adaptive genetic mechanism to search for solutions most likely to produce a successful result, which contextual prompter is provided with information from the sensors as weighted by the coordinator.
5. Apparatus as claimed in any of the preceding claims, wherein there is provided a solution evaluator which compares a required machine behaviour with the actual control of the machine resulting from the loading of the fuzzy-logic controllers with a solution produced by the adaptive genetic mechanism and consisting of new behavioural membership functions or new rules, the solution evaluator then applying a fitness rating to the solution.
6. Apparatus as claimed in any of the preceding claims, wherein an experience assessor applies a solution fitness rating for each experience stored in the experience bank, whereby the usefulness of a past experience may be evaluated when determining the suitability of a potential solution.
7. Apparatus as claimed in claim 6, wherein a limited number of experiences are stored in the experience bank, experiences being dynamically deleted depending upon the fitness rating or a value relating to difficulty of learning applied thereto.
8. Apparatus for the control of a machine to perform a pre-defined operation and substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.
9. A method of controlling of a machine to perform a pre-defined operation and in which there is the learning of control solutions when the machine is operating in an indeterminate environment, which method comprises:
feeding the outputs from a plurality of sensors each adapted to sense a respective parameter of the environment to a plurality of fuzzy-logic controllers; providing each of the plurality of fuzzy-logic controllers with a respective behaviour membership function definition and a respective rule base, each of said behaviour membership function definitions and rule bases being dynamically modifiable during performance of the method as learning progresses; and processing the outputs of the fuzzy logic controllers by means of a coordinator arranged to weight the effect of those outputs and to provide suitable drive signals for driving of a multiplicity of actuators arranged to control operation of the machine; said method further comprising the steps of: processing in a learning focus engine information received from the coordinator, so as to permit the learning of either new behaviour membership function definitions or new rules for the rule bases of the fuzzy-logic controllers; triggering the operation of an adaptive genetic mechanism receiving information from the learning focus engine and also configured to search an experience bank in which are stored past experiences of previous learning cycles, so as thereby to generate a control solution; and loading into the fuzzy-logic controllers said control solution consisting of new behaviour membership functions or new rules for the rule bases thereof, so as to influence the control of the machine dependent upon information supplied by the sensors.
10. A method as claimed in claim 9, wherein said coordinator is a fuzzy-logic controller and the learning focus engine is able to learn at least one of new membership function definitions and new rules for the coordinator, for loading thereinto.
11. A machine having a plurality of actuators arranged to control operation of the machine in combination with apparatus for the control of the actuators to cause the machine to perform a pre-defined operation and able to learn control solutions when in an indeterminate environment, which apparatus is as claimed in any of claims 1 to 8.
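Claims 9 and 10 describe a behaviour-based fuzzy architecture with a genetic learning loop: sensor readings feed several fuzzy behaviours, a coordinator blends their outputs by firing strength, and an adaptive mechanism proposes modified rule bases, keeping those that score better. The Python sketch below illustrates the general idea only; the triangular membership functions, the firing-strength coordinator, and the mutate-and-keep adaptation step are simplified stand-ins, and every name is an assumption rather than the patent's method.

```python
import random

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyBehaviour:
    """One behaviour: rules map a fuzzy set over the sensor to a crisp output."""
    def __init__(self, rules):
        self.rules = rules             # list of ((a, b, c), output) pairs

    def infer(self, sensor):
        """Weighted-average defuzzification; also return total firing strength."""
        num = den = 0.0
        for (a, b, c), out in self.rules:
            w = triangular(sensor, a, b, c)
            num += w * out
            den += w
        return (num / den if den else 0.0), den

def coordinate(behaviours, sensor):
    """Weight each behaviour's output by its firing strength (cf. claim 9)."""
    results = [beh.infer(sensor) for beh in behaviours]
    total = sum(s for _, s in results)
    if total == 0:
        return 0.0
    return sum(out * s for out, s in results) / total

def adapt(behaviour, fitness, generations=50, rng=None):
    """Genetic-style step: mutate rule outputs, keep only improvements."""
    rng = rng or random.Random(0)
    best = fitness(behaviour)
    for _ in range(generations):
        candidate = FuzzyBehaviour(
            [(mf, out + rng.uniform(-0.2, 0.2)) for mf, out in behaviour.rules])
        score = fitness(candidate)
        if score > best:                 # survival of the fitter rule base
            behaviour.rules, best = candidate.rules, score
    return best

# Toy usage: two behaviours for a range sensor reading in [0, 1].
avoid = FuzzyBehaviour([((0.0, 0.0, 0.5), 1.0)])   # near obstacle -> turn hard
cruise = FuzzyBehaviour([((0.3, 1.0, 1.0), 0.1)])  # clear ahead -> go straight
print(coordinate([avoid, cruise], 0.2))            # → 1.0 (avoid dominates)
```

In the patented apparatus the coordinator may itself be a fuzzy-logic controller whose membership functions and rules are also learned (claim 10); here it is reduced to a fixed weighted average for brevity.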
GB9910539A 1999-05-07 1999-05-07 Genetic-Fuzzy real-time controller Expired - Fee Related GB2350695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9910539A GB2350695B (en) 1999-05-07 1999-05-07 Genetic-Fuzzy real-time controller

Publications (3)

Publication Number Publication Date
GB9910539D0 GB9910539D0 (en) 1999-07-07
GB2350695A true GB2350695A (en) 2000-12-06
GB2350695B GB2350695B (en) 2003-08-13

Family

ID=10852964

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9910539A Expired - Fee Related GB2350695B (en) 1999-05-07 1999-05-07 Genetic-Fuzzy real-time controller

Country Status (1)

Country Link
GB (1) GB2350695B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0521643A1 (en) * 1991-07-04 1993-01-07 Hitachi, Ltd. Method of automated learning, an apparatus therefor, and a system incorporating such an apparatus
WO1996007559A1 (en) * 1994-09-09 1996-03-14 Siemens Aktiengesellschaft Control device containing a fuzzy logic system for use in a motor vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fukuda T et al, IECON '94, 20th Intl Conf on Industrial Electronics, Control and Instrumentation, 1994, IEEE, pp 1220-1225 *
Hoffmann F et al, Intl Jrnl of Approximate Reasoning, Vol 17, No 4, November 1997, Elsevier, pp 447-469 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009111380A1 (en) * 2008-03-03 2009-09-11 Alstom Technology Ltd Fuzzy logic control and optimization system
US8160730B2 (en) 2008-03-03 2012-04-17 Xinsheng Lou Fuzzy logic control and optimization system
AU2009222061B2 (en) * 2008-03-03 2013-01-24 General Electric Technology Gmbh Fuzzy logic control and optimization system
US8374709B2 (en) 2008-03-03 2013-02-12 Alstom Technology Ltd Control and optimization system
CN101960397B (en) * 2008-03-03 2015-04-29 阿尔斯托姆科技有限公司 Fuzzy logic control and optimization system
US9122260B2 (en) 2008-03-03 2015-09-01 Alstom Technology Ltd Integrated controls design optimization
US9740214B2 (en) 2012-07-23 2017-08-22 General Electric Technology Gmbh Nonlinear model predictive control for chemical looping process

Similar Documents

Publication Publication Date Title
Hoffmann Evolutionary algorithms for fuzzy control system design
Beom et al. A sensor-based navigation for a mobile robot using fuzzy logic and reinforcement learning
Hagras et al. Outdoor mobile robot learning and adaptation
Hagras et al. A fuzzy-genetic based embedded-agent approach to learning and control in agricultural autonomous vehicles
Chen et al. Mobile robot obstacle avoidance using short memory: a dynamic recurrent neuro-fuzzy approach
Al Dabooni et al. Heuristic dynamic programming for mobile robot path planning based on Dyna approach
Dubrawski et al. Learning locomotion reflexes: A self-supervised neural system for a mobile robot
Pei et al. Mobile robot automatic navigation control algorithm based on fuzzy neural network in industrial Internet of things environment
Hendzel Collision free path planning and control of wheeled mobile robot using Kohonen self-organising map
Southey et al. Approaching evolutionary robotics through population-based incremental learning
GB2350695A (en) Genetic fuzzy real-time controller
Hagras et al. Online learning of fuzzy behaviours using genetic algorithms and real-time interaction with the environment
Crestani et al. A hierarchical neuro-fuzzy approach to autonomous navigation
Kubota et al. Perception-based genetic algorithm for a mobile robot with fuzzy controllers
Blanzieri et al. Growing radial basis function networks
Dadios et al. Application of neural networks to the flexible pole-cart balancing problem
Cheng et al. Q-value based particle swarm optimization for reinforcement neuro-fuzzy system design
Xu et al. Adjustment strategy for a dual-fuzzy-neuro controller using genetic algorithms - application to gas-fired water heater
Hagras et al. An embedded-agent architecture for online learning and control in intelligent machines
Olivares et al. Fuzzy control system navigation using priority areas
Petrov et al. An adaptive hybrid fuzzy-neural controller
Fan et al. Reinforcement learning and ART2 neural network based collision avoidance system of mobile robot
Hagras et al. Prototyping, Design and Learning in Outdoors Mobile Robots Operating In Unstructured Environments
Peng et al. Decision-making and simulation in multi-agent robot system based on PSO-neural network
Chen Hybrid soft computing approach to identification and control of nonlinear systems

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20110507