GB2553503A - Self-referential machine architecture

Publication number: GB2553503A
Application number: GB1614245.7A
Authority: GB (United Kingdom)
Inventor: Mitchell Howe Robin
Applicant: Individual
Legal status: Withdrawn

Classifications

    • G06N 3/004 - Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 - Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N 20/00 - Machine learning

Abstract

A machine, or component thereof, executing some autonomous behaviour is connected, in parallel with one or more external sensory inputs, to a second part comprising a learning device that models and subsequently predicts the environment observed at its inputs in order to develop one or more outputs that best promote its preservation in that environment. The model subsequently and necessarily infers the machine (itself) as an autonomous agent acting in its external environment, just as it represents any other agent in that externally sensed world. A further aspect of the invention provides sensory-emulating feedback of the model's relatively stable perception that is indiscernible from real external sensory input, such that the machine effectively senses itself in its environment and is therefore rendered "self-aware".

Description

(54) Title of the Invention: Self-referential machine architecture
Abstract Title: Self-referential machine learning architecture
[Drawing: Figure 6 (front page)]
The references to figures 16-20 of the drawings in the printed specification are to be treated as omitted under section 15(5) or (6) of the Patents Act 1977.
[Drawings: Figures 1-15 published; Figures 16-20 to follow]
Self-Referential Machine Architecture
Background to the Invention
Most electronic computers employ sequential Boolean logic [1] whilst human and other mammalian nervous systems utilise networks with a profoundly different architecture and mode of operation [2]. Consequently, electronic machines that develop human-like properties such as self-awareness remain conspicuous by their absence. The invention comprises the fundamental architectural elements necessary for electronic machines from which the aforementioned non-ethereal property emerges.
Summary of the Invention
The invention comprises, in the first part, a machine regulatory system executing some predetermined, apparently autonomous behaviour connected effectively in parallel with one or more external sensory inputs to the second part, comprising a learning device that models and subsequently predicts the world sensed at its inputs in order to develop one or more outputs that best promote the machine's preservation in its environment according to its prior learned knowledge.
The model in the second part develops an invariant representation of the first part of the machine, its autonomous behaviour and its apparent agency in the same manner as it represents any object in the externally sensed world. The further association of behaviours emanating from the model with the previously embodied agent is reflected in sensory-emulating feedback of the model prediction to the machine inputs, so rendering the machine self-aware.
Whilst the two parts of the invention are well known separately, or at least components thereof, their purposeful connection as described is a novel step required for a machine to become self-aware. Self-awareness emerges as an illusory property derived from the necessarily high statistical likelihood apportioned to the autonomic model of the first part of the machine that subsequently dominates the inferences developed in the model prediction of the external world.
The machine design also suggests a viable explanation for the phenomenon of human consciousness (addressing the basis of both the easy and hard problems simultaneously [3]). The model in the second part bears a strong resemblance to (at least part of) the operation of the neocortex and thalamus, and the first part bears a strong resemblance to the operation of parts of the spinal cord, brain stem and limbic system: Human consciousness therefore appears to be a similar illusion.
Nevertheless the machinated self-awareness that is the subject of this invention is a real-world property and is therefore physically measurable - indeed the means to monitor the machine's perception of itself in its environment that would otherwise be a private function is a facet of the invention that is detailed herein. The manifestation of self-awareness in humans, however, remains an untestable phenomenon and no presumption or account of the human condition is therefore implied or intended.
Advantages of the Invention
The invention enables the construction of a machine that models its own existence and behaviour in its environment, that is, it is or becomes self-aware. The invention further enables multiple (but not necessarily identical) physical instances of a machine where the model is endowed with a shared awareness of those multiple instances by utilising a single shared model, which may subsequently endow the machine with accelerated learning capabilities.
The invention is able to provide the basis for whole or part simulation of the behaviour of an individual human (or other mammal), of larger numbers, or of a number of groups thereof. Such a machine, whether real or simulated, is therefore eminently suitable for modelling systems that have proven difficult to simulate and forecast accurately via traditional analytic means, such as economics, financial markets and transportation systems, for example.
Another feature of such a self-aware machine is the ability to provide access to its perception of the external world that would otherwise be a private function. This feature has applications in human and mammalian research models (whether real or simulated) or can provide fast access to information a machine is designed to extract, such as where a machine is employed to identify a face in a crowd, for example, so bypassing more normal communication channels.
Furthermore, when coupled with suitable sensory inputs and additional motor outputs or emulators thereof, the generation of a response that is empathetic with human (or other mammal) behaviour (or at least resembles it) is enabled. A machine encompassing the invention is well-placed to pass the so-called Turing Test [4] and therefore offers the potential for improved machine-user interaction by understanding and communicating in natural human language.
Prior Art
A machine regulatory system that comprises the first component of the invention is commonplace and can be engineered as part of a machine or (as in the example given later) be a whole machine in its own right whose inputs and outputs are adapted to interface with the second component part of the invention. No claim of invention is made or implied with this first component part of the machine or indeed the precise function of this part of the machine.
The second component part of the invention is the platform on which the model of the external world is learned which serves to predict at its outputs the actions that best promote survival of the machine within its environment based upon its prior (learned) knowledge of that environment. Since there exists a plurality of known means to create a machine with the necessary features (see [5] for example), no claims of invention are tendered or implied for this second component part of the machine.
One example of such a model describes a hierarchy of Kalman filters [6,7] and another adopts Bloom filters [8,9], both based upon models of the mammalian neocortex. The prior art also contains a plurality of theoretical models describing the emergence of human consciousness as a whole - and therefore the human being as a machine with self-awareness. A brief summary of the differences with the machine described herein is presented below:
- Descartes [10] viewed human consciousness without external physical reference - or rather in a context in which any explanation of its occurrence was not externally verifiable. Several non-physical accounts of consciousness have emerged as a result. Chalmers [3], for example, describes consciousness arising out of and proportional to system complexity without any (known) physical foundation, which has nothing in common with the invention described herein.
- Crick [11] supposes consciousness emerges from the synchronisation of neural activity across the neocortex. No such causal feature is present in the invention described herein although the machine is likely to generate synchrony in parts of the model it utilises that is symptomatic of its self-awareness.
- Greenfield [12] identifies bursts of neocortex activity (subsequently termed neuronal assemblies) as correlates of the extent of self-awareness, although no causal means for the emergence of self-awareness is described. No such causal feature is present in the invention described herein although the machine model suggests such neocortex activity is symptomatic of the extent of self-awareness.
- Edelman [13] supposes consciousness emerges from some orthogonal dimension of bursts of activity in areas of the neocortex that result from recurrent neural network connectivity. The connectivity does cause neuronal assemblies, although once again a mechanism by which recurrent neural networks cause self-awareness is not described.
- Orpwood [14] suggests a recurrent multi-level model like that of Edelman [13] but where attractor states or limit cycles serve as the basis for consciousness. Once again a mechanism by which self-awareness emerges from the attractor states is not described. Whilst the machine described herein may utilise attractor states, the overall system architecture prevents their long-term continuation.
- Grossberg [15] describes detailed simulations of neural circuitry in which resonances due to attractor states give rise to self-awareness - although once again the mechanism by which self-awareness is rendered is not apparent. The machine disclosed herein again suggests such resonances are symptomatic, not causal, of self-awareness when rendered in the human nervous system.
- Harth [16] suggests a model similar to that of Orpwood [14] except where consciousness emerges due to some orthogonal representation of oscillation from positive feedback loops. No such causal feature is present in the invention described herein and, although the invention describes a means by which positive feedback modulates the machine output, the positive feedback is bound by negative feedback.
- Ramachandran [17] and Rosenfield [18] suggest consciousness arises from a representation in the neocortex that embodies the physical manifestation of the self acting in the external world. The invention described herein infers such a model, but by themselves the models of Ramachandran and Rosenfield are insufficient for self-awareness to emerge and do not undergo the causal development described in the current invention.
- Damasio [19] suggests the seat of consciousness is not restricted to such a model in the neocortex but includes the Reticular Activating System (RAS) in the brain stem. The invention described herein utilises a first component part that resembles the RAS, but by itself and devoid of the further physical features of the invention described herein the model of Damasio is insufficient for self-awareness to emerge.
- Somewhat on a different theme, Penrose [20] applies Gödel's proof of the incompleteness of logical systems to show consciousness is not possible in a logical (Turing) machine, whereas the invention disclosed herein deliberately employs logical incompleteness in the recursive self-reference that makes its self-awareness possible.
It is emphasised again that no claim is presented for the learning model used in the second part of the machine and that none of the models apparent in the prior art describe the machine architecture disclosed herein. In particular the invention requires no workspace in which its perception is assembled - rather its perception is manifested at the sensory inputs, where a stable perception subsequently renders self-awareness that is further physically measurable.
In the invention, neuronal assemblies [12] are symptomatic of attractor states [14,15] in recurrent networks [13] that emerge from the convergence of input data with prior learned knowledge, which in turn leads to a perception at the sensory inputs that may incorporate a model of the self [17-19] that is well-defined against relatively noisy input data. Such a sensed perception also provides the foundation for solving the so-called hard problem of consciousness [3].
The extent of the prior art also encompasses readily-available sources of the code required for implementing the invention with relatively minor changes. A code base employing Bloom filters [21] (available to the general public under an appropriate user license) provides software for implementing the models described in [9]. An extensive code base is also documented in [22] that is sufficient for developing the other components described in the current invention.
Statement of Invention
According to a first aspect of the invention there is at least one component of a machine that has:
- at least one behavioural feature that appears to act autonomously with respect to the machine's external environment that is sensed via some machine inputs;
- at least one behavioural feature that controls some part of the machine's behaviour (an output) independently from a third aspect of the machine (described below);
- at least one further behavioural feature (an output) whose control is shared to some degree or other with that third aspect of the machine (described below).
According to a second aspect of the invention there exists a connection between the first aspect of the invention and a third aspect of the invention (described below) that causes the behaviour of the first aspect of the invention to appear effectively in parallel with other behaviours due to objects and their interactions sensed in the external world by that third aspect of the invention and subsequently processed in the same manner as objects in the external world in that third aspect.
According to a third aspect of the invention there is at least one component of a machine that has:
- the means to develop a learned model from patterns identified within the information appearing at its inputs;
- where those inputs are sourced from sensors connected to the external world and from the outputs of the first aspect of the invention as described above, and further where all the inputs are presented effectively in parallel and thereby treated in the same manner by the modelling process;
- where the outputs of the model and therefore the machine nominally best support the survival of the machine in its environment and particularly its ability to complete the task at hand according to the model's prediction (output) of the near future based upon its prior learned knowledge;
- at least one output from the model that effects an output from the machine whose control is shared to some degree or other with the first aspect of the machine and where the output is not one of those defining the autonomous behaviour of the first aspect of the machine.
According to a fourth aspect of the invention there is the means by which the (sensory) part of the model prediction (an output) in the third aspect of the invention is fed back to the inputs of the machine and is indiscernible from those inputs, such that the machine is endowed with the ability to perceive the sensory-emulating model outputs and therefore to perceive itself as an autonomous physical agent acting in the external world: That is, the machine is rendered self-aware.
According to a fifth aspect of the invention there is the means by which the sensory part of the model prediction in the third aspect of the invention, or the summed input thereof, is also used as an external output so that the perception can be monitored by an external user. An inverse sensory transform, such as using a computer screen to relay the pixel-by-pixel information perceived at a camera input, for example, allows direct observation by an external user.
For clarification, no claim is made that the component parts describing the first and third aspects are the subject of the invention disclosed herein. Nor is it implied that these components as described herein are the only means by which the subject of the invention can be developed. The invention concerns the connection of the components (or their equivalents) as described in the second, fourth and fifth aspects and the ensuing interaction by which machine self-awareness becomes apparent.
Introduction to the Drawings
Figure 1 illustrates a simple machine
Figure 2 illustrates the development of machine self-awareness stage 1: Model objectification of the first part of the machine
Figure 3 illustrates the development of machine self-awareness stage 2: Apportionment of autonomy of the first part of the machine
Figure 4 illustrates the development of machine self-awareness stage 3: Identification of agency of the first part of the machine
Figure 5 illustrates the development of machine self-awareness stage 4: Association of agency of the second part of the machine
Figure 6 illustrates the development of machine self-awareness stage 5: Perception of agency from sensory-emulating feedback
Figure 7 illustrates a simple example of a model for use in the second part of the machine to generate sensory-emulating feedback
Figure 8 illustrates a hierarchical model element for use in the second part of the machine to generate sensory-emulating feedback
Figure 9 illustrates a simple hierarchical model for use in the second part of the machine to generate sensory-emulating feedback
Figure 10 illustrates a nested hierarchical model for use in the second part of the machine to generate sensory-emulating feedback
Figure 11 illustrates a machine with multiple physical instances and a singular self-awareness
Figure 12 illustrates a machine with multiple physical instances and a shared self-awareness
Figure 13 illustrates an example application of a self-aware machine with multiple physical instances for use in a household
Figure 14 illustrates the first part of the example machine
Figure 15 illustrates the second part of the example machine
Figure 16 illustrates examples of code elements for converting binary signals to unary signals and vice versa
Figure 17 illustrates code elements for implementing and maintaining the predictive model
Figure 18 illustrates code elements for adding newly identified patterns to the predictive model
Figure 19 illustrates code elements for purposeful degrading and pruning of the predictive model
Figure 20 illustrates code elements for perception monitoring
Detailed Description of the Invention
A machine 1 operates in some environment 2 (Figure 1) wherein its sensory inputs 3 cause motor outputs 4 that in turn effect some change in the environment 2. The function and purpose of the machine are not relevant to this description. The machine also requires a power source that for the sake of clarity is omitted from further discussion but is assumed to be present unless stated otherwise. The power source(s) may be permanent (online) or rechargeable (offline).
The invention comprises, in the first part, a machine or part thereof executing some predetermined behaviour (typically that which best ensures the preservation of the machine) connected effectively in parallel with one or more external sensory inputs 3 to a second part that comprises a learning device that models and subsequently predicts the world sensed at its inputs in order to develop one or more outputs 4 that best promote its preservation in the wider context of its local environment 2.
The predetermined behaviour in the first part is not restricted to a regulatory control system but ideally should be an ever-present and (apparently) autonomous behaviour when sensed in the second part: An ever-present non-regulatory system will likely be somewhat superfluous to typical machine action; A much less than ever-present behaviour will not be sufficiently common to promote the perception of the machine's agency and subsequently its self-awareness.
The second part of the machine can employ a hierarchical and/or nested pattern identification system or other system providing the functions described later. By employing the machine architecture described in this invention, the first part of the machine causes patterns to be learned in the second part just as for another object and/or its behaviours detected in its external sensory inputs: The model in the second part develops a model object that forms the basis for the emergence of self-awareness.
Critical in the invention is the effective parallel connection between the first and second parts of the machine. The connection ensures that the apparently autonomous behaviour of the first part of the machine is modelled in the second part of the machine as the invention requires. The parallel connection is further expanded to include the motor outputs of the first part of the machine (or signals derived thereof) in order to impart agency to the modelled object.
The invention does not concern some ill-defined ethereal self, rather the autonomous system imparts the seed for the machine's self-awareness and the appropriate connections provide the means by which self-awareness is manifested as a product of the model of the world it develops. The perception of self-awareness and further properties of the machine such as the realisation of empathetic behaviours emanate from the architecture described by this invention.
The first part of the machine can range from a simple power supply regulation system to a complete machine in its own right where the second part is appended. For example, the first part of the invention could include the control systems that best ensure a mobile machine does not fall over whilst the second part of the machine may endow the machine with ability to avoid moving to a location that would place the machine in a position where it might fall over.
Essentially the first part of the invention can encompass a machine with behaviours that best serve its intended purpose in a neutral environment (whether pre-programmed or via a learning process). The second part of the machine provides the means to augment or over-ride the outputs from the first part of the machine according to its ever-adapting model of itself in a non-neutral environment: It is therefore capable of modifying its behaviour according to the demands of a new environment.
It is possible that the first part of the machine can be an existing machine in its own right whereby the second part can be added retrospectively, whether the first part was intended for such an application or otherwise. In such cases it is likely that the effectively parallel nature of the connection required between the two parts of the machine will impose the need for modifications to the existing machine for it to behave as the invention requires, however.
The development of a generic self-aware machine is now described strictly by means of example since the fundamental invention can be embedded in a plurality of different machines, including those with multiple physical instances that are imparted with a shared self-awareness. The steps defined in this example can occur in a less well-defined and ordered manner than is presented herein. The description has been engineered specifically to highlight aspects salient to the invention disclosed.
The machine model objectification stage is illustrated in Figure 2 where a machine 1 is disconnected from its external environment 2. Device 5 serves as the first part of the invention and is assumed to exhibit the required predetermined behaviour and is connected to the second part 6 via the connection 7. In a practical machine the learning process may be enhanced by powering the machine with the inputs 3 and possibly outputs 4 disconnected from the external world.
The learning device 6 is programmed to identify repeated patterns of behaviour appearing at its inputs and arrange them in some well-organised order (described in detail later). With only the inputs of device 5 connected to the model, the continually adapting model in device 6 will converge on an apparently invariant model representation of device 5, henceforth referred to as the self-object and to which a high degree of certainty will be inferred due to its ever-presence.
When the sensory inputs 3 are connected and included in an expanded connection 7 (Figure 3), device 5 persists in the learned model as the invariant self-object linked with its associated behaviours that emerge alongside other external objects and their respective behaviours that are subsequently identified in the external environment 2. The learning device 6 simply continues its pre-programmed task of identifying patterns of behaviour appearing at its inputs via 7 and arranges them in some ordered model.
With each instance of device 5 exhibiting behaviour that is (largely) uncorrelated with the external world and ever-present, the autonomy attributed to the self-object remains entrenched. Of note is that there may be other outputs from device 5 that do not form part of 7 and the entire set of inputs 3 need not necessarily be replicated at the inputs of both device 5 and device 6: The invention permits device 5 to function in part independently from the inputs modelled in device 6.
Device 5 may also possess its own learning capabilities that refine its behaviour to best serve regulation of the machine in its immediate environment and these behaviours will also be modelled in the learning device 6 and linked with the self-object. Further correlations between sensory inputs such as contact pressure (from the surface of the machine) and video, for example, serve to embody the self-object within the modelled physical bounds of the machine in its environment.
The term embodiment warrants further explanation: In this particular case embodiment concerns the modelling of the external physical bounds of the machine and its motion when sensed by itself. Different physical references are possible such as required for reflections and media images all of which are attached to the self-object in the model as appropriate. Further perceptions of the machine are produced as the machine predicts the results of third parties observing itself.
The sensed physical bounds of the machine also impart an internalised physical space such as apparent for a processor temperature sensor that can comprise part of the autonomous behaviour of device 5 or form part of a cooling fan servo control system where device 6 is able to pre-empt its operation, for instance. The internal volume is not necessarily sensed directly but inferred by the high statistical likelihood apportioned to its physical encompassing by the bounds of the machine.
An embodiment can also be inferred for objects and behaviours without physical bounds but nonetheless centred upon an apparent focal point of the sensors through which they are perceived. Any objects or behaviours perceived in the absence of physical dimensions will be inferred as having such an embodiment with the same high statistical likelihood as is apportioned to the self-object that permeates throughout much of the machine's perception as is detailed later.
Where device 5 possesses one or more outputs able to affect the external world in some manner via 4, and where those outputs are additionally bundled in the connection 7 (Figure 4), the self-object in the learned model within device 6 will appear to be the cause of those effects: Via such (errant) modelling of causality the self-object is imparted with the property of (free) agency acting independently of the external world and embodied by the physical bounds of the machine.
Since device 6 senses both the inputs and outputs of device 5, the model in device 6 is able to identify the invariant behaviours of device 5 and those that are correlated with changes in the local environment. In essence the invariant behaviours of device 5 are modelled with a high statistical likelihood based on the Bayesian Inference implicit in the model in 6 [23,24] and depend upon device 6 having no control of those particular behaviours emanating from device 5.
Where device 6 also possesses the ability to drive the one or more motor outputs 4 via the addition of device 8 (Figure 5), outputs and behaviours emanating from device 6 will be similarly modelled as part of the same self-object: By this physical association, the behaviours deriving from the model in device 6 will be imparted with the same embodiment and agency as those modelled as emanating from the apparently autonomous device 5 and with the same high statistical likelihood described above.
Device 8 can be integrated within device 5 or device 6, be integrated into the machine motor outputs 4 or be a separate device in its own right as shown in the example in Figure 5. Device 8 can implement a simple union of the outputs of device 5 and device 6 or some arrangement whereby one output inhibits the other. If device 6 incurs additional processing time then device 5 outputs can provide for some default response such as to protect the machine in an immediate emergency, for example.
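By way of a hedged illustration of this arbitration, the sketch below shows one plausible rendering of a device-8-style combiner for unary (boolean) signals; the function name, the readiness flag and the inhibition rule are assumptions made for the example and are not prescribed by the invention.

```python
import numpy as np

def arbitrate(out5: np.ndarray, out6: np.ndarray,
              out6_ready: bool, inhibit: bool = False) -> np.ndarray:
    """Combine the autonomic outputs of device 5 with the model outputs
    of device 6 (both assumed unary, i.e. boolean arrays)."""
    if not out6_ready:
        # Device 6 is still processing: device 5 provides the default
        # response, e.g. protecting the machine in an immediate emergency.
        return out5
    if inhibit:
        # One arrangement: the model output inhibits the autonomic output.
        return out6
    # The other arrangement: a simple union of the two outputs.
    return np.logical_or(out5, out6)

reflex = np.array([1, 0, 1, 0], dtype=bool)   # from device 5
model = np.array([0, 1, 0, 0], dtype=bool)    # from device 6
print(arbitrate(reflex, model, out6_ready=False).astype(int))  # [1 0 1 0]
print(arbitrate(reflex, model, out6_ready=True).astype(int))   # [1 1 1 0]
```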
The self-object then has the properties of awareness (of device 5), embodiment (within the sensed physical bounds of the machine), and agency (directly for the behaviour of device 5 and by physical association for the behaviour of device 6). These properties describe a model of the machine that acts as an independent agent located within its physical bounds (though notably device 5 and/or device 6 need not reside in these bounds but only be connected to the machine inputs 3 and outputs 4).
To be rendered self-aware the machine must then perceive the self-object in its sensory inputs (although it is emphasised the machine is then sensing a perception of the external world with an errantly presumed autonomous self). This perception is developed by feedback that acts to minimise the differences between the sensed external world and the prior-learned model of that external world in device 6, in which the self is a well-established and often evoked component.
The perceptual, sensory emulating feedback shown in Figure 6 completes the basic machine architecture wherein a feedback path is established via device 9. Device 9 can be implemented as a simple union (in this simple form) and the feedback includes one model-generated signal for each sensory input signal 3 - that is, it comprises the remainder of the outputs from the model that are not driving motor outputs 4. Notably the feedback embeds the behaviours attributable to device 5.
Just as with device 8, the additional component 9 can be integrated within device 5 or device 6, be integrated into the machine sensory inputs 3 or be a separate device in its own right as shown in the example in Figure 6. Unlike device 8, however, device 9 is required to form the feedback summing nodes of all the appropriate outputs from device 6 even in cases where the sensory inputs 3 are only active in part - hereafter referred to as sparsely defined.
Critically the perceptual feedback drives the machine towards a settled state wherein a complete perception of the world is apparent - even in cases where the actual sensory input is sparsely defined and in cases where the model in 6 has failed to identify any difference between its perception and its real external environment: One such important difference is the appearance and therefore the sensory awareness of the internally-embodied agent defined by the modelled self-object.
The physical process from which self-awareness emerges is summarised as follows:
- model objectification of the autonomic behavioural elements emanating from device 5 (Figure 3 and also possibly Figure 2);
- attribution of physical agency to the model of device 5 acting in its external world (Figure 4);
- apportionment of a shared agency for behaviour emanating from device 6 according to its physical association with the behaviours emanating from (and therefore due to the agency of) device 5 (Figure 5);
- emergence of self-awareness from the sensory-emulating feedback of the model perception that includes the autonomic agent representing the machine itself (Figure 6).
An important property of the architecture is that the sensory emulating feedback conveys a stable perception of the external world derived from the model in device 6 that is rendered against relatively noisy inputs. Without a stable perception no object will be discernible in the external world and sensory input will average to zero. Self-awareness like other perceptions is therefore produced only after a modelling delay and is likely to persist in discrete periods until the perception changes.
As will be discussed below, the nature of the model employed to generate the perceptual feedback has a further bearing on when self-awareness is apparent: An efficient and optimised learned model will tend to reduce the frequency of self-awareness emerging and it should not therefore be regarded as an ever-present property. Without the architecture described or equivalent thereof, however, a learned model alone will not be capable of producing self-awareness at any time.
Of special note is where the autonomic output of device 5 includes a clock signal (or representation thereof) such that the high certainty afforded to its presence leads to an effective logical AND operation [1] with any learned pattern in the model in device 6. Thus without this time-based reference any learned patterns are unlikely to form part of the assembled prediction in the (necessarily self-centric) model output and self-awareness is in turn unlikely to be generated.
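A toy illustration of this gating effect follows, assuming unary signals represented as boolean arrays; the specific clock and pattern values are invented for the example.

```python
import numpy as np

# An ever-present clock signal from device 5, ANDed with a learned
# pattern, so the pattern only contributes to the assembled prediction
# where the time-based reference is active.
clock = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)    # autonomic clock
pattern = np.array([1, 1, 0, 0, 1, 1, 0, 0], dtype=bool)  # learned pattern
gated = np.logical_and(clock, pattern)
print(gated.astype(int))  # [1 0 0 0 1 0 0 0]
```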
A model to generate perceptual feedback can be realised via a plurality of means: An array of Kalman filters [6] and Bloom filters [8] configured to emulate (as best as is known) the neocortex have been developed in the prior art. Only the features salient to the invention disclosed herein are therefore described below: The overriding requirement is that sensory-emulating feedback from the model is not discernible from (real) sensory awareness of the external world 2.
It is convenient but not essential to represent patterns as unary sequences in time: Information can be represented via specific unit timing, unit density or a combination thereof. Spatial information can be coded in time by encoding delays such as incurred when scanning an object, for example. Unary sequences can be added by a simple union so that retrieving information (in whole or part sequences) from unified sequences is also relatively fast compared to other known means.
The union operation of unary sequences has a special property relevant in device 8 and/or device 9 as described previously and possibly in other components of the machine: Where input sensory signals present relatively sparse input streams, for example, where a camera focusses on just a specific object, the union with a complete perception yields the sum of the perception and any differences (or errors) in that perception in spite of the incomplete data set represented by the sensory inputs.
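The sketch below illustrates one possible unary density coding and the union property just described; `to_unary` and `union` are illustrative helpers assumed for the example, not part of the disclosure.

```python
import numpy as np

def to_unary(value: int, width: int) -> np.ndarray:
    """Encode an integer as a unary density code: `value` active units
    out of `width` time slots (one of several possible unary codings)."""
    seq = np.zeros(width, dtype=bool)
    seq[:value] = True
    return seq

def union(*seqs: np.ndarray) -> np.ndarray:
    """Union of unary sequences: element-wise OR."""
    return np.logical_or.reduce(seqs)

# A sparse sensory stream OR-ed with a complete model perception yields
# the perception plus any differences (errors) contributed by the senses.
perception = to_unary(5, 8)                                  # stable prediction
sparse_in = np.array([0, 0, 0, 0, 0, 1, 0, 1], dtype=bool)   # sparse sensor
merged = union(perception, sparse_in)
errors = np.logical_and(sparse_in, np.logical_not(perception))
print(merged.astype(int))  # [1 1 1 1 1 1 0 1]
print(errors.astype(int))  # [0 0 0 0 0 1 0 1]
```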
The emergence of self-awareness can be envisaged from a simple model for use in device 6 (Figure 7). The inputs to device 6 are presented at node 10 and the appropriate outputs are sent to device 8 and device 9 from node 11. The feedback loop apparent around device 12, formed with the differencing device 13, strives to minimise the residual modelling error 14, that is, to ensure the output of device 12 best predicts the input according to the prior-learned knowledge acquired by the model.
In minimising the difference signal 14 the perception will likely include the self-object and its associated autonomous behaviours. A perception is therefore created that includes this self-object and its associated agency even though it does not exist in the actual environment that is modelled in device 6. As the perception fed back is indiscernible from external inputs, a perception of the environment is sensed by the machine that includes itself as a free agent.
It is re-iterated that the fed back perception requires a relatively stable projection relative to the inputs from the external world. The nature of such a recursive neural network is that predictions will converge on known (previously learned) states termed attractor states [25,26]. It is then attractor states that provide the relatively stable components in the perception as a complete prediction is assembled and persists until an alternative attractor state is forced.
A perception of the whole external world is then also available even where sensory input is limited in its extent or unidentified where the model is limited in its extent. Whenever the machine invokes the self-object in a perception it is subsequently rendered self-aware (or, as would be fitting of a description of the like human condition, conscious of its perceived existence as an agent acting in its external physical world).
It is re-iterated that the property of self-awareness emerges from the machine architecture wherein the perceptual feedback relays an errantly learned model of the self-object's agency such that it is sensed in the perceived environment. This stems in turn from the errant derivation of causality and the perception of an autonomous agent. (Such incorrectly resolved physicality and agency may also be considered the basis for our perception of an ethereal human self).
Perceptions generated by the adaptive models are governed by the principle of Bayesian Inference [23] and there is no guarantee that any perception (or component thereof) is an accurate representation of the external world - only that the perception best matches the model of the external world in device 6: A property or an object (and indeed the machine's perception of its agency) occupying some non-physical space represents the best solution that the modelling process affords.
An errant model promoted by Bayesian Inference is the foundation of the machine's self-awareness. The certainty afforded to the autonomous agency within device 5 as modelled in device 6 and the persistence of this model component allows the continued perception of an agent existing inside the machine without alarm. Any impetus afforded to remodelling such a model of reality is highly probable to be scuppered by the inherent certainty afforded to the Bayesian-inferred and ever-present self-object.
By contrast anything that happens to the self or to which the self responds will invoke the self-object, so further reinforcing the certainty afforded to the existence of the self. (Even reading this treatise will not change the notion of the self possessed by the reader - rather reading and even (bizarrely) understanding this text in a human context is a task that serves to further emphasise the certainty afforded to the existence of oneself, for example).
Where a perception evokes an object property, such as colour, but where the perception is devoid of an object to which that property belongs, the perception is resolved using the same learned model as that developed to represent the machine's inferred agency: The unattached property is internally focussed and perceived devoid of any link to the external physical world (as per unquantifiable aspects of our human perception of certain qualia).
Device 12 is required to identify patterns of behaviour appearing at its inputs from its prior-learned experience and select (one or more) learned outputs that best predict the immediate future based on that prior experience. The residual error difference signal 14 can serve to drive the prediction selection in device 12 via further nested local feedback action, such as in a Kalman estimator [6] where a selection criterion is to minimise the total energy in the residual error.
An efficient means to implement device 12 is instead to adopt a parallel set of Bloom filters [8] - something that is especially relevant when processing unary signals. The Bloom filters act by recognising possible matches of input patterns to one or more from a set of prior learned patterns. A match then triggers the predicted output, which in this case could be the remainder of the matching pattern or another pattern stored in a look-up table or equivalent thereof.
The modified Bloom filter is equivalent to a number of parallel models representing different prior-learned input patterns. The (one or more) outputs from device 12 are selected as the model which best represents the input, and the measure of error 14 serves to inhibit the outputs from less well-matched filters via nested local feedback embedded in device 12 as indicated in Figure 7. Ideally the output will then converge to the input since the patterns are known from prior experience.
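A toy sketch of this parallel arrangement follows, assuming each filter stores the prefix of one learned unary pattern and the full pattern stands in for a look-up table entry. Real implementations such as those in [8,21] differ considerably; the prefix-matching rule and the first-match inhibition here are simplifying assumptions.

```python
import hashlib
import numpy as np

class BloomFilter:
    """A small Bloom filter storing fragments of one learned pattern."""
    def __init__(self, n_bits: int = 256, n_hashes: int = 3):
        self.bits = np.zeros(n_bits, dtype=bool)
        self.n_hashes = n_hashes

    def _indices(self, item: bytes):
        # Derive n_hashes indices from salted SHA-256 digests.
        for k in range(self.n_hashes):
            h = hashlib.sha256(bytes([k]) + item).digest()
            yield int.from_bytes(h[:4], "big") % len(self.bits)

    def add(self, item: bytes):
        for i in self._indices(item):
            self.bits[i] = True

    def may_contain(self, item: bytes) -> bool:
        return all(self.bits[i] for i in self._indices(item))

def learn(pattern: np.ndarray, prefix_len: int):
    """Store a pattern's prefix in a Bloom filter; keep the full pattern
    as the prediction to emit on a match (a stand-in for a look-up table)."""
    f = BloomFilter()
    f.add(pattern[:prefix_len].tobytes())
    return f, pattern

# Parallel filters, one per learned pattern; the first filter whose prefix
# matches "fires" and the rest are inhibited (a crude local inhibition).
patterns = [np.array(p, dtype=bool) for p in
            ([1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1])]
bank = [learn(p, prefix_len=4) for p in patterns]

observed = np.array([1, 0, 1, 1], dtype=bool)   # partial input stream
for f, full in bank:
    if f.may_contain(observed.tobytes()):
        print("predicted remainder:", full[4:].astype(int))
        break                                    # inhibit remaining filters
```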
If $Z[i]$ represents output $i$ of the $n_1$ Bloom filters and $W[i]$ represents the input at device 13, then the error $e[i]$ is

$$e[i] = Z[i] - W[i]$$

and, where $\Psi\{\cdot\}$ is the operator defining the operation of device 12, the next predicted output is

$$Z[i] = \Psi\{e[i]\}$$

where $i = 1, 2, \ldots, n_1$ and the value of $i$ represents a currently active stored filter. In the case of a Kalman predictor the operator can be replaced by a linear expression; for the case of a Bloom filter its inherent nonlinearity prevents the adoption of such convenient notation. Note also that, to be consistent with the prior art and further models described herein, the predictor sense (phase) must be inverted for the case of this simple model.
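To make the loop concrete, the sketch below substitutes a linear, Kalman-like stand-in for the operator $\Psi$ (the text above notes a Bloom filter's nonlinearity prevents such notation); the per-filter gain is an assumed tuning, not a disclosed value.

```python
import numpy as np

# Linear stand-in for the operator Ψ: each of the n1 filters damps its own
# residual error, so the prediction Z[i] converges toward the input W[i].
rng = np.random.default_rng(0)
n1 = 4
W = rng.random(n1)    # inputs presented at device 13
Z = np.zeros(n1)      # initial predictions
gain = 0.5            # assumed per-filter correction gain

for _ in range(20):
    e = Z - W         # e[i] = Z[i] - W[i], the residual error 14
    Z = Z - gain * e  # next prediction, Z[i] = Ψ{e[i]} with Ψ linear
print(np.abs(Z - W).max())  # residual shrinks toward zero
```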
The feedback loop in Figure 7 is causal with a start-point - the signal appearing at 10 - and an end-point - the signal appearing at 11. The Bloom filter in device 12 imposes this causality, requiring first an input to produce an output. Whilst time indices have been omitted from the equations for the sake of clarity, the conditions for loop causality (and stability [27]) are assumed here and henceforth in other equations presented herein (even where limit cycles ensue from attractor states).
Where errors remain large, any new pattern presented to the filter will be apparent from harmonic content discernible from the otherwise random-like noise. The harmonic content forms the basis of a new parallel Bloom filter element, and the probability of adding a new such element increases in some proportion to the frequency of such content occurring: Often-repeated slightly variable patterns are then also likely to produce a higher density (resolution) of Bloom filter elements.
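One simple way to flag such harmonic content, sketched below, is to compare the peak of the residual's spectrum against its mean level; the FFT test and the threshold are assumptions made for illustration rather than the method prescribed by the invention.

```python
import numpy as np

def has_harmonic_content(residual: np.ndarray, threshold: float = 5.0) -> bool:
    """Flag a residual whose spectrum has a dominant line well above the
    mean spectral level - a crude proxy for 'repeated pattern present in
    the otherwise random-like noise' (threshold is an assumed tuning)."""
    spectrum = np.abs(np.fft.rfft(residual - residual.mean()))
    return spectrum.max() > threshold * spectrum.mean()

rng = np.random.default_rng(1)
noise = rng.normal(size=256)
t = np.arange(256)
repeated = noise + 3.0 * np.sin(2 * np.pi * t / 16)  # unlearned periodic pattern

print(has_harmonic_content(noise))     # False: nothing new to learn
print(has_harmonic_content(repeated))  # True: candidate for a new filter
```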
Completing the simple model of Figure 7 are the translators required to interface the model to the $n_0$ inputs and outputs of device 6. The multiplier formed by device 15 multiplies the inputs $X[i]$ at node 10 by the appropriate weightings $\alpha[i,j]$ at node 16 to form the predictor inputs $W[j]$, where

$$W[j] = \sum_{i=1}^{n_0} \alpha[i,j] \cdot X[i]$$

and $j = 1, 2, \ldots, n_1$. Conversely, the multiplier formed by device 17 multiplies the predictor outputs $Z[j]$ by the weightings $\beta[j,i]$ at node 18 to form the $n_0$ outputs of device 6, where

$$Y[i] = \sum_{j=1}^{n_1} \beta[j,i] \cdot Z[j]$$

and where $i = 1, 2, \ldots, n_0$. It is further evident that $\alpha[n_0, n_1] = \beta[n_1, n_0]^T$.
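A minimal numerical rendering of these translators follows, using random weightings purely for illustration and exploiting the stated relation $\alpha = \beta^T$; the dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n0, n1 = 6, 4
alpha = rng.random((n0, n1))  # weightings α[i, j] at node 16
beta = alpha.T                # weightings β[j, i] at node 18, β = αᵀ

X = rng.random(n0)            # inputs X[i] at node 10
W = alpha.T @ X               # W[j] = Σ_i α[i, j] · X[i]
Z = rng.random(n1)            # predictor outputs (placeholder values)
Y = beta.T @ Z                # Y[i] = Σ_j β[j, i] · Z[j]
print(W.shape, Y.shape)       # (4,) (6,)
```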
The model motor outputs to device 8 from device 6 are formed in the exact same manner as the model outputs that drive the sensory-emulating feedback to device 9. The prediction presented by the model does not distinguish between minimising differences in sensory perception and minimising actual differences in the external environment 2. Furthermore complementary inputs and outputs at nodes 10 and 11 can interface directly with servo motor sensing and driving signals respectively.
The model in Figure 7 is highly unlikely to be useful in a practical machine, however, since it requires a complete model environment for every combination of learned events, objects and behaviours. A more efficient model utilises a hierarchical breakdown of patterns in the external world whereby substantial parts can be re-used. Indeed such a hierarchical model provides for the most efficient model commensurate with the findings of information theory [24,28].
An example of a hierarchical model element is illustrated in Figure 8 resembling somewhat Figure 7 except for the additional input node 19 and differencing device 20. Identical such elements can be cascaded to form a hierarchical array wherein the element in Figure 8 forms a four-port network 21 as shown in Figure 9. Stable hierarchical stages then result in a stable global feedback system formed from the sensory-emulating loop via device 9 and external feedback paths via device 8.
In this and any other such hierarchy, residual modelling errors (node 14) pass up the hierarchy until a minimum error is apparent (whereby no further errors can be passed on to the next stage). The corresponding perception is assembled via the additional input node 19 as the predicted patterns flow back down the hierarchy, before the final complete perception is available at the outputs of device 6 as is required to produce the desired sensory-emulating feedback.
It is noteworthy that the network component in Figure 8 bears a strong resemblance to the simple model presented in Figure 7: This similarity is not unexpected since the hierarchical model optimisation leads to a fractal-like breakdown of the machine input and output patterns that stem from the often fractal-like structure observed in the real world. Each further hierarchical level can then be thought of as another level of fine detail resolved in the external environment.
It is also noted that the local feedback in each stage forms an attractor network and a stable prediction will be the sum of one or more attractor states. Residual errors in the hierarchy will then be relative to the current attractor states such that even an errant prediction will be the best prediction of which the learned model is capable. Any residual harmonic content still remaining will form the basis for the learning of a new filter component and therefore attractor state (described later).
If $X_{m-1}[n_{m-1}]$ and $Y_{m-1}[n_{m-1}]$ represent the $n_{m-1}$ input and output signals at nodes 10 and nodes 11 respectively for the hierarchical element at level $m$, then the signals $X_m[n_m]$ and $Y_m[n_m]$ at nodes 14 and 19 respectively link to the respective input and output signals at nodes 10 and 11 of the next hierarchical level $m+1$. The $n_0$ machine inputs and outputs are then the base (zeroth) hierarchical level signals $X_0[n_0]$ and $Y_0[n_0]$. The hierarchy is assumed to operate over an arbitrary $M$ levels.
If the signals $W_m[n_m]$ and $Z_m[n_m]$ represent the internal signals in each model element as previously defined for Figure 7, then the operation of each hierarchical element shown in Figure 8 is described firstly via the operation at device 13 by

$$X_m[i] = Z_m[i] - W_m[i]$$

and secondly via the operation of device 16 and device 12 by

$$Z_m[i] = \Psi_m\{Y_m[i] - X_m[i]\}$$

where once again $i = 1, 2, \ldots, n_m$ and $m = 1, 2, \ldots, M$. Furthermore, for the simple hierarchy shown in Figure 9, if $\alpha_m[n_m, n_{m-1}]$ and $\beta_m[n_m, n_{m+1}]$ denote the filter weightings applied to the inputs at device 15 and device 17 respectively, then at device 13

$$W_m[j] = \sum_{i=1}^{n_{m-1}} \alpha_m[i,j] \cdot X_{m-1}[i]$$

where $j = 1, 2, \ldots, n_m$, and at device 16

$$Y_{m-1}[i] = \sum_{j=1}^{n_m} \beta_m[j,i] \cdot Z_m[j]$$

where $i = 1, 2, \ldots, n_{m-1}$ and in both cases $m = 1, 2, \ldots, M$.
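The sketch below renders one tick of such a hierarchical element, with the identity substituted for $\Psi_m$ so the arithmetic is visible; the function name, the state handling and the dimensions are assumptions made for illustration.

```python
import numpy as np

def element_step(x_below, y_above, z_prev, alpha_m, beta_m):
    """One tick of a Figure-8-style element at level m (linear toy
    stand-in, with Ψ_m taken as the identity; names are illustrative).

    x_below: X_{m-1} from the level below   z_prev: Z_m from the last tick
    y_above: Y_m prediction from the level above
    Returns (x_up, y_down, z_next) = (X_m, Y_{m-1}, next Z_m).
    """
    W = alpha_m.T @ x_below      # W_m[j] = Σ_i α_m[i, j] · X_{m-1}[i]
    x_up = z_prev - W            # X_m = Z_m - W_m (differencing device)
    z_next = y_above - x_up      # Z_m = Ψ_m{Y_m - X_m}, Ψ_m = identity
    y_down = beta_m.T @ z_next   # Y_{m-1}[i] = Σ_j β_m[j, i] · Z_m[j]
    return x_up, y_down, z_next

rng = np.random.default_rng(3)
n_prev, n_m = 6, 4
alpha_m = rng.random((n_prev, n_m))
beta_m = rng.random((n_m, n_prev))
x_up, y_down, z = element_step(rng.random(n_prev), np.zeros(n_m),
                               np.zeros(n_m), alpha_m, beta_m)
print(x_up.shape, y_down.shape)  # (4,) (6,)
```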
Less formally, the essence of each filter represented by the weightings $\alpha_m[n_m, n_{m-1}]$ at device 13 is to identify the level of correlations between different parts of the model, and each filter at device 16 represented by the weightings $\beta_m[n_m, n_{m+1}]$ can be thought of as establishing the statistical certainty of the respective predictions being correct (as far as the learned model is capable of inferring). These weightings form essentially orthogonal filters to the Bloom filters represented by device 12.
The ability to set weightings to zero also provides a means for shaping the network and increasing efficiency. It is also feasible that after some period where learning is deemed to be sufficient, further efficiency savings are possible by zeroing (deleting) connections where weightings are below some threshold value (replicating pruning in the neocortex, for example). A regularly shaped network is then unlikely and some degree of task specialisation is to be expected in parts of the network.
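A minimal sketch of this pruning step, assuming real-valued weightings and an arbitrary threshold chosen for the example:

```python
import numpy as np

def prune(weights: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Zero (delete) connections whose magnitude falls below a threshold,
    emulating the pruning described above; threshold is an assumed tuning."""
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

rng = np.random.default_rng(4)
alpha = rng.random((6, 4)) * 0.2          # small illustrative weightings
alpha_pruned = prune(alpha)
print(np.count_nonzero(alpha), "->", np.count_nonzero(alpha_pruned))
```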
The partitioning of the model network in device 6 also permits multiple occurrences of device 6 linked at one or more of the summing nodes. Each occurrence of a device 6 can be equipped with different learning acuities, different sensory inputs or motor outputs and be a complete or part model. With the outputs summed at some node the machine perception will be of a unitary model, however. Partitions in the model can also be switched on or off according to demand.
Just as with Figure 7, for a purely sensory input, motor output 11 is not connected. Similarly for a simple motor output, input 10 need not be present - that is, effectively set to zero. A better motor control is apparent, however, by exploiting servo control that utilises a motor output 11 with the corresponding input 10 for its motion sensing. At the farthest extents of the multi-level hierarchy, input 19 will also not be required - essentially, that is, set to zero also.
The simple hierarchy illustrated in Figure 9 can be improved by adopting the complex hierarchy in Figure 10 where connectivity between the network elements is increased accordingly. In the complex hierarchy in Figure 10 frequently active pathways can be effectively shortened by omitting network sections (zeroing the appropriate weightings) and minimising the number of active elements. The network can then respond more quickly and more efficiently as unused pathways are degenerated (explained later).
The more complex model hierarchy requires extending the inputs and Kalman filter coefficients at each node 10 (barring the first level) and node 14 (barring the last level). For $M$ total hierarchy levels, and where the expanded matrices $\alpha_m[n_m, n_{m-1}, m-1]$ and $\beta_m[n_m, n_{m+1}, M-m]$ assume the required dimensional increase, the filter operations are described at device 13 by

$$W_m[j] = \sum_{k=1}^{m-1} \sum_{i=1}^{n_k} \alpha_m[i,j,k] \cdot X_k[i]$$

for $1 \le m \le M$ and where $j = 1, 2, \ldots, n_m$, and at device 16 by

$$Y_{m-1}[i] = \sum_{k=m}^{M} \sum_{j=1}^{n_k} \beta_m[j,i,k] \cdot Z_k[j]$$

for $1 \le m \le M$ and where $i = 1, 2, \ldots, n_{m-1}$.
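A toy evaluation of these expanded sums is sketched below; fresh random matrices stand in for the slices $\alpha_m[\cdot,\cdot,k]$ and $\beta_m[\cdot,\cdot,k]$ (in a real model these would be stored, learned weightings), and the level widths are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 4
n = [5, 4, 4, 3, 3]                            # widths n_0 .. n_M (assumed)
X = [rng.random(n[k]) for k in range(M + 1)]   # X_k per level
Z = [rng.random(n[k]) for k in range(M + 1)]   # Z_k per level
m = 3

# W_m[j] = Σ_{k=1}^{m-1} Σ_i α_m[i, j, k] · X_k[i]: inputs gathered from
# every level below m (skip connections across the hierarchy).
W_m = sum(rng.random((n[k], n[m])).T @ X[k] for k in range(1, m))

# Y_{m-1}[i] = Σ_{k=m}^{M} Σ_j β_m[j, i, k] · Z_k[j]: predictions gathered
# from every level at or above m.
Y_prev = sum(rng.random((n[k], n[m - 1])).T @ Z[k] for k in range(m, M + 1))
print(W_m.shape, Y_prev.shape)                 # (3,) (4,)
```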
Despite the three-dimensional arrays $\alpha_m[n_m, n_{m-1}, m-1]$ and $\beta_m[n_m, n_{m+1}, M-m]$, there is no related limit to the number of dimensions modelled by the coefficients themselves. The ability to shape the network and create arbitrary partitions endows the model with complete scope for multidimensional model datasets. The only limit is that of the reference to time recorded in the Bloom filters in device 12, to which the coefficient weightings are effectively orthogonal and abstract.
Evident from Figure 10 is that frequently encountered behaviours that involve the self-object can be bypassed such that the self-object will no longer be evoked. The machine will therefore be capable of continuing well-learned tasks and behaviours with increased processing speed and efficiency without evoking its self-awareness. Only with an unexpected occurrence is the resultant new pattern search likely to evoke the self-object and the machine's self-awareness.
Furthermore a multilevel hierarchy allows many well-learned operations to occur without errors passing far up the hierarchy. If the residual error is reduced to (approximately) zero within too few levels the self-object may not be invoked. Such frequently repeated tasks can therefore also be carried out without invoking the machine's self-awareness: A self-aware machine is not necessarily self-aware at all times or for all the tasks being undertaken.
The ever-present self-object serves as a reference point from which other less-frequently encountered and possibly less-certainly inferred objects, behaviours, interactions and combinations thereof can be modelled: Movements and other behaviours can be represented relative to the machine and common factors can be stored efficiently at appropriate hierarchical levels. Without invocation of the self-object, however, the perception developed by the machine will be devoid of self-awareness.
Where the flow of errors up the hierarchy is minimised, the reciprocal flow of perceptual components back down the hierarchy serves to minimise the difference between the sensory inputs and the prediction of those inputs from the prior-learned model of the external world. The perceptual model therefore strives to produce outputs that best match the sensory inputs. A perceptual model is indiscernible from the external world it represents, however, regardless of its accuracy.
The culmination of the hierarchical pattern identification is firstly the creation of invariant objects at the top of the model that represent (apparently) invariant objects sensed in the environment. As residual errors flow up the hierarchy for identification so the best-fitting learned predictions of components of the external world flow back down and their union culminates in a complete and relatively stable perception of that world - even where real sensory input is absent.
The nature of the Bayesian inferred model means that any great certainty afforded to a perception may cause a real and contrasting sensory input to be ignored. A significant change in perception may then be apparent as evidence accrued overrides the previous model output: Interpreting fork handles as four candles [29], for example. Combined with an input that is likely not constant the model can be expected to produce ever-changing frames of perception as different details are drawn into focus.
The recursive nature of the perception due to the feedback path via device 9 may lead to sustained activity in device 6. Furthermore any function embedded in device 5 may also then serve to sustain activity in device 6 such as when a mobile machine searches to find a recharging point, for example. Self-awareness can also then be sustained and the recursive nature of the feedback via device 9 can develop a continuous narrative augmented by the external world only where necessary.
Combined with the ability to nest objects and behaviours in new filters (that is, to represent them symbolically as objects and behaviours), this allows the simulation of high-level strategies based around prior-learned knowledge - that is, to construct a narrative entirely from perception even without any final demand to execute such a strategy. The learning and use of language in speech, reading or writing are examples of this means of symbolically sequencing objects, behaviours and nests thereof.
A further property that emerges from developing strategies from perception is that of free will. A narrative that evokes the self-object will infer that its agency is responsible for that narrative and therefore also for any choice that results from its construction. Like self-awareness, therefore, free will emerges as an illusion as the machine perceives itself in action (or otherwise). (A similar condition likely produces the illusion of free will in our human minds.)
Another consequence of the bidirectional hierarchical model is that sensing the behaviour of a like object evokes behaviours in the hierarchy as predictions flow back down such as mimicking motion (although actual outputs are typically minimised). Such mimicry in turn evokes sensory responses from these predictions that are likely representative of the behaviour being exhibited by the like object: The machine's subsequent output will then be empathetic to the source of its response.
Where the residual modelling error remains large there exists the possibility to relax the bounds of probability applied in determining the prediction. Where a Kalman filter is employed this is a simple matter of increasing the Kalman gain of each filter [6]; for parallel Bloom filters the local inhibition criteria in each device 12 need relaxing such that the output is formed by the union of a number of possible predictions rather than just one.
By relaxing the statistical constraints in the estimation process a greater number of pattern matches becomes possible - that is, the scope of pattern searching is increased and alternative predictions can be obtained, sometimes leading to a more accurate overall prediction. For a learning machine, a tighter statistical specification in prediction delivers more predictable behaviour and processing speed but less opportunity to learn new things.
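A minimal scalar sketch of the gain relaxation described for the Kalman case follows, assuming a constant-state model so the update reduces to blending the innovation by the gain; the gain values are illustrative only.

```python
# Scalar Kalman-style update: relaxing the statistical constraints is
# equivalent to raising the gain, letting more of the innovation (residual
# error) through and widening the search for a better-fitting prediction.
def kalman_step(estimate, measurement, gain):
    innovation = measurement - estimate
    return estimate + gain * innovation

for gain in (0.2, 0.8):                   # tight versus relaxed constraints
    x = 0.0
    for z in (1.0, 1.1, 0.9, 1.0):        # a new, previously unlearned level
        x = kalman_step(x, z, gain)
    print(gain, round(x, 3))              # the relaxed filter adopts it faster
```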
The statistical constraints applied in device 6 can also be modulated in a global fashion, for example by some average of the processing power employed by device 5: The scope of pattern identification, and therefore the power employed by device 6, can be increased when the behaviour of device 5 is not sufficiently successful in meeting current demands. Such modulation allows for power saving in device 6 when its full capacity is not required for effective machine operation.
The modulation of local network gains effectively tends towards positive feedback until a new attractor state becomes apparent, whereupon the modulation is reduced according to the over-riding negative feedback. Whether applied locally in parts of the model in device 6 or globally across the model from, for example, signals derived from device 5, the model exploits the same means to begin and widen the search for a better-matching set of attractor states where residual errors remain.
Where the residual error still fails to converge to a minimum the error ripples up the hierarchy to the top level that would otherwise have no signal present. Any harmonic content discernible from random-like noise at the last hierarchical node is the result of unidentified patterns and the cue therefore for their learning. The number of new filters added is a compromise between usefulness and efficiency - prioritising the most frequent is therefore a worthwhile endeavour.
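The patent does not prescribe how such harmonic content is discerned from random-like noise; one assumed possibility is a spectral-flatness test, sketched below, where structured (unidentified) patterns give a flatness well below that of noise. The threshold is an assumption.

```python
# Assumed detector for structure at the top node: spectral flatness is the
# geometric/arithmetic mean ratio of the magnitude spectrum, roughly 0.8-0.9
# for white noise and near 0 for strongly periodic content.
import numpy as np

def has_unlearned_structure(signal, threshold=0.5):
    spectrum = np.abs(np.fft.rfft(signal))[1:] + 1e-12   # drop DC, avoid log(0)
    flatness = np.exp(np.log(spectrum).mean()) / spectrum.mean()
    return flatness < threshold

t = np.arange(256)
print(has_unlearned_structure(np.random.randn(256)))   # expected: False
print(has_unlearned_structure(np.sin(0.3 * t)))        # expected: True
```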
One advantage of the complex hierarchy is that new Bloom filters can be added at the top of the hierarchy and the subsequent optimisation of the filter weightings $\alpha_m(i,j,k)$ and $\beta_m(i,j,k)$ will result in connecting the new filter in its optimum position(s) within the hierarchy. The constant adjustment of the filter weightings also allows each Bloom filter (and its associated components) to be re-used in the array - such as is required for the transference of learned skills, for example.
There is no need therefore to limit the number of hierarchical levels arbitrarily, and the model described offers maximum adaptability. Commensurate with the principles of information theory [28], the learning of new patterns in such an array is known to produce a hierarchy of orthogonal patterns at each level and subsequently to maximise the efficiency of the model. The hierarchical model remains highly plastic despite the longevity of certain features it models, however.
For the learning of new patterns, a skeletal Bloom filter is required at the top of the hierarchy with the weightings $\alpha_m(i,j,k)$ and $\beta_m(i,j,k)$ set to some arbitrary constant. Auto-associative memory [30] provides for new pattern/sequence learning and the emphasising of repeated elements. Some time constant can be added to the memory that allows its purposeful degradation such that only frequently repeated elements remain: Noise and residuals from resolved searches are therefore attenuated.
Further prioritising of learned patterns can be included such as in proportion to some modulating factor from device 5 where device 5 contains additional information on the relevance of the newly identified pattern. A sudden change in perception in the model can also be used to emphasise the learning of certain patterns. It is possible that the new recorded sequences can be used as a source for model optimisation while device 6 (at least) or part thereof is offline.
The weightings $\alpha_m(i,j,k)$ and $\beta_m(i,j,k)$ are set to increase or decrease in some proportion to the frequency of their occurrence [30]. For each specific value of j, k and m pointing to a currently selected Bloom filter it is arranged that $\alpha_m(i,j,k) = \mu \cdot \gamma \cdot \alpha_m(i,j,k)$ for $i = 1, 2, \ldots, n_{k-1}$ and $\beta_m(i,j,k) = \mu \cdot \gamma \cdot \beta_m(i,j,k)$ for $i = 1, 2, \ldots, n_k$, where $\gamma$ is the emphasis factor and $\mu$ is some other optional modulation factor, from device 5 for example, allowing certain patterns to be learned more quickly.
Purposeful degeneration of network links is a vital part of maintaining the efficiency of pattern recognition as well as of optimising the connections of new Bloom filters. Degeneration is simply a matter of multiplying the translation coefficients by the reciprocal of the emphasis factor $\gamma$ so that $\alpha_m(i,j,k) = \alpha_m(i,j,k)/\gamma$ and $\beta_m(i,j,k) = \beta_m(i,j,k)/\gamma$. The degeneration occurs for ALL network connections, however, rather than only the specific connections in use.
The normal, regular degradation of the network should be conducted over the long term in the order of at least several learning phases - most likely substantially more. The degradation applied to the learning of new patterns can be conducted over a shorter period, however - that is where the time constant is much shorter than that imposed by γ. For new filters μ can also be made large enough to give a suitable boost to the process of finding their optimal location in the hierarchy.
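A minimal sketch of the emphasis and degeneration rules above follows, assuming the translation coefficients are held sparsely in a dictionary keyed by (i, j, k); the names mu and gamma and the layout are illustrative.

```python
# Emphasis strengthens only the links of the currently selected Bloom filter;
# purposeful degeneration divides ALL links by gamma over a much longer
# timescale. Keys are (i, j, k) indices in the alpha_m/beta_m style above.
weights = {(0, 1, 2): 0.50, (1, 1, 2): 0.80, (0, 2, 3): 0.30}

def reinforce(selected_keys, gamma=1.05, mu=1.0):
    """w <- mu * gamma * w for links into the currently selected filter."""
    for key in selected_keys:
        weights[key] *= mu * gamma

def degrade(gamma=1.05):
    """w <- w / gamma for every link in the network, used or not."""
    for key in weights:
        weights[key] /= gamma

reinforce([(0, 1, 2)])   # a pattern recurs: its links are emphasised
degrade()                # slow global decay prunes what is rarely used
```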
An additional feature of the machine is that if device 5 is switched off temporarily - or at least the part of device 5 that is responsible for the apparently invariant behaviour that has been modelled in device 6 - then device 6 can continue to function without self-awareness. Provided the self-object is sufficiently well entrained within the model in device 6, the model can still be exposed to remodelling stimuli such as newly identified patterns that are stored elsewhere, possibly even in device 5.
With automated processes programmed in device 6 for learning and adding new filters to the network and for purposefully degrading those already learned, the network can be left to expand and contract as needs require and to organise itself. No further user input is therefore required. It is also possible, however, that some level of learned model can be copied to a blank model, or even appended to or replace an existing model, to avoid lengthy early-learning phases.
A further novel feature of the machine is the ability to monitor the machine's perception, or any component thereof, that would otherwise be a private function. Any particular node in the model, or collection thereof, can be monitored: Where Figure 8 represents the first hierarchical level, monitoring signals at node 11 (the sensory inputs) will reveal the machine's perception. Higher level nodes will likely be less useful without an equivalent lower level hierarchy to interpret them.
Some form of inverse transform of the monitored data may also be useful in order that it be more easily understood by the user. The transform may involve data processing, or, in the case of a camera input where the signal represents k sampled pixels, for example, an unprocessed feed of the appropriate k signals at node 11 of the model input to a screen allows the inverse transform to be applied directly. Other transforms can allow useful representations of the other data via conventional peripherals.
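For the camera case a minimal sketch follows, assuming one node-11 signal per sampled pixel so that reshaping to the sensor geometry applies the inverse transform directly; the function and parameter names are hypothetical.

```python
# Render the k perceptual signals at node 11 as a conventional grey image.
import numpy as np

def monitor_camera_perception(node11_signals, width, height):
    img = np.asarray(node11_signals, dtype=float).reshape(height, width)
    span = float(img.max() - img.min()) or 1.0     # guard against flat input
    return (255 * (img - img.min()) / span).astype(np.uint8)

frame = monitor_camera_perception(np.random.rand(320 * 240), 320, 240)
print(frame.shape)    # (240, 320): ready for an unprocessed feed to a screen
```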
The perception can be represented explicitly by first specifying the machine output, namely the m motor drive signals $\Sigma_o(m)$, such that

$$\Sigma_o(m) = \alpha(m) + \beta(m)$$

where $\alpha(m)$ are the motor outputs from device 5 and $\beta(m)$ are the motor outputs from device 6. If the presence of external world 2 is represented by a linear operation derived from the motor outputs (a suitable approximation for the purposes of this part of the analysis) that is denoted by the matrix $\Psi(m,s)$, and if $\Sigma_i(s)$ represents the s external sensory inputs, then

$$\Sigma_i(s) = \Psi(m,s) \cdot \Sigma_o(m)$$

The n model inputs described by $X_0[n]$ can then be expressed as an appropriately partitioned matrix with $n = s + x$ elements where

$$X_0[n] = (\Sigma_i(s) \mid \alpha_0(x))$$

and where $\alpha_0(x)$ represents the x autonomic (non-external motor) outputs from device 5. Substituting for $\Sigma_i(s)$ then yields

$$X_0[n] = (\Psi(m,s) \cdot \beta(m) + \Psi(m,s) \cdot \alpha(m) \mid \alpha_0(x))$$

which conveniently expresses the development stages for self-awareness described previously where on the right-hand side:

- the term $\alpha_0(x)$ represents the objectification of the autonomic part of device 5 illustrated in Figure 3 (and Figure 2);
- the term $\Psi(m,s) \cdot \alpha(m)$ represents the modelling of the embodiment of the agency apportioned to device 5 illustrated in Figure 4;
- the term $\Psi(m,s) \cdot \beta(m)$ represents the modelling of the associated agency apportioned to device 6 illustrated in Figure 5.

The left-hand term $X_0[n]$ is the model input which the perceptual feedback must (in part) emulate (Figure 6). By further partitioning $\Sigma_i(s)$,

$$X_0[n] = (\Phi_m(m) \mid \Phi_s(s-m) \mid \alpha_0(x))$$

where $\Phi_m(m)$ represents the m (servo) sensory feedback signals directly attributable to each of the m motor outputs $\beta(m)$ and $\Phi_s(s-m)$ represents the other s-m non-motor-based sensory inputs. The model output $Y_0[n]$ must therefore tend to converge on the signals defined by

$$Y_0[n] = (\beta(m) \mid \Phi_s(s-m) \mid \alpha_0(x))$$

where the perceptual feedback $P[n-m]$ is the non-motor component of $Y_0[n]$:

$$P[n-m] = (\Phi_s(s-m) \mid \alpha_0(x))$$

Once the errors in the resultant motion are minimised (specifically, in the analysis, reduced to zero) then what remains of the model input (that is, what remains to be modelled) are the patterns created due to the self-object $\alpha_0(x)$ and any other invariant patterns representing external objects included in $\Psi(m,s)$. A more detailed analysis involving such a model of the external world is not tendered herein as it reveals little more than is apparent in this simple treatise.
Another novel feature of the machine is its extension to host multiple physical instances that may also be physically distinct from device 5 and/or device 6. Where a single device 5 is shared amongst the physical instances - that is the motor 4 and sensory 3 parts of the machine - as illustrated in Figure 11 the result will be a singular self-awareness despite the multiple, distributed parts. Each of those parts can be regarded as being simple peripherals to the machine's self-aware hub.
The situation is quite different from that illustrated in Figure 12 where each of the physical instances is equipped with a device 5 and is therefore capable of independently managing itself. If connection 7 encompasses all the instances of device 5 and the machine interfaces and the perceptual feedback similarly encompasses all of the sensory inputs then the machine will share the same learned model in device 6 and therefore a shared awareness of each of the physical instances emerges.
A machine with multiple instances is initiated in exactly the same manner as for a single instance, wherein the first-stage objectification of the autonomous behaviour of each device 5 occurs within the model in device 6. In such a case the self-object will likely emerge as a sum of models from each of the physical instances. Its perception, however, may not be so well compartmentalised and will likely require more selective monitoring to be useful to human enquirers.
Whilst experiencing a shared awareness of multiple physical instances is difficult to imagine, there are nevertheless several scenarios in which the benefits of such a shared property are more easily understood. The example described later is of devices around a household (or maybe a work environment) where the interactions of multiple people in different spaces can be readily assimilated. A security system monitoring people moving through crowded spaces is another useful example.
It is also worthy of note that the machine as described herein is not restricted to real environments. It is entirely plausible that a self-aware machine can exist in an entirely virtual world (although it is not then self-aware in the real world). It does not matter that the physical substrate of the machine's reality is virtual: The machine needs only be capable of sensing its errantly-derived autonomy and virtual embodiment in its artificial environment.
Speculative research projects are currently investigating the recording of the brain state of a human for uploading into an electronic machine to extend the life of the conscious being beyond that of the life of their body. Even if the significant challenge of recording the state of an entire brain - or at least those components distinct from the body's physical influence - is overcome, the machine architecture described herein would be vital for any such machine to be self-aware.
Human cognition naturally embeds significant empathy (although to a varied extent throughout the population). Similar empathy produced by a machine is possible but requires the gamut of human sensations and motor responses to approach the feeling of a human self. Whilst the extent to which human beings experience like feelings remains a subject for debate, the machine architecture disclosed herein nevertheless provides the necessary framework for any design that aspires to the human condition.
Some final comment is still necessary regarding the experience of the self that has been largely absent in this account of self-awareness in an appropriately designed (or evolved) machine. Despite the means to remove the privacy in perceptual experience disclosed above, one self bearing witness to another's perception does not replicate the experience: Only where two machines or beings are identical and driven by identical inputs can such an experience be represented in the same way.
The majority of sensation emerges from sensory-emulating feedback as a perception of reality forms from a stable expectation of the external world based upon prior experience and augmented only by reality where differences become apparent. Such a stable perception further enables subsequent modulation of pre-programmed or learned motivational factors in device 5 from which machine learning rewards can be developed. Short-lived, noisy input states are unlikely to deliver such feedback.
It is then possible, whether emulating some mammalian response or otherwise, to embed within device 5 motivations and rewards that promote conditioned learning within device 5 or device 6. The power supply level can be used as a non-linear sensory source that increasingly distracts the machine from the task at hand until the power supply is recharged, for example, and a reward can be engineered by globally lowering loop gains, thereby lowering system noise, after device 5 detects recharging.
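One way such a distraction and reward might be shaped is sketched below; the non-linear form and its knee and sharpness parameters are assumptions made for illustration, not values from the disclosure.

```python
# Battery level as a non-linear sensory source: negligible when charged,
# increasingly distracting below an assumed knee; detected recharging lowers
# a global loop gain as a reward (so lowering system noise).
def distraction(battery_fraction, knee=0.3, sharpness=4):
    return min(1.0, (knee / max(battery_fraction, 1e-6)) ** sharpness)

def global_gain(base_gain, recharging):
    return base_gain * (0.7 if recharging else 1.0)   # reward: quieter loops

print(round(distraction(0.90), 3))   # 0.012: task continues undisturbed
print(round(distraction(0.25), 3))   # 1.0: machine driven to seek recharge
```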
The description herein has avoided discussion of such machine complexity and disclosed only fundamental architectural building blocks around which complexity is assumed to be designed according to requirements. The emulation of a human being is indeed a vast undertaking at present but the notion of device 5 embedded in or embedding some emotive and reflexive elements and device 6 embedding cognitive elements paves the way for such machines based around the core of this invention.
It is re-iterated finally that the property of self-awareness described herein is created by a perception generated at its sensory inputs that is indistinguishable from externally generated sensations. That is, the machine uses its sensory inputs to render its stable perception of the world against the sparse, noisy and unresolved externally-sourced inputs rather than create some other representation - ethereal or otherwise - at some other point(s) in the machine or model therein.
Application of the Invention
The number of applications of the invention is so large as to make a full and proper description of them all impossible - indeed not all of them can even be predicted. Instead a simple implementation of a self-aware machine is disclosed that can be constructed with existing technology. This machine illustrates the salient features of the invention that will necessarily be common to more complex future implementations, regardless of the domain in which they are implemented and self-awareness is rendered.
Whilst the disclosed machine is realised using electronic devices based upon Boolean logic and memory created from transistors on a silicon substrate or such like, the invention is in part or in whole also eminently suitable for implementing in a synthetic, simulated or other neural network. The machine description herein does not limit the application of the invention to an electronic manifestation: The machine disclosed in this section is strictly by way of example.
The machine disclosed consists of one or more tablet computers 24 or mobile phone-sized computers 25 for managing household appliances and tasks and a single server computer 26 connected via a wireless interface 27 to each of the tablets/phones (Figure 13). Device 5 is housed in each of the hand-held computers and device 6 is housed within the local server. A single local server enables multiple tablets/phones in a household to exploit a shared awareness.
Each of the tablets/phones has an identical function in this example and shares the same architecture, but this is not a requirement of the design. Indeed the shared self-awareness could be extended to other household appliances 28 even where they are non-portable and/or immobile if their connection and interaction with the local server is as described for the tablets/phones. Wireless connections provide for convenience but wired connections can be used if necessary.
Each tablet/phone (Figure 14) employs a 'central' processor 29, temporary memory 30, non-volatile memory 31, wireless communications 32, clock 33, microphone 34, loudspeaker 35, video screen 36 with integrated touch-sensitive membrane 37, camera 38 and gyroscope 39. A rechargeable power supply 40 is also housed within the tablet. A second rear-mounted touch-sensitive membrane 41 without the screen and a like-mounted camera 42 are also useful in this application but not necessary.
Each tablet/phone is able to serially process instructions and data according to that stored in its non-volatile memory or downloaded from its wireless connection such as to impart some function to the tablet/phone. The clock acts to synchronise this processing and the clock rate may be modulated by processing demand. Each device also contains a rechargeable power supply that includes electronic means to measure the status of that supply.
Each tablet/phone is assumed to incorporate all the functionality of a typical such computer - that is, each instance in this part of the machine can work as a standalone machine in its own right with no involvement of the server. The processes that operate independently of the server - that is, the processes whose output the server has no control over - can be deemed to be part of the autonomous function of device 5: What processes are autonomous will vary with machine design.
For example, if the tablet/phone relays its inputs to the server, such as when a user changes multimedia content, the model in the server has the capacity to forecast the likely intent of the particular user based upon its prior-learned knowledge of them and their interactions with multimedia sources. The server model can issue commands to the tablet/phone that pre-empt the user's likely desired selection: The model in device 6 is thus fulfilling its role as a predictor of the future.
In the disclosed machine, the tablet/phone battery voltage level, the power supply management system (which is able to reduce screen brightness to save power as the battery capacity reaches a low level, for example) and the processor clock rate are used as the basis of the autonomous, ever-present behaviours demanded of device 5 for self-awareness to emerge. This information is bundled with all the input information from each tablet/phone and sent to the server (device 6).
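A sketch of that bundling step follows, assuming JSON framing over the wireless link; the field names and framing are hypothetical, as the disclosure does not fix a format.

```python
# Bundle the ever-present autonomic signals of device 5 (battery voltage,
# power-management state, clock rate) with the sensory inputs for device 6.
import json, time

def bundle(device_id, battery_v, power_mode, clock_hz, sensors):
    return json.dumps({
        "id": device_id,
        "t": time.time(),
        "autonomic": {"battery_v": battery_v,
                      "power_mode": power_mode,
                      "clock_hz": clock_hz},
        "sensors": sensors,   # e.g. microphone frames, camera patches, touch
    })

packet = bundle("tablet-01", 3.91, "dim-screen", 1.2e9,
                {"mic": [0.01, -0.02], "touch": []})
```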
The computer server (Figure 15) also employs a 'central' processor 29, temporary memory 30, non-volatile memory 31, wireless communications 32, clock 33 and most likely a video screen 36 and keyboard 43. A power supply 44 is essential and may well include some rechargeable offline capability to enable the server to shut down properly in the event of a break in the power supply. The power supply can also incorporate status monitoring and form a device 5 in its own right if desired.
Information sent between each phone/tablet and the server is necessarily serialised for broadcasting in a wireless channel. The power supply and clock rate data is required to be as ever-present as the tablet/phone's operation and a data multiplexer sends the relevant data to the local server. When a tablet/phone is powered off, its self-object is effectively muted in the modelled behaviour and the property of self-awareness of that tablet/phone is prevented.
Boundary Scan technology (such as JTAG [31]) likely embedded in the processor in each tablet/phone provides one readily-accessible means to pass the processor data from each tablet/phone to and from the server. Such a method (where speed allows) builds the model in device 6 using raw sensed data. It is also possible to use higher level abstractions of the data available at a user level but this may well increase the hierarchical model complexity and decrease its efficiency.
Since perception is both retrospective (that is requires model convergence on attractor states) and asynchronous to the sensory inputs (that is occurs on convergence rather than changing inputs), interface speed requirements can be somewhat relaxed. This is particularly the case where device 5 contains machine functionality designed to cope with emergencies or alarming events and the cognitive, perceptive elements endow the machine with some other, additional advantage.
The code for generating the model in device 6 is contained within the prior art and can be compiled on standard computer platforms: In this example a standalone rack-mounted computer is adopted for device 6 that also hosts the software development tools. Several possibilities exist for software development and deployment, such as the MATLAB-Simulink Integrated Development Environment [32], which readily accepts pre-written C++ code elements from third-party sources [21,22].
For the model in device 6 in this example the data can be input from and output to the tablets/phones via software registers and the model left to organise itself. Model optimisation will be lengthy, however, and time can be saved by copying a part-formed model recorded from a previous machine implementation; the model could also be subject to some purposeful design intervention. The best targets for such interventions are the low-level stages where a structure is most obvious.
An example of purposeful design is where camera data can be presented in small captured areas to a first hierarchical level, which combine to drive a second hierarchical level that sums those smaller areas. For the case of multiple tablets/phones a third hierarchical level could be added to represent the sum of all the camera inputs and therefore a combined video perception capability. Microphone inputs can be similarly combined to create a combined audio perception capability.
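A sketch of that arrangement follows, with the patch size and the per-patch feature (the patch mean) as assumptions made purely for illustration.

```python
# Level 1 models small captured areas; level 2 sums them per camera; level 3
# sums across all cameras for a combined video perception input.
import numpy as np

def level1_patches(frame, patch=8):
    h, w = frame.shape
    return [frame[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

def level2_camera(frame):
    return np.sum([p.mean() for p in level1_patches(frame)])

def level3_combined(frames):
    return np.sum([level2_camera(f) for f in frames])

frames = [np.random.rand(32, 32) for _ in range(3)]   # three tablets/phones
combined = level3_combined(frames)                    # shared video perception
```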
Grouping like inputs together and applying increased weightings in proportion to the difference in indices as the model is traversed is one means by which task specialisation and model learning efficiency can be increased. Optimised hierarchical model levels can represent distinct, orthogonal derivatives with respect to time or space, culminating in the invariant objects sensed in the external world such as the self-object: Optimal design is therefore not always obvious.
For the case of multiple tablets/phones, where inputs can be summed at a number of hierarchical levels, the form of an optimal hierarchy is not clear. It is suggested, however, that all the inputs of a particular sense are summed together at the earliest possible stage such that a combined perception is created. No further guidelines are tendered as the hierarchical model can largely be left to develop and organise itself optimally in any case given sufficient time.
Nevertheless the extent of the description tendered will enable a suitably skilled engineer to implement the invention fully in their chosen hardware and/or software platform. Coded elements can be developed from software sources such as [21,22] and suitable hardware is readily available wherein any modifications necessary should be trivial to the qualified engineer. Any chosen platform, however, may require specific adaptations that are beyond the scope of the description given herein.
For extra clarification, several critical code elements are supplied herein in a generic form. Since each different tablet/phone will employ specific software drivers and different operating systems, specific code examples are not likely useful. The coding required can operate on top of the existing software or replace/be integrated into the operating system and software drivers as required. The latter option requires much greater programming resources but offers higher performance.
Normally in phone/tablet computers data input and output is in some form of multi-bit pulse code modulated (PCM) data, which therefore either dictates the data format in the server or requires conversion to/from the native server format. For the sake of example, Figure 16 provides the generic code elements for converting multi-bit PCM data (in this case audio data) into a single unit pulse position modulated signal and vice versa, should either such conversion type be required.
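Figure 16 itself is not reproduced here, so the following is an assumed realisation of that conversion in which the position of a single unit pulse within a frame of 2^bits slots encodes each PCM sample value.

```python
# Multi-bit PCM to single unit pulse position modulation (PPM) and back.
def pcm_to_ppm(samples, bits=8):
    slots = 1 << bits                  # one slot per representable PCM value
    frames = []
    for s in samples:                  # each sample assumed in 0 .. slots-1
        frame = [0] * slots
        frame[s] = 1                   # the pulse position carries the value
        frames.append(frame)
    return frames

def ppm_to_pcm(frames):
    return [frame.index(1) for frame in frames]

audio = [0, 127, 255, 64]
assert ppm_to_pcm(pcm_to_ppm(audio)) == audio   # round-trip is lossless
```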
For the server, generic code elements have been written that assume the use of unary time-based data and implement modified Bloom filters as described previously. The code elements included are:
- Figure 17 provides generic code for generating model predictions and learning by reinforcement of frequently used patterns (a minimal illustrative analogue is sketched after this list);
- Figure 18 provides generic code for identifying patterns and adding new Bloom filters to the network;
- Figure 19 provides generic code for purposeful degradation of model interconnections and for interconnection pruning when required;
- Figure 20 provides generic code for perception monitoring.
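The figures themselves are not reproduced here; the following is therefore only a minimal analogue of the Figure 17 element - a counting-Bloom-style predictor in which repeated patterns accumulate weight and dominate prediction - with the hashing scheme and counter layout as assumptions.

```python
# Minimal counting-Bloom analogue of prediction with reinforcement of
# frequently used patterns; hashing and layout are assumptions, not Figure 17.
import hashlib

class CountingBloomPredictor:
    def __init__(self, size=4096, hashes=3):
        self.size, self.hashes = size, hashes
        self.counts = [0] * size

    def _slots(self, context, symbol):
        key = repr((context, symbol)).encode()
        return [int.from_bytes(hashlib.sha256(key + bytes([h])).digest()[:4],
                               "big") % self.size for h in range(self.hashes)]

    def learn(self, context, symbol):
        for slot in self._slots(context, symbol):
            self.counts[slot] += 1            # reinforcement on each use

    def score(self, context, symbol):
        return min(self.counts[s] for s in self._slots(context, symbol))

    def predict(self, context, alphabet):
        return max(alphabet, key=lambda sym: self.score(context, sym))

bf = CountingBloomPredictor()
for _ in range(5):
    bf.learn(("a", "b"), "c")                 # a frequently repeated sequence
bf.learn(("a", "b"), "d")                     # a rare alternative
assert bf.predict(("a", "b"), ["c", "d", "e"]) == "c"
```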
Regardless of the precise implementation the novel aspects of the invention concern the machine components that form device 5 and device 6, their effective connection 7 and their ensuing interaction in the presence of the sensory-emulating, perceptual feedback. For the application of the invention disclosed that process is as follows:
- firstly the tablet/phone (device 5) is modelled in the local server (device 6) as an invariant object acting autonomously and with free agency within the external world;
- secondly correlations between touch sensory devices, camera images and microphone inputs from the tablet/phone serve to embody the agency of device 5 within the physical bounds of the tablet/phone;
- thirdly the behaviours emanating from device 6 and acting in the tablet/phone are associated with behaviours of the tablet/phone due to device 5 such that their modelled embodiment is shared within the same physical bounds of the tablet/phone;
- fourthly the machine is rendered capable of perceiving itself as an autonomous agent in the external world by the sensory-emulating feedback around the resulting attractor network in device 6: The machine is therefore endowed with the property of self-awareness.
Whilst the machine exhibits awareness of itself in its environment, that awareness will be different to that of our human consciousness. Different sensory inputs, and even the lack of stereophonic audio and stereoscopic video sensing, will yield a different perception of the external world. The shared awareness of multiple tablets/phones each equipped with audio and visual sensing is not easy to envisage - nor, therefore, is it obvious exactly what advantages such perception will enable.
Despite being different to human perception, the machine is still capable of exhibiting an empathetic response to its users. For example, where the machine described discerns a lack of audio intelligibility at any of its physical instances (measured as a low weighted signal-to-noise ratio), it can identify the one or more connected media sources that are the cause of the problem and decrease their output levels accordingly - and possibly those of all the active sound sources in the household.
A further example of an empathetic response can be generated where the model in device 6 has learned how to manage the resources it controls in respect of the power saving measures embedded in device 5. The learned response can emulate tiredness or hunger amongst the observed human population and serve as a driver to either awaken or relax users, such as by an appropriate media selection and presentation, as the situation demands, for instance, waking in the morning and relaxing at night.
References

[1] Boole, G. (1854) An Investigation of the Laws of Thought, Prometheus Books, ISBN 9781591020899
[2] McCulloch, W.S., Pitts, W. (1943) A Logical Calculus of Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biophysics, number 5(4), pages 115-133
[3] Chalmers, D. (1995) Facing Up to the Problem of Consciousness, Journal of Consciousness Studies, number 2(3), pages 200-219
[4] Turing, A.M. (1950) Computing Machinery and Intelligence, Mind, number 59, pages 433-460
[5] Grenander, U. (1992) General Pattern Theory, Chapman and Hall, ISBN 9780412388101
[6] Kalman, R.E. (1960) A New Approach to Linear Filtering and Prediction Problems, Transactions of the ASME, Journal of Basic Engineering, number 82, pages 35-45
[7] Rao, R.P.N., Ballard, D.H. (1995) Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex, Technical Report 96.2 (revision of 95.4), National Resource Laboratory for the Study of Brain and Behavior, University of Rochester
[8] Bloom, B. (1970) Space/Time Trade-offs in Hash Coding with Allowable Errors, ACM Communications, number 13(7), pages 422-426
[9] Hawkins, J. (with Blakeslee, S.) (2004) On Intelligence, Holt Paperbacks, ISBN 9780805078534
[10] Descartes, R. (1637) Discours de la méthode
[11] Crick, F. (1994) The Astonishing Hypothesis: The Scientific Search for the Soul, Touchstone, ISBN 9780671712952
[12] Greenfield, S. (1995) Journey to the Centers of the Mind: Towards a Science of Consciousness, W.H. Freeman & Co, ISBN 9780716727231
[13] Edelman, G. (1989) Neural Darwinism: The Theory of Neuronal Group Selection, Oxford University Press, ISBN 9780192860897
[14] Orpwood, R. (2013) Qualia Could Arise from Information Processing in Local Cortical Networks, Frontiers in Psychology, number 4(121)
[15] Grossberg, S. (2013) Adaptive Resonance Theory: How a Brain Learns to Consciously Attend, Learn, and Recognize a Changing World, Neural Networks, number 37, pages 1-47
[16] Harth, E. (1995) The Creative Loop: How the Brain Makes a Mind, Helix Books, ISBN 9780201489385
[17] Ramachandran, V.S., Blakeslee, S. (1998) Phantoms in the Brain, Fourth Estate, ISBN 9781857028959
[18] Rosenfield, I. (1993) The Strange, Familiar and Forgotten: An Anatomy of Consciousness, Alfred Knopf, ISBN 9780679402596
[19] Damasio, A. (2010) Self Comes to Mind: Constructing the Conscious Brain, Pantheon Books, ISBN 9780307379498
[20] Penrose, R. (1994) Shadows of the Mind, Oxford University Press, ISBN 9780198539780
[21] NuPIC, http://www.numenta.org
[22] Pandya, A. (1995) Pattern Recognition with Neural Networks in C++, CRC Press, ISBN 9780849394621
[23] Bayes, T., Price, R. (1763) An Essay towards Solving a Problem in the Doctrine of Chances by the Late Rev. Mr. Bayes, F.R.S., Communicated by Mr. Price, in a Letter to John Canton, A.M.F.R.S., Philosophical Transactions, number 53
[24] Mackay, D. (2003) Information Theory, Inference, and Learning Algorithms, Cambridge University Press, ISBN 9780521642989
[25] Hopfield, J.J. (1982) Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proceedings of the National Academy of Sciences, number 79, pages 2554-2558
[26] Amit, D. (1989) Modeling Brain Function: The World of Attractor Neural Networks, Cambridge University Press, ISBN 9780521421249
[27] Black, H.S. (1934) Stabilized Feedback Amplifiers, Bell System Technical Journal (American Telephone & Telegraph), number 13(1), pages 1-18
[28] Linsker, R. (1988) Self-organization in a Perceptual Network, IEEE Computer, number 21(3), pages 105-117
[29] Wiley, G. (Barker, R.) (1976) The Two Ronnies, BBC TV, Series 5, Episode 3
[30] Hebb, D. (1949) The Organization of Behavior: A Neuropsychological Theory, Wiley & Sons, ISBN 9780415654531
[31] Standard Test Access Port and Boundary-Scan Architecture, IEEE Standard 1149.1-1990
[32] The MathWorks Inc., http://uk.mathworks.com/products/matlab/

Claims

I Claim,
1. A machine, distributed or otherwise, consisting of one or more inputs, one or more outputs and at least one or more instances of each of two further constituent parts wherein a first part of the machine performs in isolation from the second part of the machine some regulatory function and further produces one or more motor outputs in response to the machine inputs or its previously defined isolated function that are shared with (such as formed from the union of) outputs from a second part of the machine and a second part of the machine contains means to model sequences formed from the signals appearing at its inputs and produce outputs that predict its future inputs and outputs (that are connected to the machine outputs) such as to best promote the success of the machine in its function and its survival in its environment and where the inputs to the second part of the machine are formed from the effective parallel connection of the machine inputs from its external world, outputs from the first part of the machine that represent its isolated behaviours and direct feedback signals generated from the machine outputs whereby the learned model, on account of the parallel connection and the nature of the signals emanating from the first part of the machine, apportions a high degree of certainty to the inference of itself (the machine) acting as an autonomous agent in its external world that further serves in developing a self-referenced model of the external world and where the learned model outputs that form a relatively stable prediction of its inputs are fed back to the appropriate machine inputs such as to be indistinguishable from the sensed inputs from the external environment so that the machine senses its perception of its external world and is therefore capable of sensing itself as an autonomous agent acting in its external world with a high degree of certainty and where that perception can in whole or in part be monitored externally and possibly but not necessarily via some process that implements the inverse transformation of that due to the one or more appropriate input sensors.
2. A machine according to claim 1 wherein multiple but not necessarily identical physical instances, each equipped with sensory inputs and possibly motor outputs and possibly each with apparently autonomous behaviour as commensurate with the first part of the invention, are connected effectively in parallel as described in claim 1 to at least one second part of the invention such that the model in that second part is endowed with a shared awareness of the multiple physical instances and a single shared model of the external world sensed by those instances.
3. A machine according to claim 1 or claim 2 that is comprised of an existing machine which forms the first part of the invention, to which the appropriate inputs and outputs are connected according to the requirements in claim 1 and/or claim 2 and to which the second part of the machine according to claim 1 and/or claim 2 is added retrospectively.
4. A machine according to claim 1, claim 2 or claim 3 wherein the learned model in the second part of the machine is comprised of one or more separate partitions that operate effectively in parallel over all or some of the hierarchical levels defined in the model and that are joined at at least one of the hierarchical levels (summing nodes) and where at least one of the partitions is connected to the model inputs and outputs.
5. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein instances of the first part of the invention or the second part of the invention can be included in the machine in parallel with existing parts to suit the current requirement, whether those parts are physically separable or switched in/out of the machine.
6. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein an initial learning phase is employed where external sensory inputs are muted to some extent or other to establish a high statistical likelihood of the model in the second part of the invention including one or more objects that represent the autonomous behavioural elements that are the source of the self- or shared awareness of the machine.

7. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein filters that form the interfaces between hierarchical levels or segments of the model in the second part of the invention have certain coefficients set to zero to effect purposeful shaping of the model, so accelerating the initial learning phase and preventing the usage of the inhibited pathways in ongoing or later model optimisation.

8. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein the objects and behaviours within the model in the second part of the invention due to the one or more physical instances comprising the first part of the invention are first copied from a previously recorded state from a similar machine and secondly written to the current machine so that any early learning phase can be omitted and the machine be made ready for installation in its external world.

9. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein the power supply status in each instance of the first part of the invention provides at least part of the apparently autonomous behaviour of the first part of the invention as required in claim 1, claim 2, claim 3 and claim 4.

10. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein the clock rate or multiple thereof that synchronises a processor forming at least part of each instance of the first part of the invention provides at least part of the apparently autonomous behaviour of the first part of the invention as required in claim 1, claim 2, claim 3 and claim 4.

11. A machine according to claim 1, claim 2, claim 3 or claim 4 wherein the voltage level or remaining capacity level of the power supply that powers at least part of each instance of the one or more first parts of the invention is used within the model in the second part of the invention to derive an empathetic response to the desire for feeding or sleeping in human or other mammalian subjects sensed in its external world.
    12. A machine according to claim 1, claim 2, claim 3 or claim 4 intended to simulate a human or mammal in a simulated world or emulate a human or other mammal in the real world or create a new being in either the real world or a simulated environment where one or more first parts of the invention described in claim 1, claim 2, claim 3 and claim 4 embeds or is embedded within the emotive responding part of the machine and the one or more second parts of the invention described in claim 1, claim 2, claim 3 and claim 4 embeds or is embedded in the cognitive part of the machine.
13. A machine according to claim 1, claim 2, claim 3 or claim 4 providing in whole or in part a model representing the behaviour of a group of humans or mammals wherein one or more parts of the first and/or second parts of the invention as described in claim 1, claim 2, claim 3 and claim 4 may be equipped with different attributes that produce a range of responses and possibly where useful outputs may be obtained by some appropriately weighted average of that range of responses.
14. A machine according to claim 1, claim 2, claim 3 or claim 4 designed to exist exclusively or otherwise in a virtual environment where the model in the second part of the machine is rendered self-aware in that virtual environment and/or that machine is based upon or copied from a nervous system recording from a human or animal source in the real world.
    15. A machine according to claim 1, claim 2, claim 3 or claim 4 where multiple physical instances of at least the first part of the invention yield a shared awareness wherein at least one first part of the invention is permanently connected and the remaining first parts of the invention can be added or removed as required.