EP3769168A1 - Processing a command - Google Patents

Processing a command

Info

Publication number
EP3769168A1
Authority
EP
European Patent Office
Prior art keywords
entity
environment
data
command
robotic system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19719351.9A
Other languages
German (de)
French (fr)
Inventor
Pawel SWIETOJANSKI
Ondrej MIKSIK
Indeera MUNASINGHE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Emotech Ltd
Original Assignee
Emotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Emotech Ltd filed Critical Emotech Ltd
Publication of EP3769168A1 publication Critical patent/EP3769168A1/en
Legal status: Withdrawn (current)


Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 - Systems controlled by a computer
    • G05B15/02 - Systems controlled by a computer electric
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 - Home automation networks
    • H04L12/2816 - Controlling appliance services of a home automation network by calling their functionalities
    • H04L12/282 - Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/20 - Pc systems
    • G05B2219/26 - Pc applications
    • G05B2219/2642 - Domotique, domestic, home control, automation, smart house
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/30 - Nc systems
    • G05B2219/40 - Robotics, robotics mapping to robotics vision
    • G05B2219/40411 - Robot assists human in non-industrial environment like home or office

Definitions

  • the present invention relates to processing a command.
  • Robots and other personal digital assistants are becoming more prevalent in society and are being provided with increased functionality.
  • Robots can be used in relation to other entities.
  • robots may be used to control other entities and/or to facilitate interaction with other entities.
  • Robots can be used in environments where multiple, for example many, other entities are located.
  • Robots may be controlled via commands issued by users. While some commands may be trivial for the robot to process, other commands may be more complicated for the robot to process, may be misinterpreted by the robot, or the robot may not be able to process such commands at all.
  • a method, performed by a robotic system locatable in an environment, of processing a command, the method comprising:
  • a method of processing a command by a robotic system located in an environment comprising:
  • the first and second entity both have the first attribute; and only the first entity of the first and second entities has the second attribute;
  • performing the action in relation to the first entity based on the determining, wherein performing the action comprises transmitting a control signal to control operation of the first entity based on the received command.
  • a method of processing a command by a robotic system comprising:
  • a robotic system configured to perform a method according to any of the first, second or third aspects of the present invention.
  • a computer program comprising instructions which, when executed, cause a robotic system to perform a method according to any of the first, second or third aspects of the present invention.
  • FIG. 1 shows schematically an example of an environment in accordance with an embodiment of the present invention
  • Figure 2 shows schematically an example of a knowledge graph representing the environment shown in Figure 1 in accordance with an embodiment of the present invention.
  • Figure 3 shows a schematic block diagram of an example of a command received by a robot in accordance with an embodiment of the present invention.
  • An environment may also be referred to as a “scene” or a “working environment”.
  • the environment 100 comprises a plurality of entities.
  • An entity may also be referred to as an “object”.
  • the environment 100 comprises at least three entities, namely a first entity 110, a second entity 120 and a robotic system 130.
  • the robotic system 130 is an example of an entity.
  • the first and second entities 110, 120 are different from the robotic system 130.
  • the first entity 110 and/or the second entity 120 may be human.
  • the first entity 110 and/or the second entity 120 may be non-human.
  • the first entity 110 and/or the second entity 120 may be inanimate.
  • the first entity 110 and/or the second entity 120 may be animate.
  • the first entity 110 and the second entity 120 are both in the environment 100 simultaneously.
  • the first entity 110 and the second entity 120 are in the environment 100 at different times.
  • the environment 100 may comprise one or both of the first entity 110 and the second entity 120 at a given point in time.
  • a robotic system may be considered to be a guided agent.
  • a robotic system may be guided by one or more computer programs and/or electronic circuitry.
  • a robotic system may be guided by an external control device or the control may be embedded within the robotic system.
  • a robotic system may comprise one or more components, implemented on one or more hardware devices.
  • the components of the robotic system are comprised in a single housing.
  • the components of the robotic system are comprised in a plurality of housings.
  • the plurality of housings may be distributed in the environment 100.
  • the plurality of housings may be coupled by wired and/or wireless connections.
  • the robotic system may comprise software components, including cloud or network-based software components.
  • a robotic system may be configured to interact with human and/or non-human entities in an environment.
  • a robotic system may be considered an interactive device.
  • a robotic system as described herein may or may not be configured to move.
  • a robotic system may be considered to be a smart device.
  • An example of a smart device is a smart home device, otherwise referred to as a home automation device.
  • a smart home device may be arranged to control aspects of an environment including, but not limited to, lighting, heating, ventilation, telecommunications systems and entertainment systems.
  • a robotic system as described in the examples herein may be arranged to perform some or all of the functionality of a smart home device.
  • the robotic system 130 may comprise an autonomous robot.
  • An autonomous robot may be considered to be a robot that performs functions with a relatively high degree of autonomy or independence compared to non-autonomous robots.
  • a robotic system may be referred to as a “robot” herein, it being understood that the robotic system can comprise more than one hardware device.
  • the first entity 110 has first and second attributes.
  • the second entity 120 has the first attribute. However, the second entity 120 does not have the second attribute.
  • the second entity 120 may or may not have a third attribute, the third attribute being different from the second attribute.
  • the first attribute is a common attribute with which both the first and second entities 110, 120 are associated.
  • the first attribute is associated with an entity type.
  • the second attribute is a disambiguating attribute (which may also be referred to as a “distinguishing attribute”) with which only one of the first and second entities 110, 120, namely the first entity 110, is associated.
  • Indicators may be used to indicate the first attribute and/or the second attribute. Such indicators may be part of commands received by the robot 130, for example.
  • an indicator for the first attribute is substantival.
  • the indicator for the first attribute pertains to a substantive.
  • “Substantive” is another term for “noun”. As such, the term “substantival” may be referred to as “nounal”.
  • an indicator for the second attribute is adjectival.
  • the second attribute and/or the indicator for the second attribute pertains to an adjective.
  • a command received by the robot 130 may refer to a “desk light”.
  • “light” is substantival and indicates the first attribute.
  • “desk” is adjectival and indicates the second attribute.
  • the command may refer to a “light next to the TV”.
  • “light” is substantival and indicates the first attribute.
  • “next to the TV” is adjectival and indicates the second attribute.
  • the second attribute is referred to as a “relationship attribute”.
  • the relationship attribute is associated with a relationship between the first entity 110 and a reference entity.
  • the reference entity is the second entity 120.
  • the reference entity is different from the second entity 120.
  • the relationship comprises a location-based relationship.
  • the relationship comprises an interaction-based relationship.
  • the second attribute is an absolute attribute of the first entity 110.
  • in a command referring to a “red light”, for example, the indicator “red” indicates an absolute attribute, rather than a relationship attribute.
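As an illustration of the indicator types discussed above, the following sketch splits a command phrase into a substantival indicator (the first, common attribute, such as an entity type) and an adjectival indicator (the second, disambiguating attribute), and then guesses whether the second attribute is absolute or relationship-based. The vocabularies and the parse_phrase helper are assumptions introduced for this example only, not part of the described method.

```python
# Illustrative sketch: splitting a command phrase into a substantival
# indicator (entity type) and an adjectival indicator (disambiguating
# attribute). The vocabularies below are assumptions for this example.

ENTITY_TYPES = {"light", "lamp", "tv", "television", "speaker"}
RELATION_PHRASES = ("next to", "near", "beside", "under", "on top of")


def parse_phrase(phrase: str):
    """Return (first_attribute, second_attribute, second_attribute_kind)."""
    words = phrase.lower().split()

    # Substantival indicator: the first word naming a known entity type.
    first = next((word for word in words if word in ENTITY_TYPES), None)

    # Adjectival indicator: the remaining words qualify the entity type.
    rest = " ".join(word for word in words if word != first).strip()

    # A relationship attribute refers to a reference entity (e.g. "next
    # to the TV"); an absolute attribute (e.g. "red") does not.
    kind = "relationship" if any(r in rest for r in RELATION_PHRASES) else "absolute"
    return first, rest or None, kind


print(parse_phrase("desk light"))            # ('light', 'desk', 'absolute')
print(parse_phrase("light next to the TV"))  # ('light', 'next to the tv', 'relationship')
print(parse_phrase("red light"))             # ('light', 'red', 'absolute')
```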
  • the robotic system 130 comprises input componentry 140.
  • the input componentry 140 comprises one or more components of the robotic system 130 that are arranged to receive input data.
  • the input componentry 140 obtains environment data.
  • the input componentry 140 may receive environment data.
  • the robotic system 130 generates the environment data.
  • the environment data is a type of input data.
  • the environment data represents an environment in which the robotic system 130 is locatable.
  • the environment data represents the environment 100 in which the robotic system 130 is located at the point in time at which the environment data is received, namely the surroundings of the robotic system 130. “Surroundings” is used herein to refer to the physical space around the robotic system 130 at a given point in time.
  • the environment data represents the environment 100 at one or more points in time.
  • the first entity 110 and the second entity 120 are represented in the received environment data. Where the environment data comprises multiple time samples, a given time sample may represent both the first entity 110 and the second entity 120 and/or a given time sample may represent only one of the first entity 110 and the second entity 120.
  • the input componentry 140 comprises an interface.
  • the interface may be between the robotic system 130 and the environment 100.
  • An interface comprises a boundary via which data can be passed or exchanged in one or both directions.
  • the input componentry 140 comprises a camera.
  • the obtained environment data may comprise visual environment data received using the camera.
  • the environment data may comprise image data.
  • the input componentry 140 comprises a microphone.
  • the obtained environment data may comprise audio environment data received using the microphone.
  • the input componentry 140 comprises a network interface.
  • the network interface may enable the robotic system 130 to receive data via one or more data communications networks. Some or all of the received environment data may be received using the network interface.
  • the environment data could represent an environment in which the robotic system 130 is not presently located, in other words an environment in which the robotic system 130 is not located at the point in time at which the environment data is received.
  • the environment data is based on a virtual or artificial environment. As such, a camera or microphone, comprised in the robotic system or otherwise, may not be used to obtain the environment data.
  • an artificial environment may be generated by a computer, and a representation of that environment may be provided to the robotic system 130.
  • the environment data is based on an environment whose image is captured by an image capture device before the robotic system 130 is moved to or located in the environment.
  • the robotic system 130 also comprises a controller 150.
  • the controller 150 may be a processor.
  • the controller 150 can include a microprocessor, microcontroller, processor module or subsystem, programmable integrated circuit, or programmable gate array.
  • the controller 150 is communicatively coupled to the input componentry 140. The controller 150 may therefore receive data from the input componentry 140.
  • the obtaining of the environment data by the robotic system 130 comprises the controller 150 causing movement of at least part of the robotic system 130 to capture representations of different parts of the environment 100.
  • the input componentry 140 comprises a camera
  • a part of the robotic system 130 comprising the camera may move from an initial position to one or more further positions to enable the camera to capture images of different parts in the environment.
  • the environment data obtained by the robotic system 130 may comprise the captured images of the different parts of the environment.
  • a representation of a first part of the environment 100 may be captured at one point in time and a representation of a second part of the environment 100 at another point in time. In some examples, representations of both the first part and the second part of the environment 100 are captured at the same point in time.
  • Such movement may comprise rotation of the at least part of the robotic system 130.
  • the at least part of the robotic system 130 may rotate 360 degrees around a vertical axis.
  • images may be captured at different angles of the rotation.
  • an image may be captured for each degree of rotation.
  • the robotic system 130 may be configured to move in other ways besides rotation. Images of the environment from the different perspectives may be stitched together digitally to form a 360 degree map of the environment. Therefore, a 360 degree spatial map of the environment may be obtained without multiple cameras positioned at different locations being used. In other examples, multiple cameras may be used to capture environment data from different perspectives. Cameras may be positioned at various locations in the environment 100, for example.
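The rotation-based capture described above might be sketched as follows. The StubCamera and StubTurntable classes are placeholders for real hardware drivers, and the stitching step is only a concatenation stand-in for genuine panoramic stitching.

```python
# Illustrative sketch of gathering environment data by rotating a single
# camera and capturing one frame per angular step.

from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    angle_deg: int
    pixels: bytes  # placeholder for real image data


class StubCamera:
    def capture(self, angle_deg: int) -> Frame:
        return Frame(angle_deg=angle_deg, pixels=b"")


class StubTurntable:
    def rotate_to(self, angle_deg: int) -> None:
        pass  # a real implementation would drive a motor here


def capture_panorama(step_deg: int = 1) -> List[Frame]:
    """Rotate through 360 degrees, capturing a frame at each step."""
    camera, turntable = StubCamera(), StubTurntable()
    frames = []
    for angle in range(0, 360, step_deg):
        turntable.rotate_to(angle)
        frames.append(camera.capture(angle))
    return frames


def stitch(frames: List[Frame]) -> bytes:
    """Placeholder for digitally stitching frames into a 360-degree map."""
    return b"".join(frame.pixels for frame in frames)


panorama = stitch(capture_panorama(step_deg=1))  # one image per degree
```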
  • the controller 150 is configured to generate data representing knowledge of the environment 100.
  • the data representing knowledge of the environment 100 may be referred to as “knowledge data” or “a knowledge representation” of the environment 100.
  • the controller 150 generates the data representing knowledge of the environment based on the received environment data.
  • the controller 150 generates the knowledge data based on initial knowledge data and the received environment data.
  • the robotic system 130 may be pre-loaded with initial knowledge data representing knowledge of an initial environment and the controller 150 can modify the initial knowledge data based on the received environment data to more accurately represent the actual environment 100.
  • Pre-loading the initial knowledge data in this way may reduce latency in generating the model of the environment 100, compared to the robotic system 130 generating the knowledge data representing knowledge of the environment 100 without the initial knowledge data.
  • Modifying the initial knowledge data based on the received environment data also enables the accuracy of the knowledge data and/or the information represented therein to be evolved and/or improved over time.
  • the data representing knowledge of the environment is a conceptual or semantic data model of entities and/or interrelations of entities in the environment.
  • the knowledge data comprises a knowledge graph.
  • the knowledge graph is a topological representation of entities in the environment.
  • the knowledge graph may be referred to as a “scene graph”.
  • the knowledge graph may be represented as a network of entities, entity attributes and relationships between entities.
  • the knowledge data does not comprise a graphical or topological representation of the environment.
  • the knowledge data may comprise a model that can recognise and/or learn entities, entity attributes and relationships between entities, without generating an explicit graphical representation.
  • the knowledge data may comprise a statistical and/or a probabilistic model.
  • the knowledge data comprises a relationship model representing relationships between entities in the environment.
  • the knowledge data may be able to store relatively large amounts of complicated information or knowledge compared to a more simplistic representation or recollection of the environment.
  • the generated knowledge data may be stored locally, for example in a memory of the robotic system 130, and accessed by the controller 150 at one or more subsequent points in time.
  • the controller 150 may generate the knowledge data independently of user supervision. As such, a burden on the user may be reduced compared to a case in which user supervision and/or input is used.
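One possible in-memory shape for such knowledge data is sketched below: entities as nodes carrying attributes, an operating state and a controllability flag, and relationships as typed edges. The class and field names are illustrative assumptions, not the data model of the described system.

```python
# Sketch of a simple knowledge representation: entities (nodes) and
# relationships (typed edges). Names and fields are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set


@dataclass
class Entity:
    name: str
    entity_type: str                        # first, common attribute (e.g. "light")
    attributes: Set[str] = field(default_factory=set)
    operating_state: Optional[str] = None   # e.g. "on" / "off"
    controllable: bool = False


@dataclass
class Relationship:
    subject: str
    relation: str                           # e.g. "next_to", "interacts_with"
    reference: str


@dataclass
class KnowledgeGraph:
    entities: Dict[str, Entity] = field(default_factory=dict)
    relationships: List[Relationship] = field(default_factory=list)

    def add_entity(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def relate(self, subject: str, relation: str, reference: str) -> None:
        self.relationships.append(Relationship(subject, relation, reference))


graph = KnowledgeGraph()
graph.add_entity(Entity("light_1", "light", {"desk"}, "off", controllable=True))
graph.add_entity(Entity("tv", "television", set(), "on", controllable=True))
graph.relate("light_1", "next_to", "tv")
```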
  • generating the knowledge data involves machine learning. Generating the knowledge data may involve the use of an artificial neural network, deep learning model and/or other machine learning models. The machine learning used to generate the knowledge data may be supervised, semi-supervised, weakly-supervised, data-supervised, or unsupervised machine learning, for example. “Generating” as used herein may refer to an act of creating new knowledge data or an act of updating or amending previously-created knowledge data. In some examples, generating the data representing knowledge of the environment is performed autonomously.
  • generating the knowledge data comprises determining an operating state of the first entity 110 and/or the second entity 120.
  • An operating state of a given entity indicates a state in which the given entity is currently operating.
  • the operating state is binary in that it may be one of two possible values.
  • An example of a binary operating state is whether the given entity is on or off.
  • the operating state is not binary, in that it may have more than two possible values.
  • the knowledge data may represent the determined operating state of the first entity 110 and/or of the second entity 120. Alternatively or additionally, the determined operating state of the first entity 110 and/or of the second entity 120 may be stored separately from the knowledge data.
  • generating the knowledge data comprises determining whether or not the first entity 110 and/or the second entity 120 is controllable by the robotic system 130.
  • the controllability of the first entity 110 and/or second entity 120 may be represented in the knowledge data.
  • the controllability could be determined in various ways. For example, the controllability may be determined based on an entity type, such as TV, an initial interaction between the robotic system 130 and the entity, a test signal, etc.
  • the environment data comprises representations of the environment 100 at multiple points in time.
  • generating the knowledge data may comprise analysing representations of the environment 100 at multiple points in time.
  • the environment data comprises a first representation of the environment at a first point in time and a second representation of the environment at a second point in time, later than the first point in time.
  • Knowledge data may be generated based on the first representation of the environment at the first point in time.
  • Such knowledge data may then be updated based on the second representation of the environment at the second point in time.
  • the first and second representations may comprise static images, or“snapshots”, of the environment taken at different points in time, for example.
  • the second representation is obtained in response to a trigger.
  • the second representation is obtained in response to the controller 150 determining that the position of the robotic system 130 has changed.
  • the controller 150 may be configured to determine that the robotic system 130 has moved to a different and/or unknown environment, or to a different location within a same environment.
  • the trigger may be a determination that the robotic system 130 has moved.
  • the controller 150 may be configured to determine that the robotic system 130 has moved based on an output from a motion sensor of the robotic system 130.
  • the trigger is the expiry of a predetermined time period.
  • the robotic system 130 may be configured to receive new representations of the environment and update the knowledge data accordingly once per day, once per week, or at other predetermined intervals.
  • the trigger is the receipt of a command, which the robotic system 130 recognises as being the trigger.
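A minimal sketch of the refresh triggers listed above (movement, expiry of a predetermined period, or a recognised command) might look as follows; the weekly period and the "rescan" trigger word are assumptions for the example.

```python
# Illustrative sketch of knowledge-data refresh triggers.

import time
from typing import Optional

REFRESH_PERIOD_S = 7 * 24 * 3600  # e.g. refresh once per week


def should_refresh(moved: bool,
                   last_refresh_ts: float,
                   command_text: str = "",
                   now: Optional[float] = None) -> bool:
    now = time.time() if now is None else now
    if moved:                                      # motion-sensor trigger
        return True
    if now - last_refresh_ts > REFRESH_PERIOD_S:   # timer trigger
        return True
    if "rescan" in command_text.lower():           # explicit command trigger
        return True
    return False


print(should_refresh(moved=False, last_refresh_ts=time.time()))  # False
print(should_refresh(moved=True, last_refresh_ts=time.time()))   # True
```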
  • the controller 150 uses static scene analysis to generate the knowledge data.
  • environment data representing the environment 100 at a single point in time is analysed.
  • the environment data comprises visual environment data
  • the visual environment data may comprise an image representing the environment 100 at a single point in time.
  • the image may be analysed by itself.
  • Such analysis may identify one or more entities represented in the image, one or more attributes of one or more entities represented in the image and/or one or more relationships between one or more entities represented in the image.
  • environment data is discarded.
  • Such environment data can therefore be used to improve the knowledge data, but is not stored in the robotic system 130 after it has been analysed. This can enable the robotic system 130 to have smaller amounts of memory than may otherwise be the case.
  • the environment data is not discarded.
  • the controller 150 uses dynamic scene analysis to generate the knowledge data.
  • environment data representing the environment 100 at multiple points in time is analysed.
  • the environment data comprises visual environment data
  • the visual environment data may comprise a sequence of images representing the environment 100 at multiple points in time.
  • the sequence of images may be in the form of a video, or otherwise.
  • Such analysis may identify one or more entities represented in the sequence of images, one or more attributes of one or more entities represented in the sequence of images and/or one or more relationships between one or more entities represented in the sequence of images.
  • first data representing the environment 100 at a first point in time is stored in the robotic system 130.
  • the first data is analysed using static scene analysis. However, in some such examples, the first data is not discarded following the static scene analysis.
  • second data representing the environment 100 at a second, later point in time is received.
  • the first and second data are subject to dynamic scene analysis.
  • the first and second data may be discarded following the dynamic scene analysis.
  • Dynamic scene analysis may capture additional information that cannot be extracted using static analysis. Examples of such additional information include, but are not limited to, how entities move in relation to each other over time, how entities interact with each other over time etc.
  • Dynamic scene analysis may involve storing larger amounts of data than static scene analysis, in some examples.
  • the controller 150 may generate the knowledge data using static scene analysis and/or dynamic scene analysis. Dynamic scene analysis may involve more data being stored at least temporarily than static scene analysis, but may enable more information on the environment 100 to be extracted.
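The contrast between static and dynamic scene analysis can be illustrated with the sketch below, in which per-frame "detection" output is compared across two time samples to label movement that a single snapshot cannot reveal. The detector stub, motion threshold and labels are assumptions for the example.

```python
# Illustrative contrast between static and dynamic scene analysis. The
# "frames" are already mappings of entity name to estimated position,
# standing in for real detector output.

from typing import Dict, List, Tuple

Position = Tuple[float, float]


def static_analysis(frame: Dict[str, Position]) -> Dict[str, Position]:
    """Analyse a single time sample: which entities are where."""
    return dict(frame)


def dynamic_analysis(frames: List[Dict[str, Position]],
                     moved_threshold: float = 0.5) -> Dict[str, str]:
    """Compare entity positions across time samples to label movement,
    which a single static snapshot cannot reveal."""
    first, last = static_analysis(frames[0]), static_analysis(frames[-1])
    labels = {}
    for name in first.keys() & last.keys():
        (x0, y0), (x1, y1) = first[name], last[name]
        distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        labels[name] = "moving" if distance > moved_threshold else "static"
    return labels


frames = [{"person": (0.0, 0.0), "tv": (3.0, 1.0)},
          {"person": (2.0, 0.5), "tv": (3.0, 1.0)}]
print(dynamic_analysis(frames))  # e.g. {'person': 'moving', 'tv': 'static'}
```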
  • the robotic system 130 receives a command to perform an action in relation to a given entity via the input componentry 140.
  • the command identifies a relationship attribute of the given entity.
  • the relationship attribute indicates a relationship between the given entity and a reference entity.
  • the received command may comprise a visual command received via the camera.
  • a user of the robot 130 may perform a gesture, which is captured by the camera and is interpreted as a command.
  • the received command may comprise a voice command received via the microphone.
  • a user of the robotic system 130 may speak a command out loud, which is picked up by the microphone and is interpreted as a command.
  • the controller 150 may be configured to process the voice command using natural language processing (NLP).
  • the controller 150 may be configured to use natural language processing to identify the relationship attribute of the given entity from the received voice command.
  • the microphone is used to capture environment data representing the environment 100.
  • the controller 150 may be configured to determine attributes of entities in the environment based on received audio data. For example, the controller 150 may determine that a television is in an activated state based on receiving an audio signal from the direction of the television.
  • the command may be received via the network interface.
  • a user of the robotic system 130 may transmit a command over a data communications network, which is received via the network and is interpreted as a command.
  • the user may not be in the environment 100 when they issue the command. As such, the user may be able to command the robotic system 130 remotely.
  • the command identifies first and second attributes of the given entity.
  • the relationship may be a location-based relationship.
  • the relationship may be based on a location of the given entity relative to the reference entity.
  • the relationship may be an interaction-based relationship.
  • the relationship may be based on one or more interactions between the given entity and the reference entity.
  • the reference entity is the second entity 120. In other examples, the reference entity is a third entity represented in the received environment data. The third entity may be the robotic system 130. The third entity may be different from the second entity 120 and the robotic system 130.
  • the reference entity may be a person.
  • the reference entity may be a user of the robotic system 130.
  • the controller 150 determines that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute. The determining is performed using the knowledge data and the relationship attribute. In an example, the controller 150 searches the knowledge data using the relationship attribute. Where the knowledge data comprises a knowledge graph, the controller 150 may search the knowledge graph using the relationship attribute. Searching the knowledge graph using the relationship attribute may involve the controller 150 accessing the knowledge graph and querying the knowledge graph using the relationship attribute identified in the command. In other examples, the controller 150 is configured to infer from the knowledge data and the relationship attribute that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute.
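Searching the knowledge data using a relationship attribute might be sketched as below: given the common attribute (entity type) and the relationship attribute (relation plus reference entity) extracted from the command, return the matching entities. The tuple-based relationship store and its contents are assumptions for the example.

```python
# Illustrative sketch of querying a relationship store with the
# relationship attribute identified in a command.

from typing import List, Tuple

# (subject, subject_type, relation, reference)
RELATIONSHIPS: List[Tuple[str, str, str, str]] = [
    ("light_1", "light", "next_to", "tv"),
    ("light_2", "light", "next_to", "sofa"),
]


def disambiguate(entity_type: str, relation: str, reference: str) -> List[str]:
    """Return the entities that have both the common attribute
    (entity_type) and the relationship attribute (relation to reference)."""
    return [subject for subject, subject_type, rel, ref in RELATIONSHIPS
            if subject_type == entity_type and rel == relation and ref == reference]


# "the light next to the TV": only light_1 has the relationship attribute.
print(disambiguate("light", "next_to", "tv"))  # ['light_1']
```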
  • the controller 150 is configured to use the received environment data to determine a location of the first entity 110 relative to the reference entity.
  • the controller 150 may also be configured to determine a location of the second entity 120 relative to the reference entity.
  • the relationship attribute relates to a location-based relationship
  • the determined location of the first entity 110 and/or the second entity 120 relative to the reference entity may be used to determine that the first entity 110 is the given entity.
  • the controller 150 determines that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute autonomously. In other words, in such examples, the controller 150 performs the determining without requiring specific user input.
  • the controller 150 is configured to perform the action indicated in the received command in relation to the first entity 110 based on the determining that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute.
  • the controller 150 is able to determine which of the first entity 110 and the second entity 120 is the given entity that is referred to in the received command, and react accordingly.
  • the command can therefore be processed and responded to with a greater accuracy and/or reliability compared to a case in which the controller 150 is not able to determine which of the first entity 110 and the second entity 120 is the given entity.
  • the knowledge data representing knowledge of the environment 100 enables the robotic system 130 to more accurately determine which of multiple entities is the given entity referred to in the command.
  • the robotic system 130 may be able to interact with a user in a more natural manner compared to a case in which the controller 150 is not able to determine which of the first entity 110 and the second entity 120 is the given entity.
  • the robotic system 130 may be able to interpret commands which may be too ambiguous for some other, known robotic systems to interpret.
  • Such potential ambiguity could be due, for example, to natural language used by the user when issuing the command.
  • the robotic system 130 is able to disambiguate such commands through the use of the knowledge data. This allows the robotic system 130 to infer user intention non-trivially without the need for more explicit, less ambiguous commands to be issued by the user, which would result in a less natural interaction with the robotic system 130.
  • the robotic system 130 may be able to handle new commands or commands that have not previously been encountered.
  • the knowledge data may store relationship data representing a plurality of relationships having a plurality of relationship types between different entities, before any commands have even been received. Such relationship data may be stored and updated via the knowledge data, and used to interpret subsequently-received commands.
  • the controller 150 is configured to generate at least part of the knowledge data before the command is received by the input componentry 140.
  • the robotic system 130 may be placed into a calibration state to generate at least part of the environment data.
  • the robotic system 130 may be placed into the calibration state when the robotic system 130 is first placed in the environment 100, or otherwise.
  • Generating at least part of the knowledge data may be performed autonomously, or without user supervision.
  • Generating at least part of the knowledge data before the command is received may reduce a latency in processing the command compared to a situation in which none of the knowledge data has been generated before the command is received. For example, less processing may be performed after receiving the command compared to a case in which all of the knowledge data is generated after the command is received, thereby enabling a response time of the robotic system 130 to be reduced.
  • the controller 150 is configured to generate at least part of the knowledge data in response to the command being received via the input componentry 140.
  • the knowledge data may be updated with new information obtained after the command is received. Generating at least part of the knowledge data in response to receiving the command may improve an accuracy with which the command is processed, as the knowledge data is kept relatively up-to-date.
  • the knowledge data is updated in response to receiving the command if it is determined that environment data has not been received within a predetermined time period prior to receiving the command. This may be a particular consideration in relatively dynamic environments, where entities may move in relation to one another and/or may enter and exit the environment at particular points in time.
  • the robotic system 130 also comprises output componentry 160.
  • the output componentry 160 comprises one or more components of the robotic system 130 that are arranged to generate one or more outputs, for example in the form of output data or signalling.
  • the controller 150 is communicatively coupled to the output componentry 160.
  • the controller 150 may transmit data to the output componentry 160 to cause the output componentry 160 to generate output data.
  • the output componentry 160 comprises an interface, for example between the robotic system 130 and the environment.
  • the output componentry 160 comprises a loudspeaker.
  • performing the action may comprise causing the loudspeaker to output a sound.
  • the sound may include a notification, alert, message or the like, to be delivered to a person.
  • the output componentry 160 comprises a network interface.
  • the network interface may enable the robotic system 130 to output data via one or more data communication networks.
  • the network interface comprised in the output componentry 160 may be the same as or different from the network interface comprised in the input componentry 140.
  • the controller 150 may be operable to cause data to be transmitted via the network interface.
  • the output componentry 160 is operable to transmit a signal. Performing the action may comprise causing the output componentry 160 to transmit a signal for the first entity 110.
  • the signal is arranged to provide the first entity 110 with a notification.
  • the first entity 110 may be a person, for example, and the robotic system 130 may be configured to provide the person with a message or alert.
  • the signal is an audio signal to provide an audio notification to the first entity 110.
  • the signal is a visual signal to provide a visual notification to the first entity 110.
  • the action comprises controlling the first entity 110.
  • the controller 150 may be configured to control the first entity 110 by causing the output componentry 160 to transmit a control signal to control operation of the first entity 110.
  • the control signal may comprise an electrical signal operable to control one or more operations, components and/or functions of the first entity 110.
  • the control signal may be transmitted to the first entity 110 itself, or may be transmitted to another entity that in turn controls the first entity 110.
  • the control signal may be transmitted when the first entity 110 is in the vicinity of the robot 130 or when the first entity 110 is not in the vicinity of the robotic system 130.
  • the control signal is operable to change an operating state of the first entity 110. Changing the operating state may involve activating or deactivating the first entity 110. Different control signals may be generated based on different commands received by the input componentry 140.
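A sketch of building and transmitting such a control signal is given below. The JSON payload shape and the print-based "transmission" are assumptions standing in for whatever protocol a real entity, or a hub that controls it, exposes.

```python
# Illustrative sketch of mapping a received command onto a control
# signal that changes an entity's operating state.

import json


def build_control_signal(entity_id: str, command_text: str) -> bytes:
    """Derive the desired operating state from the command text."""
    desired_state = "on" if " on " in f" {command_text.lower()} " else "off"
    payload = {"target": entity_id, "action": "set_state", "state": desired_state}
    return json.dumps(payload).encode("utf-8")


def transmit(signal: bytes) -> None:
    # Stand-in for the output componentry: the signal could be sent
    # directly to the entity or to another entity that controls it.
    print("transmitting:", signal.decode("utf-8"))


transmit(build_control_signal("light_1", "turn on the light next to the TV"))
```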
  • the first entity 110 is not in the environment when the command is received by the input componentry 140.
  • performing the action in relation to the first entity 110 may be performed in response to the robotic system 130 detecting the presence of the first entity 110 in the environment.
  • the robotic system 130 may monitor the environment and, when the presence of the first entity 110 is detected in the environment, perform the action.
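Deferring the action until the first entity is detected might be sketched as a simple polling loop, as below; the presence check is a stub standing in for the robot's perception pipeline.

```python
# Illustrative sketch of deferring an action until the target entity is
# detected in the environment.

import itertools
import time
from typing import Callable


def perform_when_present(is_present: Callable[[], bool],
                         action: Callable[[], None],
                         poll_interval_s: float = 1.0,
                         max_polls: int = 10) -> bool:
    """Monitor the environment and run the action once the entity appears."""
    for _ in range(max_polls):
        if is_present():
            action()
            return True
        time.sleep(poll_interval_s)
    return False  # the entity never appeared within the polling budget


# Example with stubs: the entity becomes "present" on the third poll.
polls = itertools.count()
performed = perform_when_present(is_present=lambda: next(polls) >= 2,
                                 action=lambda: print("delivering notification"),
                                 poll_interval_s=0.0)
print(performed)  # True
```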
  • the robotic system 130 may have multiple users.
  • the controller 150 is configured to generate different knowledge data each associated with a different one of the multiple users.
  • first knowledge data representing knowledge of the environment may be associated with a first user and contain attribute and/or relationship data that is specific to the first user
  • second knowledge data representing knowledge of the environment may be associated with a second user and contain attribute and/or relationship data that is specific to the second user.
  • information relating to multiple users may be stored in a single knowledge representation of the environment.
  • the controller 150 is configured to discard some or all of the knowledge data. Discarding some or all of the knowledge data may comprise deleting some or all of the knowledge data from memory.
  • the knowledge data may be discarded in response to a trigger.
  • An example of such a trigger is a determination that the robotic system 130 is moving or has been moved to a new environment, the new environment being different from the environment 100.
  • Another example of such a trigger is the user of the robotic system 130 changing.
  • a further example of such a trigger is an expiry of a predetermined time period.
  • Some or all of existing knowledge data may be discarded prior to generating new knowledge data, for example representing a new environment and/or for a new user.
  • Discarding some or all of existing knowledge data enables newly received commands to be interpreted based on a current environment and/or a current user, instead of a previous environment and/or a previous user.
  • Discarding redundant knowledge data for example knowledge data representing knowledge of old environments, also enables an amount of storage required to store knowledge data to be reduced. If the existing knowledge data is not discarded, it may take a relatively long amount of time for the robotic system 130 to converge on a view of the new situation. By deleting the existing knowledge data, the robotic system 130 can build new knowledge data representing knowledge of the environment from scratch, which may lead to quicker convergence, particularly if the new situation is relatively dissimilar to the previous situation. However, it may be useful to retain some of the information from the previous knowledge data.
  • for example, it may be useful for the robotic system 130 to retain relationship information between entities where the entities are people who live in the house, since those relationships are unlikely to be affected by a change in location of the robotic system 130 within the house.
  • the knowledge graph 200 represents an environment.
  • the knowledge graph 200 is an example of data representing knowledge of the environment.
  • the knowledge graph 200 represents the environment 100 described above with reference to Figure 1.
  • the first entity 110, the second entity 120 and the robotic system 130 are represented. In other examples, the first entity 110 and the second entity 120 are represented, but the robotic system 130 is not represented. In particular, in such other examples, the robotic system 130 may be configured to exclude itself from the knowledge graph 200. In this example, each of the first entity 110, the second entity 120 and the robotic system 130 corresponds to a node in the knowledge graph 200.
  • the knowledge graph 200 also stores relationships between entities. Relationships are represented in the knowledge graph 200 as edges between nodes. Such relationships may, for example, be location-based and/or proximity-based and/or interaction-based and/or dependency-based relationships.
  • Edge R1,2 represents a relationship between the first entity 110 and the second entity 120.
  • Edge R1,R represents a relationship between the first entity 110 and the robotic system 130.
  • Edge R2,R represents a relationship between the second entity 120 and the robotic system 130.
  • the knowledge graph 200 also stores one or more attributes of the first and second entities 110, 120. Attributes are represented as labels of the nodes. Attribute A1 is common to the first entity 110 and the second entity 120. Attribute A1 may relate to an entity type of the first and second entities 110, 120. Attribute A2 is not common to the first entity 110 and the second entity 120. The first entity 110 has attribute A2 but the second entity 120 does not have attribute A2. Attribute A2 may be a relationship attribute. Attribute A2 may therefore relate to a relationship between the first entity 110 and a reference entity. Attribute A2 may be based on R1,2, R1,R, or otherwise.
  • the second entity 120 has an attribute A3, different from attributes A1 and A2.
  • Attribute A3 may also be a relationship attribute relating to a relationship between the second entity 120 and another entity, for example the reference entity.
  • Attribute A3 may be based on relationship data R1,2, R2,R, or other relationship data. Similar approaches can be used for relationships between more than two entities.
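A plain-dictionary rendering of the abstract graph of Figure 2 (two entity nodes plus the robotic system, attributes A1, A2 and A3 as node labels, and edges R1,2, R1,R and R2,R) could look like the following; the concrete entity names are placeholders.

```python
# Plain-dictionary rendering of the abstract knowledge graph of Figure 2.

knowledge_graph = {
    "nodes": {
        "entity_1": {"attributes": ["A1", "A2"]},  # A2: relationship attribute
        "entity_2": {"attributes": ["A1", "A3"]},
        "robot": {"attributes": []},
    },
    "edges": [
        ("entity_1", "entity_2", "R1,2"),
        ("entity_1", "robot", "R1,R"),
        ("entity_2", "robot", "R2,R"),
    ],
}
```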
  • Referring to Figure 3, there is shown a schematic representation of an example of a command 300.
  • the command 300 is received by a robot.
  • the command 300 is received by the robotic system 130 described above with reference to Figure 1.
  • the command 300 may be issued by a user of the robotic system 130.
  • the command 300 identifies an action, a first attribute, A1, and a second attribute, A2.
  • the first attribute, A1, is common to the first entity 110 and the second entity 120 in the environment 100.
  • the second attribute, A2, is a distinguishing attribute, which is associated with the first entity 110 but not the second entity 120.
  • the second attribute, A2, may be a relationship attribute.
  • the robotic system 130 may interpret the command 300 to determine that the action is to be performed in relation to the first entity and not the second entity, based on the attributes identified in the command 300, namely A1 and A2.
  • a robot can perform automatically-inferred lighting operation using natural language processing.
  • an environment comprises a robot, two lights and a television. A first light of the two lights is next to the television, whereas a second light of the two lights is not.
  • the robot can recognise and semantically understand the working environment in which the robot is located. For example, the robot can generate a semantic model of the environment.
  • the semantic model of the environment is an example of data representing knowledge of the environment.
  • the robot can identify each of the two lights and the television in the environment, using received environment data.
  • the robot can also identify a relationship between at least the first light and the television, and the second light and the television.
  • the robot can determine the spatial positioning of the two lights and the television in the scene. The robot may use the determined spatial positioning to identify mutual proximity and/or dependency.
  • the robot can identify a current operating state of the first light and/or second light and/or television.
  • the robot may identify the current operating state using visual and/or acoustic environment data.
  • a user commands the robot to turn on the light next to the television.
  • the action to be performed is to change the operating state of an entity from ‘off’ to ‘on’.
  • the first attribute is that the entity is a light.
  • the first attribute is common to both the first and second lights.
  • the second attribute, or “relationship attribute”, is that the entity is next to the television.
  • the given entity is a light and the reference entity is the television. Only the first light has the second attribute.
  • the second attribute can thus be used to disambiguate between the first and second lights.
  • the robot can determine which of the two lights the command relates to, can determine the current operating state of the applicable light, and can control the applicable light by adjusting the operating state of the applicable light.
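The worked example above can be tied together in a short end-to-end sketch: a scene with two lights and a television, resolution of "the light next to the television" via the relationship attribute, and a state change on the resolved light. The scene model, the phrase matching and the switch_on stub are assumptions introduced for illustration only.

```python
# End-to-end sketch of the "light next to the television" example.

scene = {
    "light_1": {"type": "light", "state": "off", "next_to": ["television"]},
    "light_2": {"type": "light", "state": "off", "next_to": []},
    "television": {"type": "television", "state": "on", "next_to": ["light_1"]},
}


def resolve(command: str) -> str:
    """Pick the entity the command refers to using the relationship attribute."""
    candidates = [name for name, entity in scene.items()
                  if entity["type"] == "light"]            # first, common attribute
    if "next to the television" in command.lower():        # second, relationship attribute
        candidates = [name for name in candidates
                      if "television" in scene[name]["next_to"]]
    assert len(candidates) == 1, "command is still ambiguous"
    return candidates[0]


def switch_on(name: str) -> None:
    scene[name]["state"] = "on"  # stand-in for transmitting a control signal


target = resolve("turn on the light next to the television")
switch_on(target)
print(target, scene[target]["state"])  # light_1 on
```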
  • an environment comprises a robot, two lights and a user reading a book.
  • the user issues a command to turn on a reading light.
  • the robot can determine that the user is reading the book and/or that the user is located in a specific position associated with reading a book.
  • the first attribute is that the entity is a light.
  • the second, distinguishing, attribute is that the entity is a reading light.
  • the label “reading” may be used by the robot to analyse a relationship, e.g. a proximity, between each of the two lights and the user reading the book.
  • the robot can identify a light of the two lights that provides the best operating condition for reading activity in the specific position in which the user is located.
  • the robot can control one or more parameters of the identified light.
  • the robot may control the other light of the two lights.
  • the robot may control the operating state, brightness and/or colour of the other light of the two lights.
  • the robot may control the other light of the two lights based on the location of the other light of the two lights in the environment, or otherwise.
  • an environment comprises a robot, and can comprise multiple people.
  • the multiple people may be in the environment at the same time or at different times.
  • the user issues a command to the robot for the robot to issue an audio notification to their partner when the robot next detects the partner in the environment.
  • the reference entity is the user and the given entity is the partner.
  • the relationship between the given entity and the reference entity is an interaction-based relationship.
  • the first, common attribute is that the entity is a person
  • the second, disambiguating attribute is that the person is the user’s partner.
  • the robot performs identity recognition and extraction of interpersonal relationships between people.
  • the robot may observe interactions between people, for example between the user and one or more other people, and may use such observed interactions to determine which person is most likely to be the user’s partner.
  • the robot learns information from people in the environment and builds a dependency graph representing dependencies between people as entities of interest.
  • a dependency graph is an example of data representing knowledge of the environment.
  • the robot may receive the command at a first time, and may perform the action in response to a trigger event.
  • the trigger event is the detection of the user’s partner in the environment.
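A deliberately crude sketch of the interaction-based inference in this example is shown below: the person most frequently observed interacting with the user is taken as the likely partner. Counting co-occurrences is an assumption standing in for the identity recognition and relationship extraction described above.

```python
# Crude sketch of inferring an interaction-based relationship from
# observed interactions between people.

from collections import Counter
from typing import List, Tuple

# Observed interactions as (person, person) pairs, e.g. from vision.
observations: List[Tuple[str, str]] = [
    ("user", "alex"), ("user", "alex"), ("user", "sam"), ("alex", "sam"),
]


def likely_partner(user: str, interactions: List[Tuple[str, str]]) -> str:
    counts = Counter(b if a == user else a
                     for a, b in interactions if user in (a, b))
    return counts.most_common(1)[0][0]


print(likely_partner("user", observations))  # alex
```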
  • examples described herein provide scene understanding, for example audio-visual scene understanding, for contextual inference of user intention.
  • a robot may build an audio-visual representation state of connected devices and objects in the scene.
  • the representation may indicate which of the devices and objects are controllable by the robot, the relationships between active and passive objects in the scene, and how the devices and objects relate to user habits and preferences.
  • a robot may learn that there are lights A, B and C in a room, but that only light B is in proximity of a television and that light B can be controlled by the robot. The robot is thus able to perform an action that includes the entities ‘TV’ and ‘light’ conditioned on their mutual spatial relationship.
  • similar local scene knowledge graphs can be provided for different room or object configurations, and hence the robot is able to perform an inference of user intent and respond accordingly.
  • a robotic system obtains environment data representing an environment at one or more points in time. First and second entities are represented in the received environment data. The robotic system generates data representing knowledge of the environment based on the received environment data. The robotic system receives a command to perform an action in relation to a given entity. The command identifies a relationship attribute of the given entity. The relationship attribute indicates a relationship between the given entity and a reference entity. The robotic system determines, using the generated data and the relationship attribute, that the first entity has the relationship attribute and/or that the second entity does not have the relationship attribute. The robotic system performs the action in relation to the first entity based on the determining. As such, the robotic system uses the data representing knowledge of the environment to determine which of multiple entities is the given entity referred to in the command, and is able to react accordingly. Therefore the reliability and accuracy with which commands are processed may be improved.
  • the determining that the first entity has the relationship attribute and/or that the second entity does not have the relationship attribute is performed autonomously. As such, an amount of user burden may be reduced.
  • the generating of the data representing knowledge of the environment involves machine learning. Machine learning may facilitate a more accurate mapping of relationships between entities in the environment compared to a case in which machine learning is not used. An accurate mapping of relationships between entities allows the reliability and accuracy with which commands are processed to be improved.
  • performing the action comprises transmitting a signal for the first entity.
  • the first entity may be external to the robotic system. Transmitting a signal for the first entity allows the first entity to be influenced by the robotic system based on the command. For example, the first entity may be controlled either directly or indirectly via the transmitted signal.
  • the first entity is controllable by the robotic system and the signal is a control signal to control operation of the first entity.
  • the first entity may therefore be controlled by the robotic system on the basis of the processed command. Controlling the first entity may comprise changing an operating state of the first entity.
  • the robotic system may therefore determine which entity of multiple possible entities is to be controlled in accordance with the command, and control the determined entity. This may be particularly useful where the first entity is not readily controllable by the user when the command is issued.
  • the user may be remote from the first entity.
  • the robotic system can act as a proxy to control the first entity on the user’s behalf. As such, the number of different entities the user is required to interact with may be reduced.
  • the signal is arranged to provide the first entity with a notification.
  • the robotic system is able to identify an entity, for example a person, to which a notification is to be provided, using the relationship information stored in the data representing knowledge of the environment. The delivery of the notification is therefore provided more reliably compared to a case in which the data representing knowledge of the environment is not used. Further, the notification may be delivered even when the first entity is not present when the command is issued, thereby increasing the functionality of the robotic system.
  • the generating of the data representing knowledge of the environment comprises determining an operating state of the first entity and/or the second entity and/or the reference entity, and wherein the data representing knowledge of the environment represents the determined operating state or states. As such, a detailed picture of the entities in the environment at a given point in time is obtained, allowing an accurate determination of the entity referred to in the command.
  • the generating of the data representing knowledge of the environment comprises determining whether or not the first entity and/or the second entity is controllable by the robotic system. As such, a detailed picture of the entities in the environment at a given point in time is obtained, allowing an accurate determination of the entity referred to in the command. Further, resources may be used more efficiently, since control signals may not be sent to a given entity unless it is determined that the given entity is in fact controllable by the robotic system.
  • the environment data comprises representations of the environment at multiple points in time and the generating of the data representing knowledge of the environment comprises analysing the representations of the environment at the multiple points in time.
  • the robotic system may accurately and reliably disambiguate commands relating to dynamic environments, in which one or more entities and/or relationships between entities change over time.
  • the environment data comprises a first representation of the environment at a first point in time and a second representation of the environment at a second point in time, later than the first point in time.
  • the generating of the data representing knowledge of the environment may be based on the first representation of the environment at the first point in time.
  • the data representing knowledge of the environment is updated based on the second representation of the environment at the second point in time. As such, the data representing knowledge of the environment may be updated over time to take into account developments or changes in the environment and/or in the relationships between entities in the environment, thereby improving an accuracy and/or reliability with which commands are processed by the robotic system.
  • the generating of the data representing knowledge of the environment comprises analysing the received environment data using static scene analysis.
  • static scene analysis may use fewer hardware resources than may otherwise be the case.
  • Static scene analysis may reduce a need for information to be derived “on the fly” by the robotic system, thereby reducing an amount of processing, compared to a case in which static scene analysis is not used.
  • the generating of the data representing knowledge of the environment comprises analysing the received environment data using dynamic scene analysis.
  • Using dynamic scene analysis may enable more information relating to the environment to be extracted and used for accurate processing of commands compared to some other cases.
  • the robotic system discards the data representing knowledge of the environment in response to a trigger. Discarding the data representing knowledge of the environment may allow subsequent data representing knowledge of the environment to be generated more quickly, thereby reducing latency and improving a user experience.
  • the trigger may comprise an expiry of a predetermined time period.
  • the trigger comprises a determination that a position of the robotic system has changed. As such, redundant knowledge data relating to environments in which the robotic system is no longer located is not stored, thereby reducing an amount of memory requirements of the robotic system.
  • the obtaining of the environment data comprises causing movement of at least part of the robotic system to capture representations of different parts of the environment.
  • a more complete picture of the environment may be obtained and used to accurately process commands compared to a case in which representations of different parts of the environment are not captured.
  • the representations are visual representations captured by one or more cameras
  • obtaining such representations by moving at least part of the robotic system enables a single camera, comprised in the robotic system or otherwise, to be used. If a part of the robotic system were not to move, multiple cameras pointed in multiple different directions may be required in order to capture representations of different parts of the environment. Therefore a complexity and cost of the robotic system may be reduced compared to a case in which part of the robotic system does not move to capture representations of different parts of the environment.
  • the first entity is not in the environment when the command is received.
  • the performing the action is in response to detecting the presence of the first entity in the environment.
  • the given entity referred to in the command may be identified even when the given entity is not present in the environment, thereby facilitating flexible and accurate processing of commands. Further, resources may be used more efficiently by waiting until the first entity is in the environment to perform the action.
  • the reference entity is the second entity.
  • the first entity may be identified as the given entity referred to in the command based on a relationship between the first entity and the second entity, thereby facilitating accurate processing of the command. Where the reference entity is the second entity, there may be fewer entities for the robotic system to consider.
  • a third entity is represented in the received environment data, the reference entity being the third entity.
  • the first entity may be identified as the given entity referred to in the command based on an analysis of relationships of the first and second entities with a third, separate entity.
  • the reference entity is a third entity.
  • a scope of recognisable and processable commands may be increased compared to a case in which the reference entity is the second entity. There may be a trade-off, however, whereby increasing the scope of recognisable commands involves a higher computational complexity.
  • said generating comprises generating part of the data representing knowledge of the environment before the command is received. As such, a latency in processing the command may be reduced compared to a case in which none of the data representing knowledge of the environment is generated before the command is received.
  • said generating comprises generating part of the data representing knowledge of the environment in response to receiving the command.
  • the data representing knowledge of the environment can be kept up-to-date, enabling accurate processing of commands to be maintained, particularly when the robotic system is located in a dynamic environment in which entities and/or relationships between entities may change over time.
  • the relationship comprises a location-based relationship.
  • the robotic system can use contextual information relating to the relative locations of entities with respect to one another to accurately process commands.
  • Location data may be self-contained and readily available in the environment data. Data from other sources is therefore not needed.
  • the robotic system uses the received environment data to determine a location of the first entity relative to the reference entity.
  • the robotic system can autonomously determine locational or proximity-based relationships between entities and use such relationship information to accurately process commands. Determining the location of the first entity relative to the reference entity allows the robotic system to use up-to-date and accurate location information, for example in cases where an initial location of an entity may have changed over time.
  • the relationship comprises an interaction-based relationship.
  • the robotic system may use observed interactions between entities to accurately interpret and process commands.
  • Interaction data may be self-contained and readily available in the environment data. Data from other sources is therefore not needed.
  • the first entity and/or the second entity is a person.
  • the robotic system is able to use information in the data representing knowledge of the environment to distinguish between multiple people to identify a specific person referred to in a command.
  • the reference entity is a person.
  • relationships between a person and multiple other entities may be analysed to accurately process commands.
  • the person may be a user of the robotic system, for example.
  • the reference entity is the robotic system. As such, relationships between the robotic system and multiple other entities may be analysed to accurately process commands.
  • the environment data comprises visual environment data.
  • environment information relating to locational or proximal relationships between entities in the environment may be obtained and used to accurately process commands.
  • the environment data comprises audio environment data.
  • environment information relating to interactions between entities in the environment may be obtained and used to accurately process commands.
  • the command comprises a voice command.
  • a user may interact with the robotic system using a voice command that uses natural language, which may be accurately processed by the robotic system, thereby allowing a more natural and meaningful interaction with the user.
  • the robotic system processes the voice command using natural language processing.
  • the voice command may be interpreted accurately without the user having to modify their natural language, thereby reducing a burden on the user and facilitating meaningful interactions.
  • the command comprises a visual command.
  • visual commands such as gestures may be accurately processed in a similar manner as voice commands or other types of command.
  • relatively ‘natural’ gestures, such as pointing, may be interpreted accurately by the robotic system without the user having to modify their natural behaviour.
  • a robotic system receives environment data representing an environment at one or more points in time.
  • the received environment data is analysed to identify first and second entities.
  • the robotic system receives a command to perform an action in relation to a given entity, the command identifying first and second attributes of the given entity.
  • the robotic system determines, using the received environment data, that the first and second entity both have the first attribute, and that only the first entity of the first and second entities has the second attribute.
  • the robotic system performs the action in relation to the first entity based on the determining. Performing the action comprises transmitting a control signal to control operation of the first entity based on the received command.
  • the robotic system is able to act as a proxy for the first entity, enabling the first entity to be controlled in a new way, namely via the robotic system.
  • Such an interaction technique may have a reduced user burden, and may be more reliable, than if the user were to interact with the first entity directly.
  • multiple controllable entities may be controlled via a single point of contact for the user, namely the robotic system.
  • a robotic system analyses received data to identify first and second entities.
  • the robotic system determines one or both of a current operating state of the first entity and/or the second entity, and a controllability of the first entity and/or the second entity by the robotic system.
  • the robotic system receives a command to perform an action in relation to a given entity, the command identifying an attribute common to both the first and second entities.
  • the robotic system determines, using the analysis, that the command can only be performed in relation to the first entity of the first and second entities.
  • the robotic system performs the action in relation to the first entity based on the determining. As such, a complete picture of the environment and the ability to interact with entities in the environment may be obtained, allowing the command to be processed with a greater accuracy and reliability than a case in which such a complete picture is not obtained.
  • knowledge data representing knowledge of the environment is generated by the robotic system.
  • the knowledge data is received from one or more other entities.
  • the knowledge data may be stored in a network and downloaded therefrom.
  • the knowledge data is received from a further robotic system.
  • the knowledge data may have been generated by the further robotic system.
  • the robotic system is locatable in the environment.
  • a part of the robotic system is locatable in the environment and a further part of the robotic system is locatable outside of the environment.
  • the robotic system may comprise network-based computing components.
  • the robotic system is useable with network-based components.
  • An example of a network-based component is a server. The knowledge data generated by the robotic system may be stored on such network- based components and/or may be retrieved from such network-based components.
  • the robotic system automatically learns information about the environment in which the robotic system is located, for example independently of user supervision.
  • the user can pre-programme the robotic system with names for the entities in the environment manually.
  • naming may also be referred to as “labelling” or “annotating”.
  • the robotic system may provide a naming interface via which the user can name the entities.
  • the user may, for example, be able to use a smartphone, tablet device, laptop or the like to enter the entity names.
  • the user could manually assign a memorable name to a light, for example “desk lamp” rather than “light number 3”.
  • Such an approach involves user attention to perform the pre-programming.
  • Such an approach is therefore somewhat invasive on the user.
  • Such an approach may also be error-prone.
  • Such an approach may also not be effective where the environment is reconfigured, for example if a light named “desk lamp” is moved away from a desk.

Abstract

A robotic system (130) locatable in an environment (100) processes a command. Environment data representing the environment (100) at one or more points in time is obtained. First and second entities (110, 120) are represented in the received environment data. Data representing knowledge of the environment (100) is generated based on the received environment data. A command to perform an action in relation to a given entity is received. The command identifies a relationship attribute of the given entity. The relationship attribute indicates a relationship between the given entity and a reference entity. It is determined, using the generated data and the relationship attribute, that the first entity (110) has the relationship attribute and/or that the second entity (120) does not have the relationship attribute. The action is performed in relation to the first entity (110) based on the determining.

Description

PROCESSING A COMMAND
Technical Field
The present invention relates to processing a command.
Background
Robots and other personal digital assistants are becoming more prevalent in society and are being provided with increased functionality. Robots can be used in relation to other entities. For example, robots may be used to control other entities and/or to facilitate interaction with other entities. Robots can be used in environments where multiple, for example many, other entities are located.
Robots may be controlled via commands issued by users. While some commands may be trivial for the robot to process, other commands may be more complicated for the robot to process, may be misinterpreted by the robot, or the robot may not be able to process such commands at all.
Summary
According to a first aspect of the present invention, there is provided a method, performed by a robotic system locatable in an environment, of processing a command, the method comprising:
obtaining environment data representing the environment at one or more points in time, wherein first and second entities are represented in the received environment data;
generating data representing knowledge of the environment based on the received environment data;
receiving a command to perform an action in relation to a given entity, the command identifying a relationship attribute of the given entity, the relationship attribute indicating a relationship between the given entity and a reference entity; determining, using the generated data and the relationship attribute, that the first entity has the relationship attribute and/or that the second entity does not have the relationship attribute; and
performing the action in relation to the first entity based on the determining. According to a second aspect of the present invention, there is provided a method of processing a command by a robotic system located in an environment, the method comprising:
receiving environment data representing the environment at one or more points in time;
analysing the received environment data to identify first and second entities; receiving a command to perform an action in relation to a given entity, the command identifying first and second attributes of the given entity;
determining, using the received environment data, that:
the first and second entity both have the first attribute; and only the first entity of the first and second entities has the second attribute; and
performing the action in relation to the first entity based on the determining, wherein performing the action comprises transmitting a control signal to control operation of the first entity based on the received command.
According to a third aspect of the present invention, there is provided a method of processing a command by a robotic system, the method comprising:
analysing received data to identify first and second entities and determine one or both of:
a current operating state of the first entity and/or the second entity; and a controllability of the first entity and/or the second entity by the robotic system;
receiving a command to perform an action in relation to a given entity, the command identifying an attribute common to both the first and second entities;
determining, using the analysis, that the command can only be performed in relation to the first entity of the first and second entities; and
performing the action in relation to the first entity based on the determining. According to a fourth aspect of the present invention, there is provided a robotic system configured to perform a method according to any of the first, second or third aspects of the present invention.
According to a fifth aspect of the present invention, there is provided a computer program comprising instructions which, when executed, cause a robotic system to perform a method according to any of the first, second or third aspects of the present invention.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 shows schematically an example of an environment in accordance with an embodiment of the present invention;
Figure 2 shows schematically an example of a knowledge graph representing the environment shown in Figure 1 in accordance with an embodiment of the present invention; and
Figure 3 shows a schematic block diagram of an example of a command received by a robot in accordance with an embodiment of the present invention.
Detailed Description
Referring to Figure 1, there is shown an example of an environment 100. An environment may also be referred to as a “scene” or a “working environment”.
In this example, the environment 100 comprises a plurality of entities. An entity may also be referred to as an “object”. In this specific example, the environment 100 comprises at least three entities, namely a first entity 110, a second entity 120 and a robotic system 130. The robotic system 130 is an example of an entity. The first and second entities 110, 120 are different from the robotic system 130. The first entity 110 and/or the second entity 120 may be human. The first entity 110 and/or the second entity 120 may be non-human. The first entity 110 and/or the second entity 120 may be inanimate. The first entity 110 and/or the second entity 120 may be animate. In this example, the first entity 110 and the second entity 120 are both in the environment 100 simultaneously. In other examples, the first entity 110 and the second entity 120 are in the environment 100 at different times. As such, the environment 100 may comprise one or both of the first entity 110 and the second entity 120 at a given point in time.
A robotic system may be considered to be a guided agent. A robotic system may be guided by one or more computer programs and/or electronic circuitry. A robotic system may be guided by an external control device or the control may be embedded within the robotic system. A robotic system may comprise one or more components, implemented on one or more hardware devices. In an example, the components of the robotic system are comprised in a single housing. In another example, the components of the robotic system are comprised in a plurality of housings. The plurality of housings may be distributed in the environment 100. The plurality of housings may be coupled by wired and/or wireless connections. The robotic system may comprise software components, including cloud or network-based software components. A robotic system may be configured to interact with human and/or non-human entities in an environment. A robotic system may be considered an interactive device. A robotic system as described herein may or may not be configured to move. A robotic system may be considered to be a smart device. An example of a smart device is a smart home device, otherwise referred to as a home automation device. A smart home device may be arranged to control aspects of an environment including, but not limited to, lighting, heating, ventilation, telecommunications systems and entertainment systems. A robotic system as described in the examples herein may be arranged to perform some or all of the functionality of a smart home device. The robotic system 130 may comprise an autonomous robot. An autonomous robot may be considered to be a robot that performs functions with a relatively high degree of autonomy or independence compared to non-autonomous robots.
For brevity, a robotic system may be referred to as a “robot” herein, it being understood that the robotic system can comprise more than one hardware device.
In this example, the first entity 110 has first and second attributes.
In this example, the second entity 120 has the first attribute. However, the second entity 120 does not have the second attribute. The second entity 120 may or may not have a third attribute, the third attribute being different from the second attribute.
As such, the first attribute is a common attribute with which both the first and second entities 110, 120 are associated. In some examples, the first attribute is associated with an entity type.
Further, the second attribute is a disambiguating attribute (which may also be referred to as a “distinguishing attribute”) with which only one of the first and second entities 110, 120, namely the first entity 110, is associated.
Indicators may be used to indicate the first attribute and/or the second attribute. Such indicators may be part of commands received by the robot 130, for example. In some examples, an indicator for the first attribute is substantival. In other words, in such examples, the indicator for the first attribute pertains to a substantive. “Substantive” is another term for “noun”. As such, the term “substantival” may be referred to as “nounal”.
In some examples, an indicator for the second attribute is adjectival. In other words, in such examples, the second attribute and/or the indicator for the second attribute pertains to an adjective. For example, a command received by the robot 130 may refer to a “desk light”. In this example, “light” is substantival and indicates the first attribute, and “desk” is adjectival and indicates the second attribute. In another example, the command may refer to a “light next to the TV”. In this example, “light” is substantival and indicates the first attribute, and “next to the TV” is adjectival and indicates the second attribute.
In some examples, the second attribute is referred to as a “relationship attribute”. The relationship attribute is associated with a relationship between the first entity 110 and a reference entity. In some examples, the reference entity is the second entity 120. In other examples, the reference entity is different from the second entity 120. In some examples, the relationship comprises a location-based relationship. In some examples, the relationship comprises an interaction-based relationship.
In other examples, the second attribute is an absolute attribute of the first entity 110. For example, where a command refers to a “red light”, the indicator “red” indicates an absolute attribute, rather than a relationship attribute. The robotic system 130 comprises input componentry 140. The input componentry 140 comprises one or more components of the robotic system 130 that are arranged to receive input data.
The input componentry 140 obtains environment data. For example, the input componentry 140 may receive environment data. In some examples, the robotic system 130 generates the environment data. The environment data is a type of input data. The environment data represents an environment in which the robotic system 130 is locatable. In some examples, the environment data represents the environment 100 in which the robotic system 130 is located at the point in time at which the environment data is received, namely the surroundings of the robotic system 130. “Surroundings” is used herein to refer to the physical space around the robotic system 130 at a given point in time. The environment data represents the environment 100 at one or more points in time. The first entity 110 and the second entity 120 are represented in the received environment data. Where the environment data comprises multiple time samples, a given time sample may represent both the first entity 110 and the second entity 120 and/or a given time sample may represent only one of the first entity 110 and the second entity 120.
The input componentry 140 comprises an interface. The interface may be between the robotic system 130 and the environment 100. An interface comprises a boundary via which data can be passed or exchanged in one or both directions.
In some examples, the input componentry 140 comprises a camera. The obtained environment data may comprise visual environment data received using the camera. For example, the environment data may comprise image data.
In some examples, the input componentry 140 comprises a microphone. The obtained environment data may comprise audio environment data received using the microphone.
In some examples, the input componentry 140 comprises a network interface. The network interface may enable the robotic system 130 to receive data via one or more data communications networks. Some or all of the received environment data may be received using the network interface. As such, the environment data could represent an environment in which the robotic system 130 is not presently located, in other words an environment in which the robotic system 130 is not located at the point in time at which the environment data is received. In some examples, the environment data is based on a virtual or artificial environment. As such, a camera or microphone, comprised in the robotic system or otherwise, may not be used to obtain the environment data. For example, an artificial environment may be generated by a computer, and a representation of that environment may be provided to the robotic system 130. In some examples, the environment data is based on an environment whose image is captured by an image capture device before the robotic system 130 is moved to or located in the environment.
The robotic system 130 also comprises a controller 150. The controller 150 may be a processor. The controller 150 can include a microprocessor, microcontroller, processor module or subsystem, programmable integrated circuit, or programmable gate array. The controller 150 is communicatively coupled to the input componentry 140. The controller 150 may therefore receive data from the input componentry 140.
In some examples, the obtaining of the environment data by the robotic system 130 comprises the controller 150 causing movement of at least part of the robotic system 130 to capture representations of different parts of the environment 100. For example, where the input componentry 140 comprises a camera, a part of the robotic system 130 comprising the camera may move from an initial position to one or more further positions to enable the camera to capture images of different parts in the environment. The environment data obtained by the robotic system 130 may comprise the captured images of the different parts of the environment. A representation of a first part of the environment 100 may be at one point in time and a representation of a second part of the environment 100 may be at another point in time. In some examples, representations of both the first part and the second part of the environment 100 are at the same point in time. Such movement may comprise rotation of the at least part of the robotic system 130. For example, the at least part of the robotic system 130 may rotate 360 degrees around a vertical axis. As such, images may be captured for different angles of the rotation. For example, an image may be captured for each degree of rotation. The robotic system 130 may be configured to move in other ways besides rotation. Images of the environment from the different perspectives may be stitched together digitally to form a 360 degree map of the environment. Therefore, a 360 degree spatial map of the environment may be obtained without multiple cameras positioned at different locations being used. In other examples, multiple cameras may be used to capture environment data from different perspectives. Cameras may be positioned at various locations in the environment 100, for example.
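As a rough illustration of the capture loop described above, the following Python sketch rotates a camera through fixed angular steps and records one image per step. The Camera and RotationMotor objects implied by the `camera` and `motor` parameters, and the `rotate_to` and `capture` calls, are hypothetical hardware abstractions introduced for this sketch; the patent does not prescribe any particular interface.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Snapshot:
    angle_degrees: float
    image: bytes  # raw image data returned by the camera


def capture_panorama(camera, motor, step_degrees: float = 1.0) -> List[Snapshot]:
    """Rotate part of the robot through 360 degrees, capturing one image per angular step."""
    snapshots: List[Snapshot] = []
    angle = 0.0
    while angle < 360.0:
        motor.rotate_to(angle)                               # hypothetical: aim the camera at `angle`
        snapshots.append(Snapshot(angle, camera.capture()))  # hypothetical: grab one frame
        angle += step_degrees
    return snapshots
```

The returned snapshots could then be stitched digitally into a 360 degree spatial map, for example with an off-the-shelf panorama-stitching routine.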
In some examples, the controller 150 is configured to generate data representing knowledge of the environment 100. The data representing knowledge of the environment 100 may be referred to as “knowledge data” or “a knowledge representation” of the environment 100. The controller 150 generates the data representing knowledge of the environment based on the received environment data. In some examples, the controller 150 generates the knowledge data based on initial knowledge data and the received environment data. For example, the robotic system 130 may be pre-loaded with initial knowledge data representing knowledge of an initial environment and the controller 150 can modify the initial knowledge data based on the received environment data to more accurately represent the actual environment 100. This may be effective where, for example, an approximation of the actual environment 100 is known before the robotic system 130 is located in the environment 100, or where the robotic system 130 remains in the same environment for a relatively long period of time. Pre-loading the initial knowledge data in this way may reduce latency in generating the model of the environment 100, compared to the robotic system 130 generating the knowledge data representing knowledge of the environment 100 without the initial knowledge data. Modifying the initial knowledge data based on the received environment data also enables the accuracy of the knowledge data and/or the information represented therein to be evolved and/or improved over time.
The data representing knowledge of the environment is a conceptual or semantic data model of entities and/or interrelations of entities in the environment. In some examples, the knowledge data comprises a knowledge graph. The knowledge graph is a topological representation of entities in the environment. The knowledge graph may be referred to as a “scene graph”. The knowledge graph may be represented as a network of entities, entity attributes and relationships between entities. In some examples, the knowledge data does not comprise a graphical or topological representation of the environment. For example, the knowledge data may comprise a model that can recognise and/or learn entities, entity attributes and relationships between entities, without generating an explicit graphical representation. The knowledge data may comprise a statistical and/or a probabilistic model. The knowledge data comprises a relationship model representing relationships between entities in the environment. The knowledge data may be able to store relatively large amounts of complicated information or knowledge compared to a more simplistic representation or recollection of the environment. The generated knowledge data may be stored locally, for example in a memory of the robotic system 130, and accessed by the controller 150 at one or more subsequent points in time.
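The following Python sketch shows one possible in-memory form for such knowledge data, assuming a simple graph of entities, attributes and typed relationships. The class and field names (`Entity`, `Relationship`, `KnowledgeGraph`, `operating_state`, `controllable`) are illustrative choices for this sketch, not terms defined by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class Entity:
    name: str
    attributes: Set[str] = field(default_factory=set)  # e.g. {"light"} or {"light", "red"}
    operating_state: str = "unknown"                    # e.g. "on" or "off"
    controllable: bool = False                          # whether the robotic system can control it


@dataclass
class Relationship:
    source: str
    target: str
    kind: str  # e.g. "next_to" (location-based) or "interacts_with" (interaction-based)


class KnowledgeGraph:
    """A tiny relationship model: entities as nodes, typed relationships as edges."""

    def __init__(self) -> None:
        self.entities: Dict[str, Entity] = {}
        self.relationships: List[Relationship] = []

    def add_entity(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def relate(self, source: str, target: str, kind: str) -> None:
        self.relationships.append(Relationship(source, target, kind))

    def related(self, name: str, kind: str) -> Set[str]:
        """Names of entities linked to `name` by a relationship of type `kind`."""
        out: Set[str] = set()
        for r in self.relationships:
            if r.kind != kind:
                continue
            if r.source == name:
                out.add(r.target)
            elif r.target == name:
                out.add(r.source)
        return out
```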
The controller 150 may generate the knowledge data independently of user supervision. As such, a burden on the user may be reduced compared to a case in which user supervision and/or input is used. In some examples, generating the knowledge data involves machine learning. Generating the knowledge data may involve the use of an artificial neural network, deep learning model and/or other machine learning models. The machine learning used to generate the knowledge data may be supervised, semi-supervised, weakly-supervised, data-supervised, or unsupervised machine learning, for example. “Generating” as used herein may refer to an act of creating new knowledge data or an act of updating or amending previously-created knowledge data. In some examples, generating the data representing knowledge of the environment is performed autonomously.
In some examples, generating the knowledge data comprises determining an operating state of the first entity 110 and/or the second entity 120. An operating state of a given entity indicates a state in which the given entity is currently operating. In an example, the operating state is binary in that it may be one of two possible values. An example of a binary operating state is whether the given entity is on or off. In another example, the operating state is not binary, in that it may have more than two possible values. The knowledge data may represent the determined operating state of the first entity 110 and/or of the second entity 120. Alternatively or additionally, the determined operating state of the first entity 110 and/or of the second entity 120 may be stored separately from the knowledge data.
In some examples, generating the knowledge data comprises determining whether or not the first entity 110 and/or the second entity 120 is controllable by the robotic system 130. The controllability of the first entity 110 and/or second entity 120 may be represented in the knowledge data. The controllability could be determined in various ways. For example, the controllability may be determined based on an entity type, such as TV, an initial interaction between the robotic system 130 and the entity, a test signal, etc.
As explained above, in some examples the environment data comprises representations of the environment 100 at multiple points in time. As such, generating the knowledge data may comprise analysing representations of the environment 100 at multiple points in time. In some examples, the environment data comprises a first representation of the environment at a first point in time and a second representation of the environment at a second point in time, later than the first point in time. Knowledge data may be generated based on the first representation of the environment at the first point in time. Such knowledge data may then be updated based on the second representation of the environment at the second point in time. The first and second representations may comprise static images, or“snapshots”, of the environment taken at different points in time, for example. In some examples, the second representation is obtained in response to a trigger. In an example, the second representation is obtained in response to the controller 150 determining that the position of the robotic system 130 has changed. For example, the controller 150 may be configured to determine that the robotic system 130 has moved to a different and/or unknown environment, or to a different location within a same environment. As such, the trigger may be a determination that the robotic system 130 has moved. The controller 150 may be configured to determine that the robotic system 130 has moved based on an output from a motion sensor of the robotic system 130. In an example, the trigger is the expiry of a predetermined time period. For example, the robotic system 130 may be configured to receive new representations of the environment and update the knowledge data accordingly once per day, once per week, or at other predetermined intervals. In some examples, the trigger is the receipt of a command, which the robotic system 130 recognises as being the trigger.
In some examples, the controller 150 uses static scene analysis to generate the knowledge data. In static scene analysis, environment data representing the environment 100 at a single point in time is analysed. Where the environment data comprises visual environment data, the visual environment data may comprise an image representing the environment 100 at a single point in time. The image may be analysed by itself. Such analysis may identify one or more entities represented in the image, one or more attributes of one or more entities represented in the image and/or one or more relationships between one or more entities represented in the image. In some examples, after the static scene analysis has been performed on the environment data representing the environment 100 at the single point in time, such environment data is discarded. Such environment data can therefore be used to improve the knowledge data, but is not stored in the robotic system 130 after it has been analysed. This can enable the robotic system 130 to have smaller amounts of memory than may otherwise be the case. In some examples, the environment data is not discarded.
In some examples, the controller 150 uses dynamic scene analysis to generate the knowledge data. In dynamic scene analysis, environment data representing the environment 100 at multiple points in time is analysed. Where the environment data comprises visual environment data, the visual environment data may comprise a sequence of images representing the environment 100 at multiple points in time. The sequence of images may be in the form of a video, or otherwise. Such analysis may identify one or more entities represented in the sequence of images, one or more attributes of one or more entities represented in the sequence of images and/or one or more relationships between one or more entities represented in the sequence of images. In some examples, first data representing the environment 100 at a first point in time is stored in the robotic system 130. In some examples, the first data is analysed using static scene analysis. However, in some such examples, the first data is not discarded following the static scene analysis. Instead, second data representing the environment 100 at a second, later point in time is received. The first and second data are subject to dynamic scene analysis. The first and second data may be discarded following the dynamic scene analysis. Dynamic scene analysis may capture additional information that cannot be extracted using static analysis. Examples of such additional information include, but are not limited to, how entities move in relation to each other over time, how entities interact with each other over time etc. Dynamic scene analysis may involve storing smaller amounts of data than static scene analysis, in some examples. The controller 150 may generate the knowledge data using static scene analysis and/or dynamic scene analysis. Dynamic scene analysis may involve more data being stored at least temporarily than static scene analysis, but may enable more information on the environment 100 to be extracted.
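A minimal sketch of how the two analysis modes might differ in code is given below, reusing the KnowledgeGraph sketch above. The `detect_entities` and `detect_interactions` functions are placeholder stubs standing in for the perception models (object detection, tracking and so on) that a real implementation would need; the patent does not specify them.

```python
from typing import Iterable, List, Tuple


def detect_entities(frame) -> List["Entity"]:
    # Placeholder: a real implementation would run an object detector on the frame.
    return []


def detect_interactions(frames) -> List[Tuple[str, str, str]]:
    # Placeholder: a real implementation would track entities and interactions across frames.
    return []


def static_scene_analysis(graph: "KnowledgeGraph", frame) -> None:
    """Analyse a single snapshot of the environment."""
    for entity in detect_entities(frame):
        graph.add_entity(entity)
    # The raw frame may be discarded afterwards; only the knowledge data is retained.


def dynamic_scene_analysis(graph: "KnowledgeGraph", frames: Iterable) -> None:
    """Analyse a sequence of snapshots, additionally capturing temporal relationships."""
    frames = list(frames)
    for frame in frames:
        static_scene_analysis(graph, frame)
    for source, target, kind in detect_interactions(frames):
        graph.relate(source, target, kind)
```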
The robotic system 130 receives a command to perform an action in relation to a given entity via the input componentry 140. In some examples, the command identifies a relationship attribute of the given entity. The relationship attribute indicates a relationship between the given entity and a reference entity.
Where the input componentry 140 comprises a camera, the received command may comprise a visual command received via the camera. For example, a user of the robot 130 may perform a gesture, which is captured by the camera and is interpreted as a command.
Where the input componentry 140 comprises a microphone, the received command may comprise a voice command received via the microphone. For example, a user of the robotic system 130 may speak a command out loud, which is picked up by the microphone and is interpreted as a command. The controller 150 may be configured to process the voice command using natural language processing (NLP). For example, the controller 150 may be configured to use natural language processing to identify the relationship attribute of the given entity from the received voice command. In some examples, the microphone is used to capture environment data representing the environment 100. The controller 150 may be configured to determine attributes of entities in the environment based on received audio data. For example, the controller 150 may determine that a television is in an activated state based on receiving an audio signal from the direction of the television.
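By way of illustration only, the fragment below extracts an action, a common (nounal) attribute and a relationship attribute from a transcribed voice command using a few hand-written patterns. A deployed system would use a full natural language processing pipeline; the `parse_command` helper, the regular expression and the relation phrase list are assumptions made for this sketch.

```python
import re
from typing import Optional, Tuple

# Relation phrases recognised by this toy parser; a real system would not hard-code these.
RELATION_WORDS = ("next to", "near", "under")


def parse_command(text: str) -> Optional[Tuple[str, str, Tuple[str, str]]]:
    """Return (action, common_attribute, (relation, reference)), or None if unrecognised."""
    text = text.lower().strip()
    match = re.match(r"(turn on|turn off|switch on|switch off)\s+the\s+(\w+)\s+(.*)", text)
    if not match:
        return None
    action, noun, rest = match.groups()
    for relation in RELATION_WORDS:
        if rest.startswith(relation + " the "):
            reference = rest[len(relation) + len(" the "):]
            return action, noun, (relation.replace(" ", "_"), reference)
    return action, noun, ("", rest)


print(parse_command("Turn on the light next to the TV"))
# -> ('turn on', 'light', ('next_to', 'tv'))
```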
Where the input componentry 140 comprises a network interface, the command may be received via the network interface. For example, a user of the robotic system 130 may transmit a command over a data communications network, which is received via the network and is interpreted as a command. The user may not be in the environment 100 when they issue the command. As such, the user may be able to command the robotic system 130 remotely.
In some examples, the command identifies first and second attributes of the given entity. The relationship may be a location-based relationship. For example, the relationship may be based on a location of the given entity relative to the reference entity.
The relationship may be an interaction-based relationship. For example, the relationship may be based on one or more interactions between the given entity and the reference entity.
In some examples, the reference entity is the second entity 120. In other examples, the reference entity is a third entity represented in the received environment data. The third entity may be the robotic system 130. The third entity may be different from the second entity 120 and the robotic system 130.
The reference entity may be a person. For example, the reference entity may be a user of the robotic system 130.
In this example, the controller 150 determines that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute. The determining is performed using the knowledge data and the relationship attribute. In an example, the controller 150 searches the knowledge data using the relationship attribute. Where the knowledge data comprises a knowledge graph, the controller 150 may search the knowledge graph using the relationship attribute. Searching the knowledge graph using the relationship attribute may involve the controller 150 accessing the knowledge graph and querying the knowledge graph using the relationship attribute identified in the command. In other examples, the controller 150 is configured to infer from the knowledge data and the relationship attribute that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute.
In some examples, the controller 150 is configured to use the received environment data to determine a location of the first entity 110 relative to the reference entity. The controller 150 may also be configured to determine a location of the second entity 120 relative to the reference entity. In examples where the relationship attribute relates to a location-based relationship, the determined location of the first entity 110 and/or the second entity 120 relative to the reference entity may be used to determine that the first entity 110 is the given entity. In some examples, the controller 150 determines that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute autonomously. In other words, in such examples, the controller 150 performs the determining without requiring specific user input.
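Continuing the knowledge-graph sketch above, the disambiguation step might look something like the following, where the relationship attribute is used to filter the candidate entities that share the common attribute. The worked example loosely mirrors the light/television scenario described later in this document; the entity names are invented for illustration.

```python
def resolve_entity(graph, common_attr, relation, reference):
    """Find the single entity that has both the common attribute and the relationship attribute."""
    candidates = [e for e in graph.entities.values() if common_attr in e.attributes]
    if len(candidates) == 1:
        return candidates[0]                          # no disambiguation needed
    related_to_reference = graph.related(reference, relation)
    matches = [e for e in candidates if e.name in related_to_reference]
    return matches[0] if len(matches) == 1 else None  # still ambiguous, or no match


# Worked example: two lights, only one of which is next to the television.
graph = KnowledgeGraph()
graph.add_entity(Entity("light_1", attributes={"light"}, controllable=True))
graph.add_entity(Entity("light_2", attributes={"light"}, controllable=True))
graph.add_entity(Entity("tv", attributes={"tv"}))
graph.relate("light_1", "tv", "next_to")

print(resolve_entity(graph, "light", "next_to", "tv").name)  # -> light_1
```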
The controller 150 is configured to perform the action indicated in the received command in relation to the first entity 110 based on the determining that the first entity 110 has the relationship attribute and/or that the second entity 120 does not have the relationship attribute.
As such, the controller 150 is able to determine which of the first entity 110 and the second entity 120 is the given entity that is referred to in the received command, and react accordingly.
The command can therefore be processed and responded to with a greater accuracy and/or reliability compared to a case in which the controller 150 is not able to determine which of the first entity 110 and the second entity 120 is the given entity. In particular, the knowledge data representing knowledge of the environment 100 enables the robotic system 130 to more accurately determine which of multiple entities is the given entity referred to in the command. By processing commands more accurately, the robotic system 130 may be able to interact with a user in a more natural manner compared to a case in which the controller 150 is not able to determine which of the first entity 110 and the second entity 120 is the given entity. For example, the robotic system 130 may be able to interpret commands which may be too ambiguous for some other, known robotic systems to interpret. Such potential ambiguity could be due, for example, to natural language used by the user when issuing the command. The robotic system 130 is able to disambiguate such commands through the use of the knowledge data. This allows the robotic system 130 to infer user intention non-trivially without the need for more explicit, less ambiguous commands to be issued by the user, which would result in a less natural interaction with the robotic system 130.
Further, by using the knowledge data representing knowledge of the environment 100, the robotic system 130 may be able to handle new commands or commands that have not previously been encountered. For example, the knowledge data may store relationship data representing a plurality of relationships having a plurality of relationship types between different entities, before any commands have even been received. Such relationship data may be stored and updated via the knowledge data, and used to interpret subsequently-received commands.
In some examples, the controller 150 is configured to generate at least part of the knowledge data before the command is received by the input componentry 140. The robotic system 130 may be placed into a calibration state to generate at least part of the environment data. The robotic system 130 may be placed into the calibration state when the robotic system 130 is first placed in the environment 100, or otherwise. Generating at least part of the knowledge data may be performed autonomously, or without user supervision. Generating at least part of the knowledge data before the command is received may reduce a latency in processing the command compared to a situation in which none of the knowledge data has been generated before the command is received. For example, less processing may be performed after receiving the command compared to a case in which all of the knowledge data is generated after the command is received, thereby enabling a response time of the robotic system 130 to be reduced.
In some examples, the controller 150 is configured to generate at least part of the knowledge data in response to the command being received via the input componentry 140. For example, the knowledge data may be updated with new information obtained after the command is received. Generating at least part of the knowledge data in response to receiving the command may improve an accuracy with which the command is processed, as the knowledge data is kept relatively up-to-date. In some examples, the knowledge data is updated in response to receiving the command if it is determined that environment data has not been received within a predetermined time period prior to receiving the command. This may be a particular consideration in relatively dynamic environments, where entities may move in relation to one another and/or may enter and exit the environment at particular points in time.
In the example of Figure 1, the robotic system 130 also comprises output componentry 160. The output componentry 160 comprises one or more components of the robotic system 130 that are arranged to generate one or more outputs, for example in the form of output data or signalling. The controller 150 is communicatively coupled to the output componentry 160. The controller 150 may transmit data to the output componentry 160 to cause the output componentry 160 to generate output data. The output componentry 160 comprises an interface, for example between the robotic system 130 and the environment.
In some examples, the output componentry 160 comprises a loudspeaker. In such examples, performing the action may comprise causing the loudspeaker to output a sound. The sound may include a notification, alert, message or the like, to be delivered to a person.
In some examples, the output componentry 160 comprises a network interface. The network interface may enable the robotic system 130 to output data via one or more data communication networks. Where the input componentry 140 comprises a network interface, the network interface comprised in the output componentry 160 may be the same as or different from the network interface comprised in the input componentry 140. The controller 150 may be operable to cause data to be transmitted via the network interface.
In some examples, the output componentry 160 is operable to transmit a signal. Performing the action may comprise causing the output componentry 160 to transmit a signal for the first entity 110.
In an example, the signal is arranged to provide the first entity 110 with a notification. The first entity 110 may be a person, for example, and the robotic system 130 may be configured to provide the person with a message or alert. In some examples, the signal is an audio signal to provide an audio notification to the first entity 110. In some examples, the signal is a visual signal to provide a visual notification to the first entity 110.
In some examples, for example when the first entity 110 is controllable by the robotic system 130, the action comprises controlling the first entity 110. In such examples, the controller 150 may be configured to control the first entity 110 by causing the output componentry 160 to transmit a control signal to control operation of the first entity 110. The control signal may comprise an electrical signal operable to control one or more operations, components and/or functions of the first entity 110. The control signal may be transmitted to the first entity 110 itself, or may be transmitted to another entity that in turn controls the first entity 110. The control signal may be transmitted when the first entity 110 is in the vicinity of the robot 130 or when the first entity 110 is not in the vicinity of the robotic system 130. In an example, the control signal is operable to change an operating state of the first entity 110. Changing the operating state may involve activating or deactivating the first entity 110. Different control signals may be generated based on different commands received by the input componentry 140.
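A hedged sketch of this control path is shown below, reusing the Entity sketch above. The `output.send` call stands in for whatever transport the output componentry provides (for example a network interface); the patent does not specify a control protocol, so the message format here is purely illustrative.

```python
def perform_action(output, entity, action):
    """Transmit a control signal to change the identified entity's operating state."""
    if not entity.controllable:
        raise ValueError(f"{entity.name} is not controllable by the robotic system")
    desired_state = "on" if action in ("turn on", "switch on") else "off"
    if entity.operating_state != desired_state:
        # `output.send` is a stand-in for the output componentry's transport.
        output.send(target=entity.name, command={"set_state": desired_state})
        entity.operating_state = desired_state
```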
In some examples, the first entity 110 is not in the environment when the command is received by the input componentry 140. In such examples, performing the action in relation to the first entity 110 may be performed in response to the robotic system 130 detecting the presence of the first entity 110 in the environment. For example, the robotic system 130 may monitor the environment and, when the presence of the first entity 110 is detected in the environment, perform the action.
The robotic system 130 may have multiple users.
In some examples, the controller 150 is configured to generate different knowledge data each associated with a different one of the multiple users. For example, first knowledge data representing knowledge of the environment may be associated with a first user and contain attribute and/or relationship data that is specific to the first user, and second knowledge data representing knowledge of the environment may be associated with a second user and contain attribute and/or relationship data that is specific to the second user.
In other examples, information relating to multiple users may be stored in a single knowledge representation of the environment.
In some examples, the controller 150 is configured to discard some or all of the knowledge data. Discarding some or all of the knowledge data may comprise deleting some or all of the knowledge data from memory. The knowledge data may be discarded in response to a trigger. An example of such a trigger is a determination that the robotic system 130 is moving or has been moved to a new environment, the new environment being different from the environment 100. Another example of such a trigger is the user of the robotic system 130 changing. A further example of such a trigger is an expiry of a predetermined time period. Some or all of existing knowledge data may be discarded prior to generating new knowledge data, for example representing a new environment and/or for a new user. Discarding some or all of existing knowledge data enables newly received commands to be interpreted based on a current environment and/or a current user, instead of a previous environment and/or a previous user. Discarding redundant knowledge data, for example knowledge data representing knowledge of old environments, also enables an amount of storage required to store knowledge data to be reduced. If the existing knowledge data is not discarded, it may take a relatively long amount of time for the robotic system 130 to converge on a view of the new situation. By deleting the existing knowledge data, the robotic system 130 can build new knowledge data representing knowledge of the environment from scratch, which may lead to quicker convergence, particularly if the new situation is relatively dissimilar to the previous situation. However, it may be useful to retain some of the information from the previous knowledge data. For example, if the robotic system 130 is moved to a new room in a same house in which the robotic system 130 was previously located, it may be useful to retain relationship information between entities where the entities are people who live in the house, since those relationships are unlikely to be affected by the change in location of the robotic system 130 in the house.
Referring to Figure 2, there is shown an example of a knowledge graph 200. The knowledge graph 200 represents an environment. The knowledge graph 200 is an example of data representing knowledge of the environment. In this example, the knowledge graph 200 represents the environment 100 described above with reference to Figure 1.
In the knowledge graph 200, the first entity 110, the second entity 120 and the robotic system 130 are represented. In other examples, the first entity 110 and the second entity 120 are represented, but the robotic system 130 is not represented. In particular, in such other examples, the robotic system 130 may be configured to exclude itself from the knowledge graph 200. In this example, each of the first entity 110, the second entity 120 and the robotic system 130 corresponds to a node in the knowledge graph 200.
The knowledge graph 200 also stores relationships between entities. Relationships are represented in the knowledge graph 200 as edges between nodes. Such relationships may, for example, be location-based and/or proximity-based and/or interaction-based and/or dependency-based relationships. Edge R1,2 represents a relationship between the first entity 110 and the second entity 120. Edge R1,R represents a relationship between the first entity 110 and the robotic system 130. Edge R2,R represents a relationship between the second entity 120 and the robotic system 130.
The knowledge graph 200 also stores one or more attributes of the first and second entities 110, 120. Attributes are represented as labels of the nodes. Attribute A1 is common to the first entity 110 and the second entity 120. Attribute A1 may relate to an entity type of the first and second entities 110, 120. Attribute A2 is not common to the first entity 110 and the second entity 120. The first entity 110 has attribute A2 but the second entity 120 does not have attribute A2. Attribute A2 may be a relationship attribute. Attribute A2 may therefore relate to a relationship between the first entity 110 and the reference entity. Attribute A2 may be based on R1,2, R1,R, or otherwise. In this example, the second entity 120 has an attribute A3, different from attributes A1 and A2. Attribute A3 may also be a relationship attribute relating to a relationship between the second entity 120 and another entity, for example the reference entity. Attribute A3 may be based on relationship data R1,2, R2,R, or other relationship data. Similar approaches can be used for relationships between more than two entities.
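Using the data-structure sketch introduced earlier, the knowledge graph of Figure 2 could be populated roughly as follows. The entity names and the use of the figure's edge labels R1,2, R1,R and R2,R as relationship types are illustrative only.

```python
fig2 = KnowledgeGraph()
fig2.add_entity(Entity("first_entity_110", attributes={"A1", "A2"}))
fig2.add_entity(Entity("second_entity_120", attributes={"A1", "A3"}))
fig2.add_entity(Entity("robotic_system_130"))

# Edges of the knowledge graph; the figure's edge labels are used as relationship types here.
fig2.relate("first_entity_110", "second_entity_120", "R1,2")
fig2.relate("first_entity_110", "robotic_system_130", "R1,R")
fig2.relate("second_entity_120", "robotic_system_130", "R2,R")
```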
Referring to Figure 3, there is shown a schematic representation of an example of a command 300. The command 300 is received by a robot. In this example, the command 300 is received by the robotic system 130 described above with reference to Figure 1. The command 300 may be issued by a user of the robotic system 130.
In this example, the command 300 identifies an action, a first attribute, A1, and a second attribute, A2. The first attribute, A1, is common to the first entity 110 and the second entity 120 in the environment 100. The second attribute, A2, is a distinguishing attribute, which is associated with the first entity 110 but not the second entity 120. The second attribute, A2, may be a relationship attribute. The robotic system 130 may interpret the command 300 to determine that the action is to be performed in relation to the first entity and not the second entity, based on the attributes identified in the command 300, namely A1 and A2.
In a first example scenario, a robot can perform an automatically inferred lighting operation using natural language processing. In the first example scenario, an environment comprises a robot, two lights and a television. A first light of the two lights is next to the television, whereas a second light of the two lights is not. The robot can recognise and semantically understand the working environment in which the robot is located. For example, the robot can generate a semantic model of the environment. The semantic model of the environment is an example of data representing knowledge of the environment. The robot can identify each of the two lights and the television in the environment, using received environment data. The robot can also identify a relationship between at least the first light and the television, and the second light and the television. For example, the robot can determine the spatial positioning of the two lights and the television in the scene. The robot may use the determined spatial positioning to identify mutual proximity and/or dependency.
The robot can identify a current operating state of the first light and/or second light and/or television. The robot may identify the current operating state using visual and/or acoustic environment data.
In this scenario, a user commands the robot to turn on the light next to the television. As such, the action to be performed is to change the operating state of an entity from ‘off’ to ‘on’. The first attribute is that the entity is a light. The first attribute is common to both the first and second lights. The second attribute, or “relationship attribute”, is that the entity is next to the television. As such, in this example the given entity is a light and the reference entity is the television. Only the first light has the second attribute. The second attribute can thus be used to disambiguate between the first and second lights.
As such, the robot can determine which of the two lights the command relates to, can determine the current operating state of the applicable light, and can control the applicable light by adjusting the operating state of the applicable light.
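Putting the pieces of this scenario together, and again only as a sketch building on the earlier fragments, the robot's handling of the command might look like this (the device registry and the in-place state change stand in for whatever control channel is actually used):

    device_state = {"light_1": "off", "light_2": "off"}   # hypothetical registry

    def handle_turn_on(kg, common_attr, relationship_attr):
        target = resolve_entity(kg, common_attr, relationship_attr)
        if target is None:
            return "could not disambiguate the command"
        if device_state.get(target) == "on":
            return target + " is already on"
        device_state[target] = "on"   # stands in for transmitting a control signal
        return "turned on " + target

    handle_turn_on(kg, "light", "next_to:tv")   # -> "turned on light_1"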
In a second example scenario, an environment comprises a robot, two lights and a user reading a book. The user issues a command to turn on a reading light. The robot can determine that the user is reading the book and/or that the user is located in a specific position associated with reading a book. In this case, the first attribute is that the entity is a light. The second, distinguishing, attribute is that the entity is a reading light. The label “reading” may be used by the robot to analyse a relationship, e.g. a proximity, between each of the two lights and the user reading the book. For example, the robot can identify the light of the two lights that provides the best operating condition for reading in the specific position in which the user is located. The robot can control one or more parameters of the identified light. Examples of such parameters include, but are not limited to, brightness and colour. The robot may also control the other light of the two lights. For example, the robot may control the operating state, brightness and/or colour of the other light of the two lights. The robot may control the other light of the two lights based on the location of the other light in the environment, or otherwise.
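One possible, purely illustrative scoring rule for choosing the “reading” light, assuming the robot has estimated the reader's position and each light's position and maximum brightness:

    import math

    def best_reading_light(reader_pos, lights):
        # Prefer lights that are close to the reader and capable of high brightness.
        def score(light):
            lx, ly = light["pos"]
            distance = math.hypot(lx - reader_pos[0], ly - reader_pos[1])
            return light["max_brightness"] / (1.0 + distance ** 2)
        return max(lights, key=score)

    lights = [
        {"id": "light_1", "pos": (0.5, 2.0), "max_brightness": 800},
        {"id": "light_2", "pos": (4.0, 0.5), "max_brightness": 400},
    ]
    best_reading_light((0.0, 1.5), lights)   # -> the "light_1" entry in this layout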
In a third example scenario, an environment comprises a robot, and can comprise multiple people. The multiple people may be in the environment at the same time or at different times. The user issues a command to the robot for the robot to issue an audio notification to their partner when the robot next detects the partner in the environment. In this example, the reference entity is the user and the given entity is the partner. In this example, the relationship between the given entity and the reference entity is an interaction-based relationship. In this example, the first, common attribute is that the entity is a person, and the second, disambiguating attribute is that the person is the user’s partner. In this example, the robot performs identity recognition and extraction of interpersonal relationships between people. The robot may observe interactions between people, for example between the user and one or more other people, and may use such observed interactions to determine which person is most likely to be the user’s partner. In this example, the robot learns information from people in the environment and builds a dependency graph representing dependencies between people as entities of interest. A dependency graph is an example of data representing knowledge of the environment.
In this example scenario, there is a delay between the robot receiving the command and the robot performing the action. The robot may receive the command at a first time, and may perform the action in response to a trigger event. In this example, the trigger event is the detection of the user’s partner in the environment.
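A minimal sketch of how such deferred execution could be held pending a trigger event (the trigger name and callback mechanism are assumptions for illustration):

    # Actions deferred until a trigger event occurs, keyed by the trigger name.
    pending = {}

    def defer(trigger, action):
        pending.setdefault(trigger, []).append(action)

    def on_event(trigger):
        # Called by the robot's perception pipeline, e.g. when identity recognition
        # detects a known person entering the environment.
        for action in pending.pop(trigger, []):
            action()

    defer("partner_detected", lambda: print("Playing the audio notification"))
    on_event("partner_detected")   # at the later, second point in time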
As such, examples described herein provide scene understanding, for example audio-visual scene understanding, for contextual inference of user intention.
In a calibrating state, a robot may build an audio-visual representation of the state of connected devices and objects in the scene. For example, the representation may indicate which of the devices and objects are controllable by the robot, the relationships between active and passive objects in the scene, and how the devices and objects relate to user habits and preferences. In principle, a robot may learn that there are lights A, B and C in a room, that only light B is in proximity to a television, and that light B can be controlled by the robot. The robot is thus able to perform an action that involves the entities ‘TV’ and ‘light’ conditioned on their mutual spatial relationship. Likewise, similar local scene knowledge graphs can provide for different room or object configurations, and hence the robot is able to perform an inference of user intent and respond accordingly.
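The outcome of such a calibrating state could be captured as simply as the following sketch, in which a controllability flag and proximity links are stored for each entity (the structure and names are illustrative):

    # Possible outcome of a calibration pass over one room: which entities exist,
    # whether the robot can control them, and which other entities they are near.
    scene = {
        "light_A": {"controllable": False, "near": []},
        "light_B": {"controllable": True,  "near": ["tv"]},
        "light_C": {"controllable": False, "near": []},
    }

    def controllable_entities_near(scene, reference):
        return [name for name, info in scene.items()
                if info["controllable"] and reference in info["near"]]

    controllable_entities_near(scene, "tv")   # -> ["light_B"]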
Various measures (for example robotic systems, methods, computer programs and computer-readable media) are provided in which a robotic system obtains environment data representing an environment at one or more points in time. First and second entities are represented in the received environment data. The robotic system generates data representing knowledge of the environment based on the received environment data. The robotic system receives a command to perform an action in relation to a given entity. The command identifies a relationship attribute of the given entity. The relationship attribute indicates a relationship between the given entity and a reference entity. The robotic system determines, using the generated data and the relationship attribute, that the first entity has the relationship attribute and/or that the second entity does not have the relationship attribute. The robotic system performs the action in relation to the first entity based on the determining. As such, the robotic system uses the data representing knowledge of the environment to determine which of multiple entities is the given entity referred to in the command, and is able to react accordingly. Therefore the reliability and accuracy with which commands are processed may be improved.
In some examples, the determining that the first entity has the relationship attribute and/or that the second entity does not have the relationship attribute is performed autonomously. As such, an amount of user burden may be reduced. In some examples, the generating of the data representing knowledge of the environment involves machine learning. Machine learning may facilitate a more accurate mapping of relationships between entities in the environment compared to a case in which machine learning is not used. An accurate mapping of relationships between entities allows the reliability and accuracy with which commands are processed to be improved.
In some examples, performing the action comprises transmitting a signal for the first entity. The first entity may be external to the robotic system. Transmitting a signal for the first entity allows the first entity to be influenced by the robotic system based on the command. For example, the first entity may be controlled either directly or indirectly via the transmitted signal.
In some examples, the first entity is controllable by the robotic system and the signal is a control signal to control operation of the first entity. The first entity may therefore be controlled by the robotic system on the basis of the processed command. Controlling the first entity may comprise changing an operating state of the first entity. The robotic system may therefore determine which entity of multiple possible entities is to be controlled in accordance with the command, and control the determined entity. This may be particularly useful where the first entity is not readily controllable by the user when the command is issued. For example, the user may be remote from the first entity. The robotic system can act as a proxy to control the first entity on the user’s behalf. As such, the number of different entities the user is required to interact with may be reduced.
In some examples, the signal is arranged to provide the first entity with a notification. As such, the robotic system is able to identify an entity, for example a person, to which a notification is to be provided, using the relationship information stored in the data representing knowledge of the environment. The delivery of the notification is therefore provided more reliably compared to a case in which the data representing knowledge of the environment is not used. Further, the notification may be delivered even when the first entity is not present when the command is issued, thereby increasing the functionality of the robotic system.
In some examples, the generating of the data representing knowledge of the environment comprises determining an operating state of the first entity and/or the second entity and/or the reference entity, and wherein the data representing knowledge of the environment represents the determined operating state or states. As such, a detailed picture of the entities in the environment at a given point in time is obtained, allowing an accurate determination of the entity referred to in the command. In some examples, the generating of the data representing knowledge of the environment comprises determining whether or not the first entity and/or the second entity is controllable by the robotic system. As such, a detailed picture of the entities in the environment at a given point in time is obtained, allowing an accurate determination of the entity referred to in the command. Further, resources may be used more efficiently, since control signals may not be sent to a given entity unless it is determined that the given entity is in fact controllable by the robotic system.
In some examples, the environment data comprises representations of the environment at multiple points in time and the generating of the data representing knowledge of the environment comprises analysing the representations of the environment at the multiple points in time. As such, the robotic system may accurately and reliably disambiguate commands relating to dynamic environments, in which one or more entities and/or relationships between entities change over time.
In some examples, the environment data comprises a first representation of the environment at a first point in time and a second representation of the environment at a second point in time, later than the first point in time. The generating of the data representing knowledge of the environment may be based on the first representation of the environment at the first point in time. In some examples, the data representing knowledge of the environment is updated based on the second representation of the environment at the second point in time. As such, the data representing knowledge of the environment may be updated over time to take into account developments or changes in the environment and/or in the relationships between entities in the environment, thereby improving an accuracy and/or reliability with which commands are processed by the robotic system.
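One simple way the knowledge data could be updated as later representations arrive is sketched below (the timestamps and dictionary layout are assumptions):

    def update_knowledge(knowledge, observation, timestamp):
        # Merge a newer representation of the environment into the knowledge data,
        # keeping the most recently observed attributes for each entity.
        for entity, attrs in observation.items():
            knowledge[entity] = {"attrs": set(attrs), "last_seen": timestamp}
        return knowledge

    knowledge = {}
    update_knowledge(knowledge, {"light_1": {"light", "next_to:tv"}}, timestamp=0)
    update_knowledge(knowledge, {"light_1": {"light"}}, timestamp=60)  # TV has been moved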
In some examples, the generating of the data representing knowledge of the environment comprises analysing the received environment data using static scene analysis. Static scene analysis may use fewer hardware resources than would otherwise be the case. Static scene analysis may reduce a need for information to be derived “on the fly” by the robotic system, thereby reducing an amount of processing, compared to a case in which static scene analysis is not used.
In some examples, the generating of the data representing knowledge of the environment comprises analysing the received environment data using dynamic scene analysis. Using dynamic scene analysis may enable more information relating to the environment to be extracted and used for accurate processing of commands compared to some other cases.
In some examples, the robotic system discards the data representing knowledge of the environment in response to a trigger. Discarding the data representing knowledge of the environment may allow subsequent data representing knowledge of the environment to be generated more quickly, thereby reducing latency and improving a user experience. The trigger may comprise an expiry of a predetermined time period. In some examples, the trigger comprises a determination that a position of the robotic system has changed. As such, redundant knowledge data relating to environments in which the robotic system is no longer located is not stored, thereby reducing the memory requirements of the robotic system.
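A hedged sketch covering both triggers mentioned here, the expiry of a predetermined time period and a change in the position of the robotic system (the thresholds are arbitrary illustrative values):

    import math
    import time

    def should_discard(created_at, last_position, current_position,
                       max_age_seconds=3600.0, movement_threshold=0.5):
        # Discard if the knowledge data is older than max_age_seconds, or if the
        # robotic system has moved further than movement_threshold (metres).
        expired = (time.time() - created_at) > max_age_seconds
        moved = math.hypot(current_position[0] - last_position[0],
                           current_position[1] - last_position[1]) > movement_threshold
        return expired or moved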
In some examples, the obtaining of the environment data comprises causing movement of at least part of the robotic system to capture representations of different parts of the environment. As such, a more complete picture of the environment may be obtained and used to accurately process commands compared to a case in which representations of different parts of the environment are not captured. In examples where the representations are visual representations captured by one or more cameras, obtaining such representations by moving at least part of the robotic system enables a single camera, comprised in the robotic system or otherwise, to be used. If a part of the robotic system were not to move, multiple cameras pointed in multiple different directions may be required in order to capture representations of different parts of the environment. Therefore a complexity and cost of the robotic system may be reduced compared to a case in which part of the robotic system does not move to capture representations of different parts of the environment.
In some examples, the first entity is not in the environment when the command is received. In such examples, the performing the action is in response to detecting the presence of the first entity in the environment. As such, the given entity referred to in the command may be identified even when the given entity is not present in the environment, thereby facilitating flexible and accurate processing of commands. Further, resources may be used more efficiently by waiting until the first entity is in the environment to perform the action. In some examples, the reference entity is the second entity. As such, the first entity may be identified as the given entity referred to in the command based on a relationship between the first entity and the second entity, thereby facilitating accurate processing of the command. Where the reference entity is the second entity, there may be fewer entities for the robotic system to consider, e.g. two entities, compared to a case in which the reference entity is separate from the second entity, where there may for example be three entities for the robotic system to consider. Reducing a number of entities to analyse may reduce computational complexity and increase processing efficiency. There may be a trade-off, however, whereby reducing computational complexity means being able to handle a more limited set of commands.
In some examples, a third entity is represented in the received environment data, and the reference entity is the third entity. As such, the first entity may be identified as the given entity referred to in the command based on an analysis of relationships of the first and second entities with a third, separate entity. Where the reference entity is a third entity, a scope of recognisable and processable commands may be increased compared to a case in which the reference entity is the second entity. There may be a trade-off, however, whereby increasing the scope of recognisable commands involves a higher computational complexity.
In some examples, said generating comprises generating part of the data representing knowledge of the environment before the command is received. As such, a latency in processing the command may be reduced compared to a case in which none of the data representing knowledge of the environment is generated before the command is received.
In some examples, said generating comprises generating part of the data representing knowledge of the environment in response to receiving the command. As such, the data representing knowledge of the environment can be kept up-to-date, enabling accurate processing of commands to be maintained, particularly when the robotic system is located in a dynamic environment in which entities and/or relationships between entities may change over time.
In some examples, the relationship comprises a location-based relationship. As such, the robotic system can use contextual information relating to the relative locations of entities with respect to one another to accurately process commands. Location data may be self-contained and readily available in the environment data. Data from other sources is therefore not needed. Further, there may be a relatively high likelihood of user commands relating to location-based relationships. The robotic system may therefore react to such commands accurately and reliably.
In some examples, the robotic system uses the received environment data to determine a location of the first entity relative to the reference entity. As such, the robotic system can autonomously determine locational or proximity-based relationships between entities and use such relationship information to accurately process commands. Determining the location of the first entity relative to the reference entity allows the robotic system to use up-to-date and accurate location information, for example in cases where an initial location of an entity may have changed over time.
In some examples, the relationship comprises an interaction-based relationship. As such, the robotic system may use observed interactions between entities to accurately interpret and process commands. Interaction data may be self-contained and readily available in the environment data. Data from other sources is therefore not needed. Further, there may be a relatively high likelihood of user commands relating to interaction-based relationships. The robotic system may therefore react to such commands accurately and reliably.
In some examples, the first entity and/or the second entity is a person. As such, the robotic system is able to use information in the data representing knowledge of the environment to distinguish between multiple people to identify a specific person referred to in a command.
In some examples, the reference entity is a person. As such, relationships between a person and multiple other entities may be analysed to accurately process commands. The person may be a user of the robotic system, for example.
In some examples, the reference entity is the robotic system. As such, relationships between the robotic system and multiple other entities may be analysed to accurately process commands.
In some examples, the environment data comprises visual environment data. As such, environment information relating to locational or proximal relationships between entities in the environment may be obtained and used to accurately process commands.
In some examples, the environment data comprises audio environment data. As such, environment information relating to interactions between entities in the environment, for example between people, may be obtained and used to accurately process commands.
In some examples, the command comprises a voice command. As such, a user may interact with the robotic system using a voice command that uses natural language, which may be accurately processed by the robotic system, thereby allowing a more natural and meaningful interaction with the user.
In some examples, the robotic system processes the voice command using natural language processing. As such, the voice command may be interpreted accurately without the user having to modify their natural language, thereby reducing a burden on the user and facilitating meaningful interactions.
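By way of illustration only, a transcribed voice command could be reduced to an action and attributes with even a simple pattern-based front end such as the following (a real system would use a fuller natural language processing pipeline; the patterns are assumptions):

    import re

    def parse_command(text):
        # Reduce a transcribed command such as "turn on the light next to the
        # television" to an action, an entity type and a relationship attribute.
        text = text.lower()
        action = "turn_on" if "turn on" in text else "turn_off" if "turn off" in text else None
        match = re.search(r"the (\w+) next to the (\w+)", text)
        if match:
            entity_type, reference = match.groups()
            return action, entity_type, "next_to:" + reference
        return action, None, None

    parse_command("Turn on the light next to the television")
    # -> ("turn_on", "light", "next_to:television")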
In some examples, the command comprises a visual command. As such, visual commands such as gestures may be accurately processed in a similar manner to voice commands or other types of command. Further, relatively ‘natural’ gestures, such as pointing, may be interpreted accurately by the robotic system without the user having to modify their natural behaviour.
Various measures (for example robotic systems, methods, computer programs and computer-readable media) are provided in which a robotic system receives environment data representing an environment at one or more points in time. The received environment data is analysed to identify first and second entities. The robotic system receives a command to perform an action in relation to a given entity, the command identifying first and second attributes of the given entity. The robotic system determines, using the received environment data, that the first and second entities both have the first attribute, and that only the first entity of the first and second entities has the second attribute. The robotic system performs the action in relation to the first entity based on the determining. Performing the action comprises transmitting a control signal to control operation of the first entity based on the received command. As such, the robotic system is able to act as a proxy for the first entity, enabling the first entity to be controlled in a new way, namely via the robotic system. Such an interaction technique may place a reduced burden on the user, and may be more reliable, compared to the user interacting with the first entity directly. Further, multiple controllable entities may be controlled via a single point of contact for the user, namely the robotic system.
Various measures (for example robotic systems, methods, computer programs and computer-readable media) are provided in which a robotic system analyses received data to identify first and second entities. The robotic system determines one or both of a current operating state of the first entity and/or the second entity, and a controllability of the first entity and/or the second entity by the robotic system. The robotic system receives a command to perform an action in relation to a given entity, the command identifying an attribute common to both the first and second entities. The robotic system determines, using the analysis, that the command can only be performed in relation to the first entity of the first and second entities. The robotic system performs the action in relation to the first entity based on the determining. As such, a complete picture of the environment and the ability to interact with entities in the environment may be obtained, allowing the command to be processed with a greater accuracy and reliability than a case in which such a complete picture is not obtained.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged.
In examples described above, knowledge data representing knowledge of the environment is generated by the robotic system. In other examples, the knowledge data is received from one or more other entities. For example, the knowledge data may be stored in a network and downloaded therefrom. In an example, the knowledge data is received from a further robotic system. The knowledge data may have been generated by the further robotic system.
In examples described above, the robotic system is locatable in the environment. In other examples, a part of the robotic system is locatable in the environment and a further part of the robotic system is locatable outside of the environment. For example, the robotic system may comprise network-based computing components. In some examples, the robotic system is useable with network-based components. An example of a network-based component is a server. The knowledge data generated by the robotic system may be stored on such network- based components and/or may be retrieved from such network-based components.
In examples described above, the robotic system automatically learns information about the environment in which the robotic system is located, for example independently of user supervision.
In other examples, the user can pre-programme the robotic system with names for the entities in the environment manually. Such naming may also be referred to as “labelling” or “annotating”. Such an approach involves user supervision. The robotic system may provide a naming interface via which the user can name the entities. The user may, for example, be able to use a smartphone, tablet device, laptop or the like to enter the entity names. The user could manually assign a memorable name to a light, for example “desk lamp” rather than “light number 3”. However, such an approach involves user attention to perform the pre-programming. Such an approach is therefore somewhat invasive on the user. Such an approach may also be error-prone. Such an approach may also not be effective where the environment is reconfigured, for example if a light named “desk lamp” is moved away from a desk.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims

1. A method, performed by a robotic system locatable in an environment, of processing a command, the method comprising:
obtaining environment data representing the environment at one or more points in time, wherein first and second entities are represented in the received environment data;
generating data representing knowledge of the environment based on the received environment data;
receiving a command to perform an action in relation to a given entity, the command identifying a relationship attribute of the given entity, the relationship attribute indicating a relationship between the given entity and a reference entity;
determining, using the generated data and the relationship attribute, that the first entity has the relationship attribute and/or that the second entity does not have the relationship attribute; and
performing the action in relation to the first entity based on the determining.
2. A method according to claim 1, wherein the determining is performed autonomously.
3. A method according to claim 1 or claim 2, wherein the generating of the data representing knowledge of the environment involves machine learning.
4. A method according to any of claims 1 to 3, wherein performing the action comprises transmitting a signal for the first entity.
5. A method according to claim 4, wherein the first entity is controllable by the robotic system and wherein the signal is a control signal to control operation of the first entity.
6. A method according to claim 4 or claim 5, wherein the signal is arranged to provide the first entity with a notification.
7. A method according to any of claims 1 to 6, wherein the generating of the data representing knowledge of the environment comprises determining an operating state of the first entity and/or the second entity and/or the reference entity, and wherein the data representing knowledge of the environment represents the determined operating state or states.
8. A method according to any of claims 1 to 7, wherein the generating of the data representing knowledge of the environment comprises determining whether or not the first entity and/or the second entity is controllable by the robotic system.
9. A method according to any of claims 1 to 8, wherein the environment data comprises representations of the environment at multiple points in time and the generating of the data representing knowledge of the environment comprises analysing the representations of the environment at the multiple points in time.
10. A method according to any of claims 1 to 9,
wherein the environment data comprises a first representation of the environment at a first point in time and a second representation of the environment at a second point in time, later than the first point in time,
wherein the generating of the data representing knowledge of the environment is based on the first representation of the environment at the first point in time, and wherein the method comprises updating the data representing knowledge of the environment based on the second representation of the environment at the second point in time.
11. A method according to any of claims 1 to 10, wherein the generating of the data representing knowledge of the environment comprises analysing the received environment data using static scene analysis.
12. A method according to any of claims 1 to 10, wherein the generating of the data representing knowledge of the environment comprises analysing the received environment data using dynamic scene analysis.
13. A method according to any of claims 1 to 12, the method comprising discarding the data representing knowledge of the environment in response to a trigger.
14. A method according to claim 13, wherein the trigger comprises an expiry of a predetermined time period.
15. A method according to claim 13, wherein the trigger comprises a determination that a position of the robotic system has changed.
16. A method according to any of claims 1 to 15, wherein the obtaining of the environment data comprises causing movement of at least part of the robotic system to capture representations of different parts of the environment.
17. A method according to any of claims 1 to 16,
wherein the first entity is not in the environment when the command is received; and
wherein the performing the action is in response to detecting the presence of the first entity in the environment.
18. A method according to any of claims 1 to 17, wherein the reference entity is the second entity.
19. A method according to any of claims 1 to 18, wherein a third entity is represented in the received environment data and wherein the reference entity is the third entity.
20. A method according to any of claims 1 to 19, wherein said generating comprises generating part of the data representing knowledge of the environment before the command is received.
21. A method according to any of claims 1 to 20, wherein said generating comprises generating part of the data representing knowledge of the environment in response to receiving the command.
22. A method according to any of claims 1 to 21, wherein the relationship comprises a location-based relationship.
23. A method according to claim 22, the method comprising using the obtained environment data to determine a location of the first entity relative to the reference entity.
24. A method according to any of claims 1 to 23, wherein the relationship comprises an interaction-based relationship.
25. A method according to any of claims 1 to 24, wherein the first entity and/or the second entity is a person.
26. A method according to any of claims 1 to 25, wherein the reference entity is a person.
27. A method according to any of claims 1 to 25, wherein the reference entity is the robotic system.
28. A method according to any of claims 1 to 27, wherein the environment data comprises visual environment data.
29. A method according to any of claims 1 to 28, wherein the environment data comprises audio environment data.
30. A method according to any of claims 1 to 29, wherein the command comprises a voice command.
31. A method according to claim 30, the method comprising processing the voice command using natural language processing.
32. A method according to any of claims 1 to 31, wherein the command comprises a visual command.
33. A method of processing a command by a robotic system located in an environment, the method comprising:
receiving environment data representing the environment at one or more points in time;
analysing the received environment data to identify first and second entities;
receiving a command to perform an action in relation to a given entity, the command identifying first and second attributes of the given entity;
determining, using the received environment data, that:
the first and second entity both have the first attribute; and
only the first entity of the first and second entities has the second attribute; and
performing the action in relation to the first entity based on the determining, wherein performing the action comprises transmitting a control signal to control operation of the first entity based on the received command.
34. A method of processing a command by a robotic system, the method comprising:
analysing received data to identify first and second entities and determine one or both of:
a current operating state of the first entity and/or the second entity; and
a controllability of the first entity and/or the second entity by the robotic system;
receiving a command to perform an action in relation to a given entity, the command identifying an attribute common to both the first and second entities;
determining, using the analysis, that the command can only be performed in relation to the first entity of the first and second entities; and
performing the action in relation to the first entity based on the determining.
35. A robotic system configured to perform a method according to any of claims 1 to 34.
36. A computer program comprising instructions which, when executed, cause a robotic system to perform a method according to any of claims 1 to 34.
37. A computer-readable medium comprising a computer program according to claim 36.

