GB2572213A - Second user avatar method and system - Google Patents

Second user avatar method and system

Info

Publication number
GB2572213A
GB2572213A (application GB1804671.4A)
Authority
GB
United Kingdom
Prior art keywords
user
entertainment device
robot
control signals
robotic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1804671.4A
Other versions
GB201804671D0 (en)
Inventor
Eder Michael
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Interactive Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc filed Critical Sony Interactive Entertainment Inc
Priority to GB1804671.4A priority Critical patent/GB2572213A/en
Publication of GB201804671D0 publication Critical patent/GB201804671D0/en
Publication of GB2572213A publication Critical patent/GB2572213A/en
Withdrawn legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/003 Manipulators for entertainment
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 Output arrangements for video game devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1689 Teleoperation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/39 Robotics, robotics to robotics hand
    • G05B2219/39439 Joystick, handle, lever controls manipulator directly, manually by operator
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/40 Robotics, robotics mapping to robotics vision
    • G05B2219/40146 Telepresence, teletaction, sensor feedback from slave to operator
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/45 Nc applications
    • G05B2219/45007 Toy

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Toys (AREA)
  • Manipulator (AREA)

Abstract

A second user avatar system 600 comprises a first entertainment device 110A, operable to generate a virtual environment responsive to inputs from a first user and also to received data indicative of a state of a second user of a second entertainment device 110B within a corresponding virtual environment, and a robotic device 100 that receives control signals from the first entertainment device. The first entertainment device is operable to receive data relating to the actions of the second user and to transmit control signals to the robotic device in response to the received data. The robotic device may be controlled to mimic the movements and actions of the second user.

Description

SECOND USER AVATAR METHOD AND SYSTEM
The present invention relates to a second user avatar method and system.
Many videogames have a social aspect, where two remotely located players, who may for example be friends, either play cooperatively or adversarially with each other in a game having a shared environment generated for each player by their respective entertainment device, such as a Sony ® PlayStation 4 ®. In order to provide a sense of connection, typically these players are able to talk to each other via headsets or the like. Potentially, they may also be able to see each other via a video link, for example within a picture-in-picture window superposed on the game. However in this latter case, such a window may interfere with gameplay.
In either case, the scope for interaction with the remote friend is limited by these schemes.
The present invention aims to solve or mitigate this problem.
In a first aspect, a second-user avatar system is provided in accordance with claim 1.
In another aspect, a method of representing a second user is provided in accordance with claim 10.
Further respective aspects and features of the invention are defined in the appended claims.
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
- Figure 1 is a schematic diagram showing front and rear elevations of a robot, in accordance with embodiments of the present invention.
- Figure 2 is a schematic diagram showing front and rear elevations of points of articulation of a robot, in accordance with embodiments of the present invention.
- Figure 3 is a schematic diagram illustrating degrees of freedom at respective points of articulation of a robot, in accordance with embodiments of the present invention.
- Figure 4 is a schematic diagram of a control system for a robot, in accordance with embodiments of the present invention.
- Figure 5 is a schematic diagram of an interactive robot system in accordance with embodiments of the present invention.
- Figure 6 is a schematic diagram of a second user avatar system in communication with an entertainment device of a second user, in accordance with embodiments of the present invention.
- Figure 7 is a flow diagram of a method of representing a second user, in accordance with embodiments of the present invention.
A second user avatar method and system are disclosed. In the following description, a number of specific details are presented in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to a person skilled in the art that these specific details need not be employed to practice the present invention. Conversely, specific details known to the person skilled in the art are omitted for the purposes of clarity where appropriate.
In embodiments of the present invention, a physical avatar of a second user is provided to a first user. This physical avatar takes the form of a robotic device. In addition to optionally relaying the second user’s voice through the robot, as opposed to a headset, the robot may respond to physical and/or virtual actions of the second user related to it via the first user’s entertainment device, which in turn receives information indicative of these actions directly or indirectly from the second user’s entertainment device.
A robot platform 100 for implementing embodiments of the present invention may take the form of any suitable robotic device.
The robot platform may have any suitable physical features. Hence movement, where required, may be achieved by wheels, tracks, articulated limbs, internal mass displacement or any other suitable means. Manipulation, where required, may be achieved by one or more of a mechanical hand, pincer or any other hooking or gripping system, such as a suction or electromagnetic attachment mechanism or a hook or clip, and any further optional articulation such as one or more jointed arms. Vision, where required, may be achieved by optical camera and/or infra-red camera/detector, mounted on the robot and/or located within the environment navigated by the robot. Other situational awareness systems such as ultrasound echolocation, or detection of metal tracks and/or electrically charged tracks, and proximity systems such as whiskers coupled to sensors, or pressure pads, may also be considered. Control of the robot may be provided by running suitable software instructions on a processor of the robot and/or a processor of a remote computer communicating with the robot, for example via a wireless protocol.
Figure 1 illustrates front and rear views of an exemplary legged locomotive robot platform 100. As shown, the robot includes a body, head, right and left upper limbs, and right and left lower limbs for legged movement. A control unit 80 (not shown in Figure 1) within the body provides a control system for the robot.
Each of the right and left lower limbs includes a thigh, knee joint, second thigh (calf/shin), ankle and foot. The lower limb is coupled by a hip joint to the bottom of the trunk. Each of the right and left upper limbs includes an upper arm, elbow joint and forearm. The upper limb is coupled by a shoulder joint to each upper edge of the trunk. Meanwhile, the head is coupled by a neck joint near to the upper end centre of the trunk.
Figure 2 illustrates front and rear views of the robot, showing its points of articulation (other than the hands).
Figure 3 then illustrates the degrees of freedom available for each point of articulation.
Referring to these Figures, a neck joint for supporting the head 1 has 3 degrees of freedom: a neck-joint yaw-axis 2, a neck-joint pitch-axis 3, and a neck-joint roll-axis 4. Meanwhile each arm has 7 degrees of freedom: a shoulder-joint pitch-axis 8, a shoulder-joint roll-axis 9, an upper-arm yaw-axis 10, an elbow-joint pitch-axis 11, a forearm yaw-axis 12, a wrist-joint pitch-axis 13, a wrist-joint roll-axis 14, and a hand 15. Typically the hand 15 also has a multi-joint, multi-degrees-of-freedom structure including a plurality of fingers; however, these are omitted for simplicity of explanation. The trunk has 3 degrees of freedom: a trunk pitch-axis 5, a trunk roll-axis 6, and a trunk yaw-axis 7. Each leg constituting the lower limbs has 6 degrees of freedom: a hip-joint yaw-axis 16, a hip-joint pitch-axis 17, a hip-joint roll-axis 18, a knee-joint pitch-axis 19, an ankle-joint pitch-axis 20, an ankle-joint roll-axis 21, and a foot 22. In the exemplary robot platform, the cross point between the hip-joint pitch-axis 17 and the hip-joint roll-axis 18 defines the hip-joint location of the legged walking robot 100 according to the embodiment. Again, for simplicity it is assumed that the foot itself has no degrees of freedom, but of course this is non-limiting. As a result the exemplary robot 100 has 32 (= 3 + 7 × 2 + 3 + 6 × 2) degrees of freedom in total. It will be appreciated however that this is merely exemplary, and other robot platforms may have more or fewer degrees of freedom.
Each degree of freedom of the exemplary legged locomotive robot platform 100 is implemented by using an actuator. For example, a small AC servo actuator that is directly coupled to a gear and that houses a one-chip servo-system may be used, although any suitable actuator may be considered, such as a linear servo, electroactive polymer muscle, pneumatic, piezoelectric, or the like.
It will be appreciated that any desired action that the robot platform is capable of may be implemented by control signals issued by a control system to one or more of the actuators of the robot (or to simulated actuators in a simulation, as applicable), to adjust the pose of the robot within its available degrees of freedom.
Figure 4 schematically illustrates an exemplary control system for the robot platform 100.
A control unit 80 operates to co-ordinate the overall motion / actions of the robot. The control unit 80 has a main control unit 81 including main circuit components (not shown) such as a CPU (central processing unit) and a memory, and typically a periphery circuit 82 including an interface (not shown) for sending and receiving data and/or commands to and from a power supply circuit (not shown) and each component of the robot. The control unit may comprise a communication interface and communication device for receiving data and/or commands by remote control. The control unit can be located anywhere suitable within the robot.
As shown in Figure 4, the robot has logical units 30 (head), 40 (torso), and 50R/L and 60R/L each representing the corresponding one of four human limbs. The degrees-of-freedom of the robot 100 shown in Fig. 3 are implemented by the corresponding actuator within each unit. Hence the head unit 30 has a neck-joint yaw-axis actuator A2, a neck-joint pitch-axis actuator A3, and a neck-joint roll-axis actuator A4 disposed therein for representing the neck-joint yaw-axis 2, the neck-joint pitch-axis 3, and the neck-joint roll-axis 4, respectively. Meanwhile the trunk unit 40 has a trunk pitch-axis actuator A5, a trunk roll-axis actuator A6, and a trunk yaw-axis actuator A7 disposed therein for representing the trunk pitch-axis 5, the trunk roll-axis 6, and the trunk yaw-axis 7, respectively. Similarly the arm units 50R/L are broken down into upper-arm units 51R/L, elbow-joint units 52R/L, and forearm units 53R/L. Each of the arm units 50R/L has a shoulder-joint pitch-axis actuator A8, a shoulder-joint roll-axis actuator A9, an upper-arm yaw-axis actuator A10, an elbow-joint pitch-axis actuator A11, an elbow-joint roll-axis actuator A12, a wrist-joint pitch-axis actuator A13, and a wrist-joint roll-axis actuator A14 disposed therein for representing the shoulder-joint pitch-axis 8, the shoulder-joint roll-axis 9, the upper-arm yaw-axis 10, the elbow-joint pitch-axis 11, an elbow-joint roll-axis 12, the wrist-joint pitch-axis 13, and the wrist-joint roll-axis 14, respectively. Finally the leg units 60R/L are broken down into thigh units 61R/L, knee units 62R/L, and second-thigh units 63R/L. Each of the leg units 60R/L has a hip-joint yaw-axis actuator A16, a hip-joint pitch-axis actuator A17, a hip-joint roll-axis actuator A18, a knee-joint pitch-axis actuator A19, an ankle-joint pitch-axis actuator A20, and an ankle-joint roll-axis actuator A21 disposed therein for representing the hip-joint yaw-axis 16, the hip-joint pitch-axis 17, the hip-joint roll-axis 18, the knee-joint pitch-axis 19, the ankle-joint pitch-axis 20, and the ankle-joint roll-axis 21, respectively. Optionally the head unit 30, the trunk unit 40, the arm units 50, and the leg units 60 may have sub-controllers 35, 45, 55, and 65 for driving the corresponding actuators disposed therein.
Hence by issuing appropriate commands, the main controller (81) can control the driving of the joint actuators included in the robot 100 to implement the desired action. For example, the controller may implement a walking action by implementing successive phases, as follows:
(1) Single support phase (left leg) with the right leg off the walking surface;
(2) Double support phase with the right foot touching the walking surface;
(3) Single support phase (right leg) with the left leg off the walking surface; and
(4) Double support phase with the left foot touching the walking surface.
Each phase in turn comprises the control of a plurality of actuators, both within the relevant leg and potentially elsewhere in the robot, for example moving the opposing arm and/or attitude of the torso to maintain the centre of gravity of the robot over the supporting foot or feet.
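By way of illustration only (the patent does not prescribe any particular control software, and every name and numeric value below is invented for this sketch), a phase-based gait controller might issue successive sets of actuator targets along the following lines:

```python
from dataclasses import dataclass

@dataclass
class Actuator:
    """Hypothetical joint actuator identified by its axis label (e.g. 'A17')."""
    axis: str
    angle_deg: float = 0.0

    def drive_to(self, target_deg: float) -> None:
        # On a real robot this would command the servo; here we just record it.
        self.angle_deg = target_deg

# Invented target poses (axis label -> angle in degrees) for the four walking
# phases described in the text; real values would come from gait planning.
WALK_PHASES = [
    {"A17": 20.0, "A19": -35.0, "A20": 15.0},   # (1) single support, left leg
    {"A17": 10.0, "A19": -10.0, "A20": 5.0},    # (2) double support, right foot down
    {"A16": 0.0,  "A18": 20.0,  "A21": -15.0},  # (3) single support, right leg
    {"A16": 0.0,  "A18": 5.0,   "A21": 0.0},    # (4) double support, left foot down
]

def walk_one_cycle(actuators: dict[str, Actuator]) -> None:
    """Issue one full gait cycle as successive sets of actuator commands."""
    for phase in WALK_PHASES:
        for axis, target in phase.items():
            actuators[axis].drive_to(target)
        # A real controller would also wait for the grounding sensors 91/92 and
        # the attitude sensor 93 to confirm the phase before continuing.

if __name__ == "__main__":
    acts = {axis: Actuator(axis) for axis in ("A16", "A17", "A18", "A19", "A20", "A21")}
    walk_one_cycle(acts)
    print({a.axis: a.angle_deg for a in acts.values()})
```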
Optionally, to detect the manner and/or extent of a physical interaction with an object and/or the environment, physical sensors may be provided.
Hence in the exemplary robot, the feet 22 have grounding detection sensors 91 and 92 (e.g. a proximity sensor or microswitch) for detecting the grounding of the feet 22 mounted on legs 60R and 60L respectively, and the torso is provided with an attitude sensor 93 (e.g. an acceleration sensor and/or a gyro-sensor) for measuring the trunk attitude. Outputs of the grounding detection sensors 91 and 92 are used to determine whether each of the right and left legs is in a standing state or a swinging state during the walking action, whilst an output of the attitude sensor 93 is used to detect an inclination and an attitude of the trunk. Other sensors may also be provided, for example on a gripping component of the robot, to detect that an object is being held.
The robot may also be equipped with sensors to provide additional senses. Hence for example the robot may be equipped with one or more cameras, enabling the control unit (or a remote system to which sensor-based data is sent) to recognise a user of the robot, or a target object for retrieval. Similarly one or more microphones may be provided to enable voice control or interaction by a user. Any other suitable sensor may be provided, according to the robot’s intended purpose. For example, a security robot intended to patrol a property may include heat and smoke sensors, and GPS.
Hence more generally, a robot platform may comprise any suitable form factor and comprise those degrees of freedom necessary to perform an intended task or tasks, achieved by the use of corresponding actuators that respond to control signals from a local or remote controller that in turn operates under suitable software instruction to generate a series of control signals corresponding to a performance of the intended task(s).
In order to provide software instruction to generate such control signals, a robot software development system may be provided for developing control sequences for desired actions, and/or for developing decision making logic to enable the robot control system to respond to user commands and/or environmental features.
As part of this development system, a virtual robot (i.e. a simulation) may be used in order to simplify the process of implementing test software (for example by avoiding the need to embed test software within robot hardware that may not have simple user-serviceable parts, or to simulate an environment or action where a mistake in the software could damage a real robot). The virtual robot may be characterised by the dimensions and degrees of freedom of the robot, etc., and an interpreter or API operable to respond to control signals to adjust the state of the virtual robot accordingly.
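As a sketch only (none of these class or method names appear in the patent), such a virtual robot could be modelled as a set of named joints with limits, driven by the same kind of control signals a real robot would receive:

```python
class VirtualRobot:
    """Minimal simulation: named joints with limits, updated by control signals."""

    def __init__(self, joint_limits_deg: dict[str, tuple[float, float]]):
        self.limits = joint_limits_deg
        self.pose = {name: 0.0 for name in joint_limits_deg}

    def apply_control_signal(self, joint: str, target_deg: float) -> None:
        """Respond to a control signal by updating the simulated joint state."""
        lo, hi = self.limits[joint]
        self.pose[joint] = max(lo, min(hi, target_deg))

# Example: a toy crane with one articulated arm (dimensions and limits invented).
crane = VirtualRobot({"base_yaw": (-180, 180), "arm_pitch": (0, 90), "grip": (0, 45)})
crane.apply_control_signal("arm_pitch", 120)   # clamped to the 90 degree limit
print(crane.pose)
```

A simulation of this kind lets test software be exercised without any risk of damaging real hardware, as noted above.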
Control software and/or scripts to use with such software may then be developed using, and to use, any suitable techniques, including rule based / procedural methods, and/or machine learning / neural network based methods.
Referring to Figure 5, in an exemplary usage scenario a (toy) real robot crane 260 and a corresponding simulation (virtual robot crane 262) interact for entertainment purposes, for example mirroring each other’s actions or behaving in a complementary manner, and/or using sensor data from the real or virtual robot to control actions of the other. The virtual robot may be graphically embellished compared to the real robot, for example having a face, or resembling an object or creature only approximated by the real robot.
In this example, the robot platform 260 has motorised wheels 266a-d and one articulated arm with actuators 264a-c. However it will be appreciated that any suitable form factor may be chosen, such as for example the humanoid robot 100 of Figure 1, or a dog-shaped robot (not shown) or a spheroidal robot (not shown).
In Figure 5, control of both the virtual and real robots is performed by a general purpose computer (110), such as the Sony® PlayStation 4®, operating under suitable software instructions. A user can interact with the PlayStation, and hence optionally indirectly interact with one or both of the real and virtual robots, using any suitable interface, such as a videogame controller 143. The PlayStation can detect the state of the real robot by receiving telemetry and other status data from the robot, and/or from analysis of an image of the real robot captured by a video camera 141. Alternatively or in addition the PlayStation can assume the state of the real robot based on expected outcomes of the commands sent to it. Hence for example, the PlayStation may analyse captured images of the real robot in expected final poses to determine its position and orientation, but assume the state of the robot during intermediate states such as transitions between poses.
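The paragraph above names three sources for the robot's state: telemetry, image analysis of captured video, and assumed outcomes of issued commands. A minimal way of combining them might look like the following sketch, in which every function and field name is an assumption made for illustration:

```python
from typing import Callable, Optional

def estimate_robot_state(
    telemetry: Optional[dict],
    camera_frame: Optional[bytes],
    last_command: dict,
    analyse_image: Callable[[bytes], dict],
) -> dict:
    """Prefer telemetry; fall back to image analysis once the robot has settled
    into an expected final pose; otherwise assume the outcome of the last command."""
    if telemetry is not None:
        return telemetry
    if camera_frame is not None and last_command.get("settled", False):
        return analyse_image(camera_frame)
    return last_command["expected_state"]

# Usage with stand-in data: no telemetry, robot still moving, so the expected
# state from the issued command is assumed (dead reckoning during a transition).
state = estimate_robot_state(
    telemetry=None,
    camera_frame=None,
    last_command={"settled": False, "expected_state": {"arm_pitch": 30.0}},
    analyse_image=lambda frame: {"arm_pitch": 0.0},
)
print(state)
```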
In the example scenario, the user provides inputs to control the real robot via the PlayStation (for example indicating an amount and direction of travel with one joystick, and a vertical and horizontal position of the arm end with another joystick). These inputs are interpreted by the PlayStation into control signals for the robot. Meanwhile the virtual simulation of the robot may also be controlled in a corresponding or complementary manner using the simulation technique described above, according to the mode of play.
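For instance, the joystick interpretation described above might reduce to a mapping like the sketch below; the axis names and scale factors are invented for illustration and are not taken from the patent:

```python
def controller_to_robot_commands(left_stick: tuple[float, float],
                                 right_stick: tuple[float, float]) -> dict:
    """Map normalised joystick axes (-1..1) to robot control values:
    left stick -> amount and direction of travel, right stick -> arm end position."""
    lx, ly = left_stick
    rx, ry = right_stick
    return {
        "drive_speed": ly * 0.5,          # metres per second (invented scaling)
        "drive_turn_deg": lx * 45.0,      # heading change
        "arm_horizontal_deg": rx * 90.0,
        "arm_vertical_deg": ry * 60.0,
    }

print(controller_to_robot_commands((0.0, 1.0), (0.5, -0.2)))
```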
Alternatively or in addition, the user may directly control the real robot via its own interface or by direct manipulation, and the state of the robot may be detected by the PlayStation (e.g. via image analysis and/or telemetry data from the robot as described previously) and used to set a corresponding state of the virtual robot.
It will be appreciated that the virtual robot may not be displayed at all, but may merely act as a proxy for the real robot within a virtual environment. Hence for example the image of the real robot may be extracted from a captured video image and embedded within a generated virtual environment in an augmented reality application, and then actions of the real robot can be made to appear to have an effect in the virtual environment by virtue of those interactions occurring with a corresponding virtual robot in the environment mirroring the state of the real robot.
Alternatively, a virtual robot may not be used at all, and the PlayStation may simply provide control and/or state analysis for the real robot. Hence for example the PlayStation may monitor the robot via the camera, and cause it to pick up a ball or other target object placed within the camera’s field of view by the user.
Hence more generally, a robot platform may interact with a general purpose computer such as the Sony ® PlayStation 4 ® to obtain a series of control signals relating to setting a state of the robot, for the purposes of control by a user and/or control by the PlayStation to achieve a predetermined task or goal. Optionally the state, task or goal may be at least in part defined within or in response to a virtual environment, and may make use of a simulation of the robot.
In embodiments of the present invention, a robot platform such as the exemplary platforms 100 or 260 described previously herein may be used for the purposes of explanation, whilst it will be appreciated that any robot platform suited to the techniques and actions claimed herein below may be envisaged as being within the scope of the invention.
Referring now also to Figure 6, a second-user avatar system 600 comprises a first entertainment device 110A such as a Sony ® PlayStation 4 ®, operable to generate a representation of a virtual environment, such as a videogame.
This representation is generated responsive to inputs from a first user (e.g. the user of the first entertainment device), for example via a videogame controller 143A. The inputs may typically affect the position of an avatar or viewpoint within the virtual environment, and a direction of view at that position. The inputs may also affect the state of the virtual environment in other ways, such as for example changing the pose or equipment of a displayed avatar, or causing an interaction with a feature of the virtual environment.
The representation is also generated responsive to received data indicative of a state of a corresponding representation of the virtual environment generated by a second entertainment device 110B, again for example a Sony ® PlayStation 4 ®. The data is typically received via a network 610 such as the internet, and may be received directly in a peer-to-peer fashion, or alternatively via a server administering the virtual environment (not shown).
Typically this data will indicate at least the state of an avatar of the second user, who uses the second entertainment device, and optionally other data such as an aiming line for a weapon of the second user, or indicators of features within the virtual environment that the second user is interacting with. The state of the avatar of the second user in turn may be responsive to inputs by the second user, and optionally also to interactions with or effects of the virtual environment. Hence for example movements of the avatar may be responsive to inputs of the second user, whilst a health value of the avatar may be responsive to interactions with or effects of the virtual environment.
In this way, the first and second entertainment systems can interact to support a co-operative or adversarial multiplayer game between the first and second users (and optionally other remote users as well).
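The patent does not define a wire format for this data; purely as an illustration, a per-update message carrying the second user's avatar state (with every field name an assumption) might look like:

```python
import json

second_user_update = {
    "avatar": {
        "position": [12.4, 0.0, -3.1],     # location in the shared environment
        "pose": "crouching",
        "equipment": ["rifle"],
        "health": 72,
    },
    "aim_line": {"from": [12.4, 1.5, -3.1], "to": [20.0, 1.2, -8.0]},
    "interacting_with": ["door_07"],
}

# Serialised for transmission over the network, either peer-to-peer or via a
# server administering the virtual environment.
payload = json.dumps(second_user_update)
print(len(payload), "bytes")
```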
The second user avatar system also comprises a robotic device (e.g. as a non-limiting example the robot platform 100 of Figure 1) comprising one or more actuators, and a receiver (not shown) operable to receive control signals from the first entertainment device.
The first entertainment device 110A is then operable to receive data indicative of actions of the second user, as will be described later herein, and to transmit control signals to the robotic device responsive to the received data indicative of actions of the second user.
In embodiments of the present invention, the received data comprises indicators of one or more physical actions of the second user.
These may be captured for example using a video camera 141 operably coupled to the second entertainment device. The second entertainment device may then perform image analysis to detect one or more aspects of the second user’s physical actions / states, such as one or more of poses, gestures, facial expressions, physical (re)positioning and the like, and transmit data representative of these one or more aspects. Such data may include one or more of parameters for a skeletal model of the second user, a classification of facial expression, a spatial position with respect to a predetermined origin, and the like. A sequence of such parameters may serve to describe dynamic changes corresponding to gestures by the second user, and/or gesture recognition may be used to classify one or more gestures and send corresponding data.
In principle, as an alternative the second entertainment device could instead transmit video data, and the image analysis could be performed by a recipient, being either a central server if used, or the first entertainment device; however this would be likely to require more data bandwidth than the transmission of indicative data such as parametric descriptors of pose and/or position, and/or classifications of gesture and/or facial expression.
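To illustrate why such indicative data is far cheaper to transmit than video, a parametric update of the kind described above can amount to only a few hundred bytes; the field names and values below are invented for this sketch:

```python
import json

physical_action_update = {
    "skeleton": {                       # joint angles (degrees) for a simple model
        "left_shoulder": [10.0, -5.0],
        "right_shoulder": [85.0, 0.0],  # e.g. arm raised in a wave
        "left_elbow": 15.0,
        "right_elbow": 70.0,
        "head_yaw": -12.0,
    },
    "facial_expression": "smiling",     # classified on the second user's device
    "position_m": [0.3, 0.0],           # offset from a predetermined origin
    "gesture": "wave",                  # optional gesture classification
}

# A few hundred bytes per update, versus orders of magnitude more for video.
print(len(json.dumps(physical_action_update)), "bytes per update")
```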
Accordingly, in response to the data indicative of physical actions of the second user, the first entertainment device is operable to transmit control signals to the robotic device that cause the robot to mimic at least one aspect of a physical action of the second user.
For example, the first entertainment device may map some or all of a received skeletal model to a target pose for the robot, and issue commands to the robot to update its pose accordingly. For a stream of received data, this may result in incremental changes to the robot's pose that closely track the second user's position. For periodic pose or gesture information, this may result in the use of commands to transition from a current pose to a new pose, or to implement the identified gesture.
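A sketch of that mapping step might look like the following, where the correspondence between skeletal joints and actuator labels, and the clamping ranges, are assumptions chosen for illustration rather than anything specified in the patent:

```python
# Invented correspondence between skeletal-model joints and robot actuators,
# with per-axis limits so the robot never exceeds its degrees of freedom.
JOINT_TO_ACTUATOR = {
    "right_shoulder_pitch": ("A8", (-90.0, 90.0)),
    "right_shoulder_roll": ("A9", (-30.0, 120.0)),
    "right_elbow_pitch": ("A11", (0.0, 130.0)),
    "head_yaw": ("A2", (-60.0, 60.0)),
}

def skeleton_to_control_signals(skeleton_deg: dict[str, float]) -> dict[str, float]:
    """Turn received skeletal joint angles into clamped actuator targets."""
    signals = {}
    for joint, angle in skeleton_deg.items():
        if joint in JOINT_TO_ACTUATOR:
            actuator, (lo, hi) = JOINT_TO_ACTUATOR[joint]
            signals[actuator] = max(lo, min(hi, angle))
    return signals

print(skeleton_to_control_signals({"right_elbow_pitch": 150.0, "head_yaw": -10.0}))
# {'A11': 130.0, 'A2': -10.0} -> incremental updates like this track the user's pose
```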
Alternatively or in addition, in embodiments of the present invention the received data may comprise indicators of one or more virtual actions of the second user within the virtual environment.
As described in relation to the first user, actions by the second user that form input to the second entertainment device can serve to influence the state of the virtual environment, for example by moving an avatar or viewpoint of the second user in the game, or by causing their avatar to equip different items, adopt a specific pose, aim a gun, jump, start to run, or indeed fall over or interact with the environment in any way that it allows.
Hence in a similar manner to physical actions of the user, virtual actions of their avatar may be similarly interpreted to control the robotic device, with the first entertainment device being operable to transmit control signals to the robotic device that cause it to mimic at least one aspect of a virtual action of the second user.
Alternatively or in addition, in embodiments of the present invention the first entertainment device is operable to transmit control signals to the robotic device that cause the robotic device to perform a predetermined action in response to a detected state of the corresponding representation of the virtual environment generated by a second entertainment device.
It will be appreciated that the state may encompass other aspects of the second user’s avatar, viewpoint or situation within the virtual environment than just their pose or equipment. For example, in a game the second user’s health level may vary. If their in-game health is high, this may be reflected in commands for the robot to perform movements such as jogging on the spot, looking around and apparently taking an interest in its environment, and, if it has the ability to convey emotions (for example through a built in visual display), then to convey a positive emotion. By contrast if the second user’s in-game health is low, this may be reflected in commands for the robot to adopt a stooped pose, or bow its head, or move more slowly for example when replicating real or virtual actions of the second user.
Similarly, below a critical health level the robot could sit or lie down, or perform a predefined ‘death scene’, mimicking an overly-dramatic actor’s portrayal of death. It will be appreciated that any suitable responses to detected states of the corresponding representation of the virtual environment generated by a second entertainment device may be considered.
Hence for example the robot could perform a predefined celebration action if the second user scores a threshold number of points, or wins a race, or is awarded an achievement trophy.
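Such state-to-behaviour rules could be expressed very simply; the thresholds and action names in this sketch are arbitrary illustrations, not part of the patent:

```python
def robot_action_for_state(state: dict) -> str:
    """Pick a predetermined robot action from the second user's in-game state."""
    if state.get("achievement_awarded") or state.get("race_won"):
        return "celebration_routine"
    health = state.get("health", 100)
    if health <= 0:
        return "dramatic_death_scene"
    if health < 25:
        return "stooped_pose_slow_movement"
    if health > 75:
        return "jog_on_spot_and_look_around"
    return "idle"

print(robot_action_for_state({"health": 12}))       # stooped_pose_slow_movement
print(robot_action_for_state({"race_won": True}))   # celebration_routine
```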
It was noted previously herein that transmission of video of the second user may not be desirable due to the comparatively high bandwidth required.
However, optionally in embodiments of the present invention the first entertainment device can receive face data representative of the second user’s face from the second entertainment device. For example, this face data can be an isolated image of the second user’s face, or a rectilinear section of captured video centred upon and/or primarily containing the second user’s face.
Alternatively, this face data can be a parametric description of the user’s face, for example indicating relative positions of facial features, mouth morphology, gaze direction and so on, and/or data for a transposed representation of the user’s face or facial expressions onto another form, such as a so-called ‘animoji’. Alternatively, the face data could represent a classification of the second user’s face into one of a predetermined number of states such as surprise, boredom, or taunting.
Meanwhile, optionally the robotic device is operable to display a representation of a face in response to face control signals transmitted by the first entertainment device. Depending on how such a display is implemented, the face signals may comprise the video image of the second user’s face in colour, greyscale or black and white according to the display capabilities of the robot, or may comprise instructions to turn on or off predefined shaped elements in a display corresponding to certain facial features, such as a flat, circular or smiling mouth; flat or raised eyebrows; wide, normal or closed eyes; and so on. In this way the robot could convey simple emoji-style expressions such as surprise (circular mouth, raised eyebrows, wide eyes), boredom (flat mouth, closed eyes), or taunting (smiling mouth, one raised eyebrow, other eye closed), and the like.
Where the face control signals are not simply a relay of video data (optionally after any resolution and/or colour depth alteration needed for the robot device’s display), then as noted above they may be generated in response to the image data, parametric data, and/or classification data as appropriate. Hence for example face signals corresponding to the surprise expression may be transmitted in response to receiving data classifying the second user’s face as being surprised. Meanwhile animoji data may be used to generate an animoji graphic that is then output as video data to the robot.
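One possible way of turning a received expression classification into on/off settings for predefined display elements is sketched below; the element names echo the examples in the text, but the mapping itself and the neutral fallback are assumptions:

```python
# Predefined display elements assumed to exist on the robot's face display.
EXPRESSION_ELEMENTS = {
    "surprise": {"mouth": "circular", "eyebrows": "raised", "eyes": "wide"},
    "boredom":  {"mouth": "flat", "eyebrows": "flat", "eyes": "closed"},
    "taunting": {"mouth": "smiling", "eyebrows": "one_raised", "eyes": "one_closed"},
}

NEUTRAL = {"mouth": "flat", "eyebrows": "normal", "eyes": "normal"}

def face_control_signals(classification: str) -> dict:
    """Build face control signals for the robot from a classified expression."""
    return EXPRESSION_ELEMENTS.get(classification, NEUTRAL)

print(face_control_signals("surprise"))
```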
Whilst the robotic device may be controlled to simply act in a passive manner in response to real or virtual actions of the second user, optionally the robot may act interactively with the first user, or part of the first user’s environment.
For example, the robot could hold a conventional or bespoke videogame controller operably coupled to the first user’s entertainment device, and act to operate one or more inputs on that controller. The inputs may be interpreted as being supplementary to the second player (as the robot is responsive to them), but may equally be interpreted as supplementary to the first player, or interpreted as a third player or participant, depending on the application.
Alternatively or in addition the control system 80 of the robot may comprise means to transmit inputs to the first entertainment device directly, for example via a Bluetooth ® link. This may be of help where the robot form factor makes physical interaction with a controller difficult.
The robot may also interact with the first user or with an environment of the first user, in response to real or virtual actions of the second user. For example, the first user may have to press a button on the robot within a short period of time, and the second user could act so as to move the robot, making this a challenge. Similarly, the second user may be able to play a game of treasure hunt or Battleships ® by moving the robot to find a marker or other key position within the first user’s play area, or to pick up a predetermined object. In either case the amount of time or the distance that the second user can control the robot for may be a function of their real or virtual performance of an activity; a non-limiting example being clapping, waving, jumping or jogging on the spot (these avoid the problem that the layout of the second user’s room may be so different to that of the first user that a simple 1:1 mapping of movements between user and robot may not be possible). Other examples of interaction, stimulus and mapping (or lack thereof) will be apparent to the skilled person.
It will be appreciated that in principle the second entertainment device may itself be part of another second user avatar system, in communication with a corresponding robot, in order to replicate one or more aspects of real and/or virtual actions of the first user, and/or one or more aspects of the state of the virtual environment in the first entertainment device. Hence the two entertainment devices can each be part of a respective parallel and complementary second user avatar system, enabling each player to enjoy having a robotic device act as a physical avatar for their friend/opponent.
As described up until now, the second user avatar system comprises the first entertainment device 110A and the robotic device 100, with the first entertainment device receiving relevant data from a separate system, namely the second entertainment device. This is the case even if the first and second entertainment devices have reciprocal features so that both independently operate as second user avatar systems, receiving descriptive data about the real and/or virtual actions and/or states of the other user and their instance of the virtual environment.
However optionally the second user avatar system for the first entertainment device can be expanded to incorporate the second entertainment device as well, so that for example the second entertainment device may generate the control signals for the robotic device responsive to the actions of its user, and transmit these to the first entertainment device, thereby saving processing time for the first entertainment device. Optionally in this configuration, the control signals can be synchronised with other data describing the state of the virtual environment at the second entertainment device at the point of transmission, which may for example assist with the synchronisation of robot behaviour with on-screen events. Again it will be appreciated that both entertainment devices can comprise such features so that both act as parallel, complementary second user avatar systems.
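A possible shape for such a combined message, bundling precomputed control signals with a snapshot of the virtual-environment state at the moment of transmission (all field names being assumptions made for this sketch), is:

```python
import json, time

combined_update = {
    "timestamp": time.time(),                             # transmission time, for syncing
    "robot_control_signals": {"A8": 45.0, "A11": 90.0},   # precomputed on the second device
    "environment_state": {"avatar_health": 64, "event": "door_07_opened"},
}

# The first entertainment device could then forward the control signals to the
# robot when the matching on-screen event is rendered locally.
print(json.dumps(combined_update, indent=2))
```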
It will be appreciated that the techniques described herein may be implemented on conventional hardware (such as a Sony ® PlayStation 4 ® and a robotic device such as a Sony ® Qrio ® or Aibo®), suitably adapted as applicable by software instruction or by the inclusion or substitution of dedicated hardware.
Thus the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a computer program product comprising processor implementable instructions stored on a non-transitory machine-readable medium such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable to use in adapting the conventional equivalent device. Separately, such a computer program may be transmitted via data signals on a network such as an Ethernet, a wireless network, the Internet, or any combination of these or other networks.
Hence a corresponding method of representing a second user may comprise:
in a first step S710, generating, at a first entertainment device, a virtual environment responsive to inputs from a first user and also to data received at the first entertainment device indicative of a corresponding representation of the virtual environment generated by a second entertainment device;
in a second step S720, receiving, at a robotic device comprising one or more actuators, control signals from the first entertainment device;
in a third step S730, receiving, at the first entertainment device, data indicative of actions of the second user; and
in a fourth step S740, transmitting, from the first entertainment device to the robotic device, control signals responsive to the received data indicative of actions of the second user.
It will be apparent to a person skilled in the art that variations in the above method corresponding to operation of the various embodiments of the apparatus as described and claimed herein are similarly considered within the scope of the present invention, including but not limited to:
- the received data comprising indicators of one or more physical actions of the second user, and the transmitting step comprises transmitting control signals to the robotic device that cause the robot to mimic at least one aspect of a physical action of the second user;
- the received data comprising indicators of one or more virtual actions of the second user within the virtual environment, and the transmitting step comprises transmitting control signals to the robotic device that cause the robot to mimic at least one aspect of a virtual action of the second user;
- the transmitting step comprising transmitting control signals to the robotic device that cause the robotic device to perform a predetermined action in response to a detected state of the corresponding representation of the virtual environment generated by a second entertainment device; and
- if the robotic device is operable to display a representation of a face in response to received face signals, the method comprising the steps of receiving at the first entertainment device data representative of the second user’s face, and transmitting from the first entertainment device to the robotic device face signals that cause a robotic device to display a representation of the second user’s face.

Claims (15)

1. A second-user avatar system, comprising:
a first entertainment device, operable to generate a representation of a virtual environment, responsive to inputs from a first user and also to received data indicative of a corresponding representation of the virtual environment generated by a second entertainment device; and a robotic device comprising one or more actuators, and a receiver operable to receive control signals from the first entertainment device;
wherein the first entertainment device is operable to receive data indicative of actions of the second user; and the first entertainment device is operable to transmit control signals to the robotic device responsive to the received data indicative of actions of the second user.
2. A second user avatar system in accordance with claim 1, in which the received data comprises indicators of one or more physical actions of the second user.
3. A second user avatar system in accordance with claim 2, in which the first entertainment device is operable to transmit control signals to the robotic device that cause the robot to mimic at least one aspect of a physical action of the second user.
4. A second user avatar system in accordance with claim 1 or claim 2, in which the received data comprises indicators of one or more virtual actions of the second user within the virtual environment.
5. A second user avatar system in accordance with claim 4, in which the first entertainment device is operable to transmit control signals to the robotic device that cause the robot to mimic at least one aspect of a virtual action of the second user.
6. A second user avatar system in accordance with any one of the preceding claims, in which the first entertainment device receives data from the second entertainment device via a server administering the virtual environment.
7. A second user avatar system in accordance with any one of the preceding claims, in which the first entertainment device is operable to transmit control signals to the robotic device that cause the robotic device to perform a predetermined action in response to a detected state of the corresponding representation of the virtual environment generated by a second entertainment device.
8. A second user avatar system in accordance with any one of the preceding claims, in which the first entertainment device is operable to receive data representative of the second user’s face;
the robotic device is operable to display a representation of a face in response to received face control signals; and the first entertainment device is operable to transmit face control signals to the robotic device that cause a robotic device to display a representation of the second user’s face.
9. A second user avatar system in accordance with any one of the preceding claims, comprising:
the second entertainment device; and in which the second entertainment device is operable to generate, as data indicative of actions of the second user, control signals for the robotic device responsive to the actions of the second user.
10. A method of representing a second user, comprising the steps of:
generating, at a first entertainment device, a virtual environment responsive to inputs from a first user and also to data received at the first entertainment device indicative of a corresponding representation of the virtual environment generated by a second entertainment device;
receiving, at a robotic device comprising one or more actuators, control signals from the first entertainment device;
receiving, at the first entertainment device, data indicative of actions of the second user; and
transmitting, from the first entertainment device to the robotic device, control signals responsive to the received data indicative of actions of the second user.
11. A method in accordance with claim 10, in which the received data comprises indicators of one or more physical actions of the second user, and the transmitting step comprises transmitting control signals to the robotic device that cause the robot to mimic at least one aspect of a physical action of the second user.
12. A method in accordance with claim 10 or claim 11, in which the received data comprises indicators of one or more virtual actions of the second user within the virtual environment, and the transmitting step comprises transmitting control signals to the robotic device that cause the robot to mimic at least one aspect of a virtual action of the second user.
13. A method in accordance with any one of claims 10-12, in which the transmitting step comprises transmitting control signals to the robotic device that cause the robotic device to perform a predetermined action in response to a detected state of the corresponding representation of the virtual environment generated by a second entertainment device.
14. A method in accordance with any one of claims 10-13, in which the robotic device is operable to display a representation of a face in response to received face signals, the method comprising the steps of:
receiving at the first entertainment device data representative of the second user’s face; and
transmitting from the first entertainment device to the robotic device face signals that cause a robotic device to display a representation of the second user’s face.
15. A computer readable medium having computer executable instructions adapted to cause a computer system to perform the method of any one of claims 10-14.
GB1804671.4A 2018-03-23 2018-03-23 Second user avatar method and system Withdrawn GB2572213A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1804671.4A GB2572213A (en) 2018-03-23 2018-03-23 Second user avatar method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1804671.4A GB2572213A (en) 2018-03-23 2018-03-23 Second user avatar method and system

Publications (2)

Publication Number Publication Date
GB201804671D0 GB201804671D0 (en) 2018-05-09
GB2572213A true GB2572213A (en) 2019-09-25

Family

ID=62068136

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1804671.4A Withdrawn GB2572213A (en) 2018-03-23 2018-03-23 Second user avatar method and system

Country Status (1)

Country Link
GB (1) GB2572213A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6232735B1 (en) * 1998-11-24 2001-05-15 Thames Co., Ltd. Robot remote control system and robot image remote control processing system
US20060223637A1 (en) * 2005-03-31 2006-10-05 Outland Research, Llc Video game system combining gaming simulation with remote robot control and remote robot feedback
US20120239196A1 (en) * 2011-03-15 2012-09-20 Microsoft Corporation Natural Human to Robot Remote Control
US20150314440A1 (en) * 2014-04-30 2015-11-05 Coleman P. Parker Robotic Control System Using Virtual Reality Input
US20160046023A1 (en) * 2014-08-15 2016-02-18 University Of Central Florida Research Foundation, Inc. Control Interface for Robotic Humanoid Avatar System and Related Methods

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110834330A (en) * 2019-10-25 2020-02-25 清华大学深圳国际研究生院 Flexible mechanical arm teleoperation man-machine interaction terminal and method
CN110834330B (en) * 2019-10-25 2020-11-13 清华大学深圳国际研究生院 Flexible mechanical arm teleoperation man-machine interaction terminal and method

Also Published As

Publication number Publication date
GB201804671D0 (en) 2018-05-09

Similar Documents

Publication Publication Date Title
US10936149B2 (en) Information processing method and apparatus for executing the information processing method
KR102065687B1 (en) Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing
US11498223B2 (en) Apparatus control systems and method
CN102947777B (en) Usertracking feeds back
RU2475290C1 (en) Device for games
CN102473320B (en) Bringing a visual representation to life via learned input from the user
EP3587048B1 (en) Motion restriction system and method
Oppenheim et al. WiiEMG: A real-time environment for control of the Wii with surface electromyography
US20220134218A1 (en) System and method for virtual character animation using motion capture
EP3575044A2 (en) Robot interaction system and method
US11312002B2 (en) Apparatus control system and method
US20200035073A1 (en) Robot interaction system and method
GB2572213A (en) Second user avatar method and system
US20220226996A1 (en) Robot control system
JP2019086848A (en) Program, information processing device and method
US11780084B2 (en) Robotic device, control method for robotic device, and program
JP6978240B2 (en) An information processing method, a device, and a program for causing a computer to execute the information processing method.
US10242241B1 (en) Advanced mobile communication device gameplay system
US11733705B2 (en) Moving body and moving body control method
US11648672B2 (en) Information processing device and image generation method
Dave et al. Avatar-Darwin a social humanoid with telepresence abilities aimed at embodied avatar systems
Ojeda et al. Gesture-gross recognition of upper limbs to physical rehabilitation
Shyngys et al. Application of Gamification Tool in Hand Rehabilitation Process
KR20210079077A (en) Robot operation system for boarding experience
GB2573790A (en) Robot development system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)