WO2000043167A1 - Robot and motion control method (Robot et procédé de commande de déplacement) - Google Patents
Robot and motion control method
- Publication number
- WO2000043167A1 (PCT/JP2000/000263)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- posture
- information
- model
- robot device
- emotion
- Prior art date
Links
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/008—Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
Definitions
- The present invention relates to a robot device and a motion control method, and is suitably applied to, for example, a robot device that moves like a quadruped animal.
- In recent years, quadruped walking robot devices that operate in response to commands from a user or to the surrounding environment, articulated robots, and animations using characters that move by computer graphics have been proposed.
- Such a robot device or animation (hereinafter collectively referred to as a robot device, etc.) performs a series of operations based on commands from the user.
- For example, a so-called pet robot, which is a robot device shaped like a quadruped animal such as a dog, takes a prone posture when it receives a "lie down" command from the user, and always gives its "paw" when the user holds a hand out in front of its mouth.
- conventional robot devices only perform predetermined operations based on commands from the user and the environment.
- However, such a conventional robot device does not operate autonomously like a real animal, and therefore cannot satisfy the user's desire for a robot device or the like that behaves as much like a real animal as possible and determines its actions autonomously.
- Further, a robot device or the like reaches a target posture or target motion via predetermined postures and motions, and preparing a plurality of postures and motions to be passed through during the transition leads to enriching the motion expressions of the robot device or the like.
- In that case, it is preferable that the postures and motions to be passed through are selected optimally so that the transition to the intended posture or motion is made appropriately. Disclosure of the Invention
- The present invention has been made in view of the above circumstances, and has as its object to propose a robot device and a motion control method that autonomously perform natural motions.
- The present invention has also been made in view of the above circumstances, and has as its object to provide a robot device and a motion control method that enable enrichment of expression and that optimize the postures and motions passed through during a transition.
- A robot device according to the present invention is a robot device that operates in accordance with supplied input information, and comprises model changing means that has a model associated with the operation and determines the operation by changing the model based on the input information.
- A robot device having such a configuration has a model associated with its operation and determines the operation by changing the model based on the input information. For example, if the model is an emotion model or an instinct model, the robot device acts autonomously based on the state of its own emotions and instincts.
- A motion control method according to the present invention is a motion control method for operating in accordance with supplied input information, and determines the operation by changing a model associated with the operation based on the input information.
- Such a motion control method determines the operation by changing the model based on the input information. For example, if the model is an emotion model or an instinct model, the robot device acts autonomously based on the state of its own emotions or instincts.
- Further, a robot device according to the present invention is a robot device that operates in accordance with supplied input information, and comprises operation determining means that determines the current operation in accordance with the history of sequentially supplied input information and determines the next operation following the current operation based on the input information supplied next.
- A robot device having such a configuration determines the current operation in accordance with the history of sequentially supplied input information, and determines the next operation following the current operation based on the input information supplied next.
- the robot device acts autonomously based on its own emotions and instinct.
- A motion control method according to the present invention is a motion control method for operating in accordance with supplied input information, in which the current operation is determined in accordance with the history of sequentially supplied input information and the next operation following the current operation is determined based on the input information supplied next.
- Such an operation control method determines a current operation according to a history of input information sequentially supplied and a next operation following the current operation based on input information supplied next.
- the robot device autonomously acts based on its own emotions and instinct.
- Further, a robot device according to the present invention comprises graph storage means that holds a graph in which postures and motions are registered, the graph being formed by connecting postures with the motions that transition between them, and control means that searches the graph, based on action command information, for a path from the current posture to the target posture or target motion, and operates the device based on the search result so as to transition from the current posture to the target posture or target motion.
- A robot device having such a configuration transitions to the target posture or target motion designated by the action command information, based on the graph held in the graph storage means in which postures and motions are registered and which is formed by connecting postures with the motions that change them. Specifically, the control means searches the graph, based on the action command information, for a path from the current posture to the target posture or target motion, and operates the device based on the search result so as to transition from the current posture to the target posture or target motion.
- Further, a motion control method according to the present invention searches, based on action command information, for a path from the current posture to the target posture or target motion on a graph in which postures and motions are registered and which is formed by connecting postures with the motions that change them, and operates the device based on the search result, thereby transitioning from the current posture to the target posture or target motion.
- Such a motion control method transitions to the target posture or target motion indicated by the action command information, based on the graph in which postures and motions are registered and which is formed by connecting postures with the motions that change them. Brief Description of the Drawings
- FIG. 1 is a perspective view showing an embodiment of a robot device according to the present invention.
- FIG. 2 is a block diagram showing a circuit configuration of the robot device.
- FIG. 3 is a schematic diagram illustrating data processing in the controller.
- FIG. 4 is a schematic diagram illustrating data processing by the emotion / instinct model unit.
- FIG. 5 is a schematic diagram illustrating data processing by the emotion / instinct model unit.
- FIG. 6 is a schematic diagram illustrating data processing by the emotion / instinct model unit.
- FIG. 7 is a state transition diagram of the finite automaton in the action determining mechanism unit.
- FIG. 8 is a block diagram illustrating a configuration of an action determining mechanism unit and the like used for describing generation of action command information.
- FIG. 9 is a diagram used to explain a case where a state is determined by probability.
- FIG. 10 is a diagram showing a table in which the relationship between the transition probability and the state to be transitioned is described.
- FIG. 11 is a diagram showing a graph of a posture transition in the posture transition mechanism unit.
- FIG. 12 is a diagram showing a specific example of a graph of a posture transition.
- FIG. 13 is a state transition diagram used to explain that a neutral posture is provided, to which a transition is temporarily made when the current posture cannot be grasped, and that a return from a fall is possible.
- FIG. 14 is a diagram used to explain a route search using distance as an index.
- FIG. 15 is a diagram used to explain a case where a route search is performed by classification.
- FIG. 16 is a diagram used to explain a route search when a distance is specifically set.
- FIG. 17 is a top view of the robot device used to explain the case where the walking direction is used as a parameter.
- FIG. 18 is a diagram showing the parameters and the contents of the operation.
- FIG. 19 is a diagram used to explain a case where another operation is synchronized with an operation during transition between postures.
- FIG. 20 is a diagram used to explain a case where a similar operation is performed in different postures.
- FIG. 21 is a perspective view showing a robot device.
- FIGS. 22A and 22B are views used to explain the case where the basic posture is adopted and the posture is changed between the whole and the part.
- Fig. 23 is a diagram used to explain the case where the current posture is the whole and the target motion is in the part, and the target motion is executed after temporarily transitioning to the basic posture.
- Fig. 24 is a diagram used to explain the case where the target posture is executed after the transition to the basic posture once when the current posture is in the part and the target motion is in the whole.
- FIGS. 25A and 25B are diagrams used to explain the insertion processing of a command.
- FIG. 26 is a diagram showing a command storage unit that can store commands corresponding to the whole and each component.
- FIG. 27 is a diagram used to explain an example of a processing mode by a command storage unit that can store commands corresponding to the whole and each component.
- FIG. 28 is a block diagram illustrating an operation route search unit that performs a route search.
- FIG. 29 is a flowchart showing a series of processing until an operation is executed by a command.
- FIG. 30 is a diagram used to explain that a plurality of directional arcs are passed from the current posture to the target operation on the graph of the head.
- FIG. 31 is another example to which the present invention is applied, and is a diagram showing a character moving with computer graphics.
- FIG. 32 is a perspective view showing another embodiment of the robot apparatus according to the present invention. BEST MODE FOR CARRYING OUT THE INVENTION
- The robot device 1 is composed, as a whole, of a head portion 2 corresponding to the head, a main body portion 3 corresponding to the body, foot portions 4A, 4B, 4C and 4D corresponding to the feet, and a tail portion 5 corresponding to the tail.
- The robot device 1 acts like a real quadruped animal by moving the head 2, the feet 4A to 4D, and the tail 5.
- The head 2 is provided, at predetermined positions, with an image recognition unit 10, for example a CCD (Charge Coupled Device) camera, which corresponds to the eyes and captures images, a microphone 11 which corresponds to the ears and collects sound, and a speaker 12 which corresponds to the mouth and emits sound.
- The head 2 is further provided with a remote controller receiving unit 13 for receiving commands transmitted from the user via a remote controller (not shown), a touch sensor 14 for detecting that the user's hand or the like has come into contact, and an LED (Light Emitting Diode) 15 as light emitting means.
- A battery 21 is attached to the main body 3 at a position corresponding to the belly, and an electronic circuit (not shown) for controlling the operation of the entire robot device 1 and the like are housed inside the main body 3.
- The respective parts of the robot device 1 are connected to one another by actuators 23A to 23N, which are driven under the control of the electronic circuit housed in the main body 3. By driving the actuators 23A to 23N in this way, the robot device 1 swings the head 2 up and down and left and right, wags the tail 5, and moves the feet 4A to 4D to walk or run, thereby moving like a real quadruped animal.
- the robot device 1 configured as described above will be described later in detail, but has, for example, the following features in outline.
- First, the robot device 1 does not transition directly from a first posture to a second posture, but transitions via postures and motions prepared in advance.
- the robot device 1 includes a head, a foot, and a tail, and can independently manage the posture of each of these parts. Therefore, for example, the head and the foot can be independently controlled in posture. In addition, the entire posture, including the head, feet, and tail, can be managed separately from the parts.
- parameters for indicating details of the operation can be passed to the operation instruction of the robot device 1.
- the robot device 1 has the above features, and many features including such features will be described below.
- the circuit configuration of the robot device 1 is, for example, as shown in FIG.
- The head 2 includes a command receiving unit 30 consisting of the microphone 11 and the remote controller receiving unit 13, an external sensor 31 consisting of the image recognition unit 10 and the touch sensor 14, the speaker 12, and the LED 15.
- The main body 3 includes the battery 21, a controller 32 for controlling the operation of the robot device 1 as a whole, and an internal sensor 35 consisting of a battery sensor 33 for detecting the remaining amount of the battery 21 and a heat sensor 34 for detecting heat generated inside the robot device 1.
- Further, the actuators 23A to 23N are provided at predetermined positions of the robot device 1.
- The command receiving unit 30 is for receiving commands given by the user to the robot device 1, for example commands such as "walk", "lie down", and "chase the ball", and consists of the remote controller receiving unit 13 and the microphone 11.
- The remote controller receiving unit 13 receives a desired command input by the user operating a remote controller (not shown). The command is transmitted from the remote controller by, for example, infrared light.
- The remote controller receiving unit 13 receives this infrared light, generates a received signal S1A, and sends it to the controller 32.
- The remote controller is not limited to one that transmits commands by infrared light, and may give commands to the robot device 1 by a musical scale, for example.
- In that case, the robot device 1 performs processing according to the scale from the remote controller input through the microphone 11.
- The microphone 11 collects the voice uttered by the user, generates an audio signal S1B, and sends it to the controller 32.
- In this way, the command receiving unit 30 generates a command signal S1, consisting of the received signal S1A and the audio signal S1B, according to the command given to the robot device 1 by the user, and supplies it to the controller 32.
- The touch sensor 14 of the external sensor 31 is for detecting an action of the user on the robot device 1, for example an action such as "stroking" or "hitting".
- When the user performs such an action, the touch sensor 14 generates a contact detection signal S2A corresponding to the action and sends it to the controller 32.
- The image recognition unit 10 of the external sensor 31 identifies the environment around the robot device 1, and detects information on the surrounding environment such as "it is dark" or "there is a favorite toy", or other information such as "another robot is running".
- The image recognition unit 10 sends an image signal S2B, obtained as a result of capturing the surrounding image, to the controller 32.
- In this way, the external sensor 31 generates an external information signal S2, consisting of the contact detection signal S2A and the image signal S2B, according to external information given from outside the robot device 1, and sends it to the controller 32.
- The internal sensor 35 is for detecting the internal state of the robot device 1 itself, for example an internal state such as "hungry", meaning that the battery capacity has decreased, or "feverish", and consists of the battery sensor 33 and the heat sensor 34.
- The battery sensor 33 detects the remaining amount of the battery 21 that supplies power to each circuit of the robot device 1.
- The battery sensor 33 sends a battery capacity detection signal S3A, which is the result of the detection, to the controller 32.
- The heat sensor 34 is for detecting heat inside the robot device 1.
- The heat sensor 34 sends a heat detection signal S3B, which is the detection result, to the controller 32.
- In this way, the internal sensor 35 generates an internal information signal S3, consisting of the battery capacity detection signal S3A and the heat detection signal S3B, according to information on the inside of the robot device 1, and sends it to the controller 32.
- The controller 32 generates control signals S5A to S5N for driving the actuators 23A to 23N based on the command signal S1 supplied from the command receiving unit 30, the external information signal S2 supplied from the external sensor 31, and the internal information signal S3 supplied from the internal sensor 35, and sends these to the actuators 23A to 23N to drive them, thereby operating the robot device 1.
- At the same time, the controller 32 generates, as necessary, an audio signal S10 and a light emission signal S11 for output to the outside, outputs the audio signal S10 to the outside via the speaker 12, and sends the light emission signal S11 to the LED 15 to produce a desired light emission output (for example, blinking or changing color), thereby informing the user of necessary information.
- For example, the light emission output can inform the user of the robot device's own emotional state.
- an image display unit for displaying an image may be provided instead of the LED 15. This allows the user to be informed of necessary information, such as emotions, by displaying a desired image.
- In this way, the controller 32 performs software-based data processing on the command signal S1 supplied from the command receiving unit 30, the external information signal S2 supplied from the external sensor 31, and the internal information signal S3 supplied from the internal sensor 35, based on a program stored in advance in a predetermined storage area, and supplies the resulting control signal S5 to the actuators 23.
- The operation of the actuators 23 is expressed as the operation of the robot device 1, and the present invention aims at realizing such expression.
- In terms of data processing, the controller 32 includes an emotion/instinct model unit 40 as emotion/instinct model changing means, an action determining mechanism unit 41 as action determining means, a posture transition mechanism unit 42 as posture transition means, and a control mechanism unit 43.
- The command signal S1, the external information signal S2, and the internal information signal S3 supplied from the outside are input to the emotion/instinct model unit 40 and the action determining mechanism unit 41. In outline, these units work as follows.
- The emotion/instinct model unit 40 determines the state of the emotions and the instincts based on the command signal S1, the external information signal S2, and the internal information signal S3. The action determining mechanism unit 41 then determines the next motion (action) based not only on the command signal S1, the external information signal S2, and the internal information signal S3, but also on the emotion/instinct state information S10 obtained by the emotion/instinct model unit 40, and the posture transition mechanism unit 42 in the subsequent stage makes a posture transition plan for transitioning to the next motion (action) determined by the action determining mechanism unit 41. Information on the motion (action) determined by the action determining mechanism unit 41 is fed back to the emotion/instinct model unit 40, which refers to the determined motion (action) in determining the state of the emotions and the instincts. In other words, the emotion/instinct model unit 40 determines the instincts and emotions by also referring to the result of the action.
- The control mechanism unit 43 controls each operating part based on the posture transition information S18 sent from the posture transition mechanism unit 42 according to the posture transition plan, and after the posture has actually been changed, the next motion (action) determined by the action determining mechanism unit 41 is actually executed.
- Thus, with the controller 32 described above, the robot device 1 determines the next motion (action) based on its emotions and instincts, makes a transition plan to a posture in which that motion (action) can be performed, transitions to that posture based on the transition plan, and then actually executes the motion (action) determined based on its emotions and instincts.
- Hereinafter, each component of the controller 32 will be described.
- The emotion/instinct model unit 40 is roughly divided into an emotion group 50 constituting an emotion model and a desire group 51 constituting an instinct model prepared as a model having attributes different from those of the emotion model.
- The emotion model is a model composed of emotion parameters each having a certain value, and serves to express the emotions defined for the robot device through operations corresponding to the values of those emotion parameters.
- The values of the emotion parameters fluctuate mainly based on external input signals (external factors), such as "being hit" or "being scolded", detected by sensors such as the pressure sensor and the vision sensor.
- Of course, the emotion parameters may also change based on internal input signals (internal factors) such as the remaining battery power or the internal temperature.
- The instinct model is a model composed of instinct parameters each having a certain value, and serves to express the instincts (desires) defined for the robot device through operations corresponding to the values of those instinct parameters.
- The values of the instinct parameters fluctuate mainly based on internal input signals such as "I want to exercise", based on the action history, or "I want to recharge (I am hungry)", based on the remaining battery power. Needless to say, the instinct parameters may also change based on external input signals (external factors), like the emotion parameters.
- The emotion model and the instinct model are each composed of a plurality of models having the same attribute. That is, the emotion group 50 has emotion units 50A to 50F as independent emotion models having the same attribute, and the desire group 51 has desire units 51A to 51D as independent desire models having the same attribute.
- The emotion group 50 includes an emotion unit 50A showing the emotion of "joy", an emotion unit 50B showing the emotion of "sadness", an emotion unit 50C showing the emotion of "anger", an emotion unit 50D showing the emotion of "surprise", an emotion unit 50E showing the emotion of "fear", and an emotion unit 50F showing the emotion of "disgust".
- The desire group 51 includes a desire unit 51A indicating the desire of "movement instinct", a desire unit 51B indicating the desire of "love instinct", a desire unit 51C indicating the desire of "recharge instinct", and a desire unit 51D indicating the desire of "search instinct".
- The emotion units 50A to 50F each express the degree of the emotion by, for example, an intensity (emotion parameter) from level 0 to 100, and change the intensity of the emotion from moment to moment based on the supplied command signal S1, external information signal S2, and internal information signal S3.
- The emotion/instinct model unit 40 thus expresses the emotional state of the robot device 1 by combining the intensities of the emotion units 50A to 50F, which change from moment to moment, thereby modeling the temporal change of the emotions.
- Further, desired emotion units influence one another so that their intensities change. For example, the emotion units are coupled in a mutually suppressive or mutually stimulating manner, so that they influence one another and their intensities change.
- Specifically, the "joy" emotion unit 50A and the "sadness" emotion unit 50B are coupled in a mutually suppressive manner, so that when the user praises the robot, the intensity of the "joy" emotion unit 50A increases, and even if input information S1 to S3 that would change the intensity of the "sadness" emotion unit 50B is not supplied, the intensity of the "sadness" emotion unit 50B naturally decreases as the intensity of the "joy" emotion unit 50A increases. Similarly, when the intensity of the "sadness" emotion unit 50B increases, the intensity of the "joy" emotion unit 50A decreases accordingly.
- Likewise, the "sadness" emotion unit 50B and the "anger" emotion unit 50C are coupled in a mutually stimulating manner, so that when the user hits the robot, the intensity of the "anger" emotion unit 50C increases, and even if input information S1 to S3 that would change the intensity of the "sadness" emotion unit 50B is not supplied, the intensity of the "sadness" emotion unit 50B naturally increases as the intensity of the "anger" emotion unit 50C increases. Similarly, when the intensity of the "sadness" emotion unit 50B increases, the intensity of the "anger" emotion unit 50C increases accordingly.
- In this way, since the desired emotion units influence one another so that their intensities change, when the intensity of one of the coupled emotion units is changed, the intensity of the other emotion unit changes accordingly, and a robot device 1 having natural emotions is realized.
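- As an editorial illustration of the mutually suppressive coupling between emotion units described above, the following is a minimal sketch (the class names, coupling factor, and intensity values are hypothetical and not taken from the patent):

```python
# Minimal sketch of two mutually suppressive emotion units (hypothetical names and values).
class EmotionUnit:
    def __init__(self, name, intensity=0.0):
        self.name = name
        self.intensity = intensity  # level 0..100, as described for the emotion parameters

    def add(self, delta):
        # Clamp the intensity to the 0..100 range.
        self.intensity = max(0.0, min(100.0, self.intensity + delta))


class MutualSuppression:
    """Couples two units so that raising one lowers the other by a fraction of the change."""
    def __init__(self, a, b, coupling=0.5):
        self.a, self.b, self.coupling = a, b, coupling

    def stimulate(self, unit, delta):
        other = self.b if unit is self.a else self.a
        unit.add(delta)
        other.add(-self.coupling * delta)  # changes even without direct input information


joy = EmotionUnit("joy", 30.0)
sadness = EmotionUnit("sadness", 40.0)
link = MutualSuppression(joy, sadness, coupling=0.5)

link.stimulate(joy, +20.0)  # e.g. the user praises the robot
print(joy.intensity, sadness.intensity)  # joy rises to 50, sadness falls to 30
```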
- Meanwhile, the desire units 51A to 51D each express the degree of the desire, in the same way as the emotion units 50A to 50F, by an intensity (instinct parameter) from level 0 to 100, for example, and change the intensity of the desire from moment to moment based on the supplied command signal S1, external information signal S2, and internal information signal S3.
- The emotion/instinct model unit 40 expresses the state of the instincts of the robot device 1 by combining the intensities of the desire units 51A to 51D, which change from moment to moment, thereby modeling the temporal change of the instincts.
- Further, desired desire units influence one another so that their intensities change, in the same way as the emotion units. For example, the desire units are coupled in a mutually suppressive or mutually stimulating manner, so that they influence one another and their intensities change. Accordingly, when the intensity of one of the coupled desire units is changed, the intensity of the other desire unit changes accordingly, and a robot device 1 having natural instincts is realized.
- Further, units may also influence one another between the emotion group 50 and the desire group 51, so that their intensities change.
- For example, changes in the intensity of the "sadness" emotion unit 50B and the "anger" emotion unit 50C of the emotion group 50 affect the intensities of the "love instinct" desire unit and the "recharge instinct (appetite)" desire unit 51C, and conversely a change in the intensity of the "recharge instinct (appetite)" desire unit 51C affects the intensities of the "sadness" emotion unit 50B and the "anger" emotion unit 50C.
- When the input information S1 to S3, consisting of the command signal S1, the external information signal S2, and the internal information signal S3, is supplied, the emotion/instinct model unit 40 changes the intensities of the emotion units 50A to 50F and the desire units 51A to 51D through the interactions among the emotion units of the emotion group 50, among the desire units of the desire group 51, and between the units of the emotion group 50 and the desire group 51.
- The emotion/instinct model unit 40 then determines the state of the emotions by combining the changed intensities of the emotion units 50A to 50F, determines the state of the instincts by combining the changed intensities of the desire units 51A to 51D, and sends the determined emotion and instinct states to the action determining mechanism unit 41 as emotion/instinct state information S10.
- The emotion/instinct model unit 40 is also supplied with action information S12 indicating the current or past action of the robot device 1 itself from the action determining mechanism unit 41 in the subsequent stage. For example, when the action of walking is determined by the action determining mechanism unit 41 described later, action information S12 indicating that the robot "has been walking for a long time" is supplied.
- Thereby, even for the same input information S1 to S3, the emotion/instinct model unit 40 can generate different emotion/instinct state information S10 according to the action of the robot device 1 indicated by the action information S12. Specifically, the fed-back action information S12 is referred to in determining the state of the emotions and instincts by the following configuration.
- Specifically, the emotion/instinct model unit 40 is provided, at the stage preceding each of the emotion units 50A to 50C, with intensity increasing/decreasing means 55A to 55C for generating intensity information S14A to S14C that increases or decreases the intensity of each of the emotion units 50A to 50C based on the action information S12 indicating the action of the robot device 1 and the input information S1 to S3.
- The intensity of each of the emotion units 50A to 50C is then increased or decreased according to the intensity information S14A to S14C output from the intensity increasing/decreasing means 55A to 55C. For example, when the robot greets the user and the user strokes its head in response, that is, when the action information S12 indicating that it greeted the user and the input information S1 to S3 indicating that its head was stroked are given to the intensity increasing/decreasing means 55A, the intensity of the "joy" emotion unit 50A is increased.
- The intensity increasing/decreasing means 55A to 55C are each configured as a function or a table that generates the intensity information S14A to S14C based on the action information S12 and the input information S1 to S3.
- In this way, since the emotion/instinct model unit 40 determines the intensity of each of the emotion units 50A to 50C through the intensity increasing/decreasing means 55A to 55C by referring not only to the input information S1 to S3 but also to the action information S12 indicating the current or past action of the robot device 1, it can avoid generating unnatural emotions, such as increasing the intensity of the "joy" emotion unit 50A regardless of what the robot device 1 is actually doing.
- Incidentally, the emotion/instinct model unit 40 likewise increases or decreases the intensity of each of the desire units 51A to 51C based on the supplied input information S1 to S3 and the action information S12.
- In the above description, the intensity increasing/decreasing means 55A to 55C are provided for the "joy", "sadness" and "anger" emotion units 50A to 50C, but the present invention is not limited to this, and it goes without saying that intensity increasing/decreasing means may also be provided for the other emotion units 50D to 50F of "surprise", "fear" and "disgust".
- Since the intensity increasing/decreasing means 55A to 55C generate and output the intensity information S14A to S14C according to preset parameters, setting those parameters to different values for each robot device 1 makes it possible to give each robot device individuality, for example an irritable robot device or a cheerful robot device.
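- As a rough sketch of how the intensity increasing/decreasing means 55A to 55C could be realized as a parameterized function of the action information S12 and the input information S1 to S3 (the event names and sensitivity values below are illustrative assumptions, not the patent's):

```python
# Hypothetical sketch: intensity change (the role of S14A..S14C) computed from
# input information (S1..S3) and action information (S12), scaled by a preset parameter.
def intensity_delta(input_info, action_info, sensitivity):
    """Return a signed intensity change for one emotion unit.
    `sensitivity` is the preset per-robot parameter that gives the device individuality."""
    delta = 0.0
    # Being stroked only counts as "joy" if the robot was actually interacting with the user.
    if input_info.get("head_stroked") and action_info == "greeted_user":
        delta += 10.0
    if input_info.get("hit"):
        delta -= 15.0
    return sensitivity * delta


# Two robots with different preset parameters react differently to the same events.
events = {"head_stroked": True}
print(intensity_delta(events, "greeted_user", sensitivity=1.5))  # cheerful device: 15.0
print(intensity_delta(events, "greeted_user", sensitivity=0.5))  # more indifferent device: 5.0
```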
- The action determining mechanism unit 41 determines the next motion (action) based on various information. As shown in FIG. 3, the action determining mechanism unit 41 determines the next motion (action) based on input information S14 consisting of the command signal S1, the external information signal S2, the internal information signal S3, the emotion/instinct state information S10, and the action information S12, and sends the content of the determined motion (action) to the posture transition mechanism unit 42 as action command information S16.
- Specifically, the action determining mechanism unit 41 uses an algorithm called a probabilistic finite automaton 57, which has a finite number of states, in which the history of the input information S14 supplied in the past is represented as an operating state (hereinafter referred to as a state), and in which the next operation is determined by transitioning from the current state to another state based on the currently supplied input information S14 and the state at that time.
- In this way, the action determining mechanism unit 41 changes the state each time the input information S14 is supplied and determines the action according to the changed state, so that the action can be determined with reference not only to the current input information S14 but also to the past input information S14.
- Accordingly, when the action determining mechanism unit 41 detects that a predetermined trigger has occurred, it changes the current state to the next state.
- Specific examples of the trigger include, for example, that the time during which the action of the current state has been executed has reached a certain value, that specific input information S14 has been input, or that the emotion/instinct state information S10 sent from the emotion/instinct model unit 40 indicates a certain condition.
- At that time, the action determining mechanism unit 41 selects the transition destination state based on whether or not the intensity of a desired unit, among the intensities of the emotion units 50A to 50F and the desire units 51A to 51D indicated by the emotion/instinct state information S10 supplied from the emotion/instinct model unit 40, exceeds a predetermined threshold.
- For example, when the action determining mechanism unit 41 detects, based on the supplied external information signal S2, that a palm has been held out in front of its eyes, detects, based on the emotion/instinct state information S10, that the intensity of the "anger" emotion unit 50C is equal to or less than a predetermined threshold, and detects, based on the internal information signal S3, that it is "not hungry", that is, that the battery voltage is equal to or higher than a predetermined threshold, it generates action command information S16 for performing the "paw" motion in response to the palm held out in front of its eyes, and sends this to the posture transition mechanism unit 42.
- Further, when the action determining mechanism unit 41 detects, for example, that a palm has been held out in front of its eyes, that the intensity of the "anger" emotion unit 50C is equal to or less than the predetermined threshold, and that it is "hungry", that is, that the battery voltage is lower than the predetermined threshold, it generates action command information S16 for performing a motion such as licking the palm, and sends this to the posture transition mechanism unit 42.
- Further, when the action determining mechanism unit 41 detects that a palm has been held out in front of its eyes and that the intensity of the "anger" emotion unit 50C is equal to or higher than the predetermined threshold, it generates, regardless of whether or not it is "not hungry", that is, regardless of whether the battery voltage is equal to or higher than the predetermined threshold, action command information S16 for performing a motion such as turning to the side, and sends this to the posture transition mechanism unit 42.
- The action determining mechanism unit 41 determines the next action from among a plurality of prepared actions, such as "action 1", "action 2", "action 3", "action 4", and so on, based on the input information S14, as shown, for example, in FIG. 8. For example, "action 1" consists of the action of kicking a ball, "action 2" consists of an action expressing an emotion, "action 3" consists of autonomous search, "action 4" consists of avoiding an obstacle, "action n" consists of an action indicating that the remaining battery level is low, and so on.
- The selection (determination) of an action is specifically made by a selection module 44, as shown in FIG. 8. The selection module 44 outputs the selection result to the posture transition mechanism unit 42 as the action command information S16, and also outputs it as the action information S12 to the emotion/instinct model unit 40 and the action determining mechanism unit 41. For example, the selection module 44 sets a flag for the decided action and outputs that information as the action information S12 and the action command information S16 to the action determining mechanism unit 41 and the posture transition mechanism unit 42.
- As described above, the emotion/instinct model unit 40 changes the state of the emotions and instincts based on the action information S12 in addition to the input information S1 to S3, so that the emotion/instinct model unit 40 can generate different emotion/instinct state information S10 even when the same input information S1 to S3 is given.
- The action determining mechanism unit 41 can also hold information such as a group ID for each of "action 1", "action 2", "action 3", "action 4", and so on.
- The group ID is the same information for actions of the same category. For example, when there are a plurality of actions for "kicking a ball", the same group ID is attached to each of them. By attaching the same group ID to actions of the same category, actions of the same category can be handled as a group.
- Thereby, the same group ID is issued no matter which action of the same category is selected. The group ID attached to the selected action is then sent to the emotion/instinct model unit 40, and the emotion/instinct model unit 40 can determine the state of the emotions and instincts based on that group ID.
- Further, the action determining mechanism unit 41 determines the parameters of the action to be performed in the transition destination state, such as the walking speed, the magnitude and speed of the movements when moving the limbs, and the pitch and volume of the sound when emitting a sound, based on the intensities of the desired units among the emotion units 50A to 50F and the desire units 51A to 51D indicated by the emotion/instinct state information S10 supplied from the emotion/instinct model unit 40, and generates action command information S16 corresponding to those action parameters and sends it to the posture transition mechanism unit 42.
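- The mapping from unit intensities to such action parameters could look like the following sketch; the specific scaling factors and parameter names are illustrative assumptions only:

```python
# Hypothetical mapping from emotion/instinct intensities (0..100) to motion parameters.
def motion_parameters(joy, anger, movement_instinct):
    """Derive walking speed, stride and voice pitch from unit intensities."""
    walk_speed = 0.2 + 0.8 * (movement_instinct / 100.0)  # m/s, faster when restless
    stride = 0.05 + 0.05 * (joy / 100.0)                  # m, bouncier when happy
    voice_pitch = 400.0 + 4.0 * anger                     # Hz, sharper when angry
    return {"walk_speed": walk_speed, "stride": stride, "voice_pitch": voice_pitch}


print(motion_parameters(joy=80, anger=10, movement_instinct=60))
```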
- Further, for example, when the controller 32 receives the external information signal S2 indicating that "the head has been stroked", the emotion/instinct model unit 40 generates emotion/instinct state information S10 indicating "happiness" and supplies it to the action determining mechanism unit 41. When the external information signal S2 indicating that "a hand is in front of the eyes" is supplied in this state, the action determining mechanism unit 41 generates the corresponding action command information S16 based on the "happy" emotion/instinct state information S10 and the external information signal S2 of "a hand is in front of the eyes", and sends it to the posture transition mechanism unit 42.
- Further, the transition destination of the operating state in the probabilistic finite automaton 57 is determined by probability. For example, when there is an input from the outside, that input causes a transition to a certain operating state (action state) with a certain probability (transition probability), for example 20%. Specifically, as shown in FIG. 9, when the "walking" state ST10 can transition to the "running" state ST11 and the "sleeping" state ST12, the transition probability to the "running" state ST11 is taken to be P1, the transition probability to the "sleeping" state ST12 is taken to be P2, and the transition destination is determined based on these probabilities. As a technique for determining the transition destination based on probability, there is, for example, the technique disclosed in Japanese Patent Application Laid-Open No. 9-1114514.
- The robot device 1 holds the information on the transition probabilities to the respective states as a table, an example of which is shown in FIG. 10.
- The table shown in FIG. 10 consists of information such as the node name indicating the current action state, the name of the input event, the data name of the input event, the data range of the input event, and the transition probability to each state.
- That is, the transition probability to a certain state is determined according to the input event, and the transition destination state is determined based on that transition probability.
- The node name indicating the current action state indicates the action state of the robot device 1, that is, what kind of action is currently being executed.
- The input event is information input to the robot device 1, and this table is organized by such input events.
- For example, the input event name "BALL" indicates that a ball was found, "PAT" indicates that the robot was patted lightly, "HIT" indicates that it was hit, "MOTION" indicates that a moving object was found, and "OBSTACLE" indicates that an obstacle was found.
- The data range of an input event refers to the range of the data when the input event takes a parameter, and the data name of the input event specifies the name of that parameter. That is, when the input event is "BALL", the data name refers to the size of the ball, "SIZE", and the data range means that the size range is, for example, 0 to 1000. Similarly, when the input event is "OBSTACLE", the data name means the distance "DISTANCE", and the data range means that the distance range is 0 to 100.
- The transition probability to a certain state is assigned to each of the plurality of states that can be selected according to the input event. That is, transition probabilities are assigned to the arcs leading to the states selectable for one input event so that the total of the transition probabilities assigned to those arcs is 100%. More specifically, in the case of the "BALL" input event, the transition probabilities of 30% to "ACTION 1", 20% to "ACTION 3", and so on, add up to 100% in total.
- Here, "node" and "arc" are terms generally defined for a so-called probabilistic finite automaton: a "node" is a state (in this example, an action state), and an "arc" is a directed line connecting nodes with a certain probability (in this example, a transition motion).
- Using a table containing the above information, the transition destination state is selected by referring to the input event, the range of the data acquired with the input event, and the transition probability, as follows.
- For example, when the robot device 1 finds a ball (when the input event is "BALL") and the size of the ball is within 0 to 1000, the current node (state) "node 3" transitions to "node 120" with a probability of 30%. At the time of that transition, the arc labeled "ACTION 1" is selected, and the action or expression corresponding to "ACTION 1" is performed. Likewise, when the arc labeled "ACTION 3" is selected, the action or expression corresponding to "ACTION 3" is performed. "ACTION 1" and "ACTION 3" include, for example, actions such as "barking" and "kicking the ball". Meanwhile, if the size of the ball is greater than 1000, there is no probability that "node 120" or "node 500" will be selected as the transition destination.
- Further, for example, when the robot device 1 finds an obstacle (when the input event is "OBSTACLE") and the distance to the obstacle is within 0 to 100, the node (state) "node 1000" of moving backward is selected with a probability of 100%, the arc labeled "MOVE_BACK" is selected, and the backward motion "MOVE_BACK" is executed.
- In this way, a state (node) or an arc can be selected, that is, an action model can be determined, based on probability using a table or the like. By determining the transition destination state with probability taken into account in this way, it is possible to prevent the same transition destination from always being selected, which enriches the expression of the actions of the robot device 1.
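- The table of FIG. 10 and the probabilistic choice of the next node and arc could be sketched as follows; the node names, events, data ranges, and probabilities are illustrative values in the spirit of the example above, not the patent's exact table:

```python
import random

# Hypothetical transition table: input event -> data range and candidate (next node, arc, probability).
TABLE = {
    "BALL": {"data_name": "SIZE", "data_range": (0, 1000),
             "transitions": [("node120", "ACTION1", 0.30),
                             ("node500", "ACTION3", 0.20),
                             ("node3",   "STAY",    0.50)]},
    "OBSTACLE": {"data_name": "DISTANCE", "data_range": (0, 100),
                 "transitions": [("node1000", "MOVE_BACK", 1.00)]},
}


def select_transition(event, value, rng=random):
    """Pick the next (node, arc) for an input event, or None if the data is out of range."""
    entry = TABLE.get(event)
    if entry is None:
        return None
    low, high = entry["data_range"]
    if not (low <= value <= high):
        return None  # outside the data range: this event causes no transition
    r, acc = rng.random(), 0.0
    for node, arc, p in entry["transitions"]:
        acc += p
        if r < acc:
            return node, arc
    return entry["transitions"][-1][:2]


print(select_transition("BALL", 40))      # e.g. ('node120', 'ACTION1') with 30% probability
print(select_transition("OBSTACLE", 20))  # always ('node1000', 'MOVE_BACK')
```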
- Such state selection, that is, determination of the action model, can also be influenced by the emotion model. For example, the action model is determined by changing the transition probabilities between states based on the state of the emotion model. Specifically, this is as follows.
- As described above, the transition to a state is determined by probability. Here, the transition probability is changed according to the state (for example, the level) of the emotion model. Thereby, the action model is determined based on the emotion model and, as a result, the action model is influenced by the state of the emotion model. Specifically, this will be explained using the table shown in FIG. 10.
- The emotion levels of "JOY", "SURPRISE" and "SADNESS" can be used to determine the transition probability regardless of the input, for example even when there is no external input. In that case, the emotion model is referred to in determining the transition probability.
- The emotion model may be referred to in a fixed order, for example in the order "JOY", "SURPRISE", "SADNESS", and the actual level of each emotion is examined in turn. For example, if the actual level of "SADNESS" is 60, it falls outside its data range of 0 to 50, so the actual level of the next emotion, "JOY", is referred to. If the actual level of "JOY", whose data range is 0 to 50, is 20, the arc labeled "ACTION 2" is selected with a probability of 30% and the arc labeled "MOVE_BACK" is selected with a probability of 60%, making a transition to the corresponding state.
- the behavior model can be determined based on the state of the emotion model.
- the expression of the robot device 1 can be enriched by the behavior model being affected by the state of the emotion model.
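- One way such emotion-dependent selection could work is sketched below: emotion levels are checked in a fixed order, and the first emotion whose current level falls inside its data range supplies the transition probabilities. The order, ranges, and probabilities are hypothetical, chosen only to mirror the example above:

```python
import random

# Hypothetical emotion-conditioned transition rows, checked in the listed order.
EMOTION_ROWS = [
    ("SADNESS", (0, 50), [("ACTION3", 1.00)]),
    ("JOY",     (0, 50), [("ACTION2", 0.30), ("MOVE_BACK", 0.60), ("STAY", 0.10)]),
]


def select_by_emotion(levels, rng=random):
    """Pick an arc using the first emotion whose current level is inside its data range."""
    for name, (low, high), transitions in EMOTION_ROWS:
        if low <= levels.get(name, 0) <= high:
            r, acc = rng.random(), 0.0
            for arc, p in transitions:
                acc += p
                if r < acc:
                    return arc
    return None  # no row applied


# SADNESS at level 60 is outside 0..50, so the JOY row (level 20) is used instead:
# "ACTION2" is chosen with 30% probability and "MOVE_BACK" with 60% probability.
print(select_by_emotion({"SADNESS": 60, "JOY": 20}))
```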
- the action command information S16 is determined by the action determining mechanism unit 41 by the various means described above.
- The posture transition mechanism unit 42 is the part that generates information for transitioning to the target posture or target motion. Specifically, as shown in FIG. 3, the posture transition mechanism unit 42 generates, based on the action command information S16 supplied from the action determining mechanism unit 41, posture transition information S18 for transitioning from the current posture or motion to the next posture or motion (the target posture or target motion), and sends it to the control mechanism unit 43.
- The postures to which a transition can be made from the current posture are determined by the physical shape of the robot device 1, such as the shape and weight of the body, hands and feet and the connection state of the parts, and by the mechanism of the actuators 23A to 23N, such as the directions and angles in which the joints bend. The posture transition information S18 is information for making a transition with these conditions taken into account.
- the control mechanism 43 actually operates the robot device 1 based on the posture transition information S18 sent from the posture transition mechanism 42.
- Specifically, the posture transition mechanism unit 42 registers in advance the postures to which the robot device 1 can transition and the motions at the time of transition, holding them, for example, as a graph, and sends the action command information S16 supplied from the action determining mechanism unit 41 to the control mechanism unit 43 as posture transition information S18; the control mechanism unit 43 then operates in accordance with the posture transition information S18 to make the transition to the target posture or target motion.
- the processing performed by the posture transition mechanism section 42 will be described in detail.
- In some cases, the robot device 1 cannot transition directly to the posture corresponding to the content of a command (the action command information S16). That is, the postures of the robot device 1 are classified into postures to which a direct transition from the current posture is possible and postures to which a direct transition is not possible but which can be reached via some motion or posture.
- For example, the four-legged robot device 1 can transition directly from a state of lying sprawled with its limbs thrown out to a prone state, but cannot transition directly to a standing posture; a two-step motion is required, in which the limbs are first drawn in near the body to become prone, and the robot then rises. There are also postures that cannot be executed safely. For example, the four-legged robot device 1 may fall over if it tries to raise both front legs in a "banzai" pose while standing.
- If a command instructing a posture to which a direct transition is not possible is forcibly executed as it is, the robot device 1 may lose its balance and fall over.
- Therefore, when the action command information S16 indicates a posture to which a direct transition is possible, the posture transition mechanism unit 42 sends the action command information S16 as it is to the control mechanism unit 43 as posture transition information S18, whereas when it indicates a posture to which a direct transition is not possible, the posture transition mechanism unit 42 generates posture transition information S18 that first transitions to another posture or motion to which a transition is possible and then transitions to the posture indicated by the action command information S16, and sends it to the control mechanism unit 43. Thereby, the robot device 1 can avoid forcibly executing an impossible posture and avoid falling over. Moreover, preparing a plurality of motions before transitioning to the target posture or motion leads to enriching the expression.
- Specifically, the posture transition mechanism unit 42 holds a graph in which the postures and motions that the robot device 1 can take are registered, the graph being formed by connecting the postures with the motions that transition between them, searches the graph, based on the action command information S16 serving as command information, for a path from the current posture to the target posture or target motion, and causes the device to operate based on the search result so as to transition from the current posture to the target posture or target motion.
- That is, the posture transition mechanism unit 42 registers in advance the postures that the robot device 1 can assume, records the transitions between every two postures between which a transition is possible, and makes a transition to the target posture or motion based on the action command information S16 output from the action determining mechanism unit 41.
- Specifically, the posture transition mechanism unit 42 uses, as such a graph, an algorithm called a directed graph 60, as shown in FIG. 11.
- In the directed graph 60, nodes indicating the postures that the robot device 1 can take, directed arcs (motion arcs) connecting pairs of nodes (postures) between which a transition is possible, and, in some cases, self-action arcs indicating motions that return from one node to that same node, that is, motions completed within a single node, are joined together.
- That is, the posture transition mechanism unit 42 holds the directed graph 60 composed of nodes, which are information indicating the postures (stationary postures) of the robot device 1, and of directed arcs and self-action arcs, which are information indicating the motions of the robot device 1, grasping the postures as point information and the motions as directed-line information.
- A plurality of directed arcs or self-action arcs may be joined between transitionable nodes (postures), and a plurality of self-action arcs may be coupled to one node.
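- A minimal sketch of such a directed graph, with nodes for postures, directed arcs for motions that transition between two postures, and self-action arcs for motions completed within a single node (the posture and motion names are illustrative, not taken from FIG. 11):

```python
# Hypothetical posture graph: nodes are postures, arcs are motions.
class PostureGraph:
    def __init__(self):
        # node -> list of (motion, next_node); next_node == node for a self-action arc
        self.arcs = {}

    def add_arc(self, src, motion, dst):
        self.arcs.setdefault(src, []).append((motion, dst))

    def neighbours(self, node):
        return self.arcs.get(node, [])


g = PostureGraph()
g.add_arc("lying", "stand up", "standing")   # directed arc between two postures
g.add_arc("standing", "sit down", "sitting")
g.add_arc("sitting", "banzai", "sitting")    # self-action arc (motion completed within one node)
print(g.neighbours("sitting"))               # [('banzai', 'sitting')]
```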
- When action command information S16 is supplied from the action determining mechanism unit 41, the posture transition mechanism unit 42 searches the directed graph 60 for a route from the node corresponding to the current posture to the target node (a node designated by the command) or to the target arc (an arc designated by the command), and a plan for the posture transition is thereby made. Such a search from the current posture for a target node or a target arc is referred to as a route search.
- Here, the target arc may be a directed arc or a self-action arc.
- the case where the self-operation arc becomes the target arc is the case where the self-operation is targeted (instructed), and, for example, the case where a predetermined trick (operation) is instructed.
- The posture transition mechanism unit 42 then outputs a control command (posture transition information S18) for transitioning to the target posture or target motion obtained by the route search to the control mechanism unit 43 in the subsequent stage.
- For example, when a directed arc a9 leading to the node ND5 indicating the "sitting" posture exists, a direct transition is possible, and the posture transition mechanism unit 42 gives posture transition information S18 with the corresponding content to the control mechanism unit 43.
- On the other hand, since a direct transition from "lying down" to "walking" is not possible, the posture transition mechanism unit 42 makes a posture transition plan by searching for a route from the node ND2 indicating the "lying down" posture to the node ND4 indicating the "walking" posture. As a result, the posture transition mechanism unit 42 outputs posture transition information S18 with the content "stand up" and thereafter posture transition information S18 with the content "walk" to the control mechanism unit 43.
- Further, for example, a self-action arc indicating a "dancing" motion is attached to the node ND3, a self-action arc indicating a "banzai" motion (raising both front legs) is attached to the node ND5 indicating the "sitting" posture, and a self-action arc indicating a "snoring" motion is attached to the node indicating the "lying sprawled" posture.
- The robot device 1 is generally configured so as to know its current posture. However, the current posture can be lost; for example, when the robot device 1 is picked up by the user, when it falls over, or when the power is first turned on, the robot device 1 cannot grasp its current posture.
- In such an undefined posture in which the current posture cannot be grasped, the so-called starting posture cannot be specified, and it becomes impossible to make a posture transition plan to the target posture or motion.
- To cope with this, a node indicating a neutral posture is provided, and when the current posture is unknown, a transition is made to the neutral posture before the posture transition plan is made. That is, when the current posture is unknown, as shown in FIG. 13, the robot first transitions to the neutral node NDnt and then transitions to a node indicating a basic posture, such as the node ND3 indicating the "standing" posture, the node ND5 indicating the "sitting" posture, or the node indicating the "lying sprawled" posture. After transitioning to such a basic posture, the original posture transition plan is made.
- In the above description, the basic postures to which a transition is made from the neutral posture are "standing", "sitting", and "lying sprawled", but the basic postures are not limited to these, and it goes without saying that other postures may be used.
- The transition from the undefined posture to the neutral posture (node) is specifically performed by driving the operating parts (for example, the actuators) at low torque or low speed.
- Thereby, the burden on the servos can be reduced, and it is possible, for example, to prevent the operating parts from moving as they would in normal operation and being damaged.
- For example, the tail 5 normally performs a swinging motion, but if such a movement of the tail 5 were performed at the time of the transition to the neutral posture (node), the tail 5 could be damaged when the robot device 1 is lying in an undefined posture.
- The robot device can also detect a fall and transition from the fallen state to a node indicating a basic posture as described above. Specifically, the robot device 1 is provided with an acceleration sensor with which it detects that it has fallen over. When the acceleration sensor detects a fall, the robot device 1 performs a predetermined fall-recovery operation and then transitions to a node indicating a basic posture as described above.
- The robot device 1 is also configured to grasp the falling direction: the acceleration sensor can distinguish four fall directions, namely front, rear, left, and right. As a result the robot device can perform a fall-recovery operation that matches the fall direction and can quickly transition to a basic posture.
- At that time a predetermined expression may also be output. For example, an action of flapping the feet is performed by a self-motion arc, which makes it possible to represent a state in which the robot device 1 is flailing and struggling.
- A posture transition plan can be made using the distance between the current node and the target node as an index, that is, by a so-called shortest-distance search that finds the route of shortest distance. The shortest-distance search uses a concept of distance for the directed arcs (arrows) connecting the nodes (○). As the route search method, Dijkstra's path-search theory can be used, and the distance can be replaced with concepts such as a weight or a time, as described later.
- Figure 14 shows an example in which nodes that can transition to one another are connected by directed arcs each having a distance of "1". On such a graph the shortest route from the current posture (node) to the target node can be selected: if, for example, the distance of the first route is "12", the distance of the second route is "10", the distance of the third route is "15" and the distance of the fourth route is "18", the second route, whose distance to the target posture is smallest, is selected and the posture transition plan is made along it.
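- To make this concrete, the following is a minimal sketch (not part of the patent text) of such a shortest-distance search in Python. The node names, arc distances, and the `shortest_path` helper are illustrative assumptions, not values taken from FIG. 14.

```python
import heapq

# Posture graph: each node maps to a list of (neighbor, distance) pairs.
# Node names and distances are illustrative only.
GRAPH = {
    "lie":   [("sit", 1), ("stand", 3)],
    "sit":   [("lie", 1), ("stand", 1)],
    "stand": [("sit", 1), ("walk", 1)],
    "walk":  [("stand", 1)],
}

def shortest_path(graph, start, goal):
    """Dijkstra search returning (distance, node sequence) of the shortest route."""
    queue = [(0, start, [start])]          # (accumulated distance, node, path so far)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path              # first time the goal is popped = shortest route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, arc_dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + arc_dist, neighbor, path + [neighbor]))
    return None                            # no route exists; the command may be discarded

# e.g. shortest_path(GRAPH, "lie", "walk") -> (3, ["lie", "sit", "stand", "walk"])
```

- Because the goal is accepted the first time it is popped from the priority queue, this sketch also illustrates the early-termination behaviour described below: the search ends as soon as the shortest route to the target node is found, without enumerating every route.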
- The method of searching for the shortest distance is not limited to a particular procedure. In one method, as many routes as possible that can transition from the current node to the target node are searched, a plurality of routes is obtained, and the shortest of them is then identified using the distance as an index; the shortest-distance route to the target node is thus selected as one route from the results of searching a plurality of routes.
- The shortest-distance route search is not limited to such a method, however; it is also possible to end the route search processing as soon as the shortest-distance route to the target node is detected. In that case, nodes are examined one after another while gradually increasing the distance used as the search index from the current posture, and the node at the shortest distance among the nodes not yet determined is fixed each time; when the target node is reached in this manner, the shortest-distance route search process is terminated.
- With this procedure the shortest-distance route can be found without searching all routes that exist to the target node, so the load on the CPU or the like that performs the search is reduced. Since the shortest route to the target node can be detected without searching every route, the burden of searching the entire network is avoided, and even when the network constituting the graph becomes large, a route can be searched for with a reduced burden.
- Nodes may also be roughly classified (clustered) based on their actions or postures, a rough search (preliminary search) based on that classification performed first, and a detailed search performed afterwards. For example, when the posture of "raising the right foreleg" is to be assumed, the area of the category "hitting the ball" is first selected as the route search range, and a path is then searched for only within that area. The association between the rough classification and its constituent elements, that is, between "hitting the ball" and "raising the right foreleg", is made by assigning ID information when such a system is designed.
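- As a rough illustration of this two-stage search, the sketch below tags each node with a cluster ID at design time, restricts the graph to the cluster matching the command in the preliminary search, and then runs the detailed route search only inside that cluster. The cluster names, node names, and the reuse of the `shortest_path` helper from the earlier sketch are assumptions.

```python
# Each node is tagged with a rough category (cluster) when the system is designed.
NODE_CLUSTER = {
    "sit": "ball_play", "raise_right_foreleg": "ball_play",
    "lie": "rest", "stand": "locomotion", "walk": "locomotion",
}

def restrict_graph(graph, cluster):
    """Keep only the arcs whose endpoints both belong to the given cluster."""
    return {n: [(m, d) for m, d in arcs if NODE_CLUSTER.get(m) == cluster]
            for n, arcs in graph.items() if NODE_CLUSTER.get(n) == cluster}

# Preliminary search: pick the cluster associated with the command's ID information,
# then run the detailed shortest_path() search on the restricted sub-graph only.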
- In the description above the target has been a posture (node), but the target can also be an operation, that is, a directed arc or a self-motion arc; the target arc may be a self-motion arc such as an operation of causing the feet 4 to flap.
- As the index used for the route search, the weight assigned to a directed arc or the operation time (execution time) of a directed arc can be referred to instead of a simple distance. The weight or distance given to a directed arc may, for example, be defined as the difficulty of the operation, a low difficulty being set as a short distance.
- When a plurality of directed arcs connect the same pair of nodes, one of them can be made the default: normally the default directed arc is selected, and only when explicitly instructed is another, non-default directed arc selected.
- Alternatively, the probability of selection can be varied for each directed arc, that is, a different probability can be assigned to each arc. In that case various actions are selected even for transitions between the same pair of nodes because of the probabilistic variation, and a series of actions can be varied. For example, when transitioning from a sitting posture to a standing posture, whether the robot stretches its hind legs backward and then stands up on four legs, or stretches its forelegs forward and stands up, is chosen probabilistically, which produces an effect that cannot be predicted until playback is started (until the motion is actually executed).
- When the distance or weight of each directed arc is denoted m1, m2, m3, ..., the probability Pi with which the directed arc having the distance or weight mi is selected can be expressed by equation (1). Under this assignment, a directed arc having a large weight has a low probability of being selected as a passing path, while a directed arc having a small weight has a high probability of being selected as a passing path.
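- Equation (1) itself is not reproduced in this excerpt; consistent with the two statements above, the sketch below assumes it normalizes the inverse weights, Pi = (1/mi) / Σj (1/mj), so that heavier (more difficult) arcs are chosen less often. The arc names and weights are illustrative.

```python
import random

def pick_arc(arcs):
    """Choose one directed arc with probability inversely proportional to its weight.

    `arcs` is a list of (arc_name, weight m_i) pairs; assuming the inverse-weight
    reading of equation (1), P_i = (1/m_i) / sum_j(1/m_j).
    """
    inv = [1.0 / m for _, m in arcs]
    total = sum(inv)
    r = random.uniform(0.0, total)
    acc = 0.0
    for (name, _), w in zip(arcs, inv):
        acc += w
        if r <= acc:
            return name
    return arcs[-1][0]   # numerical safety net for floating-point rounding

# e.g. pick_arc([("stretch_hind_legs_then_stand", 1.0), ("stretch_forelegs_then_stand", 3.0)])
# returns the first arc about 75 % of the time and the second about 25 % of the time.
```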
- The route search can also be limited to only the routes within a predetermined range, which makes it possible to find an optimal route in a shorter time.
- Furthermore, arcs (directed arcs or self-motion arcs) can be registered in the graph as follows.
- For an action such as "walking", a plurality of execution modes can be considered: walking in the 0° direction, walking in the 30° direction, walking in the 60° direction and so on are all execution modes of "walking". Having a plurality of execution modes for the same operation, that is, increasing the parameters of the same operation, leads to a richer expression of the robot device 1.
- Rather than registering a separate arc for every execution mode, such an operation is made possible by giving an arc a parameter: "walk" is one arc, and the walking direction is its parameter. A command of "walking" is given and a walking direction is separately given as a parameter at that time, so that "walking" is executed in the direction of the given parameter when the motion is reproduced. In this way the graphs remain simple and the network resources that have already been constructed can be used effectively.
- Concretely, information on the motion parameter, for example the walking direction, is added to the action command information S 16 as additional information, and such parameters are attached to the posture transition information S 18 and sent to the control mechanism 43.
- In the above, the parameter was assigned to a self-motion arc, but the present invention is not limited to this, and a parameter can also be assigned to a directed arc. In that case the "walking" operation between nodes is itself the same and only the parameter differs, so the instruction form can be simplified: a "walking" motion is executed and "3 times" is given as a parameter indicating the number of repetitions, which makes the robot walk three steps. For example, the information "one-step walk" is output to the control mechanism 43, and by instructing "3" as its parameter the robot is made to walk three steps, by instructing "7" it is made to walk only seven steps, and by specifying "-1" it continues walking until another instruction is given to the feet.
- The self-motion arcs attached to different nodes may share the same name. For example, as shown in FIG. 20, an "angry" operation may be available at each of the node of the sleeping posture, the node of the sitting posture, and the node of the standing posture: the action named "angry" is defined as a self-motion arc from the sleeping posture back to the sleeping posture, a self-motion arc from the sitting posture back to the sitting posture, and a self-motion arc from the standing posture back to the standing posture, each expressing anger in a way suited to that posture.
- In such a case it is possible, simply by giving the instruction "angry", to search for the closest "angry" action (self-motion arc) by a shortest-path search, and the path to the action that can be executed soonest among the targeted actions is established as the posture transition plan. For example, if the robot is currently near the sleeping posture, the "angry" self-motion arc attached to the sleeping posture node is chosen, and an operation such as scratching and rubbing the ground while lying down is performed.
- Since the shortest-distance search determines the closest executable action, the higher-level control means can have the specified operation executed without always having to know the state or operation of every unit. That is, when a command to "get angry" is issued, the higher-level control means (posture transition mechanism section 42) only needs to know the current node; by simply searching for an "angry" self-motion arc rather than for a specific action such as "get angry while lying down", it can make a posture transition plan up to the angry motion reachable in the shortest way, in this example the one in the sleeping posture.
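- The sketch below illustrates this idea under the same illustrative graph assumptions as before: self-motion arcs are registered per node under shared name prefixes, and the search stops at the first node (in order of distance) that carries an arc matching the abstract instruction.

```python
import heapq

# Self-motion arcs registered on nodes; the registrations are illustrative.
SELF_ARCS = {
    "lie":   {"angry_scratch_ground"},
    "sit":   {"angry_bark"},
    "stand": {"angry_stamp"},
}

def nearest_action(graph, self_arcs, start, action_prefix):
    """Find the closest node carrying a self-motion arc whose name starts with `action_prefix`.

    Returns (distance, node, arc_name), so the higher layer only needs the current
    node and the abstract instruction (e.g. "angry"), not the concrete arc.
    """
    queue, visited = [(0, start)], set()
    while queue:
        dist, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        for arc in self_arcs.get(node, ()):
            if arc.startswith(action_prefix):
                return dist, node, arc
        for neighbor, arc_dist in graph.get(node, []):
            heapq.heappush(queue, (dist + arc_dist, neighbor))
    return None
```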
- The robot apparatus 1 can also operate each component separately; that is, a command addressed to an individual component can be executed. The components of the robot device (whole) 1 are roughly divided into the head 2, the feet 4, and the tail 5.
- The tail 5 and the head 2 can be operated individually because their resources do not compete. On the other hand, the entire robot device 1 and the head 2 cannot be operated separately, because their resources conflict: while an operation of the whole whose command content includes a motion of the head 2 is being executed, a command addressed to the head 2 cannot be executed. For example, it is possible to shake the tail 5 while shaking the head 2, but it is impossible to shake the head 2 while performing a trick that uses the whole body.
- Table 1 shows which combinations of the action command information S16 sent from the behavior determining mechanism unit 41 compete for resources and which do not.
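- Table 1 is not reproduced in this excerpt; the sketch below gives one rough reading of it, namely that a whole-body command conflicts with any running command and that commands for two different components do not conflict. The part names and the rule are assumptions for illustration.

```python
# Resource units of the robot: the whole body and each component.
PARTS = {"whole", "head", "feet", "tail"}

def conflicts(active: set, requested: str) -> bool:
    """Rough reading of Table 1: 'whole' conflicts with anything currently running,
    any running 'whole' command blocks component commands, and two distinct
    components never conflict with each other."""
    if requested == "whole":
        return bool(active)
    return "whole" in active or requested in active

# e.g. while {"head"} is active, a "tail" command may start (no conflict),
# but a "whole" command must wait.
```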
- The processing performed when such conflicting commands are issued is described below. Because of the resource conflict, one command is executed first; for example, the instruction for the head 2 is executed after the operation of the whole 1 has finished, so that the movement of the head 2 starts from the final posture reached by the whole-body operation. However, the final posture after the whole-body operation is not necessarily a posture suitable for starting an operation such as shaking the head 2. When the final posture of the whole 1 is not appropriate as the starting posture for the head 2, that is, when the postures before and after the transition produced by the different commands become discontinuous, the head 2 may show a sudden jump when its movement starts, resulting in an unnatural motion.
- This problem arises when the route from the current posture (or motion) to the target posture (or motion) extends over both the entire robot device 1 and its individual components, that is, over the network of nodes and arcs constructed to control the whole robot device 1 and the networks of nodes and arcs constructed to control each component.
- Unnatural movement of the robot device 1 caused by the discontinuity of the posture before and after the transition is avoided by making a posture transition plan that smoothly connects the transition operations on these graphs. Specifically, this problem is solved by introducing a basic posture shared by the graphs of the whole and of the constituent parts and by making the posture transition plan through it.
- The network information used for the posture transition plan of the robot device 1 is shown in Fig. 22A; the case where the information for the whole (a graph) and the information for each component (a graph) are organized as a hierarchical structure is described below. The information used in the posture transition plan, consisting of the graph of the entire network and the graphs of the networks of the components, is built into the posture transition mechanism section 42.
- The basic posture is a posture passed through temporarily in order to change over between an operation of the whole and an operation of an individual component; as shown in FIG. 22B it may be, for example, the sitting posture. Taking the sitting posture as the basic posture, the procedure for smoothly connecting the transition motions is explained below.
- On the graph of the whole, a path is searched from the current posture ND a0 to the basic posture ND ab; on the graph of the head, the optimal directed arc a is selected from the state of the basic posture ND hb and the path to the target movement of the head 2 (self-motion arc) a 2 is determined. Both searches are performed by the shortest-distance search described above.
- In this way a transition path selection (posture transition plan) that smoothly connects the motion of the whole and the motion of each component is made across the graph of the whole and the graph of the head, and the posture transition mechanism unit 42 outputs the posture transition information S 18 to the control mechanism unit 43 based on that posture transition plan.
- Conversely, consider the case where the head 2 is at the posture ND h on the graph of the head, the feet 4 are recognized as being at the posture ND f0 on the graph of the feet, and a motion a 4 of the whole is to be executed as the target.
- On the graph of the head, a directed arc is selected for transitioning from the current posture ND h of the head 2 to the basic posture ND hb; likewise, on the graph of the feet, a directed arc is selected for transitioning from the current posture ND f0 of the feet 4 to the basic posture ND fb. The tail 5 is assumed to be in the basic posture already. When each component is in the basic posture in this way, the whole is also grasped as being in the basic posture on the graph of the whole, from which the target motion a 4 of the whole can be reached.
- When transitioning to the basic posture, the components may be operated at the same time, or the operation of each component may be executed under a restriction, for example at a staggered timing.
- For instance, if a command for an operation of the whole 1 is given while a trick is being performed with the head 2, the head 2 cannot transition to the basic posture ND hb while the trick is still in progress; therefore the feet 4 are first placed in the basic posture ND fb, and the head 2 is transitioned to the basic posture ND hb after the trick has finished.
- Each component can also be operated in consideration of the balance of the whole 1. For example, if moving the head 2 and the feet 4 at the same time, or putting the head 2 into the basic posture ND hb first, would cause the robot to lose its balance and fall over, then the feet 4 are set to the basic posture ND fb first, and the head 2 is changed to the basic posture ND hb afterwards.
- In this way the motions can be connected smoothly by making a posture transition plan that passes through the basic posture once.
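- A minimal sketch of this bridging idea is given below: the plan routes on the whole-body graph down to the shared basic posture and then on the component graph from the basic posture to the target. It reuses the illustrative `shortest_path` helper from the earlier sketch; the function and argument names are assumptions.

```python
def plan_via_basic_posture(whole_graph, part_graph, current_whole_node,
                           basic_posture, part_target):
    """Connect a whole-body state to a component motion through the shared basic posture."""
    to_basic = shortest_path(whole_graph, current_whole_node, basic_posture)
    from_basic = shortest_path(part_graph, basic_posture, part_target)
    if to_basic is None or from_basic is None:
        return None
    # Concatenate the two legs; the basic posture appears once at the seam.
    return to_basic[1] + from_basic[1][1:]
```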
- While only some parts are in use, the resources of the unused parts can be used for other purposes; for example, the head 2 can track a moving ball while the rest of the body is otherwise engaged.
- The basic posture is not limited to a single posture; a plurality of postures, such as the sitting posture and the sleeping posture, can be set as basic postures. This allows the transition from an operation of the whole to an operation of a component, or from an operation of a component to an operation of the whole, to be made over the shortest distance (in the shortest time). Setting a plurality of basic postures in this way also enriches the expression of the robot device 1.
- The determination of the posture and motion transition path in the posture transition mechanism section 42 described above is performed on the basis of the action command information S 16 from the action determination mechanism section 41, and the action determination mechanism section 41 normally transmits the action command information S 16 without any restriction.
- Commands that cannot be executed immediately are held in a command storage unit, an example of which is a buffer. When a command sent from the action decision mechanism 41 is not executable at the present time, for instance because a trick (a predetermined operation) is being performed, the command is stored in the buffer, and a newly sent command D is appended to the buffer as a list.
- Normally the oldest command is fetched from the buffer and a route search is performed for it; for example, when commands are accumulated as shown in Fig. 25A, the oldest command A is executed first. However, while the commands accumulated in the buffer are being executed in order from the oldest, it is also possible to perform list operations that insert a command or cancel a command. For example, when the command D is inserted at the head of the already stored command group, the route search for the command D can be executed in preference to the commands A, B, and C that are waiting for execution.
- The buffer can also have a plurality of command storage areas corresponding to the entire robot apparatus 1 and to each component, and the operation commands for the whole and for each component are accumulated in the corresponding areas as shown in the figure. Providing storage areas for the whole and for each component makes the following operations possible.
- Synchronization information can be used to synchronize and reproduce the movements of different components, for example the head and a foot: the commands stored in the command storage areas of the head 2 and the foot 4 are appended with information on the order in which playback is to start, and commands carrying the same order number are used as synchronization information, for example by assigning both the information that they are to be played fifth.
- The list operations make it possible to cancel a planned series of operations before or during execution, or to move a high-priority command forward in the order afterwards.
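- The sketch below shows one way such a buffer could be organized, with one queue per resource and the list operations described above; the class and method names are assumptions for illustration.

```python
from collections import deque

class CommandBuffer:
    """Per-resource command storage with list operations (insert at head, cancel)."""

    def __init__(self):
        self.areas = {part: deque() for part in ("whole", "head", "feet", "tail")}

    def push(self, part, command):            # normal case: append at the tail
        self.areas[part].append(command)

    def insert_first(self, part, command):    # list operation: execute next
        self.areas[part].appendleft(command)

    def cancel(self, part, command):          # list operation: withdraw a queued command
        try:
            self.areas[part].remove(command)
        except ValueError:
            pass                               # already executed or never queued

    def next_startable(self, busy_parts):
        """Return (part, command) pairs whose resources are currently free."""
        return [(p, q[0]) for p, q in self.areas.items() if q and p not in busy_parts]
```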
- The search for the optimal path to the target posture or motion and the determination of the posture transition plan based on the action command information S16 sent from the action determination mechanism unit 41 are realized, for example, by the posture transition mechanism section 42 including a motion path search unit 60 as shown in the figure. The motion path search unit 60 includes a command storage unit 61, a route search unit 62, and a graph storage unit 63.
- A command addressed to the whole or to an individual component, that is, a command specifying a target posture or a motion of a part, for example the head, is given to the motion path search unit 60 from the action determination mechanism unit 41. A list-operation command concerning the current command and the series of commands issued in the past, or information on the characteristics of the command itself, may be attached to the action command information S16; such information is referred to as attached information.
- The list-operation instruction concerning the current command and the series of commands issued in the past includes, for example, the instruction described above to insert a newly generated command at the beginning of the command sequence. The characteristic information of the command itself (hereinafter referred to as command characteristic information) is a parameter of the command as described above, for example the walking-direction parameter, a parameter such as "three steps" if the operation is "walk forward", or the information for synchronizing with another operation described above.
- The command storage unit 61 stores the action command information S16 sent from the action determination mechanism unit 41 as described above and is, for example, a buffer. When information is attached to the action command information S16, the command storage unit 61 carries out processing according to the content of that attached information: if the attached information includes, as list-operation information, an instruction to insert the command, the command storage unit 61 inserts the action command information S16 that has just arrived at the beginning of the waiting command sequence in accordance with that instruction. When command characteristic information is attached, the command storage unit 61 stores it together with the command.
- The command storage unit 61 keeps track of which command is currently being executed and which command is waiting at which position; to achieve this, the commands are, for example, stored in order.
- The command storage unit 61 has four storage areas corresponding, for example, to the whole 1, the head 2, the feet 4, and the tail 5. It can thereby judge, for instance, that while a command operating the head 2 is being executed a command operating the whole cannot be executed, whereas a command operating the head 2 and a command operating the feet 4 can be issued independently of each other. In other words, it can prevent the output of commands whose resources conflict between the whole and the individual components, and so resolve the so-called resource conflict.
- Commands may also be issued across different parts. For example, if commands are sent in the order whole 1, feet 4, head 2, whole 1, the command storage unit 61 stores the command for the whole 1 as the first command, the command for the feet 4 as the second command, the command for the head 2 as the third command, and the command for the whole 1 as the fourth command, and remembers this order. The command storage unit 61 first sends the first command, for the whole 1, to the route search unit 62; as soon as the content of that command has been reproduced, the second command for the feet 4 and the third command for the head 2 are sent, and when each component has finished reproducing the content of its command, the fourth command is sent to the route search unit 62.
- The route search unit 62 starts the route search described above in accordance with the command sent from the command storage unit 61. In the graph storage unit 63, a graph is stored for each of the parts into which the commands are classified in the command storage unit 61, that is, a graph corresponding to each of the whole 1, the head 2, the feet 4, and the tail 5 as shown in FIG. 22A.
- Based on the graphs stored in the graph storage unit 63, the route search unit 62 searches for the optimal route to the target posture or operation given as the content of the command, using the distance or the weight and so on as an index as described above, and makes a posture transition plan. Based on the posture transition plan obtained by the route search, the route search unit 62 then sends the posture transition information S 18 to the control mechanism unit 43 until the target posture or motion is executed.
- In this way the posture transition mechanism section 42 searches for the optimal path to the posture or motion targeted by the command, makes a posture transition plan, and outputs posture transition information S 18 to the control mechanism 43 in accordance with that plan. The control mechanism section 43 generates a control signal S5 for driving the actuators 23 based on the posture transition information S18 and sends it to the actuators 23, whereby the robot device 1 is made to perform the desired operation.
- Each time an operation performed on the basis of the posture transition information S 18 on the way to the target posture or motion ends, the control mechanism unit 43 returns an end notification (reproduction result) to the route search unit 62.
- The route search unit 62 that has received the end notification notifies the graph storage unit 63 of the end of the operation, and the graph storage unit 63 updates the information on the graph. For example, if the posture transition information S 18 caused the feet 4 to perform a transition operation from their previous posture to the sitting posture, the graph storage unit 63 that has received the end notification moves the current position of the feet 4 on the graph to the node of the sitting posture.
- The route search unit 62 then sends the next step, for example "flapping the feet", to the control mechanism section 43 as posture transition information S 18; the control mechanism section 43 generates control information for flapping the feet 4 based on that posture transition information S 18 and sends it to the actuators 23, causing the feet 4 to flap. After reproducing the desired operation, the control mechanism unit 43 sends an end notification to the route search unit 62 again.
- The route search unit 62 informs the graph storage section 63 that the action of flapping the feet 4 has been completed. In the case of a self-motion arc, however, the information on the graph does not need to be updated: since the posture of the feet after the flapping is the same sitting posture they were in before it started, the graph storage unit 63 does not change the current posture position of the feet 4.
- Finally, the route search unit 62 notifies the command storage unit 61 that the reproduction of the target posture or motion has ended. Since the target posture or operation has been completed successfully, that is, since the content of the command has been executed, the command storage unit 61 deletes the command; for example, if the target action was flapping the feet as described above, the command is deleted from the command storage area of the feet 4.
- The basic functions of the action determination mechanism 41, the motion path search unit 60, and the control mechanism section 43 are thus realized by the exchange of information among them.
- FIG. 29 shows the series of processing procedures by which the motion path search unit 60 performs a route search based on a command (action command information S 16) generated by the action determination mechanism unit 41 and the operations are performed based on the route search result.
- In step SP1, a command is given from the action determination mechanism unit 41 to the motion path search unit 60, and in the following step SP2 the command storage unit 61 of the motion path search unit 60 adds the command to the tail of the command group. If the command carries instruction information to insert it as a list operation, the command is instead inserted into the command sequence accordingly.
- In step SP3, the command storage unit 61 determines whether there is a command that can be started. For example, if the command storage unit 61 holds the command for the whole as the first command (the command currently being reproduced), the stored commands for the head as the second and third commands, the command for the feet as the fourth command, and the commands accumulated for the tail 5 as the fifth and sixth commands, then while the first command, for the whole, is being executed it is determined that there is no command that can be started.
- If there is no command that can be started, the process proceeds to step SP4; if there is, it proceeds to step SP5. Thus, if the entire robot apparatus 1 is still operating when step SP3 is reached, a command for an individual component cannot be executed and the procedure proceeds to step SP4, whereas if the operation has already been completed at that time, the process proceeds to step SP5. In step SP4 a predetermined time, for example 0.1 second, is waited, and it is then determined again in step SP3 whether there is a command that can be started.
- When the first command, for the whole, has finished, the second command for the head, the fourth command for the feet, and the fifth command for the tail are recognized in step SP3 as commands that can be started at the same time. The third command for the head is recognized as startable as soon as execution of the second command for the head is completed, and the sixth command for the tail as soon as execution of the fifth command for the tail is completed.
- When there is no command that can be started immediately in step SP3, the process proceeds to step SP4 and waits for the predetermined time to elapse, thereby waiting until the operation in progress in the whole or in a component is completed; the next command can then be executed with the right timing.
- In step SP5, based on the content of the next command sent from the command storage unit 61, the route search unit 62 determines on the graph corresponding to that command stored in the graph storage unit 63 whether a motion path from the current posture to the target posture can be found; for a head command, for example, it determines whether there is a path that can reach the target state or motion from the current posture (state) on the graph of the head. A path is found when there is a path of arcs to the target state or operation of the command, for example the plurality of directed arcs a0, a1, ... that can reach the target posture from the current posture as shown in Fig. 30, and the route is not found when no path to the indicated state or operation exists.
- If the transition path from the current posture to the target posture cannot be found, the process proceeds to step SP6; if a transition path to the target posture (or motion) is found, the process proceeds to step SP7, where the path, that is, the arcs a k (k being an integer from 0 to n), is stored as the posture transition plan information.
- In step SP6, the command storage unit 61 deletes the command from the list on the grounds that no motion path was found; deleting a command whose motion path is not found allows the subsequent commands to be retrieved. After erasing such a command, the command storage unit 61 again determines in step SP3 whether there is a command that can be started.
- In step SP8 it is determined whether the index i of the arc to be reproduced (starting from 0) is equal to or less than n. If i is less than or equal to n, the process proceeds to step SP10; if i is greater than n, the process proceeds to step SP9.
- In step SP10, the control mechanism section 43 reproduces the arc a i based on the posture transition information S 18 sent from the route search section 62; for example, as shown in Fig. 30, if the robot is in the current (initial) posture, the first directed arc a 0 is reproduced. In step SP11 the route search unit 62 receives the notification of the end of the operation, and in step SP12 the graph storage unit 63 updates the position of the posture on the graph, after which the process returns to step SP8 for the next arc.
- In step SP9, the command storage unit 61 deletes the command that has just ended from the stored commands, and the process returns to step SP3, where the command storage unit 61 again determines whether there is a command that can be started. If the target posture or motion extends over the whole and individual components, then in steps SP5 and SP6 the optimal route is searched on the corresponding graphs as described above with reference to the figures, and the corresponding processing is carried out from step SP7 onward; that is, the transition to the basic posture is made and the contents of the directed arcs up to the target posture or operation are executed.
- As described above, the commands are stored in the command storage unit 61, and because the components of the robot 1 do not compete for resources with one another, simultaneous operations based on such commands become possible. For this reason a flow of the kind described above also exists for each component, and a plurality of flows may be executed simultaneously; for example, while the flow for a command addressed to the whole is being processed, the flows for the commands addressed to the individual components go to the wait state instead of being performed.
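- A condensed sketch of steps SP1 to SP12 for a single resource at a time is given below. It reuses the illustrative `CommandBuffer` and `shortest_path` helpers from the earlier sketches; the command's `target` attribute and all other names are assumptions, and the loop is deliberately simplified to one flow.

```python
import time

def run_command_loop(buffer, graphs, positions, reproduce, wait_s=0.1):
    """Simplified single-flow reading of the SP1-SP12 procedure.

    `graphs` maps a part name to its posture graph, `positions` maps a part name
    to its current node, and `reproduce` stands in for the control mechanism (43).
    """
    while True:
        startable = buffer.next_startable(busy_parts=set())        # SP3: anything startable?
        if not startable:
            time.sleep(wait_s)                                      # SP4: wait, then check again
            continue
        part, command = startable[0]
        plan = shortest_path(graphs[part], positions[part], command.target)   # SP5: route search
        if plan is None:
            buffer.cancel(part, command)                            # SP6: drop unreachable command
            continue
        _, path = plan                                              # SP7: posture transition plan
        for src, dst in zip(path, path[1:]):                        # SP8: one arc per iteration
            reproduce(part, src, dst)                               # SP10: control mechanism acts
            positions[part] = dst                                   # SP11/SP12: end notice, graph update
        buffer.cancel(part, command)                                # SP9: remove the finished command
```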
- With the configuration described above, the emotion and instinct model section 40 of the controller 32 changes the emotion and instinct of the robot device 1 based on the supplied input information S1 to S3, and the robot device can act autonomously based on the state of its own emotions and instinct.
- In the embodiment described above, a command from a user is input via the command receiving unit 30 comprising the remote-controller receiving unit 13 and the microphone 11, but the present invention is not limited to this; for example, a computer may be connected to the robot device 1 and the user's command input via the connected computer.
- In the embodiment described above, the case has been described in which emotions and the instinct are determined using the emotion units 50A to 50F, which represent emotions such as "sadness" and "anger", and the desire units 51, which represent desires such as the "desire for exercise" and the "desire for affection". The present invention is not limited to this: an emotion unit representing the emotion of "loneliness" may be added to the emotion units, a desire unit representing a "desire for sleep" may be added to the desire units 51, and emotion units and desire units of various other kinds and combinations may be used to determine the emotions and the instinct.
- Further, in the embodiment described above the robot device 1 has both an emotion model and an instinct model, but the present invention is not limited to this; it may have only an emotion model or only an instinct model, and other models may be used to determine the behavior.
- In the embodiment described above, the next action is determined based on the command signal S1, the external information signal S2, the internal information signal S3, the emotion and instinct state information S10, and the action information S12, but the present invention is not limited to this; the next action may be determined based on only a part of the command signal S1, the external information signal S2, the internal information signal S3, the emotion and instinct state information S10, and the action information S12.
- In the embodiment described above, the case has been described in which the next action is determined using an algorithm called a finite automaton 57, but the present invention is not limited to this. The action may instead be determined using an algorithm called a stochastic finite automaton, in which a plurality of states are selected as transition destination candidates based on the state at that time and the transition destination state is then chosen at random from the selected states using random numbers.
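- A minimal sketch of such a stochastic finite automaton is given below; the states and the probabilities are illustrative only, not values taken from the patent.

```python
import random

# From each state, several candidate next states are listed with probabilities,
# and the transition is drawn at random. States and numbers are illustrative.
TRANSITIONS = {
    "idle": [("walk", 0.5), ("sit", 0.3), ("bark", 0.2)],
    "walk": [("idle", 0.6), ("run", 0.4)],
    "sit":  [("idle", 0.7), ("lie", 0.3)],
}

def next_state(state):
    candidates = TRANSITIONS.get(state, [("idle", 1.0)])
    r, acc = random.random(), 0.0
    for nxt, p in candidates:
        acc += p
        if r <= acc:
            return nxt
    return candidates[-1][0]   # safety net for floating-point rounding
```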
- In the embodiment described above, when the action command information S 16 indicates a posture to which a direct transition can be made, it is sent to the control mechanism unit 43 directly as the posture transition information S 18, whereas when it indicates a posture to which a direct transition is not possible, posture transition information S 18 that reaches the target posture after first transitioning to another transitable posture is generated and sent to the control mechanism unit 43. However, the present invention is not limited to this; when the action command information S 16 indicates a posture to which a direct transition is not possible, the action command information S 16 may instead be rejected.
- The present invention is not limited to the embodiment described above and can be applied, for example, to robot devices used in the entertainment field such as games or exhibitions, and to various other robot apparatuses. As shown in Fig. 31, it can also be applied to a character that moves by computer graphics, for example an animation using a character composed of multiple joints.
- The appearance of the robot device 1 to which the present invention is applied is not limited to the configuration shown in FIG. 1; it may have a configuration more similar to a real dog as shown in FIG. 32, or it may be a humanoid robot.
- As described above, the robot device according to the present invention is a robot device that performs an operation according to supplied input information, has a model caused by the operation, and determines the operation by changing the model based on the input information; by changing the model based on the input information and determining the operation in this way, the robot device can act autonomously based on its own state, such as its emotions and instinct.
- Likewise, the operation control method according to the present invention is an operation control method for operating in accordance with supplied input information, in which the operation is determined by changing a model caused by the operation based on the input information, so that, for example, the robot device can be made to act autonomously on the basis of its own emotions and instinct.
- Further, the robot device according to the present invention is a robot device that performs an operation in accordance with supplied input information and comprises operation determining means for determining the next operation following the current operation on the basis of the current operation, which reflects the history of the sequentially supplied input information, and the input information supplied next; by determining the next operation in this way, the robot device can act autonomously based on its own state, such as its emotions and instinct.
- Likewise, the operation control method according to the present invention is an operation control method for operating according to supplied input information, in which the next operation following the current operation is determined based on the current operation, which reflects the history of the sequentially supplied input information, and the input information supplied next, whereby the robot device can be made to act autonomously, for example, based on its own emotions and instinct.
- Further, the robot apparatus according to the present invention includes graph storage means for storing a graph in which postures and the motions that transition between those postures are registered by being connected to one another, and control means that searches on the graph, based on the action command information, for a path from the current posture to the target posture or the target motion and operates the robot based on the search result so as to transition to the target posture or the target motion; the control means can thereby determine the path from the current posture to the target posture or the target motion based on the action command information.
- Likewise, the motion control method according to the present invention searches, based on the action command information, for a path from the current posture to the target posture or the target motion on a graph in which postures and the motions that transition between those postures are registered, and makes the transition based on the search result.
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mechanical Engineering (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Manipulator (AREA)
- Toys (AREA)
- Feedback Control In General (AREA)
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00900846A EP1088629A4 (en) | 1999-01-20 | 2000-01-20 | ROBOT AND DISPLACEMENT CONTROL METHOD |
US09/646,506 US6442450B1 (en) | 1999-01-20 | 2000-01-20 | Robot device and motion control method |
KR1020007010417A KR100721694B1 (ko) | 1999-01-20 | 2000-01-20 | 로봇 장치 및 동작 제어 방법 |
JP2000594613A JP4696361B2 (ja) | 1999-01-20 | 2000-01-20 | ロボット装置及び動作制御方法 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP1229299 | 1999-01-20 | ||
JP11/12292 | 1999-01-20 | ||
JP34137499 | 1999-11-30 | ||
JP11/341374 | 1999-11-30 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/646,506 A-371-Of-International US6442450B1 (en) | 1999-01-20 | 2000-01-20 | Robot device and motion control method |
US10/196,683 Continuation US20030023348A1 (en) | 1999-01-20 | 2002-07-15 | Robot apparatus and motion control method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000043167A1 true WO2000043167A1 (fr) | 2000-07-27 |
Family
ID=26347871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2000/000263 WO2000043167A1 (fr) | 1999-01-20 | 2000-01-20 | Robot et procede de commande de deplacement |
Country Status (6)
Country | Link |
---|---|
US (4) | US6337552B1 (ja) |
EP (1) | EP1088629A4 (ja) |
JP (3) | JP4696361B2 (ja) |
KR (1) | KR100721694B1 (ja) |
CN (1) | CN1246126C (ja) |
WO (1) | WO2000043167A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002120181A (ja) * | 2000-10-11 | 2002-04-23 | Sony Corp | ロボット装置及びその制御方法 |
JP2002215673A (ja) * | 2000-11-17 | 2002-08-02 | Sony Computer Entertainment Inc | 情報処理プログラム、情報処理プログラムが記録された記録媒体、情報処理装置及び方法 |
JP2005508761A (ja) * | 2001-04-06 | 2005-04-07 | ヴァンダービルト ユニバーシティー | ロボット知能のアーキテクチャ |
JP2008246607A (ja) * | 2007-03-29 | 2008-10-16 | Honda Motor Co Ltd | ロボット、ロボットの制御方法およびロボットの制御プログラム |
WO2010103399A1 (en) | 2009-03-11 | 2010-09-16 | Toyota Jidosha Kabushiki Kaisha | Robot apparatus and control method therefor |
JP2011056624A (ja) * | 2009-09-10 | 2011-03-24 | Nara Institute Of Science & Technology | 経路計画生成装置および該方法ならびにロボット制御装置およびロボットシステム |
Families Citing this family (101)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6337552B1 (en) * | 1999-01-20 | 2002-01-08 | Sony Corporation | Robot apparatus |
US6560511B1 (en) * | 1999-04-30 | 2003-05-06 | Sony Corporation | Electronic pet system, network system, robot, and storage medium |
JP4518229B2 (ja) * | 1999-05-10 | 2010-08-04 | ソニー株式会社 | ロボット装置 |
GB2350696A (en) * | 1999-05-28 | 2000-12-06 | Notetry Ltd | Visual status indicator for a robotic machine, eg a vacuum cleaner |
US6663393B1 (en) * | 1999-07-10 | 2003-12-16 | Nabil N. Ghaly | Interactive play device and method |
US7442107B1 (en) * | 1999-11-02 | 2008-10-28 | Sega Toys Ltd. | Electronic toy, control method thereof, and storage medium |
US20020059386A1 (en) * | 2000-08-18 | 2002-05-16 | Lg Electronics Inc. | Apparatus and method for operating toys through computer communication |
US6697708B2 (en) * | 2000-10-11 | 2004-02-24 | Sony Corporation | Robot apparatus and robot apparatus motion control method |
TWI236610B (en) * | 2000-12-06 | 2005-07-21 | Sony Corp | Robotic creature device |
DE60029976T2 (de) * | 2000-12-12 | 2007-04-05 | Sony France S.A. | Selbsttätiges System mit heterogenen Körpern |
AU2002251731A1 (en) | 2001-01-04 | 2002-07-16 | Roy-G-Biv Corporation | Systems and methods for transmitting motion control data |
JP2002304188A (ja) * | 2001-04-05 | 2002-10-18 | Sony Corp | 単語列出力装置および単語列出力方法、並びにプログラムおよび記録媒体 |
JP4689107B2 (ja) * | 2001-08-22 | 2011-05-25 | 本田技研工業株式会社 | 自律行動ロボット |
US20030045947A1 (en) * | 2001-08-30 | 2003-03-06 | The Boeing Company | System, method and computer program product for controlling the operation of motion devices by directly implementing electronic simulation information |
JP3837479B2 (ja) * | 2001-09-17 | 2006-10-25 | 独立行政法人産業技術総合研究所 | 動作体の動作信号生成方法、その装置及び動作信号生成プログラム |
GB2380847A (en) * | 2001-10-10 | 2003-04-16 | Ncr Int Inc | Self-service terminal having a personality controller |
JP2003133200A (ja) * | 2001-10-19 | 2003-05-09 | Canon Inc | シミュレーション装置及びシミュレーション方法 |
US7257630B2 (en) * | 2002-01-15 | 2007-08-14 | Mcafee, Inc. | System and method for network vulnerability detection and reporting |
US7543056B2 (en) * | 2002-01-15 | 2009-06-02 | Mcafee, Inc. | System and method for network vulnerability detection and reporting |
KR101137205B1 (ko) * | 2002-03-15 | 2012-07-06 | 소니 주식회사 | 로봇의 행동 제어 시스템 및 행동 제어 방법, 및 로봇 장치 |
FR2839176A1 (fr) * | 2002-04-30 | 2003-10-31 | Koninkl Philips Electronics Nv | Systeme d'animation pour robot comprenant un ensemble de pieces mobiles |
US7118443B2 (en) | 2002-09-27 | 2006-10-10 | Mattel, Inc. | Animated multi-persona toy |
GB2393803A (en) * | 2002-10-01 | 2004-04-07 | Hewlett Packard Co | Two mode creature simulaton |
US7401057B2 (en) | 2002-12-10 | 2008-07-15 | Asset Trust, Inc. | Entity centric computer system |
US7238079B2 (en) * | 2003-01-14 | 2007-07-03 | Disney Enterprise, Inc. | Animatronic supported walking system |
US7248170B2 (en) * | 2003-01-22 | 2007-07-24 | Deome Dennis E | Interactive personal security system |
DE10302800A1 (de) * | 2003-01-24 | 2004-08-12 | Epcos Ag | Verfahren zur Herstellung eines Bauelements |
JP2004237392A (ja) * | 2003-02-05 | 2004-08-26 | Sony Corp | ロボット装置、及びロボット装置の表現方法 |
EP1593228B8 (en) * | 2003-02-14 | 2017-09-20 | McAfee, LLC | Network audit policy assurance system |
US8027349B2 (en) * | 2003-09-25 | 2011-09-27 | Roy-G-Biv Corporation | Database event driven motion systems |
EP1530138A1 (en) * | 2003-11-10 | 2005-05-11 | Robert Bosch Gmbh | Generic measurement and calibration interface for development of control software |
US20050105769A1 (en) * | 2003-11-19 | 2005-05-19 | Sloan Alan D. | Toy having image comprehension |
WO2005069890A2 (en) * | 2004-01-15 | 2005-08-04 | Mega Robot, Inc. | System and method for reconfiguring an autonomous robot |
EP1727605B1 (en) * | 2004-03-12 | 2007-09-26 | Koninklijke Philips Electronics N.V. | Electronic device and method of enabling to animate an object |
JP4456537B2 (ja) * | 2004-09-14 | 2010-04-28 | 本田技研工業株式会社 | 情報伝達装置 |
KR100595821B1 (ko) * | 2004-09-20 | 2006-07-03 | 한국과학기술원 | 로봇의 감성합성장치 및 방법 |
JP2006198017A (ja) * | 2005-01-18 | 2006-08-03 | Sega Toys:Kk | ロボット玩具 |
US8713025B2 (en) | 2005-03-31 | 2014-04-29 | Square Halt Solutions, Limited Liability Company | Complete context search system |
US20070016029A1 (en) * | 2005-07-15 | 2007-01-18 | General Electric Company | Physiology workstation with real-time fluoroscopy and ultrasound imaging |
JP4237737B2 (ja) * | 2005-08-04 | 2009-03-11 | 株式会社日本自動車部品総合研究所 | 車両搭載機器の自動制御装置、およびその装置を搭載した車両 |
EP2281667B1 (en) | 2005-09-30 | 2013-04-17 | iRobot Corporation | Companion robot for personal interaction |
WO2007050406A1 (en) * | 2005-10-21 | 2007-05-03 | Deere & Company | Networked multi-role robotic vehicle |
KR100745720B1 (ko) * | 2005-11-30 | 2007-08-03 | 한국전자통신연구원 | 다중 감정 모델을 이용한 감정 처리 장치 및 그 방법 |
KR100746300B1 (ko) * | 2005-12-28 | 2007-08-03 | 엘지전자 주식회사 | 로봇의 이동방향 결정 방법 |
KR100850352B1 (ko) * | 2006-09-26 | 2008-08-04 | 한국전자통신연구원 | 상태 정보를 이용하여 감성을 표현하기 위한 지능형 로봇의감성 표현 장치 및 그 방법 |
US20080274812A1 (en) * | 2007-05-02 | 2008-11-06 | Inventec Corporation | System of electronic pet capable of reflecting habits of user and method therefor and recording medium |
EP2014425B1 (en) * | 2007-07-13 | 2013-02-20 | Honda Research Institute Europe GmbH | Method and device for controlling a robot |
CN101127152B (zh) * | 2007-09-30 | 2012-02-01 | 山东科技大学 | 用于机器人动物控制的编码信号发生器与无线遥控装置 |
CN101406756A (zh) * | 2007-10-12 | 2009-04-15 | 鹏智科技(深圳)有限公司 | 表达情感的电子玩具及其情感表达方法、发光单元控制装置 |
KR100893758B1 (ko) * | 2007-10-16 | 2009-04-20 | 한국전자통신연구원 | 로봇의 감성 표현 제어 시스템 및 방법 |
CN101411946B (zh) * | 2007-10-19 | 2012-03-28 | 鸿富锦精密工业(深圳)有限公司 | 玩具恐龙 |
WO2009123650A1 (en) * | 2008-04-02 | 2009-10-08 | Irobot Corporation | Robotics systems |
KR101631496B1 (ko) * | 2008-06-03 | 2016-06-17 | 삼성전자주식회사 | 로봇 장치 및 그 단축 명령 등록 방법 |
CN101596368A (zh) * | 2008-06-04 | 2009-12-09 | 鸿富锦精密工业(深圳)有限公司 | 互动式玩具系统及其方法 |
US8414350B2 (en) * | 2008-08-18 | 2013-04-09 | Rehco, Llc | Figure with controlled motorized movements |
CN101653662A (zh) * | 2008-08-21 | 2010-02-24 | 鸿富锦精密工业(深圳)有限公司 | 机器人 |
CN101653660A (zh) * | 2008-08-22 | 2010-02-24 | 鸿富锦精密工业(深圳)有限公司 | 讲故事自动做动作的类生物装置及其方法 |
CN101727074B (zh) * | 2008-10-24 | 2011-12-21 | 鸿富锦精密工业(深圳)有限公司 | 具有生物时钟的类生物装置及其行为控制方法 |
US20100181943A1 (en) * | 2009-01-22 | 2010-07-22 | Phan Charlie D | Sensor-model synchronized action system |
US8539359B2 (en) | 2009-02-11 | 2013-09-17 | Jeffrey A. Rapaport | Social network driven indexing system for instantly clustering people with concurrent focus on same topic into on-topic chat rooms and/or for generating on-topic search results tailored to user preferences regarding topic |
BRPI1006383A2 (pt) * | 2009-04-17 | 2019-09-24 | Koninl Philips Electronics Nv | sistema de telecomunicação ambiente e método para operar um sistema de telecomunicação ambiente |
US8939840B2 (en) | 2009-07-29 | 2015-01-27 | Disney Enterprises, Inc. | System and method for playsets using tracked objects and corresponding virtual worlds |
KR100968944B1 (ko) * | 2009-12-14 | 2010-07-14 | (주) 아이알로봇 | 로봇 동기화 장치 및 그 방법 |
US8483873B2 (en) | 2010-07-20 | 2013-07-09 | Innvo Labs Limited | Autonomous robotic life form |
US20120042263A1 (en) | 2010-08-10 | 2012-02-16 | Seymour Rapaport | Social-topical adaptive networking (stan) system allowing for cooperative inter-coupling with external social networking systems and other content sources |
FR2969026B1 (fr) * | 2010-12-17 | 2013-02-01 | Aldebaran Robotics | Robot humanoide dote d'un gestionnaire de ses ressources physiques et virtuelles, procedes d'utilisation et de programmation |
EP2665585B1 (en) * | 2011-01-21 | 2014-10-22 | Abb Ag | System for commanding a robot |
US9431027B2 (en) * | 2011-01-26 | 2016-08-30 | Honda Motor Co., Ltd. | Synchronized gesture and speech production for humanoid robots using random numbers |
US8676937B2 (en) * | 2011-05-12 | 2014-03-18 | Jeffrey Alan Rapaport | Social-topical adaptive networking (STAN) system allowing for group based contextual transaction offers and acceptances and hot topic watchdogging |
US8996167B2 (en) | 2012-06-21 | 2015-03-31 | Rethink Robotics, Inc. | User interfaces for robot training |
JP6162736B2 (ja) * | 2015-03-19 | 2017-07-12 | ファナック株式会社 | 機械と可搬式無線操作盤との間の距離に応じて通信品質基準を変化させる機能を備えたロボット制御システム |
JP6467710B2 (ja) * | 2015-04-06 | 2019-02-13 | 信太郎 本多 | 環境に適応する、汎用性を意識した人工知能システム |
US10500716B2 (en) * | 2015-04-08 | 2019-12-10 | Beijing Evolver Robotics Co., Ltd. | Multi-functional home service robot |
KR20170052976A (ko) * | 2015-11-05 | 2017-05-15 | 삼성전자주식회사 | 모션을 수행하는 전자 장치 및 그 제어 방법 |
CN105573490A (zh) * | 2015-11-12 | 2016-05-11 | 于明 | 一种人机交互系统、穿佩设备和方法 |
US10421186B2 (en) * | 2016-01-04 | 2019-09-24 | Hangzhou Yameilijia Technology Co., Ltd. | Method and apparatus for working-place backflow of robots |
US10471611B2 (en) | 2016-01-15 | 2019-11-12 | Irobot Corporation | Autonomous monitoring robot systems |
CN107346107A (zh) * | 2016-05-04 | 2017-11-14 | 深圳光启合众科技有限公司 | 多样化运动控制方法和系统及具有该系统的机器人 |
JP6628874B2 (ja) * | 2016-05-20 | 2020-01-15 | シャープ株式会社 | 情報処理装置、ロボット、および制御プログラム |
CN109414824B (zh) * | 2016-07-08 | 2022-07-26 | Groove X 株式会社 | 穿衣服的行为自主型机器人 |
JP6571618B2 (ja) * | 2016-09-08 | 2019-09-04 | ファナック株式会社 | 人間協調型ロボット |
CN107813306B (zh) * | 2016-09-12 | 2021-10-26 | 徐州网递智能科技有限公司 | 机器人及其动作控制方法和装置 |
CN108229640B (zh) * | 2016-12-22 | 2021-08-20 | 山西翼天下智能科技有限公司 | 情绪表达的方法、装置和机器人 |
JP6729424B2 (ja) | 2017-01-30 | 2020-07-22 | 富士通株式会社 | 機器、出力装置、出力方法および出力プログラム |
JP6886334B2 (ja) * | 2017-04-19 | 2021-06-16 | パナソニック株式会社 | 相互作用装置、相互作用方法、相互作用プログラム及びロボット |
US10250532B2 (en) * | 2017-04-28 | 2019-04-02 | Microsoft Technology Licensing, Llc | Systems and methods for a personality consistent chat bot |
US10100968B1 (en) | 2017-06-12 | 2018-10-16 | Irobot Corporation | Mast systems for autonomous mobile robots |
US20190111565A1 (en) * | 2017-10-17 | 2019-04-18 | True Systems, LLC | Robot trainer |
JP7238796B2 (ja) * | 2018-01-10 | 2023-03-14 | ソニーグループ株式会社 | 動物型の自律移動体、動物型の自律移動体の動作方法、およびプログラム |
CN110297697B (zh) * | 2018-03-21 | 2022-02-18 | 北京猎户星空科技有限公司 | 机器人动作序列生成方法和装置 |
JP7139643B2 (ja) * | 2018-03-23 | 2022-09-21 | カシオ計算機株式会社 | ロボット、ロボットの制御方法及びプログラム |
CN108382488A (zh) * | 2018-04-27 | 2018-08-10 | 梧州学院 | 一种机器猫 |
JP7298860B2 (ja) * | 2018-06-25 | 2023-06-27 | Groove X株式会社 | 仮想キャラクタを想定する自律行動型ロボット |
WO2020081630A2 (en) | 2018-10-17 | 2020-04-23 | Petoi, Llc | Robotic animal puzzle |
JP7247560B2 (ja) * | 2018-12-04 | 2023-03-29 | カシオ計算機株式会社 | ロボット、ロボットの制御方法及びプログラム |
US11110595B2 (en) | 2018-12-11 | 2021-09-07 | Irobot Corporation | Mast systems for autonomous mobile robots |
KR20200077936A (ko) * | 2018-12-21 | 2020-07-01 | 삼성전자주식회사 | 사용자 상태에 기초하여 반응을 제공하는 전자 장치 및 그의 동작 방법 |
JP7437910B2 (ja) | 2019-10-29 | 2024-02-26 | 株式会社東芝 | 制御システム、制御方法、ロボットシステム、プログラム、及び記憶媒体 |
CN111846004A (zh) * | 2020-07-21 | 2020-10-30 | 李荣仲 | 一种设有重心调节机制的四足机器犬 |
US11670156B2 (en) | 2020-10-30 | 2023-06-06 | Honda Research Institute Europe Gmbh | Interactive reminder companion |
WO2024219095A1 (ja) * | 2023-04-20 | 2024-10-24 | 株式会社日立製作所 | ロボット制御装置、ロボット制御システム、ロボット、および、ロボット制御方法 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6337552B1 (en) * | 1999-01-20 | 2002-01-08 | Sony Corporation | Robot apparatus |
US3888023A (en) * | 1974-08-21 | 1975-06-10 | Jardine Ind Inc | Physical training robot |
JPS6224988A (ja) * | 1985-07-23 | 1987-02-02 | 志井田 孝 | 感情をもつロボツト |
US5182557A (en) * | 1989-09-20 | 1993-01-26 | Semborg Recrob, Corp. | Motorized joystick |
JPH0612401A (ja) * | 1992-06-26 | 1994-01-21 | Fuji Xerox Co Ltd | 感情模擬装置 |
WO1997014102A1 (en) * | 1995-10-13 | 1997-04-17 | Na Software, Inc. | Creature animation and simulation technique |
US5832189A (en) * | 1996-09-26 | 1998-11-03 | Interval Research Corporation | Affect-based robot communication methods and systems |
US5929585A (en) * | 1996-11-19 | 1999-07-27 | Sony Corporation | Robot system and its control method |
EP1053835B1 (en) * | 1997-01-31 | 2006-12-27 | Honda Giken Kogyo Kabushiki Kaisha | Leg type mobile robot control apparatus |
JPH10235019A (ja) * | 1997-02-27 | 1998-09-08 | Sony Corp | 携帯型ライフゲーム装置及びそのデータ管理装置 |
JP3273550B2 (ja) | 1997-05-29 | 2002-04-08 | オムロン株式会社 | 自動応答玩具 |
JP3655054B2 (ja) * | 1997-06-13 | 2005-06-02 | ヤンマー農機株式会社 | 田植機の苗載台構成 |
DE69943312D1 (de) * | 1998-06-09 | 2011-05-12 | Sony Corp | Manipulator und verfahren zur steuerung seiner lage |
IT1304014B1 (it) | 1998-06-29 | 2001-03-02 | Reale S R L | Stampi per la fabbricazione di bicchieri di ghiaccio e supporti persostenere tali bicchieri durante l'impiego. |
US6151571A (en) * | 1999-08-31 | 2000-11-21 | Andersen Consulting | System, method and article of manufacture for detecting emotion in voice signals through analysis of a plurality of voice signal parameters |
JP3555107B2 (ja) * | 1999-11-24 | 2004-08-18 | ソニー株式会社 | Legged mobile robot and motion control method for legged mobile robot |
- 1999
  - 1999-12-08 US US09/457,318 patent/US6337552B1/en not_active Expired - Lifetime
- 2000
  - 2000-01-20 WO PCT/JP2000/000263 patent/WO2000043167A1/ja active IP Right Grant
  - 2000-01-20 CN CNB008003718A patent/CN1246126C/zh not_active Expired - Lifetime
  - 2000-01-20 EP EP00900846A patent/EP1088629A4/en not_active Withdrawn
  - 2000-01-20 JP JP2000594613A patent/JP4696361B2/ja not_active Expired - Fee Related
  - 2000-01-20 US US09/646,506 patent/US6442450B1/en not_active Expired - Lifetime
  - 2000-01-20 KR KR1020007010417A patent/KR100721694B1/ko not_active IP Right Cessation
- 2001
  - 2001-12-14 US US10/017,532 patent/US6667593B2/en not_active Expired - Fee Related
- 2002
  - 2002-07-15 US US10/196,683 patent/US20030023348A1/en not_active Abandoned
- 2010
  - 2010-03-17 JP JP2010061642A patent/JP4985805B2/ja not_active Expired - Fee Related
  - 2010-03-17 JP JP2010061641A patent/JP2010149276A/ja active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0573143A (ja) * | 1991-05-27 | 1993-03-26 | Shinko Electric Co Ltd | Mobile robot system |
JPH07104778A (ja) * | 1993-10-07 | 1995-04-21 | Fuji Xerox Co Ltd | Emotion expression device |
JPH0876810A (ja) * | 1994-09-06 | 1996-03-22 | Nikon Corp | Reinforcement learning method and apparatus |
JPH0934698A (ja) * | 1995-07-20 | 1997-02-07 | Hitachi Ltd | Software generation method and development support method |
JPH09114514A (ja) * | 1995-10-17 | 1997-05-02 | Sony Corp | Robot control method and apparatus |
JPH1027182A (ja) * | 1996-07-11 | 1998-01-27 | Hitachi Ltd | Document retrieval and delivery method and apparatus |
JPH10289006A (ja) * | 1997-04-11 | 1998-10-27 | Yamaha Motor Co Ltd | Method for controlling a controlled object using pseudo-emotions |
EP0898237A2 (en) | 1997-08-22 | 1999-02-24 | Sony Corporation | Storage medium, robot, information processing device and electronic pet system |
Non-Patent Citations (4)
Title |
---|
FUJITA; KITANO: "Developments of an Autonomous Quadruped Robot for Robot Entertainment", AUTONOMOUS ROBOTS, vol. 5, 1998, pages 7 - 18 |
MASAHIRO FUJITA: "Reconfigurable Physical Agents", PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS, 9 May 1998 (1998-05-09), pages 54 - 61, XP002927682 *
OSUMI M. ET AL.: "Pet Robot that has Emotions", OMRON TECHNICS, 128, vol. 38, no. 4, 1998, pages 428 - 432, XP002927683 *
See also references of EP1088629A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002120181A (ja) * | 2000-10-11 | 2002-04-23 | Sony Corp | Robot apparatus and control method therefor |
JP2002215673A (ja) * | 2000-11-17 | 2002-08-02 | Sony Computer Entertainment Inc | Information processing program, recording medium on which the information processing program is recorded, and information processing apparatus and method |
JP2005508761A (ja) * | 2001-04-06 | 2005-04-07 | ヴァンダービルト ユニバーシティー | Architecture for robot intelligence |
JP2008246607A (ja) * | 2007-03-29 | 2008-10-16 | Honda Motor Co Ltd | Robot, robot control method, and robot control program |
US8260457B2 (en) | 2007-03-29 | 2012-09-04 | Honda Motor Co., Ltd. | Robot, control method of robot and control program of robot |
WO2010103399A1 (en) | 2009-03-11 | 2010-09-16 | Toyota Jidosha Kabushiki Kaisha | Robot apparatus and control method therefor |
US8818559B2 (en) | 2009-03-11 | 2014-08-26 | Toyota Jidosha Kabushiki Kaisha | Robot apparatus and control method therefor |
JP2011056624A (ja) * | 2009-09-10 | 2011-03-24 | Nara Institute Of Science & Technology | Path plan generation apparatus and method, robot control apparatus, and robot system |
Also Published As
Publication number | Publication date |
---|---|
JP4696361B2 (ja) | 2011-06-08 |
JP4985805B2 (ja) | 2012-07-25 |
KR100721694B1 (ko) | 2007-05-28 |
CN1246126C (zh) | 2006-03-22 |
US6337552B1 (en) | 2002-01-08 |
EP1088629A4 (en) | 2009-03-18 |
JP2010149276A (ja) | 2010-07-08 |
US20030023348A1 (en) | 2003-01-30 |
KR20010092244A (ko) | 2001-10-24 |
CN1297393A (zh) | 2001-05-30 |
JP2010149277A (ja) | 2010-07-08 |
US20020050802A1 (en) | 2002-05-02 |
EP1088629A1 (en) | 2001-04-04 |
US6442450B1 (en) | 2002-08-27 |
US6667593B2 (en) | 2003-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4696361B2 (ja) | Robot apparatus and motion control method | |
US7076334B2 (en) | Robot apparatus and method and system for controlling the action of the robot apparatus | |
KR20010053481A (ko) | Robot apparatus and control method thereof | |
US7117190B2 (en) | Robot apparatus, control method thereof, and method for judging character of robot apparatus | |
US6362589B1 (en) | Robot apparatus | |
US7063591B2 (en) | Edit device, edit method, and recorded medium | |
JP2003039363A (ja) | Robot apparatus, behavior learning method for a robot apparatus, behavior learning program for a robot apparatus, and program recording medium | |
JP2004283959A (ja) | Robot apparatus, motion control method therefor, and program | |
WO2002030628A1 (fr) | Robot apparatus and method for controlling the same | |
JP2001212782A (ja) | Robot apparatus and control method for a robot apparatus | |
JP2004298975A (ja) | Robot apparatus and obstacle search method | |
JP2001157982A (ja) | Robot apparatus and control method therefor | |
JP2001157979A (ja) | Robot apparatus and control method therefor | |
JP2001157981A (ja) | Robot apparatus and control method therefor | |
JP2005169567A (ja) | Content reproduction system, content reproduction method, and content reproduction apparatus | |
JP2001154707A (ja) | Robot apparatus and control method therefor | |
JP2001191279A (ja) | Behavior management system, behavior management method, and robot apparatus | |
JP2001157980A (ja) | Robot apparatus and control method therefor | |
JP2003136439A (ja) | Robot apparatus, walking control method for a robot apparatus, and walking control program for a robot apparatus | |
JP2001157983A (ja) | Robot apparatus and method for judging the character of a robot apparatus | |
JP2002120180A (ja) | Robot apparatus and control method therefor | |
JP2003159681A (ja) | Robot apparatus and control method therefor | |
JP2001157984A (ja) | Robot apparatus and motion control method for a robot apparatus | |
JP2001191282A (ja) | Robot apparatus and control method therefor | |
JP2001191280A (ja) | Robot apparatus and control method therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 00800371.8 Country of ref document: CN |
| AK | Designated states | Kind code of ref document: A1 Designated state(s): CN JP KR US |
| AL | Designated countries for regional patents | Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWE | Wipo information: entry into national phase | Ref document number: 1020007010417 Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 2000900846 Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 09646506 Country of ref document: US |
| WWP | Wipo information: published in national office | Ref document number: 2000900846 Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 1020007010417 Country of ref document: KR |
| WWG | Wipo information: grant in national office | Ref document number: 1020007010417 Country of ref document: KR |