WO2017199565A1 - Robot, robot operation method and program - Google Patents


Info

Publication number
WO2017199565A1
Authority
WO
WIPO (PCT)
Prior art keywords
general
motion data
robot
reference pose
continuous
Application number
PCT/JP2017/010467
Other languages
French (fr)
Japanese (ja)
Inventor
中村 珠幾
裕介 栗本
貴之 毛利
慎哉 佐藤
佐藤 義雄
Original Assignee
シャープ株式会社 (Sharp Corporation)
Application filed by シャープ株式会社 (Sharp Corporation)
Priority to CN201780027602.8A (published as CN109195754A)
Priority to JP2018518125A (published as JPWO2017199565A1)
Publication of WO2017199565A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H5/00 Musical or noise-producing devices for additional toy effects other than acoustical
    • A63H11/00 Self-movable toy figures
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators

Definitions

  • the present invention relates to a robot that continuously executes a plurality of operations, a robot operation method, and a program.
  • As a conventional robot that continuously executes a plurality of operations, for example, the robot disclosed in Patent Document 1 can be cited.
  • In the robot of Patent Document 1, the higher-level operation program includes middle-level operation routines that define the basic operations of the robot.
  • Each middle-level operation routine is configured as a collection of part operation modules, each of which defines the operation of one robot part.
  • Standard operation routines that define predetermined standard basic operations are prepared by combining a plurality of part operation modules. An appropriate standard operation routine is then selected, and a middle-level operation routine is created by deleting or replacing specific part operation modules and/or adding new part operation modules.
  • However, a middle-level operation routine is configured by combining predetermined part operation modules. For this reason, the robot's motion may appear unnatural at the junction between one middle-level operation routine and the next.
  • To address this, a plurality of transition states, each representing a posture and motion that bridges from a given posture and motion to a target posture and motion, are prepared.
  • The transition state to route through is then optimally selected from the plurality of transition states, and the robot transitions to the target posture and motion.
  • The present invention has been made in view of the above-described problems, and an object of the present invention is to realize a robot, a robot operation method, and a program in which no unnatural motion occurs at the junction between operations.
  • To solve the above problems, a robot according to one aspect of the present invention includes a movable unit that executes a continuous operation in which a first general-purpose operation and a second general-purpose operation are continuous, and a control unit that controls the continuous operation.
  • The first general-purpose operation starts with a reference pose, then executes a first operation, and ends with the reference pose; the second general-purpose operation starts with the reference pose, then executes a second operation, and ends with the reference pose.
  • The control unit controls the continuous operation by continuously reproducing the first general-purpose motion data for the first general-purpose operation and the second general-purpose motion data for the second general-purpose operation.
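The core idea above can be sketched in a few lines of code. This is an illustrative model only: the keyframe format, joint names, and function names are assumptions, not anything specified by the patent.

```python
# Sketch of the core idea: every general-purpose motion starts and ends
# at the same reference pose, so any sequence of them joins seamlessly.
# The keyframe format and all names here are illustrative assumptions.

REFERENCE_POSE = {"arm": 0.0, "foot": 0.0}  # joint angles (degrees)

def make_general_motion(middle_keyframes):
    """A general-purpose motion = reference pose + middle frames + reference pose."""
    return [REFERENCE_POSE] + middle_keyframes + [REFERENCE_POSE]

def concatenate(*motions):
    """Play motions back to back; the shared reference pose makes the
    junction between any two motions continuous."""
    sequence = []
    for motion in motions:
        sequence.extend(motion)
    return sequence

first = make_general_motion([{"arm": 45.0, "foot": 0.0}])
second = make_general_motion([{"arm": 0.0, "foot": 30.0}])
continuous = concatenate(first, second)

# The pose on both sides of the junction is the reference pose: no jump.
assert continuous[len(first) - 1] == continuous[len(first)] == REFERENCE_POSE
```

Because every motion is bounded by the same pose, the motions can be reproduced in any order without an unnatural transition, which is exactly the property claimed for the continuous operation.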
  • FIG. 3 is a timing chart showing an example of the operation of the robot according to Embodiment 1 of the present invention.
  • FIG. 4 (a) is a timing chart showing an example of the voice and operation of the robot according to Embodiment 2, and FIG. 4 (b) is a timing chart showing another example of the voice and operation of the robot.
  • FIG. 5 (a) is a timing chart showing an example of the voice and operation of the robot when the emotion of the robot is sadness, and FIG. 5 (b) is a timing chart showing an example of the voice and operation when the emotion of the robot is joy (Embodiment 3).
  • FIG. 6 is a timing chart showing an example of the operation of the robot according to Embodiment 4 of the present invention.
  • FIG. 7 (a) is a schematic diagram showing an example of the operation of the robot according to Embodiment 5 in an upright posture, and FIG. 7 (b) is a schematic diagram showing an example of the operation in a sitting state.
  • FIG. 1 is a block diagram showing a schematic configuration of the robot according to the present embodiment.
  • the robot in this embodiment includes a sensor 2, a control unit 10, a storage unit 20, a voice output unit 3 (speech unit), a drive unit 4, and a movable unit 5.
  • the control unit 10 includes an utterance trigger unit 11, an utterance content determination unit 12, an audio output control unit 13, a drive control unit 14 (control unit), and a start / end posture determination unit 15.
  • the storage unit 20 includes an utterance content table 21 and a motion data table 22.
  • the movable part 5 includes an arm part 6 (first part) and a foot part 7 (second part).
  • the sensor 2 senses, for example, sound, light, electrical signals, and contact from the outside of the robot 1, converts them into information, and transmits the information to the utterance trigger unit 11.
  • the utterance trigger unit 11 receives information from the sensor 2 and transmits the information to the utterance content determination unit 12.
  • the utterance content determination unit 12 receives information from the utterance trigger unit 11. The utterance content determination unit 12 selects and acquires utterance content data from the utterance content table 21 based on the information. The utterance content determination unit 12 transmits the utterance content data acquired from the utterance content table 21 to the voice output control unit 13 and the drive control unit 14.
  • the voice output control unit 13 receives the utterance content data from the utterance content determination unit 12.
  • the voice output control unit 13 causes the voice output unit 3 to output voice based on the utterance content data received from the utterance content determination unit 12.
  • the drive control unit 14 receives the utterance content data from the utterance content determination unit 12.
  • the drive control unit 14 selects and acquires a plurality of general-purpose motion data from the motion data table 22 based on the utterance content data received from the utterance content determination unit 12.
  • A general-purpose motion is a motion that, in combination with other general-purpose motions, constitutes a series of motions.
  • the drive control unit 14 may select one general-purpose motion data from the motion data table 22. Based on the utterance content data received from the utterance content determination unit 12, the drive control unit 14 determines which general-purpose motion data is to be reproduced in what order and timing.
  • the drive control unit 14 transmits the general-purpose motion data acquired from the motion data table 22 and the playback order and timing data of the general-purpose motion data to the start / end posture determination unit 15.
  • the start / end posture determination unit 15 determines the posture of the robot 1 at the start and end of continuous operation from the general-purpose motion data received from the drive control unit 14 and the order and timing data of the general-purpose motion data to be reproduced.
  • the continuous operation is an operation based on data obtained by combining a plurality of general-purpose motion data.
  • the start / end posture determination unit 15 transmits the posture data of the robot 1 at the start and end of the continuous operation to the drive control unit 14.
  • the drive control unit 14 supplies the drive unit 4 with general-purpose motion data, the order and timing of playback of the general-purpose motion data, and the attitude data of the robot 1 at the start and end of continuous operation.
  • the drive control unit 14 selects and acquires a plurality of general-purpose motion data for the utterance content data.
  • the drive control unit 14 transmits the plurality of general-purpose motion data, the order and timing of reproducing the plurality of general-purpose motion data, and the posture data of the robot 1 at the start and end of the continuous operation to the drive unit 4.
  • Based on the plurality of general-purpose motion data received from the drive control unit 14, the order and timing of their reproduction, and the posture data of the robot 1 at the start and end of the continuous operation, the drive unit 4 operates the arm part 6 and the foot part 7 of the movable part 5.
  • the movable part 5 may be provided with things other than the arm part 6 and the foot part 7.
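The flow from sensor to drive unit described above can be sketched as a small pipeline. The table contents and function names below are illustrative assumptions; the unit numbers in the comments refer to the configuration of FIG. 1.

```python
# Illustrative sketch of the control flow: sensor 2 -> utterance trigger
# unit 11 -> utterance content determination unit 12 (table 21) -> voice
# output control unit 13 / drive control unit 14 (table 22) -> drive unit 4.
# All table contents and names are assumptions, not the patent's data.

UTTERANCE_TABLE = {"greeting": ("Hello!", 1.5)}         # text, utterance time (s)
MOTION_TABLE = {"greeting": ["wave_arms", "nod_head"]}  # general-purpose motions

def sense(event):
    # sensor 2: convert an external stimulus into information
    return {"kind": event}

def determine_utterance(info):
    # utterance content determination unit 12: look up utterance content table 21
    return info["kind"], UTTERANCE_TABLE[info["kind"]]

def control(event):
    kind, (text, duration) = determine_utterance(sense(event))
    motions = MOTION_TABLE[kind]  # drive control unit 14 + motion data table 22
    # the voice output and the motion playback are issued together
    return {"speak": text, "play": motions, "duration": duration}

command = control("greeting")
assert command["play"] == ["wave_arms", "nod_head"]
```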
  • the motion data table 22 describes a plurality of general-purpose motion data candidates that start with a reference pose and end with the same reference pose as the start pose.
  • the reference pose is a state in which the movable part 5 is driven to a predetermined position.
  • general motion data candidates 1 to 10 are stored in the motion data table 22.
  • The general-purpose motion data candidates 1 to 10 have different operation contents and different playback times. That is, general-purpose motion data candidates may be prepared for each playback time: for example, one candidate with a playback time of 1 second and another with a playback time of 1.5 seconds.
  • the motion data table 22 may store general-purpose motion data for each type of operation.
  • For example, a table is prepared describing general-purpose motion data candidates 1 to 10, which represent general-purpose motions that execute a predetermined motion from the upright posture and return to the upright posture.
  • In addition, a table is prepared describing general-purpose motion data candidates 11 to 20, which represent general-purpose motions from the standing posture to a posture in which the head is bowed.
  • Further, a table is prepared describing general-purpose motion data candidates 21 to 30, which represent general-purpose motions that execute a predetermined motion from the head-raised posture and return to the head-raised posture.
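One possible in-memory layout for the motion data table 22 is sketched below. The patent does not specify a schema, so the keys, pose names, and playback times are assumptions; the point is that each per-pose-type table holds candidates bounded by the stated start and end poses.

```python
# A sketch of motion data table 22: one sub-table per (start pose, end pose)
# pair, each holding candidates with varying playback times. All values
# are illustrative assumptions.

MOTION_DATA_TABLE = {
    # candidates 1-10: upright -> motion -> upright
    ("upright", "upright"): {1: {"time": 1.0}, 2: {"time": 1.5}},
    # candidates 11-20: standing posture -> head bowed
    ("upright", "head_bowed"): {11: {"time": 1.0}},
    # candidates 21-30: head raised -> motion -> head raised
    ("head_raised", "head_raised"): {21: {"time": 2.0}},
}

def candidates(start_pose, end_pose):
    """Candidates whose motion starts and ends at the given poses."""
    return MOTION_DATA_TABLE.get((start_pose, end_pose), {})

assert candidates("upright", "upright")[2]["time"] == 1.5
```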
  • FIG. 2 is a flowchart showing an example of the flow of the operation of the robot 1 shown in FIG. 1 (the operation method of the robot).
  • the utterance trigger unit 11 receives information from the sensor 2 (step S1).
  • The utterance content determination unit 12 acquires the utterance content data and the length of the utterance time from the utterance content table 21, based on the information the utterance trigger unit 11 received from the sensor 2, and transmits them to the voice output control unit 13 and the drive control unit 14 (step S2).
  • the drive control unit 14 determines whether there is general-purpose motion data matching the utterance content in the motion data table 22 (step S3). Specifically, the drive control unit 14 refers to the motion data table 22 based on the utterance content data received from the utterance content determination unit 12, and determines whether there is general-purpose motion data that matches the utterance content.
  • If so, a plurality of general-purpose motion data corresponding to the utterance time are selected from the motion table matching the utterance content in the motion data table 22 (step S4).
  • the motion data table 22 has a motion data table for each utterance content, and the drive control unit 14 selects a plurality of general-purpose motion data from the motion data table for each utterance content.
  • For example, motion data tables such as a sadness motion data table and a joy motion data table may be prepared.
  • the drive control unit 14 selects a plurality of general-purpose motion data for the utterance time from the motion data table 22 (step S5). Specifically, the drive control unit 14 selects a plurality of general-purpose motion data from the motion data table 22 so as to match the speech time.
  • In step S6, playback of the general-purpose motion data is started in order at the same time the utterance is started.
  • Specifically, the audio output control unit 13 issues an audio output instruction to the audio output unit 3, and the drive control unit 14 issues an instruction to the drive unit 4 to drive the movable unit 5.
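Steps S5 and S6 above amount to choosing motions whose total playback time fills the utterance time, then starting speech and motion together. The greedy strategy and the tolerance value in this sketch are assumptions; the patent only requires that the selected motions match the utterance time.

```python
# Sketch of step S5: fill the utterance time with general-purpose motions.
# Greedy selection and the 0.25 s tolerance are illustrative assumptions.

def select_motions(candidates, utterance_time, tolerance=0.25):
    """candidates: {id: playback_time}. Repeat picks (reuse allowed)
    until the total playback time is within tolerance of the target."""
    chosen, total = [], 0.0
    while utterance_time - total > tolerance:
        # choose the longest candidate that still fits the remaining time
        fitting = [(t, cid) for cid, t in candidates.items()
                   if total + t <= utterance_time + tolerance]
        if not fitting:
            break
        t, cid = max(fitting)
        chosen.append(cid)
        total += t
    return chosen, total

# e.g. a 4-second utterance, candidates of 1.0 s and 1.5 s
chosen, total = select_motions({"a": 1.0, "b": 1.5}, utterance_time=4.0)
assert abs(total - 4.0) <= 0.25
```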
  • FIG. 3 is a timing chart showing an example of the operation of the robot 1 according to the first embodiment of the present invention.
  • the robot 1 in the present embodiment performs, for example, a continuous operation 30 that combines a first general-purpose operation 30A and a second general-purpose operation 30B.
  • In FIG. 3, time proceeds to the right.
  • The first general-purpose operation 30A is an operation based on first general-purpose motion data selected by the drive control unit 14 from the motion data table 22.
  • the second general-purpose operation 30B is an operation based on second general-purpose motion data selected by the drive control unit 14 from the motion data table 22.
  • the first general-purpose operation 30A starts with the reference pose 32, then executes the first operation 31A, and ends with the reference pose 32.
  • The second general-purpose operation 30B starts at the reference pose 32, then executes the second operation 31B, and ends at the reference pose 32.
  • The reference pose 32 thus joins the first general-purpose operation 30A and the second general-purpose operation 30B at their junction. Therefore, the motion of the robot 1 can be prevented from appearing unnatural.
  • Here, the difference between the end pose 33 of the first operation 31A and the reference pose 32 is smaller than the difference between the end pose 33 of the first operation 31A and the start pose 34 of the second operation 31B.
  • That is, the difference between successive poses is smaller when the reference pose 32 is interposed between the first operation 31A and the second operation 31B than when the first operation 31A and the second operation 31B are joined directly. Therefore, the motion of the robot 1 becomes natural, and the robot 1 can be prevented from appearing unnatural.
  • the continuous operation 30 may be formed by combining a plurality of first general-purpose operations 30A and second general-purpose operations 30B. For example, only a plurality of first general-purpose operations 30A may be combined, only a plurality of second general-purpose operations 30B may be combined, or first general-purpose operations 30A and second general-purpose operations 30B may be combined alternately. Further, the first general-purpose operation 30A and the second general-purpose operation 30B may be combined apart.
  • The movable part 5 performs the continuous operation 30 in which the first general-purpose operation 30A and the second general-purpose operation 30B continue as mentioned above (movable step).
  • The drive control unit 14 controls the continuous operation 30 in which the first general-purpose operation 30A and the second general-purpose operation 30B are combined (control step). Specifically, the drive control unit 14 continuously reproduces a combination of the first general-purpose motion data for the first general-purpose operation 30A and the second general-purpose motion data for the second general-purpose operation 30B, thereby controlling the continuous operation 30.
  • the first general motion data is data for executing the first general operation 30A
  • the second general motion data is data for executing the second general operation 30B.
  • the drive control unit 14 selects the first general-purpose motion data and the second general-purpose motion data from a plurality of general-purpose motion data candidates in the motion data table 22 according to the content of the sound output from the sound output unit 3. Combine.
  • the robot 1 includes the movable unit 5 that executes the continuous operation 30 in which the first general operation 30A and the second general operation 30B are continuous, and the control unit 10 that controls the continuous operation 30.
  • The first general-purpose operation 30A starts at the reference pose 32, then executes the first operation 31A, and ends at the reference pose 32.
  • The second general-purpose operation 30B starts at the reference pose 32, then executes the second operation 31B, and ends at the reference pose 32.
  • The reference pose 32 is a pose such that the difference between it and the end pose 33 of the first operation 31A is smaller than the difference between the end pose 33 of the first operation 31A and the start pose 34 of the second operation 31B.
  • the control unit 10 controls the continuous operation 30 by continuously reproducing the first general-purpose motion data for the first general-purpose operation 30A and the second general-purpose motion data for the second general-purpose operation 30B.
  • Therefore, the junction between the first general-purpose operation 30A and the second general-purpose operation 30B can be connected without unnatural motion regardless of the order in which the first general-purpose motion data and the second general-purpose motion data are reproduced.
  • The robot 1 further includes an audio output unit 3 that outputs audio, and a motion data table 22 in which a plurality of general-purpose motion data candidates that start at the reference pose 32 and end at the reference pose 32 are described. The control unit 10 selects and combines the first general-purpose motion data and the second general-purpose motion data from the plurality of general-purpose motion data candidates in the motion data table 22, according to the content of the sound output from the audio output unit 3.
  • The robot 1 according to Embodiment 2 operates by combining a plurality of first general-purpose operations 41 according to the voice 40, as shown in FIG. 4 (a), or by combining the first general-purpose operation 42 and the second general-purpose operation 43 according to the voice 40, as shown in FIG. 4 (b).
  • FIG. 4 (a) is a timing chart showing an example of the voice and operation of the robot according to Embodiment 2 of the present invention, and FIG. 4 (b) is a timing chart showing another example of the voice and operation of the robot. In FIG. 4, time proceeds to the right.
  • The robot 1 combines a plurality of the same operation (here, the first general-purpose operation 41) according to the voice 40, as shown in FIG. 4 (a).
  • At this time, the timing at which the output of the voice 40 ends might not coincide with the timing at which the continuous operation formed by the combination of first general-purpose operations 41 ends.
  • Therefore, the drive control unit 14 selects the plurality of first general-purpose operations 41 so as to match the period in which the voice 40 is output. Thereby, the timing at which the output of the voice 40 ends can coincide with the timing at which the continuous operation formed by the combination of first general-purpose operations 41 ends.
  • Alternatively, the robot 1 combines the first general-purpose operation 42 and the second general-purpose operation 43 according to the voice 40, as shown in FIG. 4 (b). At this time, the timing at which the output of the voice 40 ends might not coincide with the timing at which the continuous operation of the first general-purpose operation 42 and the second general-purpose operation 43 ends.
  • Therefore, the drive control unit 14 selects the first general-purpose operation 42 and the second general-purpose operation 43 so as to match the period during which the voice 40 is output. Thereby, the timing at which the output of the voice 40 ends can coincide with the timing at which the continuous operation of the first general-purpose operation 42 and the second general-purpose operation 43 ends.
  • the movable unit 5 performs a continuous operation according to the period during which the audio 40 is output by the audio output unit 3.
  • As described above, the robot 1 includes the audio output unit 3 that outputs the voice 40, and the movable unit 5 executes the continuous operation according to the period in which the voice 40 is output by the audio output unit 3.
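The timing match of Embodiment 2 can be illustrated with a one-line calculation: repeat a single general-purpose operation enough times to span the voice output period. The function name and the example durations are assumptions.

```python
# Sketch of Embodiment 2's timing match: repeat one general-purpose
# operation so that the combined playback spans the audio period.
# Name and durations are illustrative; real playback would schedule
# actuator commands alongside the audio stream.

def repetitions_for(audio_seconds, motion_seconds):
    """Number of whole repetitions that best fits the audio period."""
    return max(1, round(audio_seconds / motion_seconds))

# a 6-second utterance with a 1.5-second general-purpose operation
assert repetitions_for(6.0, 1.5) == 4
```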
  • In the robot 1 according to Embodiment 3, the playback time from the start to the end of the first and second general-purpose motion data differs according to the type of emotion represented by the content of the sound output from the audio output unit 3.
  • FIG. 5A shows an example of the voice and operation of the robot 1 when the emotion represented by the voice 40A output from the voice output unit 3 of the robot 1 according to the third embodiment of the present invention is sadness. It is a timing chart which shows.
  • FIG. 5 (b) is a timing chart showing an example of the voice and operation of the robot 1 when the emotion represented by the voice 40B output from the voice output unit 3 is joy. In FIG. 5, time proceeds to the right.
  • In FIG. 5 (a), a plurality of first general-purpose operations 50 are combined according to the voice 40A. In FIG. 5 (b), a plurality of first general-purpose operations 51 are combined according to the voice 40B.
  • the first general-purpose operation 50 and the first general-purpose operation 51 are operations having the same contents, but the operation times are different. Further, it is assumed that the audio 40A and the audio 40B are audio having the same reproduction time.
  • the contents of the voice 40A are contents representing sad feelings, and the contents of the voice 40B are contents representing emotions of joy.
  • The first general-purpose operation 50, whose operation time is longer than that of the first general-purpose operation 51, is repeated three times for the voice 40A.
  • When the emotion of the robot 1 is joy, the first general-purpose operation 51, whose operation time is shorter than that of the first general-purpose operation 50, is repeated seven times for the voice 40B. Therefore, when the emotion of the robot 1 is sadness, the number of repetitions is smaller than when the emotion is joy, so the operation appears slower; conversely, when the emotion is joy, the number of repetitions is larger than when the emotion is sadness, so the operation appears faster.
  • the slow operation speed by the first general-purpose operation 50 is considered to be more suitable for the expression of sadness than the fast operation speed by the first general-purpose operation 51.
  • the fast operation speed by the first general-purpose operation 51 is considered to be more suitable for expressing joy than the slow operation speed by the first general-purpose operation 50.
  • Alternatively, multiple general-purpose motion data with the same playback time but different motion magnitudes may be prepared and switched according to the situation. For example, depending on whether the emotion of the robot 1 is sadness or joy, that is, depending on the type of emotion represented by the content of the voice, motion data with large or small movements is selected.
  • Specifically, for example, when the emotion of the robot 1 is sadness, the drive control unit 14 may select any of the general-purpose motion data candidates 11 to 20, in which the motion of the robot 1 is small, from the sadness motion data table as shown in Table 3. When the emotion of the robot 1 is joy, the drive control unit 14 may select any of the general-purpose motion data candidates 1 to 10, in which the motion of the robot 1 is large, from the joy motion data table as shown in Table 3.
  • the robot 1 according to the present embodiment has different playback times from the start to the end of the first and second general-purpose motion data according to the type of emotion represented by the content of the voice.
  • the robot can be operated according to the type of emotion expressed by the voice.
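The emotion-dependent selection of Embodiment 3 can be sketched as a small lookup. The patent fixes only the repetition counts (3 slow repetitions for sadness, 7 fast ones for joy, over voices of equal length); the table contents, the 7-second utterance length, and all names below are assumptions.

```python
# Sketch of Embodiment 3: the emotion carried by the utterance selects
# slower/smaller or faster/larger general-purpose motions. Motion times
# are chosen so a 7-second voice yields 3 repetitions (sadness) or 7 (joy),
# mirroring the example above; everything else is an assumption.

EMOTION_TABLES = {
    "sadness": {"motion_time": 7.0 / 3, "magnitude": "small"},
    "joy":     {"motion_time": 1.0,     "magnitude": "large"},
}

def plan(emotion, audio_seconds):
    """Repetition count and motion magnitude for an utterance."""
    entry = EMOTION_TABLES[emotion]
    reps = round(audio_seconds / entry["motion_time"])
    return reps, entry["magnitude"]

assert plan("sadness", 7.0) == (3, "small")
assert plan("joy", 7.0) == (7, "large")
```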
  • the continuous motion 60 includes a first general motion 60A, a second general motion 60B, a third general motion 60C, and a fourth general motion 60D.
  • FIG. 6 is a timing chart showing an example of the operation of the robot 1 according to the fourth embodiment of the present invention.
  • the first general-purpose operation 60A starts with the reference pose 64, then executes the first operation 61A, and ends with the reference pose 64.
  • the second general operation 60B starts at the reference pose 64, then executes the second operation 61B, and ends at the reference pose 64.
  • the third general-purpose operation 60C starts at the reference pose 64, then executes the third operation 61C, and ends at the reference pose 65.
  • the fourth general-purpose operation 60D starts with the reference pose 65, then executes the fourth operation 61D, and ends with the reference pose 66.
  • Since the first general-purpose motion data for the first general-purpose operation 60A, the second general-purpose motion data for the second general-purpose operation 60B, the third general-purpose motion data for the third general-purpose operation 60C, and the fourth general-purpose motion data for the fourth general-purpose operation 60D can be continuously reproduced, the first general-purpose operation 60A and the second general-purpose operation 60B can be connected seamlessly at the reference pose 64.
  • Likewise, the second general-purpose operation 60B and the third general-purpose operation 60C can be connected seamlessly at the reference pose 64, and the third general-purpose operation 60C and the fourth general-purpose operation 60D can be connected seamlessly at the reference pose 65.
  • the continuous operation includes the third general-purpose operation 60C that is continuous with the second general-purpose operation 60B, and the fourth general-purpose operation 60D that is continuous with the third general-purpose operation 60C.
  • the third general-purpose operation 60C starts at the reference pose 64 and then ends at the reference pose 65, and the fourth general-purpose operation 60D starts at the reference pose 65.
  • The control unit 10 continuously reproduces, in this order, the second general-purpose motion data for the second general-purpose operation 60B, the third general-purpose motion data for the third general-purpose operation 60C, and the fourth general-purpose motion data for the fourth general-purpose operation 60D.
  • Thereby, the second general-purpose operation 60B and the third general-purpose operation 60C can be connected seamlessly at the reference pose 64, and the third general-purpose operation 60C and the fourth general-purpose operation 60D can be connected seamlessly at the reference pose 65.
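Embodiment 4 generalizes the single-reference-pose rule: a chain stays seamless as long as each motion's end pose equals the next motion's start pose. The check below is a sketch; the pose names are illustrative labels for the reference poses 64, 65, and 66.

```python
# Sketch of Embodiment 4: motions may end at a different reference pose
# than they started from; a chain is seamless iff each motion's end pose
# equals the next motion's start pose. Pose labels are illustrative.

motions = [
    {"name": "60A", "start": "pose64", "end": "pose64"},
    {"name": "60B", "start": "pose64", "end": "pose64"},
    {"name": "60C", "start": "pose64", "end": "pose65"},
    {"name": "60D", "start": "pose65", "end": "pose66"},
]

def is_seamless(chain):
    """Every adjacent pair must meet at a shared reference pose."""
    return all(a["end"] == b["start"] for a, b in zip(chain, chain[1:]))

assert is_seamless(motions)
# Skipping 60C breaks the chain: 60B ends at pose64 but 60D starts at pose65.
assert not is_seamless(motions[:2] + [motions[3]])
```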
  • FIG. 7 (a) is a schematic diagram showing an example of the operation of the robot 1 according to Embodiment 5 of the present invention in an upright posture, and FIG. 7 (b) is a schematic diagram showing an example of the operation with the robot sitting.
  • the robot 1 operates only the arm portion 6 in an upright posture.
  • the robot 1 operates only the arm 6 while sitting.
  • In a posture 70 with the foot portion 7 upright, the robot 1 first lowers both the left and right arm portions 6.
  • Then, the robot 1 raises both arm portions 6 while keeping the foot portion 7 upright.
  • Next, the robot 1 lowers one of the two arm portions 6 while keeping the foot portion 7 upright.
  • Thereafter, the robot 1 returns to the posture 70 by lowering the other arm portion 6 while keeping the foot portion 7 upright.
  • Similarly, in a posture 74 with the foot portion 7 sitting, the robot 1 first lowers both the left and right arm portions 6. Then, in a posture 75, the robot 1 raises both arm portions 6 while keeping the foot portion 7 sitting. Next, in a posture 76, the robot 1 lowers one of the two arm portions 6 while keeping the foot portion 7 sitting. Thereafter, the robot 1 returns to the posture 74 by lowering the other arm portion 6 while keeping the foot portion 7 sitting.
  • When FIG. 7 (a) and FIG. 7 (b) are compared, in both cases the operating part is only the arm part 6, and the foot part 7 is stationary.
  • The operation of the arm part 6 is the same in both cases. Therefore, in both cases, the drive control unit 14 may select the same general-purpose motion data for the arm part 6. It is thus unnecessary to prepare separate general-purpose motion data for the upright posture and the sitting state; only one set of general-purpose motion data for operating the arm part 6 needs to be prepared.
  • For example, one general-purpose motion data may be selected from the general-purpose motion data candidates 1 to 10 described in a motion data table for operations in which only the arm part 6 is driven.
  • The first general-purpose motion data can be configured so that the actuators of the foot portion 7 are not moved while the first general-purpose operation starts at the reference pose, executes the first operation, and ends at the reference pose.
  • That is, the movable unit 5 includes the foot portion 7 (first part) attached to the support portion and the arm portion 6 (second part) attached to the support portion, and the first operation involves not the foot portion (first part) but the arm portion (second part): in the first operation, the arm portion is driven without driving the foot portion. The first general-purpose motion data is configured in this way.
  • Therefore, the same first general-purpose motion data representing the movement of the arm portion 6 can be reproduced while standing upright when the posture for the first operation is "upright", and while sitting when the posture for the first operation is "sitting".
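The part-based reuse of Embodiment 5 can be sketched as follows. The command format is an assumption; the point is that arm-only motion data leaves the foot actuators at their current state, so one data set serves both postures.

```python
# Sketch of Embodiment 5: general-purpose motion data that drives only the
# arm actuators and never touches the foot actuators works unchanged
# whether the robot is upright or sitting. The frame format is assumed.

def apply_arm_motion(current_pose, arm_frames):
    """Play arm-only frames; the foot keeps whatever state it already has."""
    poses = []
    for arm_angle in arm_frames:
        poses.append({"arm": arm_angle, "foot": current_pose["foot"]})
    return poses

wave = [45.0, 0.0, 45.0, 0.0]  # one set of arm motion data
upright = apply_arm_motion({"arm": 0.0, "foot": "standing"}, wave)
sitting = apply_arm_motion({"arm": 0.0, "foot": "sitting"}, wave)

# The same arm data reproduces the same arm movement in both postures,
# while the foot stays in its current state throughout.
assert [p["arm"] for p in upright] == [p["arm"] for p in sitting] == wave
assert all(p["foot"] == "standing" for p in upright)
assert all(p["foot"] == "sitting" for p in sitting)
```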
  • As described above, the robot 1 according to aspect 1 of the present invention includes the movable unit 5 that executes the continuous operations 30 and 60 in which the first general-purpose operations 30A, 41, 42, 50, 51, 60A and the second general-purpose operations 30B, 43, 60B are continuous, and a control unit (drive control unit 14) that controls the continuous operations 30 and 60. The first general-purpose operations 30A, 41, 42, 50, 51, 60A start at the reference poses 32 and 64, then execute the first operations 31A and 61A, and end at the reference poses 32 and 64; the second general-purpose operations 30B, 43, 60B start at the reference poses 32 and 64, then execute the second operations 31B and 61B, and end at the reference poses 32 and 64.
  • The control unit (drive control unit 14) controls the continuous operations 30 and 60 by continuously reproducing a combination of the first general-purpose motion data for the first general-purpose operations 30A, 41, 42, 50, 51, 60A and the second general-purpose motion data for the second general-purpose operations 30B, 43, 60B.
  • According to the above configuration, the junction between the first general-purpose operation and the second general-purpose operation can be connected without unnatural motion regardless of the order in which the first general-purpose motion data and the second general-purpose motion data are reproduced.
  • The robot 1 according to aspect 2 of the present invention, in aspect 1 described above, further includes the utterance unit (voice output unit 3) that outputs voice and the motion data table 22 in which a plurality of general-purpose motion data candidates that start at the reference pose 32 and end at the reference pose 32 are described. The control unit (drive control unit 14) may select and combine the first general-purpose motion data and the second general-purpose motion data from the plurality of general-purpose motion data candidates in the motion data table 22, according to the content of the voice output by the utterance unit (voice output unit 3).
  • the robot 1 according to aspect 3 of the present invention, in the above aspect 1 or 2, further includes an utterance unit (voice output unit 3) that outputs the voice 40, and the movable unit 5 may execute the continuous operation 30 in accordance with the period during which the voice 40 is output by the utterance unit (voice output unit 3).
  • in the robot 1 according to aspect 4 of the present invention, in the above aspect 2 or 3, the playback time from the start to the end of the first and second general-purpose motion data may differ according to the type of emotion represented by the content of the voices 40A, 40B.
  • when a voice 40A representing an emotion of sadness is output, the first general-purpose operation 50 is repeated only a few times, generating a continuous operation with little movement; when a voice 40B representing an emotion of joy is output, the first general-purpose operation 51 is repeated many times, generating a continuous operation with much movement, so that the robot can perform an operation according to the type of emotion represented by the voice.
  • in the robot 1 according to aspect 5 of the present invention, in any one of the above aspects 1 to 4, the continuous operation 60 further includes a third general-purpose operation 60C that is continuous with the second general-purpose operation 60B and a fourth general-purpose operation 60D that is continuous with the third general-purpose operation 60C; the third general-purpose operation 60C starts at the reference pose 64 and then ends at another reference pose 65, the fourth general-purpose operation 60D starts at the other reference pose 65, and the control unit may reproduce the second general-purpose motion data, the third general-purpose motion data for the third general-purpose operation 60C, and the fourth general-purpose motion data for the fourth general-purpose operation 60D in combination.
  • in the robot 1 according to aspect 6 of the present invention, in any one of the above aspects 1 to 5, the movable part 5 includes a first portion (foot portion 7) attached to a support portion and a second portion (arm portion 6) that is attached to the support portion and is different from the first portion; the first general-purpose motion data may be configured so that in the first operations 31A, 61A the first portion (foot portion 7) is not driven and the second portion (arm portion 6) is driven.
  • the first general-purpose motion data can be configured so that the actuators of the first portion (foot portion 7) are not moved when the first general-purpose operation starts at the reference pose, executes the first operations 31A, 61A, and ends at the reference pose; therefore, the same first general-purpose motion data representing the first operations 31A, 61A of the second portion (arm portion 6) can be reproduced while standing upright when the posture of the first operations 31A, 61A is "upright", and while sitting when the posture of the first operations 31A, 61A is "sitting".
  • the robot operating method according to an aspect of the present invention includes a movable step of executing continuous operations 30, 60 in which first general-purpose operations 30A, 41, 42, 50, 51, 60A and second general-purpose operations 30B, 43, 60B are continuous, and a control step of controlling the continuous operations 30, 60; the first general-purpose operations 30A, 41, 42, 50, 51, 60A start at the reference poses 32, 64, then the first operations 31A, 61A are executed and end at the reference poses 32, 64, and the second general-purpose operations 30B, 43, 60B start at the reference poses 32, 64, then the second operations 31B, 61B are executed and end at the reference poses 32, 64; the control step controls the continuous operations 30, 60 by combining and continuously reproducing the first general-purpose motion data for the first general-purpose operations 30A, 41, 42, 50, 51, 60A and the second general-purpose motion data for the second general-purpose operations 30B, 43, 60B.
  • the robot according to each aspect of the present invention may be realized by a computer; in that case, a control program that causes the computer to realize the robot by operating as each unit of the robot, and a computer-readable recording medium on which it is recorded, also fall within the scope of the present invention.

Abstract

In this robot (1), a first general-purpose operation (30A) starts in a reference pose (32) and ends in the reference pose (32), and a second general-purpose operation (30B) starts in the reference pose (32) and ends in the reference pose (32); a control unit (10) controls a continuous operation (30) by combining and continuously reproducing the first general-purpose motion data for the first general-purpose operation (30A) and the second general-purpose motion data for the second general-purpose operation (30B).

Description

Robot, robot operation method, and program
The present invention relates to a robot that continuously executes a plurality of operations, a robot operation method, and a program.
As a conventional robot that continuously executes a plurality of operations, for example, the robot disclosed in Patent Document 1 can be cited.
In the robot disclosed in Patent Document 1, a higher-level operation program includes middle-level operation routines that define the basic operations of the robot. A middle-level operation routine is configured as a collection of part operation modules, each defining the operation of one robot part. In addition, standard operation routines that define predetermined standard basic operations are prepared by combining a plurality of part operation modules. A middle-level operation routine is then created by selecting an appropriate standard operation routine and performing at least one of deleting a specific part operation module, replacing one, or adding a new one.
This makes it possible to provide a robot in which middle-level operation routines can easily be created or rewritten to accommodate the addition of new basic operations and partial changes to existing basic operations.
Japanese Published Patent Application JP 2000-153479 A (published June 6, 2000); Japanese Patent No. 4696361 (registered March 11, 2011)
However, in the robot system disclosed in Patent Document 1, a middle-level operation routine is configured by combining predetermined part operation modules. For this reason, there is a problem that the robot's movement may seem unnatural at the joint between one middle-level operation routine and the following one.
In the robot disclosed in Patent Document 2, a plurality of transition states are prepared, each representing a posture and motion through which the robot passes between a given posture and motion and a target posture and motion. Based on an emotion model that expresses emotion in the robot's motion and an instinct model that expresses instinct in the robot's motion, the transition state to pass through is optimally selected from the plurality of transition states, and the robot transitions to the target posture and motion. This provides a robot that autonomously performs natural actions based on emotion and instinct.
However, in the robot disclosed in Patent Document 2, neither the posture difference between the transition state and the transition-source state nor the posture difference between the transition state and the transition-destination state is taken into account. For this reason, if the posture difference is large, the robot's movement seems unnatural at the joint.
The present invention has been made in view of the above problems, and an object thereof is to realize a robot, a robot operation method, and a program in which no sense of incongruity arises at the joints between operations.
In order to solve the above problems, a robot according to an aspect of the present invention includes a movable part that executes a continuous operation in which a first general-purpose operation and a second general-purpose operation are continuous, and a control unit that controls the continuous operation. The first general-purpose operation starts at a reference pose, then executes a first operation and ends at the reference pose, and the second general-purpose operation starts at the reference pose, then executes a second operation and ends at the reference pose. The control unit controls the continuous operation by combining and continuously reproducing first general-purpose motion data for the first general-purpose operation and second general-purpose motion data for the second general-purpose operation.
According to one aspect of the present invention, it is possible to realize a robot, a robot operation method, and a program in which no sense of incongruity arises at the joints between operations.
FIG. 1 is a block diagram showing the schematic configuration of a robot according to an embodiment of the present invention. FIG. 2 is a flowchart showing an example of the flow of the operation of the robot shown in FIG. 1. FIG. 3 is a timing chart showing an example of the operation of the robot according to Embodiment 1 of the present invention. FIG. 4(a) is a timing chart showing an example of the voice and operation of a robot according to Embodiment 2 of the present invention, and FIG. 4(b) is a timing chart showing another example of the voice and operation of the robot. FIG. 5(a) is a timing chart showing an example of the voice and operation of a robot according to Embodiment 3 of the present invention when the robot's emotion is sadness, and FIG. 5(b) is a timing chart showing an example when the robot's emotion is joy. FIG. 6 is a timing chart showing an example of the operation of a robot according to Embodiment 4 of the present invention. FIG. 7(a) is a schematic diagram showing an example of the operation of a robot according to Embodiment 5 of the present invention in an upright posture, and FIG. 7(b) is a schematic diagram showing an example of the operation with the robot seated.
[Embodiment 1]
Hereinafter, an embodiment of the present invention will be described with reference to FIGS. 1 to 3.
The schematic configuration of the robot in the present embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the schematic configuration of the robot of the present embodiment.
As shown in FIG. 1, the robot in the present embodiment includes a sensor 2, a control unit 10, a storage unit 20, a voice output unit 3 (utterance unit), a drive unit 4, and a movable part 5. The control unit 10 includes an utterance trigger unit 11, an utterance content determination unit 12, a voice output control unit 13, a drive control unit 14 (control unit), and a start/end posture determination unit 15. The storage unit 20 includes an utterance content table 21 and a motion data table 22. The movable part 5 includes an arm part 6 (first part) and a foot part 7 (second part).
The sensor 2 senses, for example, sound, light, electrical signals, and contact from outside the robot 1, converts them into information, and transmits the information to the utterance trigger unit 11.
The utterance trigger unit 11 receives the information from the sensor 2 and transmits it to the utterance content determination unit 12.
The utterance content determination unit 12 receives the information from the utterance trigger unit 11, selects and acquires utterance content data from the utterance content table 21 based on that information, and transmits the acquired utterance content data to the voice output control unit 13 and the drive control unit 14.
The voice output control unit 13 receives the utterance content data from the utterance content determination unit 12 and causes the voice output unit 3 to output voice based on the received utterance content data.
The drive control unit 14 receives the utterance content data from the utterance content determination unit 12 and, based on it, selects and acquires a plurality of general-purpose motion data from the motion data table 22.
A general-purpose motion is one that can be combined with other general-purpose motions to form a series of motions.
The drive control unit 14 may also select only one general-purpose motion data from the motion data table 22. Based on the utterance content data received from the utterance content determination unit 12, the drive control unit 14 determines which general-purpose motion data to reproduce, in what order, and at what timing.
The drive control unit 14 transmits the general-purpose motion data acquired from the motion data table 22, together with the playback order and timing of that data, to the start/end posture determination unit 15. The start/end posture determination unit 15 determines the posture of the robot 1 at the start and end of the continuous operation from the received general-purpose motion data and the playback order and timing. A continuous operation is an operation performed according to data obtained by combining a plurality of general-purpose motion data. The start/end posture determination unit 15 transmits the posture data of the robot 1 at the start and end of the continuous operation to the drive control unit 14.
The drive control unit 14 supplies the drive unit 4 with the general-purpose motion data, the playback order and timing of the general-purpose motion data, and the posture data of the robot 1 at the start and end of the continuous operation.
That is, the drive control unit 14 selects and acquires a plurality of general-purpose motion data for the utterance content data, and transmits the plurality of general-purpose motion data, their playback order and timing, and the posture data of the robot 1 at the start and end of the continuous operation to the drive unit 4.
Based on the plurality of general-purpose motion data received from the drive control unit 14, their playback order and timing, and the posture data of the robot 1 at the start and end of the continuous operation, the drive unit 4 operates the arm part 6 and the foot part 7 of the movable part 5. The movable part 5 may also include parts other than the arm part 6 and the foot part 7.
The motion data table 22 describes a plurality of general-purpose motion data candidates, each of which starts at a reference pose and ends at the same reference pose at which it started. The reference pose is a state in which the movable part 5 has been driven to a predetermined position. For example, as shown in Table 1 below, general-purpose motion data candidates 1 to 10 are stored in the motion data table 22. The candidates differ from one another in motion content and in playback time. That is, general-purpose motion data candidates may be prepared for each playback time, for example a candidate with a playback time of 1 second and a candidate with a playback time of 1.5 seconds.
[Table 1: general-purpose motion data candidates 1 to 10, each with its own motion content and playback time]
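As a concrete, non-normative illustration of Table 1, the motion data table can be sketched as a list of candidates, each carrying its own playback time. The class and field names below are hypothetical and not part of the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionCandidate:
    """One general-purpose motion: starts and ends at the same reference pose."""
    name: str
    playback_sec: float  # playback time of the motion data

# Hypothetical motion data table; candidates differ in content and playback time.
MOTION_DATA_TABLE = [
    MotionCandidate("candidate_1", 1.0),
    MotionCandidate("candidate_2", 1.5),
    MotionCandidate("candidate_3", 2.0),
]

def candidates_with_time(table, sec):
    """Return the candidates whose playback time matches the requested length."""
    return [m for m in table if m.playback_sec == sec]
```

Preparing candidates per playback time, as the text suggests, then reduces to a lookup like `candidates_with_time(MOTION_DATA_TABLE, 1.5)`.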
As shown in Table 2 below, the motion data table 22 may also store general-purpose motion data for each type of motion. For example, as in Table 2, one table describes general-purpose motion data candidates 1 to 10 representing general-purpose motions that start from an upright posture, execute a predetermined motion, and return to the upright posture. Another table describes general-purpose motion data candidates 11 to 20 representing general-purpose motions from the upright posture to a head-tilted posture. A further table describes general-purpose motion data candidates 21 to 30 representing general-purpose motions that start from the head-tilted posture, execute a predetermined motion, and return to the head-tilted posture.
[Table 2: general-purpose motion data candidates grouped by motion type]
The flow of the operation of the robot 1 in the present embodiment will be described with reference to FIG. 2. FIG. 2 is a flowchart showing an example of the flow of the operation of the robot 1 shown in FIG. 1 (the robot operation method).
First, the utterance trigger unit 11 receives information from the sensor 2 (step S1).
Next, based on the information the utterance trigger unit 11 received from the sensor 2, the utterance content determination unit 12 acquires the utterance content data and the length of the utterance time from the utterance content table 21 and transmits them to the voice output control unit 13 and the drive control unit 14 (step S2).
Next, the drive control unit 14 determines whether the motion data table 22 contains general-purpose motion data that matches the utterance content (step S3). Specifically, the drive control unit 14 refers to the motion data table 22 based on the utterance content data received from the utterance content determination unit 12 and determines whether there is general-purpose motion data matching the utterance content.
If there is general-purpose motion data matching the utterance content (YES in step S3), the drive control unit 14 selects, from the table of motions matching the utterance content in the motion data table 22, a plurality of general-purpose motion data covering the utterance time (step S4). Specifically, the motion data table 22 contains a motion data table for each utterance content, and the drive control unit 14 selects a plurality of general-purpose motion data from the table for that utterance content. As the per-utterance-content motion data tables, for example, a sadness motion data table and a joy motion data table may be prepared.
If there is no general-purpose motion data matching the utterance content (NO in step S3), the drive control unit 14 selects a plurality of general-purpose motion data covering the utterance time from the motion data table 22 (step S5). Specifically, the drive control unit 14 selects a plurality of general-purpose motion data from the motion data table 22 so that they fit the utterance time.
Then, playback of the general-purpose motion data is started in order at the same time the utterance starts (step S6). Specifically, the voice output control unit 13 instructs the voice output unit 3 to output the voice, and at the same time the drive control unit 14 instructs the drive unit 4 to drive the movable part 5.
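The selection in steps S3 to S5 — preferring an utterance-matched table when one exists and filling the utterance time — can be sketched as follows. This is an illustrative sketch under simplifying assumptions (motions are represented only by their playback times, and selection is a simple greedy fill); the patent does not prescribe a selection algorithm:

```python
def select_motions(utterance_sec, matched_table, default_table, has_match):
    """Steps S3-S5 (sketch): pick general-purpose motions whose total playback
    time fits within the utterance time, preferring the matched table."""
    table = matched_table if has_match else default_table   # step S3
    selected, total = [], 0.0
    for motion_sec in table:                                # step S4 or S5
        if total + motion_sec > utterance_sec:
            break
        selected.append(motion_sec)
        total += motion_sec
    return selected
```

For a 3-second utterance and candidates of 1.0, 1.5, and 2.0 seconds, the sketch selects the first two (2.5 seconds total) and stops before exceeding the utterance time.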
The continuous operation of the robot 1 in the present embodiment will be described with reference to FIG. 3. FIG. 3 is a timing chart showing an example of the operation of the robot 1 according to Embodiment 1 of the present invention.
As shown in FIG. 3, the robot 1 in the present embodiment performs, for example, a continuous operation 30 that combines a first general-purpose operation 30A and a second general-purpose operation 30B. In FIG. 3, time runs to the right.
The first general-purpose operation 30A is an operation based on the first general-purpose motion data selected by the drive control unit 14 from the motion data table 22 shown in FIG. 1. The second general-purpose operation 30B is an operation based on the second general-purpose motion data selected by the drive control unit 14 from the motion data table 22.
The first general-purpose operation 30A starts at the reference pose 32, then executes the first operation 31A, and ends at the reference pose 32. The second general-purpose operation 30B starts at the reference pose 32, then executes the second operation 31B, and ends at the reference pose 32.
Thus, no matter how the first general-purpose operation 30A and the second general-purpose operation 30B are combined, they are joined at the reference pose 32 at the boundary between them. Therefore, the movement of the robot 1 can be kept from seeming unnatural.
In addition, the difference between the end pose 33 of the first operation 31A and the reference pose 32 is smaller than the difference between the end pose 33 of the first operation 31A and the start pose 34 of the second operation 31B.
As a result, the difference between successive poses is smaller when the reference pose 32 is inserted between the first operation 31A and the second operation 31B than when the first operation 31A and the second operation 31B are joined directly. The movement of the robot 1 therefore becomes natural, and no sense of incongruity arises.
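The "difference" between poses can be made concrete as, for example, a joint-angle distance. The metric and the joint angles below are assumptions for illustration; the patent does not specify how the pose difference is measured:

```python
def pose_difference(pose_a, pose_b):
    """Sum of absolute joint-angle differences (in degrees) between two poses."""
    return sum(abs(a - b) for a, b in zip(pose_a, pose_b))

# Hypothetical joint angles (arm, neck, hip), for illustration only.
end_pose_33 = [10.0, 0.0, 5.0]       # end pose of the first operation
reference_pose_32 = [0.0, 0.0, 0.0]  # reference pose
start_pose_34 = [-40.0, 20.0, 30.0]  # start pose of the second operation

# Passing through the reference pose makes the transition out of the first
# operation smaller than jumping directly to the start of the second.
direct = pose_difference(end_pose_33, start_pose_34)
via_reference = pose_difference(end_pose_33, reference_pose_32)
```

With these illustrative angles, the transition via the reference pose is far smaller (15 degrees) than the direct jump (95 degrees).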
The continuous operation 30 may be formed by combining a plurality of first general-purpose operations 30A and second general-purpose operations 30B. For example, only first general-purpose operations 30A may be combined, only second general-purpose operations 30B may be combined, or the first general-purpose operation 30A and the second general-purpose operation 30B may be combined alternately or in any order.
As described above, the movable part 5 executes the continuous operation 30 in which the first general-purpose operation 30A and the second general-purpose operation 30B are continuous (movable step). The drive control unit 14 controls the continuous operation 30 combining the first general-purpose operation 30A and the second general-purpose operation 30B (control step). Specifically, the drive control unit 14 controls the continuous operation 30 by combining and continuously reproducing the first general-purpose motion data for the first general-purpose operation 30A and the second general-purpose motion data for the second general-purpose operation 30B. The first general-purpose motion data is data for executing the first general-purpose operation 30A, and the second general-purpose motion data is data for executing the second general-purpose operation 30B.
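The control described above — combining the first and second general-purpose motion data and reproducing them continuously — can be sketched as follows. The playback routine and the motion representation are hypothetical stand-ins for the drive control unit 14 and the drive unit 4:

```python
REFERENCE_POSE = "reference"

def play(motion):
    """Hypothetical playback: returns the sequence of poses the motion passes through."""
    return [motion["start"], *motion["body"], motion["end"]]

def play_continuous(first, second):
    """Reproduce two general-purpose motions back to back (the continuous operation)."""
    # Each general-purpose motion starts and ends at the reference pose,
    # so the joint between them is always the reference pose.
    assert first["start"] == first["end"] == REFERENCE_POSE
    assert second["start"] == second["end"] == REFERENCE_POSE
    return play(first) + play(second)

first_motion = {"start": REFERENCE_POSE, "body": ["raise_arm"], "end": REFERENCE_POSE}
second_motion = {"start": REFERENCE_POSE, "body": ["wave"], "end": REFERENCE_POSE}
```

Because both motions end and begin at the reference pose, `play_continuous` joins them seamlessly in either order, which is the point of the design.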
Furthermore, the drive control unit 14 selects and combines the first general-purpose motion data and the second general-purpose motion data from the plurality of general-purpose motion data candidates in the motion data table 22 according to the content of the voice output by the voice output unit 3.
As described above, the robot 1 in the present embodiment includes the movable part 5 that executes the continuous operation 30 in which the first general-purpose operation 30A and the second general-purpose operation 30B are continuous, and the control unit 10 that controls the continuous operation 30. The first general-purpose operation 30A starts at the reference pose 32, then executes the first operation 31A, and ends at the reference pose 32; the second general-purpose operation 30B starts at the reference pose 32, then executes the second operation 31B, and ends at the reference pose 32. Furthermore, the reference pose 32 is a pose whose difference from the end pose 33 of the first operation 31A is smaller than the difference between the end pose 33 of the first operation 31A and the start pose 34 of the second operation 31B. The control unit 10 controls the continuous operation 30 by combining and continuously reproducing the first general-purpose motion data for the first general-purpose operation 30A and the second general-purpose motion data for the second general-purpose operation 30B.
According to the above configuration, no matter in what order the first general-purpose motion data and the second general-purpose motion data are reproduced, the boundary between the first general-purpose operation 30A and the second general-purpose operation 30B can be joined without any sense of incongruity.
The robot 1 in the present embodiment also includes the voice output unit 3 that outputs voice, and the motion data table 22 in which a plurality of general-purpose motion data candidates that start at the reference pose 32 and end at the reference pose 32 are described. The control unit 10 selects and combines the first general-purpose motion data and the second general-purpose motion data from the plurality of general-purpose motion data candidates in the motion data table 22 according to the content of the voice output by the voice output unit 3.
According to this configuration, even when only a limited number of general-purpose motion data are held, the robot can be made to perform varied motions appropriate to the utterance content. In other words, by combining and reproducing general-purpose motion data, even a robot with an enormous variety of utterances can move in accordance with the utterance content while preparing only a fixed amount of general-purpose motion data.
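The selection step can be illustrated with a small sketch, assuming a hypothetical table keyed by utterance tags; the tags and candidate names are invented for the example, since the patent describes the motion data table 22 only abstractly:

```python
# Illustrative content-based selection (table contents are invented).
import random

motion_data_table = {
    "greeting": ["wave", "bow"],
    "agreement": ["nod", "clap"],
}

def select_motions(utterance_tag, count=2, rng=random):
    """Pick `count` general-purpose motion data candidates for the utterance.

    All candidates start and end at the reference pose, so any pair
    selected here can be combined and reproduced in succession.
    """
    candidates = motion_data_table[utterance_tag]
    return [rng.choice(candidates) for _ in range(count)]
```

With a seeded generator, `select_motions("greeting", rng=random.Random(0))` deterministically returns two candidates from the "greeting" row, which would then be played back-to-back.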
[Embodiment 2]
The following describes another embodiment of the present invention with reference to FIG. 4. For convenience of explanation, members having the same functions as those described in the preceding embodiment are given the same reference numerals, and their descriptions are omitted.
As shown in FIG. 4, the robot 1 according to Embodiment 2 operates by combining a plurality of first general-purpose operations 41 according to the audio 40, or by combining a first general-purpose operation 42 and a second general-purpose operation 43 according to the audio 40. FIG. 4(a) is a timing chart showing an example of the audio and operation of the robot according to Embodiment 2 of the present invention, and FIG. 4(b) is a timing chart showing another example of the audio and operation of the robot. In FIG. 4, time proceeds to the right.
As shown in FIG. 4(a), the robot 1 combines a plurality of first general-purpose operations 41 according to the audio 40. When combining the operations of the robot 1 in this way, the same operation (here, the first general-purpose operation 41) may be combined multiple times so that it is repeated. In this case, the timing at which the output of the audio 40 ends need not coincide with the timing at which the continuous operation formed by the combined first general-purpose operations 41 ends.
Alternatively, the drive control unit 14 selects a plurality of first general-purpose operations 41 so as to match the period during which the audio 40 is output. This makes it possible to align the timing at which the output of the audio 40 ends with the timing at which the continuous operation formed by the combined first general-purpose operations 41 ends.
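The duration-matching rule can be sketched as a small calculation, assuming clip and speech lengths in milliseconds; the helper name and the rounding policy are assumptions made for illustration:

```python
def repetitions_to_fill(speech_ms, clip_ms):
    """Number of whole clip repetitions chosen to fill the speech period.

    Rounding to the nearest whole repetition keeps the end of the last
    clip close to the end of the speech; at least one repetition plays.
    """
    return max(1, round(speech_ms / clip_ms))

# e.g. a 900 ms clip against 2700 ms of speech fits exactly 3 times.
```

A scheduler built this way ends the combined motion at (approximately) the same moment the audio output ends, as described above.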
As shown in FIG. 4(b), the robot 1 combines the first general-purpose operation 42 and the second general-purpose operation 43 according to the audio 40. In this case, the timing at which the output of the audio 40 ends may be made to coincide with the timing at which the continuous operation of the first general-purpose operation 42 and the second general-purpose operation 43 ends.
As in the case of combining a plurality of first general-purpose operations 41 according to the audio 40, the drive control unit 14 selects the first general-purpose operation 42 and the second general-purpose operation 43 so as to match the period during which the audio 40 is output. This aligns the timing at which the output of the audio 40 ends with the timing at which the continuous operation of the first general-purpose operation 42 and the second general-purpose operation 43 ends.
Thus, the movable unit 5 executes the continuous operation according to the period during which the audio 40 is output by the audio output unit 3.
As described above, the robot 1 according to the present embodiment includes the audio output unit 3, which outputs the audio 40, and the movable unit 5 executes the continuous operation according to the period during which the audio 40 is output by the audio output unit 3.
According to this configuration, since the continuous operation is executed according to the period during which the audio 40 is output by the audio output unit 3, the robot 1 can be made to move in a manner suited to the utterance timing.
[Embodiment 3]
The following describes another embodiment of the present invention with reference to FIG. 5. For convenience of explanation, members having the same functions as those described in the preceding embodiments are given the same reference numerals, and their descriptions are omitted.
In the robot 1 according to Embodiment 3, the playback time from the start to the end of the first and second general-purpose motion data differs according to the type of emotion represented by the content of the audio output from the audio output unit 3. FIG. 5(a) is a timing chart showing an example of the audio and operation of the robot 1 when the emotion represented by the audio 40A output from the audio output unit 3 of the robot 1 according to Embodiment 3 of the present invention is sadness. FIG. 5(b) is a timing chart showing an example of the audio and operation of the robot 1 when the emotion represented by the audio 40B output from the audio output unit 3 is joy. In FIG. 5, time proceeds to the right.
In FIG. 5(a), a plurality of first general-purpose operations 50 are combined according to the audio 40A. In FIG. 5(b), a plurality of first general-purpose operations 51 are combined according to the audio 40B. Here, the first general-purpose operation 50 and the first general-purpose operation 51 have the same content but different operation times, and the audio 40A and the audio 40B have the same playback time.
The content of the audio 40A represents sadness, and the content of the audio 40B represents joy.
When the emotion of the robot 1 is sadness, the first general-purpose operation 50, whose operation time is longer than that of the first general-purpose operation 51, is repeated three times during the audio 40A. When the emotion of the robot 1 is joy, the first general-purpose operation 51, whose operation time is shorter than that of the first general-purpose operation 50, is repeated seven times during the audio 40B. Therefore, when the emotion of the robot 1 is sadness, the general-purpose operation is repeated fewer times than when the emotion is joy, and the operation speed is lower; when the emotion of the robot 1 is joy, the operation is repeated more times than when the emotion is sadness, and the operation speed is higher. The slow operation speed of the first general-purpose operation 50 is considered better suited to expressing sadness than the fast operation speed of the first general-purpose operation 51, and the fast operation speed of the first general-purpose operation 51 is considered better suited to expressing joy than the slow operation speed of the first general-purpose operation 50.
Alternatively, a plurality of general-purpose motion data with the same playback time but different motion amplitudes may be prepared and switched according to the situation. For example, depending on whether the emotion of the robot is sadness or joy, that is, according to the type of emotion represented by the content of the audio, a large or small motion of the robot 1 is selected.
As described above, for example, when audio representing sadness is output, the first general-purpose operation 50 is repeated only a few times, resulting in a slow continuous operation with little movement. When audio representing joy is output, the first general-purpose operation 51 is repeated many times, resulting in a fast continuous operation with much movement. The robot can thus be made to move according to the type of emotion represented by the audio.
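The relationship between emotion, clip duration, and repetition count can be sketched numerically. The clip durations below are invented so that the example reproduces the three slow repetitions for sadness and seven fast repetitions for joy described above; the function and table names are assumptions for illustration:

```python
# Hedged sketch: the emotion picks a clip variant, and the variant's
# duration determines the repetition count over the same speech period.
CLIP_MS_BY_EMOTION = {"sadness": 1000, "joy": 430}  # illustrative durations

def plan_motion(emotion, speech_ms):
    """Return (clip duration, whole repetitions fitting the speech period)."""
    clip_ms = CLIP_MS_BY_EMOTION[emotion]
    reps = max(1, speech_ms // clip_ms)  # whole repetitions within the speech
    return clip_ms, reps

# Over the same 3010 ms utterance, sadness yields 3 slow repetitions
# and joy yields 7 fast ones.
```

The slower clip fills the utterance with few, deliberate movements, while the faster clip fills the same period with many quick ones, matching the contrast between FIG. 5(a) and FIG. 5(b).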
If the emotion of the robot 1 is distinguished by the content of the general-purpose motion data, the processing may be performed as follows. Specifically, when the emotion of the robot 1 is sadness, the drive control unit 14 may select, as shown in Table 3, one of the general-purpose motion data candidates 11 to 20 with small motions from the sadness motion data table. When the emotion of the robot 1 is joy, the drive control unit 14 may select, as shown in Table 3, one of the general-purpose motion data candidates 1 to 10 with large motions from the joy motion data table.
[Table 3] Motion data tables by emotion: the joy table lists general-purpose motion data candidates 1 to 10 (large motions), and the sadness table lists candidates 11 to 20 (small motions).
As described above, in the robot 1 according to the present embodiment, the playback time from the start to the end of the first and second general-purpose motion data differs according to the type of emotion represented by the content of the audio.
With this configuration, for example, when audio representing sadness is output, the first general-purpose operation 50 is repeated only a few times, resulting in a continuous operation with little movement; when audio representing joy is output, the first general-purpose operation 51 is repeated many times, resulting in a continuous operation with much movement. The robot can therefore be made to move according to the type of emotion represented by the audio.
[Embodiment 4]
The following describes another embodiment of the present invention with reference to FIG. 6. For convenience of explanation, members having the same functions as those described in the preceding embodiments are given the same reference numerals, and their descriptions are omitted.
In the robot 1 according to Embodiment 4, as shown in FIG. 6, the continuous operation 60 includes a first general-purpose operation 60A, a second general-purpose operation 60B, a third general-purpose operation 60C, and a fourth general-purpose operation 60D. FIG. 6 is a timing chart showing an example of the operation of the robot 1 according to Embodiment 4 of the present invention.
The first general-purpose operation 60A starts at the reference pose 64, executes the first operation 61A, and ends at the reference pose 64. The second general-purpose operation 60B starts at the reference pose 64, executes the second operation 61B, and ends at the reference pose 64. The third general-purpose operation 60C starts at the reference pose 64, executes the third operation 61C, and ends at the reference pose 65. The fourth general-purpose operation 60D starts at the reference pose 65, executes the fourth operation 61D, and ends at the reference pose 66.
Accordingly, when the first general-purpose motion data for the first general-purpose operation 60A, the second general-purpose motion data for the second general-purpose operation 60B, the third general-purpose motion data for the third general-purpose operation 60C, and the fourth general-purpose motion data for the fourth general-purpose operation 60D are reproduced in succession, the first general-purpose operation 60A and the second general-purpose operation 60B are joined seamlessly at the reference pose 64. Likewise, the second general-purpose operation 60B and the third general-purpose operation 60C are joined seamlessly at the reference pose 64, and the third general-purpose operation 60C and the fourth general-purpose operation 60D are joined seamlessly at the reference pose 65.
As described above, in the robot 1 according to the present embodiment, the continuous operation includes the third general-purpose operation 60C, which follows the second general-purpose operation 60B, and the fourth general-purpose operation 60D, which follows the third general-purpose operation 60C. The third general-purpose operation 60C starts at the reference pose 64 and ends at the reference pose 65, and the fourth general-purpose operation 60D starts at the reference pose 65. The control unit 10 reproduces, in this order, the second general-purpose motion data for the second general-purpose operation 60B, the third general-purpose motion data for the third general-purpose operation 60C, and the fourth general-purpose motion data for the fourth general-purpose operation 60D.
With this configuration, when the second, third, and fourth general-purpose motion data are reproduced in succession, the second general-purpose operation 60B and the third general-purpose operation 60C are joined seamlessly at the reference pose 64, and the third general-purpose operation 60C and the fourth general-purpose operation 60D are joined seamlessly at the reference pose 65.
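The seam condition used in this embodiment, namely that the end pose of each clip must equal the start pose of the next, can be expressed as a short check. The pose labels and the (start, end) tuple representation are assumptions made for illustration:

```python
# Sketch of the seam rule for chains that pass through several
# reference poses; any pose representation supporting equality works.
def chain_is_seamless(clips):
    """clips: list of (start_pose, end_pose) pairs in playback order."""
    return all(prev[1] == nxt[0] for prev, nxt in zip(clips, clips[1:]))

chain = [
    ("pose64", "pose64"),  # second general-purpose operation 60B
    ("pose64", "pose65"),  # third general-purpose operation 60C
    ("pose65", "pose66"),  # fourth general-purpose operation 60D
]
assert chain_is_seamless(chain)
assert not chain_is_seamless(list(reversed(chain)))  # this order has pose jumps
```

Unlike Embodiment 1, where every clip shares one reference pose and any order works, clips spanning two different reference poses constrain the playback order, which is why the data are reproduced "in this order".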
[Embodiment 5]
The following describes another embodiment of the present invention with reference to FIG. 7. For convenience of explanation, members having the same functions as those described in the preceding embodiments are given the same reference numerals, and their descriptions are omitted.
The operation of the robot 1 according to Embodiment 5 in the upright state and in the sitting state is described with reference to FIG. 7. FIG. 7(a) is a schematic diagram showing an example of the operation of the robot 1 according to Embodiment 5 of the present invention in the upright state, and FIG. 7(b) is a schematic diagram showing an example of the operation of the robot in the sitting state.
As shown in FIG. 7(a), the robot 1 moves only the arm portions 6 while in the upright state. As shown in FIG. 7(b), the robot 1 moves only the arm portions 6 while in the sitting state.
Referring to FIG. 7(a), in posture 70 the robot 1 first stands with the foot portion 7 upright and both the left and right arm portions 6 lowered. In posture 71, the robot 1 raises both arm portions 6 while keeping the foot portion 7 upright. Next, in posture 72, the robot 1 lowers one of the arm portions 6 while keeping the foot portion 7 upright. The robot 1 then lowers the other arm portion 6, returning to posture 70 with the foot portion 7 still upright.
Referring to FIG. 7(b), in posture 74 the robot 1 first sits with the foot portion 7 in the sitting position and both the left and right arm portions 6 lowered. In posture 75, the robot 1 raises both arm portions 6 while keeping the foot portion 7 seated. Next, in posture 76, the robot 1 lowers one of the arm portions 6 while keeping the foot portion 7 seated. The robot 1 then lowers the other arm portion 6, returning to posture 74 with the foot portion 7 still seated.
Comparing FIG. 7(a) with FIG. 7(b), only the arm portions 6 are moving; the foot portion 7 is stationary, and the arm motion is the same in both cases. Therefore, in both cases the drive control unit 14 need only select the same general-purpose motion data for the arm portions 6. There is thus no need to prepare two sets of general-purpose motion data for the upright state and the sitting state of the robot 1; preparing a single set of general-purpose motion data that moves the arm portions 6 suffices. As shown in Table 4 below, one general-purpose motion data may be selected from the general-purpose motion data candidates 1 to 10 described in the motion data table for operations in which only the arm portions 6 are under torque. Consider the case where there are an "upright" reference pose (posture 70) and a "sitting" reference pose (posture 74) that differ only in the positions of the actuators of the foot portion 7. In this case, the general-purpose motion data that runs from posture 70 through postures 71 and 72 back to posture 70, and the general-purpose motion data that runs from posture 74 through postures 75 and 76 back to posture 74, are both composed of motion data that do not move the actuators of the foot portion 7. This makes it possible to use the same general-purpose motion data for the operations of FIGS. 7(a) and 7(b): the robot remains upright in the "upright" posture and remains seated in the "sitting" posture.
[Table 4] Motion data table for operations in which only the arm portions 6 are under torque, listing general-purpose motion data candidates 1 to 10.
As described above, the first general-purpose motion data can be configured so that the actuators of the foot portion 7 are not moved when the first general-purpose operation starts at the reference pose, executes the first operation, and ends at the reference pose. Therefore, the same first general-purpose motion data representing the first operation of the arm portions 6 can be reproduced while the robot remains upright if the posture of the first operation is "upright", and while it remains seated if the posture is "sitting".
As described above, in the robot 1 according to the present embodiment, the movable unit 5 includes the foot portion 7 (first part) attached to the support portion and the arm portions 6 (second part) attached to the support portion. The first operation is related not to the foot portion (first part) but to the arm portions (second part), and the first general-purpose motion data is configured so that, in the first operation, the arm portions are driven while the foot portion is not.
With this configuration, the first general-purpose motion data can be configured so that the actuators of the foot portion are not moved when the first general-purpose operation starts at the reference pose, executes the first operation, and ends at the reference pose. Therefore, the same first general-purpose motion data representing the arm motion can be reproduced while the robot remains upright if the posture of the first operation is "upright", and while it remains seated if the posture is "sitting".
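The arm-only motion data described in this embodiment can be sketched as partial frames that name only the joints they drive, so that unlisted joints such as the legs hold their current positions. The joint names and angles below are invented for the example:

```python
# Sketch: a partial frame targets only the joints it names; applying it
# leaves every other joint (e.g. the legs) at its current position, so
# the same arm clip works from an upright or a sitting base pose.
def apply_frame(current_pose, frame):
    """current_pose, frame: dicts of joint name -> angle; frame may be partial."""
    new_pose = dict(current_pose)
    new_pose.update(frame)  # only joints named in the frame move
    return new_pose

upright = {"left_arm": 0.0, "right_arm": 0.0, "legs": 0.0}
sitting = {"left_arm": 0.0, "right_arm": 0.0, "legs": 1.2}
raise_arms = {"left_arm": 1.5, "right_arm": 1.5}  # arm-only frame

assert apply_frame(upright, raise_arms)["legs"] == 0.0  # stays upright
assert apply_frame(sitting, raise_arms)["legs"] == 1.2  # stays seated
```

One arm-only clip therefore replaces what would otherwise be two full-body clips, one per base posture, which is the storage saving this embodiment describes.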
[Summary]
The robot 1 according to Aspect 1 of the present invention includes a movable unit 5 that executes a continuous operation 30, 60 in which a first general-purpose operation 30A, 41, 42, 50, 51, 60A and a second general-purpose operation 30B, 43, 60B are performed in succession, and a control unit (drive control unit 14) that controls the continuous operation 30, 60. The first general-purpose operation 30A, 41, 42, 50, 51, 60A starts at a reference pose 32, 64, executes a first operation 31A, 61A, and ends at the reference pose 32, 64; the second general-purpose operation 30B, 43, 60B starts at the reference pose 32, 64, executes a second operation 31B, 61B, and ends at the reference pose 32, 64. The control unit (drive control unit 14) controls the continuous operation 30, 60 by combining and continuously reproducing first general-purpose motion data for the first general-purpose operation 30A, 41, 42, 50, 51, 60A and second general-purpose motion data for the second general-purpose operation 30B, 43, 60B.
According to this configuration, the first general-purpose motion data and the second general-purpose motion data can be reproduced in any order, and the transition between the first general-purpose operation 30A and the second general-purpose operation 30B is joined seamlessly.
The robot 1 according to Aspect 2 of the present invention may, in Aspect 1 above, further include an utterance unit (audio output unit 3) that outputs audio, and a motion data table 22 describing a plurality of general-purpose motion data candidates, each of which starts and ends at the reference pose 32, and the control unit (drive control unit 14) may select and combine the first general-purpose motion data and the second general-purpose motion data from the plurality of general-purpose motion data candidates in the motion data table 22 according to the content of the audio output by the utterance unit (audio output unit 3).
According to this configuration, even when only a limited number of general-purpose motion data are held, the robot 1 can be made to perform varied motions appropriate to the utterance content. In other words, by combining and reproducing general-purpose motion data, even a robot 1 with an enormous variety of utterances can move in accordance with the utterance content while preparing only a fixed amount of general-purpose motion data.
The robot 1 according to Aspect 3 of the present invention may, in Aspect 1 or 2 above, further include an utterance unit (audio output unit 3) that outputs audio 40, and the movable unit 5 may execute the continuous operation 30 according to the period during which the audio 40 is output by the utterance unit (audio output unit 3).
According to this configuration, since the continuous operation is executed according to the period during which the audio 40 is output by the audio output unit 3, the robot 1 can be made to move in a manner suited to the utterance timing.
In the robot 1 according to Aspect 4 of the present invention, in Aspect 2 or 3 above, the playback time from the start to the end of the first and second general-purpose motion data may differ according to the type of emotion represented by the content of the audio 40A, 40B.
With this configuration, for example, when audio 40A representing sadness is output, the first general-purpose operation 50 is repeated only a few times, resulting in a continuous operation with little movement; when audio 40B representing joy is output, the first general-purpose operation 51 is repeated many times, resulting in a continuous operation with much movement. The robot can therefore be made to move according to the type of emotion represented by the audio.
In the robot 1 according to Aspect 5 of the present invention, in any of Aspects 1 to 4 above, the continuous operation 60 may further include a third general-purpose operation 60C that follows the second general-purpose operation 60B and a fourth general-purpose operation 60D that follows the third general-purpose operation 60C. The third general-purpose operation 60C starts at the reference pose 64 and ends at another reference pose 65, and the fourth general-purpose operation 60D starts at the other reference pose 65. The control unit (drive control unit 14) may reproduce, in this order, the second general-purpose motion data, third general-purpose motion data for the third general-purpose operation 60C, and fourth general-purpose motion data for the fourth general-purpose operation 60D.
With this configuration, when the first, second, third, and fourth general-purpose motion data are reproduced in succession, the third general-purpose operation 60C and the fourth general-purpose operation 60D are joined seamlessly at the other reference pose 65.
In the robot 1 according to Aspect 6 of the present invention, in any of Aspects 1 to 5 above, the movable unit 5 may include a first part (foot portion 7) attached to a support portion and a second part (arm portion 6) attached to the support portion and different from the first part. The first operation 31A, 61A is related not to the first part (foot portion 7) but to the second part (arm portion 6), and the first general-purpose motion data may be configured so that, in the first operation 31A, 61A, the second part (arm portion 6) is driven while the first part (foot portion 7) is not.
With this configuration, the first general-purpose motion data can be configured so that the actuator of the first part (foot portion 7) is not moved when the first general-purpose operation starts at the reference pose, executes the first operation 31A, 61A, and ends at the reference pose. Therefore, the same first general-purpose motion data representing the first operation 31A, 61A of the second part (arm portion 6) can be reproduced while the robot remains upright if the posture of the first operation 31A, 61A is "upright", and while it remains seated if the posture is "sitting".
The robot operation method according to Aspect 7 of the present invention includes a movement step of executing a continuous operation 30, 60 in which a first general-purpose operation 30A, 41, 42, 50, 51, 60A and a second general-purpose operation 30B, 43, 60B are performed in succession, and a control step of controlling the continuous operation 30, 60. The first general-purpose operation 30A, 41, 42, 50, 51, 60A starts at a reference pose 32, 64, executes a first operation 31A, 61A, and ends at the reference pose 32, 64; the second general-purpose operation 30B, 43, 60B starts at the reference pose 32, 64, executes a second operation 31B, 61B, and ends at the reference pose 32, 64. The control step controls the continuous operation 30, 60 by combining and continuously reproducing first general-purpose motion data for the first general-purpose operation 30A, 41, 42, 50, 51, 60A and second general-purpose motion data for the second general-purpose operation 30B, 43, 60B.
 According to the above configuration, the same effects as those of Aspect 1 are obtained.
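As a rough sketch of the operation method of Aspect 7, clips that all begin and end at the same reference pose can be concatenated in any order without inserting transition frames. The clip format and function names below are hypothetical, not the patent's implementation.

```python
# Illustrative sketch of combining general-purpose motion data for
# continuous playback. Each clip is a list of poses that starts and ends
# at the reference pose, so any sequence of clips joins seamlessly.
# Names and the data format are assumptions for illustration only.

REFERENCE_POSE = {"arm": 0.0, "head": 0.0}

def is_general_purpose(clip):
    """A clip qualifies if it begins and ends at the reference pose."""
    return clip[0] == REFERENCE_POSE and clip[-1] == REFERENCE_POSE

def play_continuously(*clips):
    """Control step: combine the clips into one continuous motion."""
    assert all(is_general_purpose(c) for c in clips)
    frames = []
    for clip in clips:
        frames.extend(clip)
    return frames

first = [{"arm": 0.0, "head": 0.0}, {"arm": 30.0, "head": 0.0}, {"arm": 0.0, "head": 0.0}]
second = [{"arm": 0.0, "head": 0.0}, {"arm": 0.0, "head": 15.0}, {"arm": 0.0, "head": 0.0}]

continuous = play_continuously(first, second)
# No discontinuity at the seam: the last frame of the first clip equals
# the first frame of the second clip (both are the reference pose).
assert continuous[len(first) - 1] == continuous[len(first)]
```

The seam check is the whole point of the reference-pose convention: because every clip's boundary frame is identical, no per-pair transition motion needs to be authored.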
 The robot according to each aspect of the present invention may be realized by a computer. In this case, a robot program that realizes the robot on the computer by causing the computer to operate as each unit (software element) of the robot, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
 The present invention is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
 1 Robot
 2 Sensor
 3 Audio output unit (speech unit)
 4 Drive unit
 5 Movable unit
 6 Arm (second part)
 7 Foot (first part)
 10 Control unit
 11 Speech trigger unit
 12 Speech content determination unit
 13 Audio output control unit
 14 Drive control unit (control unit)
 15 Start/end posture determination unit
 20 Storage unit
 21 Speech content table
 22 Motion data table
 30, 60 Continuous motion
 30A, 41, 42, 50, 51, 60A First general-purpose motion
 30B, 43, 60B Second general-purpose motion
 31A, 61A First motion
 31B, 61B Second motion
 32, 63, 64, 65, 66 Reference pose
 33 End pose
 40, 40A, 40B Voice
 60C Third general-purpose motion
 60D Fourth general-purpose motion
 61C Third motion
 61D Fourth motion
 70, 71, 72, 74, 75, 76 Posture

Claims (8)

  1.  A robot comprising:
     a movable unit that executes a continuous motion in which a first general-purpose motion and a second general-purpose motion are consecutive; and
     a control unit that controls the continuous motion,
     wherein the first general-purpose motion starts at a reference pose, then executes a first motion, and ends at the reference pose,
     the second general-purpose motion starts at the reference pose, then executes a second motion, and ends at the reference pose, and
     the control unit controls the continuous motion by combining and continuously reproducing first general-purpose motion data for the first general-purpose motion and second general-purpose motion data for the second general-purpose motion.
  2.  The robot according to claim 1, further comprising:
     a speech unit that outputs voice; and
     a motion data table in which a plurality of general-purpose motion data candidates, each starting at the reference pose and ending at the reference pose, are described,
     wherein the control unit selects and combines the first general-purpose motion data and the second general-purpose motion data from the plurality of general-purpose motion data candidates in the motion data table according to the content of the voice output by the speech unit.
  3.  The robot according to claim 1 or 2, further comprising a speech unit that outputs voice,
     wherein the movable unit executes the continuous motion according to a period during which the voice is output by the speech unit.
  4.  The robot according to claim 2 or 3, wherein a reproduction time from the start to the end of the first and second general-purpose motion data differs according to a type of emotion represented by the content of the voice.
  5.  The robot according to any one of claims 1 to 4, wherein the continuous motion further includes a third general-purpose motion consecutive to the second general-purpose motion and a fourth general-purpose motion consecutive to the third general-purpose motion,
     the third general-purpose motion starts at the reference pose and then ends at another reference pose,
     the fourth general-purpose motion starts at the other reference pose, and
     the control unit continuously reproduces, in this order, the second general-purpose motion data, third general-purpose motion data for the third general-purpose motion, and fourth general-purpose motion data for the fourth general-purpose motion.
  6.  The robot according to any one of claims 1 to 5, wherein the movable unit includes a first part attached to a support part and a second part attached to the support part, the second part being a part different from the first part,
     the first motion is a motion that is not related to the first part but is related to the second part, and
     the first general-purpose motion data is configured so that, in the first motion, the second part is driven while the first part is not driven.
  7.  A robot operation method comprising:
     a moving step of executing a continuous motion in which a first general-purpose motion and a second general-purpose motion are consecutive; and
     a control step of controlling the continuous motion,
     wherein the first general-purpose motion starts at a reference pose, then executes a first motion, and ends at the reference pose,
     the second general-purpose motion starts at the reference pose, then executes a second motion, and ends at the reference pose, and
     the control step controls the continuous motion by combining and continuously reproducing first general-purpose motion data for the first general-purpose motion and second general-purpose motion data for the second general-purpose motion.
  8.  A program for causing a computer to function as a movable unit that executes a continuous motion in which a first general-purpose motion and a second general-purpose motion are consecutive, and as a control unit that controls the continuous motion,
     wherein the first general-purpose motion starts at a reference pose, then executes a first motion, and ends at the reference pose,
     the second general-purpose motion starts at the reference pose, then executes a second motion, and ends at the reference pose, and
     the continuous motion is controlled by combining and continuously reproducing first general-purpose motion data for the first general-purpose motion and second general-purpose motion data for the second general-purpose motion.
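The selection logic recited in claims 2 to 4 can be illustrated with a minimal sketch of a motion data table keyed by the emotion conveyed by the speech content, where each entry also carries a reproduction time. The emotions, clip names, and durations below are invented for illustration and do not appear in the patent.

```python
# Illustrative sketch of claims 2-4: general-purpose motion data
# candidates are selected from a motion data table according to the
# emotion of the speech content, and the reproduction time of the
# selected data differs per emotion. All table contents are hypothetical.

MOTION_DATA_TABLE = {
    # emotion: (candidate clip names, reproduction time in seconds)
    "joy": (["raise_arms", "nod"], 1.0),        # quick, lively motions
    "sadness": (["lower_head", "droop"], 2.5),  # slower motions
}

def select_motions(emotion, count=2):
    """Pick `count` candidates and the playback time for this emotion."""
    candidates, duration = MOTION_DATA_TABLE[emotion]
    return candidates[:count], duration

clips, duration = select_motions("joy")
assert clips == ["raise_arms", "nod"]
assert duration == 1.0

# The same style of motion takes longer when the speech expresses sadness.
_, sad_duration = select_motions("sadness")
assert sad_duration > duration
```

Since every candidate in the table starts and ends at the reference pose, any pair selected this way can be handed directly to continuous playback.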
PCT/JP2017/010467 2016-05-20 2017-03-15 Robot, robot operation method and program WO2017199565A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780027602.8A CN109195754A (en) 2016-05-20 2017-03-15 Robot, the method for operating of robot and program
JP2018518125A JPWO2017199565A1 (en) 2016-05-20 2017-03-15 Robot, robot operation method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-101843 2016-05-20
JP2016101843 2016-05-20

Publications (1)

Publication Number Publication Date
WO2017199565A1 true WO2017199565A1 (en) 2017-11-23

Family

ID=60325037

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/010467 WO2017199565A1 (en) 2016-05-20 2017-03-15 Robot, robot operation method and program

Country Status (3)

Country Link
JP (1) JPWO2017199565A1 (en)
CN (1) CN109195754A (en)
WO (1) WO2017199565A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020060805A (en) * 2018-10-04 2020-04-16 富士通株式会社 Communication apparatus, communication method, and communication program
WO2021117441A1 (en) * 2019-12-10 2021-06-17 ソニーグループ株式会社 Information processing device, control method for same, and program

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
JP6734316B2 (en) * 2018-03-22 2020-08-05 ファナック株式会社 Robot operation program setting device, robot, and robot control method
CN111514593A (en) * 2020-03-27 2020-08-11 实丰文化创投(深圳)有限公司 Toy dog control system

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2001260063A (en) * 2000-03-21 2001-09-25 Sony Corp Articulated robot and its action control method
JP2004034273A (en) * 2002-07-08 2004-02-05 Mitsubishi Heavy Ind Ltd Robot and system for generating action program during utterance of robot
JP2005193331A (en) * 2004-01-06 2005-07-21 Sony Corp Robot device and its emotional expression method
JP2015013351A (en) * 2013-07-08 2015-01-22 有限会社アイドリーマ Program for controlling robot

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
WO2000038295A1 (en) * 1998-12-21 2000-06-29 Sony Corporation Robot-charging system, robot, battery charger, method of charging robot, and recording medium
JP3555107B2 (en) * 1999-11-24 2004-08-18 ソニー株式会社 Legged mobile robot and operation control method for legged mobile robot
JP2002283259A (en) * 2001-03-27 2002-10-03 Sony Corp Operation teaching device and operation teaching method for robot device and storage medium
JP2002301674A (en) * 2001-04-03 2002-10-15 Sony Corp Leg type moving robot, its motion teaching method and storage medium
EP1254688B1 (en) * 2001-04-30 2006-03-29 Sony France S.A. autonomous robot
JP3731118B2 (en) * 2002-02-18 2006-01-05 独立行政法人科学技術振興機構 Biped walking humanoid robot
CN102310406B (en) * 2010-06-30 2013-12-18 华宝通讯股份有限公司 Automatic mechanical device and control method thereof
CN102909726B (en) * 2012-10-11 2015-01-28 上海泰熙信息科技有限公司 Behavior realizing method for service robot
CN104589348B (en) * 2014-12-25 2016-04-13 北京理工大学 The multi-modal motion transformation method of a kind of anthropomorphic robot


Cited By (3)

Publication number Priority date Publication date Assignee Title
JP2020060805A (en) * 2018-10-04 2020-04-16 富士通株式会社 Communication apparatus, communication method, and communication program
JP7225654B2 (en) 2018-10-04 2023-02-21 富士通株式会社 COMMUNICATION DEVICE, COMMUNICATION METHOD, AND COMMUNICATION PROGRAM
WO2021117441A1 (en) * 2019-12-10 2021-06-17 ソニーグループ株式会社 Information processing device, control method for same, and program

Also Published As

Publication number Publication date
CN109195754A (en) 2019-01-11
JPWO2017199565A1 (en) 2019-01-10

Similar Documents

Publication Publication Date Title
WO2017199565A1 (en) Robot, robot operation method and program
JP4271193B2 (en) Communication robot control system
US7216082B2 (en) Action teaching apparatus and action teaching method for robot system, and storage medium
JP3714268B2 (en) Robot device
WO2000043167A1 (en) Robot device and motion control method
JP2010201611A (en) Robot with automatic selection of task-specific representation for imitation learning
JP5045519B2 (en) Motion generation device, robot, and motion generation method
KR20200074114A (en) Information processing apparatus, information processing method, and program
TW201631571A (en) Robot capable of dancing with musical tempo
Zeglin et al. HERB's Sure Thing: A rapid drama system for rehearsing and performing live robot theater
JP2017213612A (en) Robot and method for controlling robot
JP2004287098A (en) Method and apparatus for singing synthesis, program, recording medium, and robot device
CN113826160A (en) Noise reduction in robot-to-person communication
KR20030007866A (en) Word sequence output device
KR101539972B1 (en) Robot study system using stereo image block and method thereof
JP5447811B2 (en) Path plan generation apparatus and method, robot control apparatus and robot system
WO2002030629A1 (en) Robot apparatus, information display system, and information display method
JP3955756B2 (en) Operation control apparatus, method and program
WO2018174290A1 (en) Conversation control system, and robot control system
JP2015058493A (en) Control device, robot system, robot, robot operation information generation method, and program
Ritschel et al. Implementing parallel and independent movements for a social robot's affective expressions
JP2003271172A (en) Method and apparatus for voice synthesis, program, recording medium and robot apparatus
WO2020059342A1 (en) Robot simulator
JP4068087B2 (en) Robot, robot action plan execution device, and action plan execution program
JP7243722B2 (en) Control device and control method

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase. Ref document number: 2018518125; Country of ref document: JP
NENP Non-entry into the national phase. Ref country code: DE
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 17799000; Country of ref document: EP; Kind code of ref document: A1
122 Ep: pct application non-entry in european phase. Ref document number: 17799000; Country of ref document: EP; Kind code of ref document: A1