WO2020067256A1 - Control device - Google Patents

Control device

Info

Publication number
WO2020067256A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
instruction
robot
unit
controller
Prior art date
Application number
PCT/JP2019/037793
Other languages
French (fr)
Japanese (ja)
Inventor
石川 博一
Original Assignee
日本電産株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電産株式会社
Priority to CN201980058577.9A (CN112703093A)
Publication of WO2020067256A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 Controls for manipulators
    • B25J 13/06 Control stands, e.g. consoles, switchboards

Definitions

  • the present invention relates to a control device for controlling a robot.
  • a control device called a programming pendant, which instructs the operation of a robot through images, is known.
  • Japanese Patent Laid-Open Publication No. 2007-242054 discloses a technique in which the conventional character-based representation and editing method for a robot language is replaced with a graphical representation, making the teaching contents easier to grasp and allowing even beginners to teach programming and work programs easily.
  • an object of the present invention is to provide a control device that allows a user to easily set the operation of a robot.
  • a control device for controlling a robot includes: an image input unit that inputs an image of a controller; a library database that holds operation instructions for the robot and instruction inputs corresponding to the operation instructions; an instruction input unit that inputs a predetermined area in the image of the controller and an instruction input corresponding to the area; and an operation setting unit that sets the operation instruction in the library database corresponding to the instruction input entered via the instruction input unit as the operation instruction corresponding to the predetermined area in the image of the controller.
  • the user can easily set the operation of the robot.
  • FIG. 1 is a block diagram showing a configuration example of a robot control system using a control device according to an embodiment of the present invention. FIG. 2 is a diagram showing an example of the configuration of the robot to be controlled. FIG. 3 is a flowchart showing an example of the robot control processing performed by the control device. FIG. 4 is a diagram showing an example of an image of a controller. FIG. 5 is a diagram showing an example of the data stored in the library DB. FIG. 6 is a diagram showing an example of area designation and instruction input. FIG. 7 is a diagram showing another example of area designation and instruction input. FIG. 8 is a diagram showing an example of an operation input screen. FIG. 9 is a perspective view showing another example of an image of a controller. FIG. 10 is a diagram showing another example of an image of a controller.
  • the robot control system includes a robot 1 to be controlled, a controller 2 that controls the motors provided in the robot 1, a camera 3 that captures images of the robot 1 and the like, and a control device 10 that controls the entire system.
  • the robot 1 includes a plurality of joints 41a, 41b, 42a, 42b, 43a, 43b, and a hand 44 attached to a tip of the joint 43b.
  • the joint portion 41a is rotatably attached to the base 40.
  • the arm section has a plurality of arms 45 and 46.
  • the hand unit 44, for example, grips and moves a member to be processed.
  • the control device 10 includes an information processing device such as a personal computer or a tablet-type portable information terminal. As shown in FIG. 1, the control device 10 includes a control unit 11 that controls the entire control device 10, a storage unit 12 that stores programs, data, and the like, a display unit 13 that displays images and the like, and an image generation unit 14 that generates images to be displayed on the display unit 13 in response to instructions from the control unit 11.
  • the display unit 13 has, for example, a display screen that displays images and a touch panel or the like, and inputs data such as coordinates and images in response to user operations.
  • the control device 10 further includes a voice input unit 15 for inputting voice, a text input unit 16 for inputting text, an operation input unit 17 for inputting operation instructions, and a setting processing unit 18 that performs settings for controlling the operation of the robot 1.
  • the storage unit 12 stores a structure DB (database) 21 that holds data indicating the structure of the robot 1, a correspondence DB 22 that holds data indicating the correspondence between the structure of the robot 1 and the image of the controller, and a library DB 23 that holds commands and the like issued to the controller 2 for controlling the operation of the robot 1. The storage unit 12 also stores an operation DB 24 that holds the correspondence between the image of the controller and the commands in the library DB 23, and a synthesis DB 25 that holds synthesized operations.
  • the setting processing unit 18 includes an image input unit 31 that inputs an image of the controller from the camera 3 or the display unit 13, an associating unit 32 that associates areas in the image with functions, an instruction input unit 33 that inputs robot operation instructions, and an operation setting unit 34 that sets the operation of the robot in accordance with the instruction input.
  • the setting processing unit 18 further includes an operation synthesis unit 35 that synthesizes the set robot operations and a selection unit 36 that selects a robot operation in accordance with an operation instruction.
  • FIG. 3 is a flowchart illustrating an outline of control processing of the robot 1 by the control device 10.
  • the image input unit 31 of the setting processing unit 18 inputs an image of the controller.
  • the image of the controller may be any image for which a correspondence with the operation of the robot 1 can be defined, such as a handwritten image entered by the user on the touch panel of the display unit 13 or an image of the actual robot 1 captured via the camera 3.
  • specifically, a handwritten image imitating the controller of a commercially available game machine as shown in FIG. 4, a handwritten image imitating the structure of the robot 1, an image of the robot 1 captured by the camera 3, or the like can be used.
  • the operation setting unit 34 sets the operation of the robot 1 corresponding to a predetermined area in the image of the controller.
  • the library DB 23 stores, for example, images and keywords corresponding to commands and parameters for operating the robot 1 as shown in FIG.
  • FIG. 5 shows an example in which a command (Line) for moving the hand unit 44 of the robot 1 linearly and a command (Move) for moving the hand unit 44 at maximum speed are associated with their respective parameters (destination, direction, moving speed) and with images and keywords.
  • for a predetermined area in the controller image entered by the user via the instruction input unit 33, the operation setting unit 34 selects, from the commands and parameters indicating the operations of the robot 1 stored in the library DB 23, the one corresponding to the user's selection, sets the selected command and parameter as the operation corresponding to that area in the image, and stores them in the operation DB 24.
  • the instruction input unit 33 selects a predetermined area in the image of the controller according to an area input by the user via the touch panel of the display unit 13, for example.
  • the instruction input unit 33 also selects from the library DB 23 the command and parameter corresponding to, for example, handwritten characters or an image entered by the user via the touch panel of the display unit 13, characters corresponding to speech entered by the user via the voice input unit 15, or text entered by the user via the text input unit 16.
  • the recognition accuracy can be improved by learning the recognition of handwritten characters or images, the recognition of voice input by the user, the recognition of keywords in text input by the user, and the like by machine learning.
  • when the predetermined area, command, and parameter are entered by hand, the operation setting unit 34 causes the display unit 13 to display an image 61 of the controller via the image generation unit 14, as shown in FIG. 6, for example.
  • the user who sees this image enters, via the touch panel of the display unit 13 and the instruction input unit 33, indications 62a, 63a, 64a, and 65a designating areas and indications 62b, 63b, 64b, and 65b designating parameters.
  • FIG. 6 shows an example of inputting an instruction to move the position of the hand 44 up, down, left, and right.
  • FIG. 6 shows a case in which an arrow is used as the indication designating an area and a character surrounded by a square is used as the indication designating a parameter.
  • alternatively, as shown in FIG. 7, the target area may be enclosed in a square 66a to designate the area, and a character 66c connected to the square 66a by a leader line 66b may designate the parameter.
  • the indication indicating the area may be a letter or a symbol such as “x”.
  • the operation setting unit 34 determines whether or not the setting has been completed, and if not completed, repeats the processing of S2. If the setting is completed, the process proceeds to S4, and the operation using the image of the controller is started. Specifically, for example, as shown in FIG. 8, an image 71 of the controller is displayed on the display unit 13. When the user designates an area 72, 73, 74, or 75 in which an operation is set using the pointer 76 or the touch panel of the display unit 13, the selection unit 36 reads the operation setting corresponding to the designated area from the operation DB 24. Further, the control unit 11 controls the operation of the robot 1 via the controller 2 according to the setting of the operation read by the selection unit 36.
  • the selection unit 36 may cause the voice input unit 15 to recognize the user's voice, and may perform the operation in accordance with the recognition result.
  • the selection unit 36 reads, from the operation DB 24, the setting of the operation corresponding to the keyword corresponding to the character data of the recognition result by the voice input unit 15. Specifically, for example, when the character data of the recognition result is “right”, the operation setting corresponding to the area 75 is read from the operation DB 24.
  • the user may input a handwritten character by operating the touch panel of the display unit 13, and the selection unit 36 may read out the operation setting corresponding to the character data of the recognition result from the operation DB 24.
  • the user can easily set the operation of the robot by setting the operation instruction of the robot corresponding to the predetermined area in the image of the controller.
  • the operation instruction set by the operation setting unit can be supplied to the robot to control the operation.
  • the operation of the robot can be set by voice, handwritten characters, input sentences, or the like.
  • the operation of the robot can be set by inputting an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A control device for controlling a robot is provided with: an image input unit for inputting an image of a controller; a library database holding operating instructions for the robot, and instruction inputs corresponding to the operating instructions; an instruction input unit for inputting a predetermined region within the image of the controller, and an instruction input corresponding to said region; and an operation setting unit for setting the operating instruction in the library database corresponding to the instruction input that has been input by means of the instruction input unit, as the operating instruction corresponding to the predetermined region within the image of the controller.

Description

Control device
The present invention relates to a control device for controlling a robot.
A control device called a programming pendant, which instructs the operation of a robot through images, is known. For example, Japanese Patent Laid-Open Publication No. 2007-242054 discloses a technique in which the conventional character-based representation and editing method for a robot language is replaced with a graphical representation, making the teaching contents easier to grasp and allowing even beginners to teach programming and work programs easily.
Japanese Unexamined Patent Publication: JP-A-2007-242054
However, with the technique disclosed in Japanese Patent Laid-Open Publication No. 2007-242054, the movement path of the robot can be specified in a predetermined graphical representation for a predetermined process such as a welding path, but changing the graphical representation itself is not considered in the first place. The user therefore has to become familiar with the predetermined graphical representation and operations, which requires expertise.
In view of the above problem, an object of the present invention is to provide a control device that allows a user to easily set the operation of a robot.
To solve the above problem, according to one aspect of the present invention, there is provided a control device for controlling a robot, comprising: an image input unit that inputs an image of a controller; a library database that holds operation instructions for the robot and instruction inputs corresponding to the operation instructions; an instruction input unit that inputs a predetermined area in the image of the controller and an instruction input corresponding to the area; and an operation setting unit that sets the operation instruction in the library database corresponding to the instruction input entered via the instruction input unit as the operation instruction corresponding to the predetermined area in the image of the controller.
According to the present invention having the above configuration, the user can easily set the operation of the robot.
FIG. 1 is a block diagram showing a configuration example of a robot control system using a control device according to an embodiment of the present invention. FIG. 2 is a diagram showing an example of the configuration of the robot to be controlled. FIG. 3 is a flowchart showing an example of the robot control processing performed by the control device. FIG. 4 is a diagram showing an example of an image of a controller. FIG. 5 is a diagram showing an example of the data stored in the library DB. FIG. 6 is a diagram showing an example of area designation and instruction input. FIG. 7 is a diagram showing another example of area designation and instruction input. FIG. 8 is a diagram showing an example of an operation input screen. FIG. 9 is a perspective view showing another example of an image of a controller. FIG. 10 is a diagram showing another example of an image of a controller.
Hereinafter, an embodiment for carrying out the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram showing a configuration example of a robot control system using a control device according to an embodiment of the present invention. The robot control system includes a robot 1 to be controlled, a controller 2 that controls the motors provided in the robot 1, a camera 3 that captures images of the robot 1 and the like, and a control device 10 that controls the entire system.
Hereinafter, a case will be described in which an articulated robot having a plurality of joints and arms, for example as shown in FIG. 2, is used as the robot 1 to be controlled.
The robot 1 includes a plurality of joints 41a, 41b, 42a, 42b, 43a, and 43b, and a hand unit 44 attached to the tip of the joint 43b. The joint 41a is rotatably attached to the base 40. The arm section includes a plurality of arms 45 and 46. The hand unit 44, for example, grips and moves a member to be processed.
The control device 10 is configured as an information processing device such as a personal computer or a tablet-type portable information terminal.
As shown in FIG. 1, the control device 10 includes a control unit 11 that controls the entire control device 10, a storage unit 12 that stores programs, data, and the like, a display unit 13 that displays images and the like, and an image generation unit 14 that generates images to be displayed on the display unit 13 in response to instructions from the control unit 11. The display unit 13 has, for example, a display screen that displays images and a touch panel or the like, and inputs data such as coordinates and images in response to user operations. The control device 10 further includes a voice input unit 15 for inputting voice, a text input unit 16 for inputting text, an operation input unit 17 for inputting operation instructions, and a setting processing unit 18 that performs settings for controlling the operation of the robot 1.
The storage unit 12 stores a structure DB (database) 21 that holds data indicating the structure of the robot 1, a correspondence DB 22 that holds data indicating the correspondence between the structure of the robot 1 and the image of the controller, and a library DB 23 that holds commands and the like issued to the controller 2 for controlling the operation of the robot 1. The storage unit 12 also stores an operation DB 24 that holds the correspondence between the image of the controller and the commands in the library DB 23, and a synthesis DB 25 that holds synthesized operations.
The setting processing unit 18 includes an image input unit 31 that inputs an image of the controller from the camera 3 or the display unit 13, an associating unit 32 that associates areas in the image with functions, an instruction input unit 33 that inputs robot operation instructions, and an operation setting unit 34 that sets the operation of the robot in accordance with the instruction input. The setting processing unit 18 further includes an operation synthesis unit 35 that synthesizes the set robot operations and a selection unit 36 that selects a robot operation in accordance with an operation instruction.
<Control processing>
FIG. 3 is a flowchart showing an outline of the control processing of the robot 1 by the control device 10.
In the control device 10, first, in S1, the image input unit 31 of the setting processing unit 18 inputs an image of the controller. The image of the controller may be any image for which a correspondence with the operation of the robot 1 can be defined, such as a handwritten image entered by the user on the touch panel of the display unit 13 or an image of the actual robot 1 captured via the camera 3. Specifically, a handwritten image imitating the controller of a commercially available game machine as shown in FIG. 4, a handwritten image imitating the structure of the robot 1, an image of the robot 1 captured by the camera 3, or the like can be used.
Next, in S2, the operation setting unit 34 sets the operation of the robot 1 corresponding to a predetermined area in the image of the controller. As shown in FIG. 5, for example, the library DB 23 stores images and keywords corresponding to the commands and parameters for operating the robot 1. FIG. 5 shows an example in which a command (Line) for moving the hand unit 44 of the robot 1 linearly and a command (Move) for moving the hand unit 44 at maximum speed are associated with their respective parameters (destination, direction, moving speed) and with images and keywords. For a predetermined area in the controller image entered by the user via the instruction input unit 33, the operation setting unit 34 selects, from the commands and parameters indicating the operations of the robot 1 stored in the library DB 23, the one corresponding to the user's selection, sets the selected command and parameter as the operation corresponding to that area in the image, and stores them in the operation DB 24.
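The patent does not specify a data format for these records. Purely as an illustration, the following Python sketch shows one way the FIG. 5-style entries of the library DB 23 could be represented, using the two commands named in the text (Line and Move); every class, field, and variable name below is an assumption introduced for this sketch, not part of the disclosure.

    # Hypothetical sketch of library DB 23 entries (cf. FIG. 5).
    # Only "Line", "Move" and their parameters come from the text; all other
    # names are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class LibraryEntry:
        command: str                 # robot command passed to the controller 2
        parameters: Dict[str, str]   # e.g. destination, direction, moving speed
        keywords: List[str] = field(default_factory=list)  # words that select this entry

    library_db: List[LibraryEntry] = [
        # Move the hand unit 44 linearly in a given direction.
        LibraryEntry("Line", {"direction": "up"}, keywords=["up"]),
        LibraryEntry("Line", {"direction": "down"}, keywords=["down"]),
        LibraryEntry("Line", {"direction": "left"}, keywords=["left"]),
        LibraryEntry("Line", {"direction": "right"}, keywords=["right"]),
        # Move the hand unit 44 to a destination at maximum speed.
        LibraryEntry("Move", {"destination": "home", "speed": "max"}, keywords=["home", "fast"]),
    ]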
The instruction input unit 33 selects a predetermined area in the image of the controller according to, for example, an area entered by the user via the touch panel of the display unit 13. The instruction input unit 33 also selects from the library DB 23 the command and parameter corresponding to, for example, handwritten characters or an image entered by the user via the touch panel of the display unit 13, characters corresponding to speech entered by the user via the voice input unit 15, or text entered by the user via the text input unit 16. The recognition accuracy can be improved by using machine learning to train the recognition of handwritten characters and images, the recognition of the user's speech, the recognition of keywords in the text entered by the user, and the like.
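The matching of recognized characters or speech to a library entry is described above only at the level of keywords. A minimal sketch of that matching step, building on the hypothetical LibraryEntry records from the previous sketch and treating the handwriting, voice, and text recognizers as black boxes, might look like this (the function name select_entry is an assumption):

    from typing import Optional

    def select_entry(recognized_text: str, library_db: List[LibraryEntry]) -> Optional[LibraryEntry]:
        # Pick the library DB 23 entry whose keyword appears in the text produced
        # by the handwriting, voice, or text recognizer.
        text = recognized_text.lower()
        for entry in library_db:
            if any(keyword.lower() in text for keyword in entry.keywords):
                return entry
        return None  # no matching command/parameter in the library

    # e.g. a spoken instruction recognized as "move right" selects Line / direction=right
    entry = select_entry("move right", library_db)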
When the predetermined area, command, and parameter are entered by hand, the operation setting unit 34 causes the display unit 13 to display an image 61 of the controller via the image generation unit 14, as shown in FIG. 6, for example. The user who sees this image enters, via the touch panel of the display unit 13 and the instruction input unit 33, indications 62a, 63a, 64a, and 65a designating areas and indications 62b, 63b, 64b, and 65b designating parameters. FIG. 6 shows an example of entering instructions for moving the position of the hand unit 44 up, down, left, and right. FIG. 6 also shows a case in which an arrow is used as the indication designating an area and a character surrounded by a square is used as the indication designating a parameter; alternatively, as shown in FIG. 7, the target area may be enclosed in a square 66a to designate the area, and a character 66c connected to the square 66a by a leader line 66b may designate the parameter. The indication designating an area may also be a character or a symbol such as "x". When the command and parameter are entered by voice, the instruction input unit 33 causes the voice input unit 15 to recognize the speech entered by the user, obtains from the library DB 23 the command and parameter corresponding to a keyword in the character data of the recognition result, sets them as the operation corresponding to the area selected by the user, and stores the result in the operation DB 24.
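To illustrate what the operation setting unit 34 could store in the operation DB 24 once an area and an instruction input have been paired, the sketch below binds a rectangular area of the controller image (for example, the box 66a drawn by the user) to the command and parameters selected as above. The rectangle representation and the names OperationSetting and set_operation are assumptions made for this sketch, which builds on the two previous ones.

    @dataclass
    class OperationSetting:
        region: tuple          # (x, y, width, height) of the area in the controller image
        entry: LibraryEntry    # command and parameters assigned to that area

    operation_db: List[OperationSetting] = []   # operation DB 24

    def set_operation(region: tuple, recognized_text: str) -> None:
        # Operation setting unit 34: bind an area of the controller image to an operation.
        entry = select_entry(recognized_text, library_db)
        if entry is not None:
            operation_db.append(OperationSetting(region, entry))

    # The user marks the right-hand button of the handwritten controller (FIG. 6/7)
    # and writes or says "right"; that area becomes the "move right" operation.
    set_operation((320, 120, 40, 40), "right")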
Further, in S3, the operation setting unit 34 determines whether the setting has been completed, and if not, repeats the processing of S2. If the setting has been completed, the process proceeds to S4, and operation using the image of the controller starts. Specifically, as shown in FIG. 8, for example, an image 71 of the controller is displayed on the display unit 13. When the user designates one of the areas 72, 73, 74, and 75 for which an operation has been set, using the pointer 76 or the touch panel of the display unit 13, the selection unit 36 reads the operation setting corresponding to the designated area from the operation DB 24. The control unit 11 then controls the operation of the robot 1 via the controller 2 in accordance with the operation setting read by the selection unit 36. Besides designating an area in the image of the controller, the user may also give operation instructions by voice: the selection unit 36 causes the voice input unit 15 to recognize the user's speech and reads from the operation DB 24 the operation setting corresponding to the keyword matching the character data of the recognition result. For example, when the character data of the recognition result is "right", the operation setting corresponding to the area 75 is read from the operation DB 24. Alternatively, the user may enter handwritten characters by operating the touch panel of the display unit 13, and the selection unit 36 may read the operation setting corresponding to the character data of the recognition result from the operation DB 24.
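Finally, the run-time behaviour of S4, in which the selection unit 36 looks up the designated area in the operation DB 24 and the control unit 11 forwards the command to the controller 2, might look like the following sketch. The function send_to_controller is a stand-in for whatever interface the controller 2 actually exposes; it and the other names are hypothetical and continue the previous sketches.

    def find_setting(x: int, y: int) -> Optional[OperationSetting]:
        # Selection unit 36: hit-test the designated point against the stored areas.
        for setting in operation_db:
            rx, ry, rw, rh = setting.region
            if rx <= x <= rx + rw and ry <= y <= ry + rh:
                return setting
        return None

    def send_to_controller(command: str, parameters: Dict[str, str]) -> None:
        # Placeholder for the control unit 11 driving the robot 1 via the controller 2.
        print(f"-> controller 2: {command} {parameters}")

    def on_tap(x: int, y: int) -> None:
        # Called when the user designates a point in the displayed controller image 71 (FIG. 8).
        setting = find_setting(x, y)
        if setting is not None:
            send_to_controller(setting.entry.command, setting.entry.parameters)

    def on_voice(recognized_text: str) -> None:
        # Voice-based selection: match the recognized word against the library keywords.
        entry = select_entry(recognized_text, library_db)
        if entry is not None:
            send_to_controller(entry.command, entry.parameters)

    on_tap(330, 130)     # inside the area bound above -> Line / direction=right
    on_voice("right")    # saying "right" triggers the same operation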
<Effects of the embodiment>
As described above, according to the present embodiment, the user can easily set the operation of the robot by setting the robot operation instruction corresponding to a predetermined area in the image of the controller. In addition, in the present embodiment, when the user selects a predetermined area in the image of the controller, the operation instruction set by the operation setting unit can be supplied to the robot to control its operation. Furthermore, as described above, the operation of the robot can be set by voice, handwritten characters, entered text, or the like. The operation of the robot can also be set by entering an image.
<Modifications>
In the above embodiment, a handwritten image imitating the controller of a commercially available game machine is used as the image of the controller. However, any image for which a correspondence with the operation of the robot 1 can be defined may be used, such as the image of a humanoid robot shown in FIG. 9 or an image of the actual robot 1 captured via the camera 3 as shown in FIG. 10. Even when such a controller image is used, the operation of the robot can be set easily by performing the same operation settings as described above.
The above embodiment is merely one example of means for realizing the present invention, and should be modified or changed as appropriate depending on the configuration of the apparatus or system to which the present invention is applied and on various conditions; the present invention is not limited to the above embodiment.
DESCRIPTION OF REFERENCE SIGNS: 1 robot; 2 controller; 3 camera; 11 control unit; 12 storage unit; 13 display unit; 14 image generation unit; 15 voice input unit; 16 text input unit; 17 operation input unit; 18 setting processing unit; 21 structure DB; 22 correspondence DB; 23 library DB; 24 operation DB; 25 synthesis DB; 31 image input unit; 32 associating unit; 33 instruction input unit; 34 operation setting unit; 35 operation synthesis unit; 36 selection unit; 41a, 41b, 42a, 42b, 43a, 43b joint units; 44 hand unit; 45, 46, 47 arms

Claims (4)

  1. A control device for controlling a robot, comprising:
     an image input unit that inputs an image of a controller;
     a library database that holds operation instructions for the robot and instruction inputs corresponding to the operation instructions;
     an instruction input unit that inputs a predetermined area in the image of the controller and an instruction input corresponding to the area; and
     an operation setting unit that sets the operation instruction in the library database corresponding to the instruction input entered via the instruction input unit as the operation instruction corresponding to the predetermined area in the image of the controller.
  2. The control device according to claim 1, further comprising a selection unit that displays the image of the controller and, when a user selects a predetermined area in the image of the controller, supplies the operation instruction set by the operation setting unit for the selected area to the robot.
  3. The control device according to claim 1 or 2, wherein the library database holds keywords corresponding to the operation instructions, the instruction input unit outputs characters corresponding to the instruction input, and the operation setting unit performs the setting in accordance with the characters from the instruction input unit and the keywords in the library database.
  4. The control device according to claim 1 or 2, wherein the library database holds images corresponding to the operation instructions, the instruction input unit outputs an image corresponding to the instruction input, and the operation setting unit performs the setting in accordance with the image from the instruction input unit and the images in the library database.
PCT/JP2019/037793 2018-09-28 2019-09-26 Control device WO2020067256A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980058577.9A CN112703093A (en) 2018-09-28 2019-09-26 Control device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018185681 2018-09-28
JP2018-185681 2018-09-28

Publications (1)

Publication Number Publication Date
WO2020067256A1 (en) 2020-04-02

Family

ID=69949622

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/037793 WO2020067256A1 (en) 2018-09-28 2019-09-26 Control device

Country Status (2)

Country Link
CN (1) CN112703093A (en)
WO (1) WO2020067256A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11262883A (en) * 1998-03-19 1999-09-28 Denso Corp Manual operation device for robot
JP2007072901A (en) * 2005-09-08 2007-03-22 Canon Inc Information processing apparatus and display processing method for manuscript data
JP2007242054A (en) * 1995-09-19 2007-09-20 Yaskawa Electric Corp Robot language processing apparatus
JP2016060019A (en) * 2014-09-19 2016-04-25 株式会社デンソーウェーブ Robot operating device, robot system, and robot operating program
JP2017052031A (en) * 2015-09-08 2017-03-16 株式会社デンソーウェーブ Robot operation device and robot operation method
WO2018051435A1 (en) * 2016-09-14 2018-03-22 三菱電機株式会社 Numerical control apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002144263A (en) * 2000-11-09 2002-05-21 Nippon Telegr & Teleph Corp <Ntt> Motion teaching and playback device of robot, its method and recording medium recording motion teaching and playback program of robot
JP2003334779A (en) * 2002-05-13 2003-11-25 Canon Inc Operation code generating method
EP3338969A3 (en) * 2016-12-22 2018-07-25 Seiko Epson Corporation Control apparatus, robot and robot system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007242054A (en) * 1995-09-19 2007-09-20 Yaskawa Electric Corp Robot language processing apparatus
JPH11262883A (en) * 1998-03-19 1999-09-28 Denso Corp Manual operation device for robot
JP2007072901A (en) * 2005-09-08 2007-03-22 Canon Inc Information processing apparatus and display processing method for manuscript data
JP2016060019A (en) * 2014-09-19 2016-04-25 株式会社デンソーウェーブ Robot operating device, robot system, and robot operating program
JP2017052031A (en) * 2015-09-08 2017-03-16 株式会社デンソーウェーブ Robot operation device and robot operation method
WO2018051435A1 (en) * 2016-09-14 2018-03-22 三菱電機株式会社 Numerical control apparatus

Also Published As

Publication number Publication date
CN112703093A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
KR102042115B1 (en) Method for generating robot operation program, and device for generating robot operation program
US5488689A (en) Robot operation training system
EP2923806A1 (en) Robot control device, robot, robotic system, teaching method, and program
EP1310844A1 (en) Simulation device
JP2009072833A (en) Direct instructing device of robot
JP2005052961A (en) Robot system and its controlling method
US10315305B2 (en) Robot control apparatus which displays operation program including state of additional axis
JPS6179589A (en) Operating device for robot
WO2020067256A1 (en) Control device
US6798416B2 (en) Generating animation data using multiple interpolation procedures
Pausch et al. Tailor: creating custom user interfaces based on gesture
WO2020067257A1 (en) Control device
US20230166401A1 (en) Program generation device and non-transitory computer-readable storage medium storing program
US20220281103A1 (en) Information processing apparatus, robot system, method of manufacturing products, information processing method, and recording medium
Charoenseang et al. Human–robot collaboration with augmented reality
JP7208443B2 (en) A control device capable of receiving direct teaching operations, a teaching device, and a computer program for the control device
JP2015100874A (en) Robot system
JP2009166172A (en) Simulation method and simulator for robot
JPH06324668A (en) Screen display method and display device
JP3344499B2 (en) Object operation support device
JP2955654B2 (en) Manipulator work teaching device
JP7217821B1 (en) Robot teaching system
JP2022111464A (en) Computer program, method of creating control program for robot, and system of executing processing of creating control program for robot
JP2024007645A (en) command display device
JP2747802B2 (en) Robot control method and control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19867144

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19867144

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP