WO2019184487A1 - Control method and device for a robot, and robot - Google Patents

Control method and device for a robot, and robot

Info

Publication number
WO2019184487A1
WO2019184487A1 · PCT/CN2018/124313 · CN2018124313W
Authority
WO
WIPO (PCT)
Prior art keywords
control command
control
robot
control commands
group
Prior art date
Application number
PCT/CN2018/124313
Other languages
English (en)
French (fr)
Inventor
魏然
袭开俣
Original Assignee
北京进化者机器人科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京进化者机器人科技有限公司
Publication of WO2019184487A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1658 Programme controls characterised by programming, planning systems for manipulators characterised by programming language
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/40 Robotics, robotics mapping to robotics vision
    • G05B2219/40392 Programming, visual robot programming language

Definitions

  • The present application relates to the field of robot control technology, and in particular to a robot control method, a robot control device, and a robot.
  • With the continuous development of control technology, robots have become a new class of product distinct from ordinary machines; a robot can accept both pre-programmed procedures and human commands to complete complex movements and serve humans.
  • A robot is usually driven by commands from its operating system; the commands drive the actuators, which together produce complex actions or behavioral outputs to complete a task.
  • Existing robot control methods fall mainly into two categories. In the first, code is written in a programming language, or a special software tool is used, to control the robot's functions or behavioral output; this approach requires trained personnel, a properly configured environment, and a deep understanding of the particular robot, although it allows the robot to perform complex behaviors.
  • In the second, commands are issued directly through natural interaction such as speech. This control method is simple, and the robot executes the corresponding behavior to complete the command; however, the behaviors the robot performs this way are usually relatively simple. Continuous, complex commands are difficult for the robot to understand, or the user must interact with the robot for a long time through complicated operations.
  • The purpose of the present application is to provide a robot control method, a robot control device, and a robot, so as to simplify the control of the robot and make it more convenient for the user to control the robot's operation.
  • In a first aspect, an embodiment of the present application provides a method for controlling a robot, applied to a robot, the method including: collecting a plurality of control command images; grouping the plurality of control command images according to a preset group identifier; and executing, according to preset execution logic, the control commands corresponding to each group of control command images; when the control commands corresponding to a group of control command images are executed, if the group includes a plurality of control command images, the control commands corresponding to those control command images are executed simultaneously.
  • With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, wherein the preset group identifier includes at least one of the position of a control command image and the shape, color, or brightness of the control command image; and the preset execution logic includes at least one of sequential execution, judgment execution, loop execution, delayed execution, or conditional execution.
  • With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, wherein the step of executing the control commands corresponding to each group of control command images includes: identifying the control command images in the group to obtain the corresponding control commands; and driving the corresponding actuators to execute the control commands according to a preset correspondence between control commands and actuators.
  • With reference to the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, wherein the method further includes: if a group corresponds to multiple control commands, determining whether at least two of those control commands contradict each other; and if so, proceeding in one of the following ways: stopping execution of the control commands corresponding to the current group and generating error information; or randomly ordering the contradictory control commands and executing them in that order; or randomly selecting one of the contradictory control commands for execution.
  • With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, wherein the step of determining whether at least two of the control commands contradict each other includes: determining whether at least two of the control commands correspond to the same actuator, and if so, determining that the control commands corresponding to the same actuator are contradictory control commands.
  • In a second aspect, an embodiment of the present application provides a control device for a robot, the device being disposed on a robot and including: an acquisition module configured to collect a plurality of control command images; a grouping module configured to group the plurality of control command images according to a preset group identifier; and an execution module configured to execute, according to preset execution logic, the control commands corresponding to each group of control command images; when the control commands corresponding to a group of control command images are executed, if the group includes a plurality of control command images, the control commands corresponding to those control command images are executed simultaneously.
  • With reference to the second aspect, an embodiment of the present application provides a first possible implementation of the second aspect, wherein the execution module is further configured to: identify the control command images in a group to obtain the corresponding control commands; and drive the corresponding actuators to execute the control commands according to a preset correspondence between control commands and actuators.
  • With reference to the second aspect, an embodiment of the present application provides a second possible implementation of the second aspect, wherein the device further includes: a judging module configured to determine, if a group corresponds to multiple control commands, whether at least two of those control commands contradict each other; and a processing module configured to, if at least two control commands contradict each other, proceed in one of the following ways: stopping execution of the control commands corresponding to the current group and generating error information; or randomly ordering the contradictory control commands and executing them in that order; or randomly selecting one of the contradictory control commands for execution.
  • With reference to the second aspect, an embodiment of the present application provides a third possible implementation of the second aspect, wherein the judging module is further configured to: determine whether at least two of the control commands correspond to the same actuator, and if so, determine that the control commands corresponding to the same actuator are contradictory control commands.
  • In a third aspect, an embodiment of the present application provides a robot, including a processor and actuators, wherein the above control device of the robot is disposed in the processor.
  • The robot control method, device, and robot provided by the embodiments of the present application first group the collected control command images according to a preset group identifier, and then execute the control commands corresponding to each group of control command images according to preset execution logic; when the control commands corresponding to a group of control command images are executed, if the group includes a plurality of control command images, the control commands corresponding to those control command images are executed simultaneously.
  • In this way, the complex behavior of the robot can be divided into segments and described, so that the actions in the same group are executed simultaneously; this simplifies the control of the robot and makes it more convenient for the user to control the robot's operation.
  • FIG. 1 is a flowchart of a method for controlling a robot according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a control command image according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a method for defining a robot control logic according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a robot behavior definition according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a composition of a behavior unit when a robot greets according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a location division behavior unit according to a preset command picture according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of another location division behavior unit according to a preset command picture according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a programming template provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of steps actually performed by a robot corresponding to each behavior unit in FIG. 8 according to an embodiment of the present disclosure
  • FIG. 10 is a schematic structural diagram of a control device for a robot according to an embodiment of the present application.
  • An embodiment of the present application provides a robot control method, a robot control device, and a robot; the technology can be applied to spatial interactive robot control and to physical programming.
  • The technology can be implemented in software or hardware, as described below by way of example.
  • Step S102: collecting a plurality of control command images.
  • The control command images may be acquired by a camera or an infrared camera; a control command image usually includes a clearly recognizable feature such as a pattern, shape, symbol, text, and/or barcode, so that the robot can quickly recognize whether the captured image area contains a control command image and accurately identify the control command to which it corresponds.
  • See the schematic diagram of control command images shown in FIG. 2: the left image is an "arrow" shape, which can represent a "move forward" control command; the middle image is a "music symbol" shape, which can represent a "play music" control command; the image on the right is the text "L1", which can represent a control command for "walking along the first line".
  • A control command image (which may also be referred to as a command picture) may be given any of a plurality of colors and shapes, for example a rectangle, triangle, or circle; a control command image may also combine one or more of an image, text, or a code. After the robot recognizes a control command image, it obtains a control command, and the control command can be executed by driving an actuator.
  • Images for expressing other special meanings can also be defined as needed.
  • A control command image may be attached to any physical object, such as a building block, a puzzle piece, various splicing modules, or a sticker; it may also be shown on an electronic display or by a projection device.
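As an illustration of how a recognized command picture could be mapped to a control command, the following Python sketch uses a simple lookup table mirroring the FIG. 2 examples; the feature labels and command names are hypothetical, not taken from the application.

```python
# Hypothetical lookup table from recognized image features to control
# commands (arrow shape, music symbol, "L1" text, as in FIG. 2).
COMMAND_TABLE = {
    "arrow": "move_forward",
    "music_symbol": "play_music",
    "L1": "walk_line_1",
}

def image_to_command(feature: str) -> str:
    """Return the control command for a recognized image feature."""
    if feature not in COMMAND_TABLE:
        raise ValueError(f"unrecognized command image: {feature}")
    return COMMAND_TABLE[feature]
```

In practice the key would come from an image-recognition step (edge detection, feature extraction) rather than a string label.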
  • Step S104: grouping the plurality of control command images according to a preset group identifier.
  • Each group of control command images may also be referred to as a behavior unit. The preset group identifier includes at least one of the position of a control command image and the shape, color, or brightness of the control command image. For example, on the physical or display background to which the control command images are attached, areas belonging to the same behavior unit may be defined in advance, and the control command images placed in the same area are divided into one behavior unit. Alternatively, the outlines of the command pictures may be set in advance to different shapes, for example a circle, ellipse, rectangle, or triangle, and the control command images whose outlines have the same shape are divided into one behavior unit. Likewise, the background color, border color, or the color of the image itself may be set in advance to one of a plurality of colors, and the control command images of the same color are divided into one behavior unit.
  • Step S106: executing the control commands corresponding to each group of control command images according to preset execution logic; when the control commands corresponding to a group of control command images are executed, if the group includes multiple control command images, the control commands corresponding to those control command images are executed simultaneously.
  • The preset execution logic includes at least one of sequential execution, judgment execution, loop execution, delayed execution, or conditional execution. The preset execution logic is the execution logic between groups of control command images, that is, inter-group logic; within a group, if the group includes multiple control command images, the control commands corresponding to those images are executed simultaneously.
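One possible way to realize "simultaneous within a group, ordered between groups" is to start one thread per control command in a group and wait for the whole group to finish before moving on. This is a minimal sketch under assumed actuator callables, not the application's actual implementation.

```python
import threading

def execute_groups(groups, actuators):
    """Execute groups of (actuator, command) pairs: commands within a
    group run concurrently; the next group starts only after the
    current group has finished (inter-group sequential logic)."""
    for group in groups:
        threads = [
            threading.Thread(target=actuators[name], args=(command,))
            for name, command in group
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()  # barrier: the whole behavior unit completes first
```

Richer inter-group logic (delay, loop, conditional) would wrap this outer loop.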
  • The robot control method provided by this embodiment of the present application first groups the collected control command images according to a preset group identifier, and then executes the control commands corresponding to each group of control command images according to preset execution logic; when the control commands corresponding to a group are executed, if the group includes multiple control command images, the corresponding control commands are executed simultaneously.
  • In this way, the complex behavior of the robot can be divided into segments and described, so that the actions in the same group are executed simultaneously; this simplifies the control of the robot and makes it more convenient for the user to control the robot's operation.
  • An embodiment of the present application further provides another robot control method, implemented on the basis of the method shown in FIG. 1; a schematic diagram of a method for defining the robot control logic is shown in FIG. 3.
  • The definition method first defines the robot's behavior logic, which divides and describes the robot's behavior; it then associates the robot's behavior with spatial entities, obtains the corresponding information through the robot's camera or other sensors, and has the robot integrate and execute it.
  • the definition method specifically includes the following parts:
  • One behavior unit corresponds to one group of control command images described above; one action unit corresponds to one control command image described above, or to the action corresponding to that control command image.
  • A series of robot actions is produced by the individual actuators, and each actuator moving or acting independently on the timeline constitutes an action unit. When the robot completes a complex action, several actuators need to operate at the same time. The set of actions performed by the robot at the same time can therefore be treated as a combination, a unit of robot behavior: multiple action units form a behavior unit, and multiple behavior units connected together form a complex movement of the robot.
  • One behavior unit may include a plurality of simultaneous action units, each corresponding to a motion or action of one of the robot's actuators. Because a robot usually has multiple actuators whose actions can be performed at the same time without affecting each other, a plurality of action units can be combined into a behavior unit, and combinations of behavior units form complex behavioral actions.
  • The first behavior unit in a complex behavior is composed of action units for a plurality of actuators, such as a wheel, a speaker, and an arm, each of which performs the motion or action corresponding to its action unit.
  • The complex behavior of a robot is represented by a logical combination of behavior units; behavior units can be combined logically following ordinary programming ideas.
  • The simplest logic is sequential execution of multiple behavior units; judgment logic, loop logic, delay logic, or conditional logic may also be used. For example, after one behavior unit is executed, the next behavior unit may be executed n seconds later (delay logic); it is also possible to wait for user feedback after behavior unit 1 completes, executing behavior unit 2 when the user feeds back command A and behavior unit 3 when the user feeds back command B (conditional logic).
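The delay and conditional logic just described can be sketched as follows; the behavior-unit callables and the feedback function are hypothetical placeholders standing in for real robot behaviors and sensors.

```python
import time

def run_units(unit1, unit2, unit3, get_feedback, delay_s=0.0):
    """Run behavior unit 1, wait delay_s seconds (delay logic), then
    branch on user feedback (conditional logic): command "A" selects
    behavior unit 2, command "B" selects behavior unit 3."""
    unit1()
    time.sleep(delay_s)
    feedback = get_feedback()
    if feedback == "A":
        return unit2()
    if feedback == "B":
        return unit3()
    return None  # no recognized feedback: nothing further executes
```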
  • FIG. 5 is a schematic diagram of the composition of the behavior units when the robot greets someone. To perform the greeting task, the robot executes 9 action units, S1 to S9, divided into three behavior units: behavior unit 1, behavior unit 2, and behavior unit 3. S1 to S3 in behavior unit 1 are performed simultaneously, S4 to S6 in behavior unit 2 are performed simultaneously, and S7 to S9 in behavior unit 3 are performed simultaneously.
  • The three action units in each behavior unit are completed by the wheel, the speaker, and the arm respectively; no two action units contradict each other, so the robot can execute them successfully.
  • The behavior units may be divided in multiple ways, described in detail as follows:
  • See FIG. 6, a schematic diagram of dividing behavior units according to the preset positions of command pictures. Command pictures can be attached to a regular board into which multiple rows and columns of command-picture inserts can be placed. The robot can recognize each row and column of the board; it can be preset that the command pictures inserted in one column form a behavior unit, as shown in (a) of FIG. 6, or that the command pictures inserted in one row form a behavior unit; each 2*3 or 3*2 matrix of inserts may also be preset as a behavior unit, as shown in (b) and (c) of FIG. 6.
  • The action units corresponding to the command pictures on the inserts within an area are executed at the same time, regardless of the specific position and order of the command pictures within that area. For example, in (a) of FIG. 6, the three action units corresponding to command pictures 1, 2, and 3 are executed at the same time; then, according to the behavior logic, the two action units corresponding to command pictures 4 and 5 are executed simultaneously, followed by the three action units corresponding to 6, 7, and 8. In (b) of FIG. 6, pictures 1 and 2 are executed simultaneously first, then 3, 4, 5, and 6 simultaneously, and finally 7.
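Grouping by board position, as in the FIG. 6(a) column scheme, could be implemented by bucketing recognized pictures by the column they occupy; the grid coordinates here are illustrative assumptions.

```python
def group_by_column(cells):
    """cells maps (row, column) grid positions to command-picture ids.
    Each column forms one behavior unit; units are ordered left to
    right, matching FIG. 6(a)-style column grouping."""
    columns = {}
    for (row, col), picture in sorted(cells.items(),
                                      key=lambda kv: (kv[0][1], kv[0][0])):
        columns.setdefault(col, []).append(picture)
    return [columns[col] for col in sorted(columns)]
```

Row-based or 2*3 matrix grouping would only change the bucketing key.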
  • The image areas may also take an irregular form, as shown in FIG. 7, in which three circles are the defined areas; the action units corresponding to the command pictures within each circle are executed at the same time. The squares may be building blocks or cards carrying the command pictures.
  • In this way, the user knows clearly that the commands corresponding to the command pictures in a specific area are executed at the same time, which makes programming convenient; the placement of pictures within an area can be quite casual, improving the fun and convenience of the robot control method.
  • Command pictures can also be made in various shapes, and all recognized command pictures are grouped by shape: the action units corresponding to all circular command pictures form one behavior unit and are executed simultaneously, and the action units corresponding to all square command pictures form another behavior unit and are then executed simultaneously.
  • Command pictures can likewise have various background or border colors, and all recognized command pictures are grouped by color: the action units corresponding to command pictures of the same color form one behavior unit and are executed simultaneously.
  • These approaches let the user distinguish the groups clearly by the shape or color of the command pictures, and the positions of the command pictures can be more arbitrary, further improving the fun and convenience of the robot control method.
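Shape- or color-based grouping amounts to a one-pass bucketing over the recognized pictures. In this sketch the field names ("shape", "color", "command") are assumed for illustration.

```python
from collections import defaultdict

def group_by_attribute(pictures, attribute):
    """Group recognized command pictures by a display attribute such as
    "shape" or "color"; each bucket becomes one behavior unit whose
    commands are executed simultaneously."""
    units = defaultdict(list)
    for picture in pictures:
        units[picture[attribute]].append(picture["command"])
    return dict(units)
```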
  • The step of executing the control commands corresponding to each group of control command images may be implemented as follows: (1) identify the control command images in the group to obtain the corresponding control commands, for example by edge detection, feature extraction, or other recognition methods; (2) drive the corresponding actuators to execute the control commands according to the preset correspondence between control commands and actuators.
  • Specifically, the robot captures images of a plurality of command pictures, obtains the control command corresponding to each command picture through image recognition, and during recognition distinguishes which command pictures belong to which group; the action units or control commands corresponding to each group of command pictures are executed as one behavior unit, and the behavior units are executed in order according to the behavior logic.
  • The action units corresponding to each group of command pictures are executed at the same time to form a complex action, and the complex actions can be further combined according to the behavior logic to form complex continuous actions.
  • The method further includes: if a group corresponds to multiple control commands, determining whether at least two of those control commands contradict each other; if so, proceeding in one of the following ways: stopping execution of the control commands corresponding to the current group and generating error information; or randomly ordering the contradictory control commands and executing them in that order; or randomly selecting one of the contradictory control commands for execution.
  • The step of determining whether at least two of the control commands contradict each other may be implemented as follows: determine whether at least two of the control commands correspond to the same actuator, and if so, determine that the control commands corresponding to the same actuator are contradictory control commands.
  • In this way, situations where the placement of the physical pictures does not match the definitions and cannot actually be executed are handled, for example a "move forward" and a "move backward" command in the same behavior unit.
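A sketch of the actuator-based conflict check and the three resolution strategies described above; representing each command as an (actuator, command) pair is an assumption made for illustration.

```python
import random
from collections import Counter

def find_conflicts(commands):
    """Commands in one behavior unit that target the same actuator are
    treated as contradictory (e.g. wheel: forward vs. backward)."""
    uses = Counter(actuator for actuator, _ in commands)
    return [c for c in commands if uses[c[0]] > 1]

def resolve(commands, strategy="error"):
    """Apply one of the three conflict-handling strategies to a group."""
    conflicts = find_conflicts(commands)
    if not conflicts:
        return commands
    safe = [c for c in commands if c not in conflicts]
    if strategy == "error":          # stop and report
        raise RuntimeError("contradictory control commands in group")
    if strategy == "random_order":   # run conflicting commands sequentially
        ordered = conflicts[:]
        random.shuffle(ordered)
        return safe + ordered
    if strategy == "pick_one":       # keep a single conflicting command
        return safe + [random.choice(conflicts)]
    raise ValueError(f"unknown strategy: {strategy}")
```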
  • After acquiring the meanings of the multiple command pictures and automatically grouping and parsing the commands into behavior units, the robot executes each behavior unit according to the preset logic between behavior units, and when executing a behavior unit it executes its action units simultaneously.
  • An embodiment of the present application further provides another robot control method, which specifically includes the following steps:
  • Step (1): set the position area of each behavior unit on the programming board.
  • Step (2): the user places inserts bearing command pictures (a command picture may also be referred to as a control command image) on the programming board, as shown in FIG. 8; the black dots at the corners of the board in FIG. 8 help the robot recognize the board.
  • Step (3): the user sets on the robot the logical relationship between the behavior units (that is, the action units in each row of FIG. 8), for example executing the first three behavior units in sequence and then, based on user feedback, executing behavior unit 4 or behavior unit 5.
  • Step (4): the robot scans and recognizes the board, identifying each command picture on it, and automatically divides the pictures into five behavior units.
  • Step (5): according to the behavior logic set in step (3), each behavior unit is executed, and the action units corresponding to the command pictures in each behavior unit are executed simultaneously.
  • Here, one behavior unit corresponds to one group of control command images described above; one action unit corresponds to one control command image described above, or to the action corresponding to that control command image.
  • FIG. 9 is a flowchart of the steps actually performed by the robot for each behavior unit in FIG. 8.
  • The meaning of each command picture can be preset, and the played voice is likewise generated or recorded in advance.
  • Behavior unit 1 to behavior unit 3 are executed sequentially, and behavior unit 3 has a conditional execution relation with behavior units 4 and 5: if the user gives voice feedback within 5 seconds after behavior unit 3 is executed, behavior unit 4 is executed; if the user gives no voice feedback within 5 seconds, behavior unit 5 is executed.
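The 5-second voice-feedback branch could look like the following; `wait_for_voice` stands in for whatever speech sensor the robot provides and is a purely hypothetical interface.

```python
def next_behavior_unit(wait_for_voice, timeout_s=5.0):
    """After behavior unit 3 finishes, wait up to timeout_s seconds for
    voice feedback: feedback selects behavior unit 4, and a timeout
    selects behavior unit 5."""
    heard = wait_for_voice(timeout_s)  # True if speech arrived in time
    return "behavior_unit_4" if heard else "behavior_unit_5"
```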
  • The above method can divide the robot's complex behavior into segments and describe it, so that the actions of the same group are executed simultaneously, simplifying the control of the robot and improving the convenience with which the user controls the robot's operation.
  • the device is disposed on a robot, and the device includes:
  • the acquisition module 100, configured to collect a plurality of control command images;
  • the grouping module 101, configured to group the plurality of control command images according to a preset group identifier;
  • the execution module 102, configured to execute the control commands corresponding to each group of control command images according to preset execution logic; when the control commands corresponding to a group are executed, if the group includes multiple control command images, the control commands corresponding to those images are executed simultaneously.
  • The execution module is further configured to: identify the control command images in a group to obtain the corresponding control commands; and drive the corresponding actuators to execute the control commands according to the preset correspondence between control commands and actuators.
  • The device further includes: a judging module configured to determine, if a group corresponds to multiple control commands, whether at least two of those control commands contradict each other; and a processing module configured to, if at least two control commands contradict each other, proceed in one of the following ways: stopping execution of the control commands corresponding to the current group and generating error information; or randomly ordering the contradictory control commands and executing them in that order; or randomly selecting one of the contradictory control commands for execution.
  • The judging module is further configured to: determine whether at least two of the control commands correspond to the same actuator, and if so, determine that the control commands corresponding to the same actuator are contradictory control commands.
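The acquisition, grouping, and execution modules could be composed as below. The camera, group-key, and actuator interfaces are invented placeholders, and the execution step here runs commands in order rather than concurrently, purely for brevity.

```python
class RobotControlDevice:
    """Sketch of the acquisition, grouping, and execution modules."""

    def __init__(self, camera, group_key, actuators):
        self.camera = camera        # callable returning command images
        self.group_key = group_key  # callable mapping image -> group id
        self.actuators = actuators  # actuator name -> callable

    def acquire(self):              # acquisition module
        return self.camera()

    def group(self, images):        # grouping module
        groups = {}
        for image in images:
            groups.setdefault(self.group_key(image), []).append(image)
        return list(groups.values())

    def execute(self, groups):      # execution module (sequential stand-in
        for unit in groups:         # for simultaneous in-group execution)
            for image in unit:
                self.actuators[image["actuator"]](image["command"])

    def run(self):
        self.execute(self.group(self.acquire()))
```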
  • an embodiment of the present application further provides a robot, including a processor and execution mechanisms; the robot control device described above is disposed in the processor.
  • the robot provided by the embodiments of the present application has the same technical features as the robot control method and device provided by the above embodiments, so it can solve the same technical problems and achieve the same technical effects.
  • one command picture corresponds to the command of one action unit
  • one or more action units form a behavior unit
  • the action units within each behavior unit are executed simultaneously
  • through image recognition of the display features of the multiple command pictures, the command pictures are grouped automatically, and the action units corresponding to each group's command pictures are executed simultaneously as one behavior unit; the behavior units are executed according to preset logic; when the action units within a behavior unit conflict, a preset handling mechanism can be used so that the actions can still be executed smoothly.
  • the robot control method, device, and robot parse the robot's behavior into combinations of behavior units, so that the spatial-interaction approach can control the robot to perform complex behaviors without requiring the user to have professional training;
  • behavior units composed of action units simplify the robot's complex behavior; the command-picture interaction method is then adopted, which is simple and easy to understand and requires neither professional training nor deep familiarity with the robot; this approach quickly inputs complex commands into the robot, realizing a robot control process with simple command input and complex behavior execution.
  • the computer program product of the robot control method, device, and robot provided by the embodiments of the present application comprises a computer-readable storage medium storing program code; the commands included in the program code can be configured to perform the methods of the foregoing method embodiments
  • if the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium
  • the technical solution of the present application, in essence or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of commands used to cause a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the various embodiments of the present application
  • the foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk


Abstract

The present application provides a robot control method and device, and a robot. The method is applied to a robot and includes: capturing multiple control command images; grouping the multiple control command images according to preset group identifiers; and executing the control commands corresponding to each group of control command images according to preset execution logic, where, when executing the control commands corresponding to a group of control command images, if the group contains multiple control command images, the control commands corresponding to those images are executed simultaneously. By grouping control command images, the present application can divide and describe a robot's complex action behaviors so that actions in the same group are executed simultaneously, which simplifies the way the robot is controlled and makes it more convenient for users to operate the robot.

Description

Robot control method and device, and robot
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. CN 201810296937.6, entitled "Robot control method and device, and robot", filed with the China Patent Office on March 30, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of robot control, and in particular to a robot control method and device, and a robot.
Background
With the continuous development of control technology, robots have become entirely new products independent of ordinary machines; a robot can accept both pre-programmed instructions and human commands to complete complex movements in the service of humans.
Robots are usually controlled through commands issued by an operating system; the commands drive all execution mechanisms to produce complex actions or behaviors that accomplish a task. Existing robot control approaches fall into two main categories. In the first, code is written in a programming language, or dedicated tool software is used, to realize the robot's functions or behavior output; this approach requires relatively specialized personnel and a properly configured environment, and demands that users understand the specific robot well, although it allows the robot to perform complex behaviors. In the second, commands are issued directly through real-world interaction such as speech; this control approach is simple, and the robot executes the corresponding behavior to complete the command. With this kind of control, however, the robot's behavior is usually fairly simple: the robot has difficulty understanding continuous, complex commands, or the user needs to interact with the robot for a long time, making operation cumbersome.
Summary
In view of the above, the purpose of the present application is to provide a robot control method and device, and a robot, so as to simplify the way robots are controlled and make it more convenient for users to operate them.
In a first aspect, an embodiment of the present application provides a robot control method applied to a robot, the method including: capturing multiple control command images; grouping the multiple control command images according to preset group identifiers; executing the control commands corresponding to each group of control command images according to preset execution logic; and, when executing the control commands corresponding to a group of control command images, if the group contains multiple control command images, executing the control commands corresponding to those images simultaneously.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, in which the preset group identifier includes at least one of the position of a control command image and the shape, color, or brightness of a control command image, and the preset execution logic includes at least one of sequential execution, judgment-based execution, loop execution, delayed execution, and conditional execution.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, in which the step of executing the control commands corresponding to each group of control command images includes: identifying the control command images in the group to obtain the corresponding control commands; and driving the corresponding execution mechanisms to execute the control commands according to a preset correspondence between control commands and execution mechanisms.
With reference to the second possible implementation of the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, in which the method further includes: if a group corresponds to multiple control commands, determining whether at least two of those control commands contradict each other; and if so, handling the contradiction in one of the following ways: stopping execution of the control commands corresponding to the current group and generating an error message; or arranging the contradictory control commands in random order and executing them in that order; or randomly selecting one of the contradictory control commands to execute.
With reference to the third possible implementation of the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, in which the step of determining whether at least two of the multiple control commands contradict each other includes: determining whether at least two of the control commands correspond to the same execution mechanism, and if so, determining that the control commands corresponding to the same execution mechanism are contradictory.
In a second aspect, an embodiment of the present application provides a robot control device disposed in a robot, the device including: a capture module configured to capture multiple control command images; a grouping module configured to group the multiple control command images according to preset group identifiers; and an execution module configured to execute the control commands corresponding to each group of control command images according to preset execution logic, where, when executing the control commands corresponding to a group of control command images, if the group contains multiple control command images, the control commands corresponding to those images are executed simultaneously.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation of the second aspect, in which the execution module is further configured to: identify the control command images in a group to obtain the corresponding control commands; and drive the corresponding execution mechanisms to execute the control commands according to a preset correspondence between control commands and execution mechanisms.
With reference to the first possible implementation of the second aspect, an embodiment of the present application provides a second possible implementation of the second aspect, in which the device further includes: a judging module configured to determine, if a group corresponds to multiple control commands, whether at least two of those control commands contradict each other; and a processing module configured to, if at least two control commands contradict each other, handle the contradiction in one of the following ways: stopping execution of the control commands corresponding to the current group and generating an error message; or arranging the contradictory control commands in random order and executing them in that order; or randomly selecting one of the contradictory control commands to execute.
With reference to the second possible implementation of the second aspect, an embodiment of the present application provides a third possible implementation of the second aspect, in which the judging module is further configured to: determine whether at least two of the multiple control commands correspond to the same execution mechanism, and if so, determine that the control commands corresponding to the same execution mechanism are contradictory.
In a third aspect, an embodiment of the present application provides a robot including a processor and execution mechanisms, wherein the robot control device described above is disposed in the processor.
The embodiments of the present application bring the following beneficial effects:
In the robot control method and device, and the robot, provided by the embodiments of the present application, the captured control command images are first grouped according to preset group identifiers; then the control commands corresponding to each group of control command images are executed according to preset execution logic, and when executing the control commands corresponding to a group, if the group contains multiple control command images, the corresponding control commands are executed simultaneously. By grouping control command images, this approach can divide and describe the robot's complex action behaviors so that actions in the same group are executed simultaneously, which simplifies the way the robot is controlled and makes it more convenient for users to operate the robot.
Other features and advantages of the present application will be set forth in the following description; alternatively, some features and advantages can be inferred or unambiguously determined from the description, or learned by practicing the above techniques of the present application.
To make the above objectives, features, and advantages of the present application more apparent and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To describe the technical solutions of the specific embodiments of the present application or of the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a robot control method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of control command images provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a method for defining robot control logic provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of a robot behavior definition provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of the behavior units composing a robot's greeting, provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of dividing behavior units according to preset command-picture positions, provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of another way of dividing behavior units according to preset command-picture positions, provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of a programming board provided by an embodiment of the present application;
Fig. 9 is a flowchart of the steps actually executed by the robot for each behavior unit in Fig. 8, provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a robot control device provided by an embodiment of the present application.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Considering that existing robot control approaches are relatively complicated, the embodiments of the present application provide a robot control method and device, and a robot. The technique can be applied to spatial-interaction robot control, and in particular to robot control based on tangible (physical-object) programming; it can be implemented with related software or hardware and is described below through embodiments.
Referring to the flowchart of a robot control method shown in Fig. 1, the method is applied to a robot and includes the following steps:
Step S102: capture multiple control command images.
The control command images may be captured by a camera or an infrared camera. A control command image usually contains fairly distinctive recognition features such as patterns, shapes, symbols, text, and/or barcodes, so that the robot can quickly determine whether an image in the camera's capture area contains a control command image and accurately identify the control command it corresponds to.
Referring to the schematic diagram of control command images shown in Fig. 2: the left image is an "arrow" shape, which may represent the control command "move forward"; the middle image is a "musical note" shape, which may represent the control command "play music"; the right image is the text "L1", which may represent the control command "walk along preset route 1".
Specifically, the control command images (which may also be called command pictures) can be given various colors and shapes, for example rectangles, triangles, and circles; a control command image may also carry one of, or a combination of, pictures, text, and codes. After the robot recognizes a control command image, it obtains one control command, which can be executed by one driven execution mechanism performing the corresponding action. In addition to control command images expressing control commands, images expressing other special meanings may also be provided.
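A minimal lookup of this kind can be sketched in Python; the labels ("arrow", "music_note", "L1") and command names are illustrative assumptions, not values defined by the application:

```python
# Illustrative lookup from a recognized command-picture label to a control
# command, mirroring the Fig. 2 examples (arrow, musical note, the text "L1").
LABEL_TO_COMMAND = {
    "arrow": "move_forward",
    "music_note": "play_music",
    "L1": "follow_route_1",
}

def decode_command(label):
    """Map a recognized command-picture label to its control command."""
    if label not in LABEL_TO_COMMAND:
        raise ValueError(f"unrecognized command picture: {label}")
    return LABEL_TO_COMMAND[label]

print(decode_command("arrow"))  # move_forward
```

In practice the label would come from the robot's image-recognition step; the table stands in for the preset correspondence between pictures and commands.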
The control command images can be attached to any physical object, such as building blocks, jigsaw pieces, various snap-together modules, or stickers; they can also be shown on an electronic display or by a projection device.
Step S104: group the multiple control command images according to preset group identifiers.
Each group of control command images may also be called a behavior unit. The preset group identifier includes at least one of the position of a control command image and the shape, color, or brightness of a control command image. For example, on the physical or display background to which the control command images are attached, regions belonging to the same behavior unit can be defined in advance, and control command images placed in the same region are divided into one behavior unit.
As another example, the outlines of the command pictures can be set in advance to different shapes, such as circles, ellipses, rectangles, and triangles, and control command images whose outlines share the same shape are divided into one behavior unit. As yet another example, the background color, border color, or the color of the image itself can be set in advance to several colors, and control command images of the same color are divided into one behavior unit.
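The grouping of step S104 can be sketched as follows; the record layout and the choice of color as the group identifier are assumptions for illustration (position or shape would work the same way):

```python
def group_by_identifier(images, key="color"):
    """Group command-image records by the chosen group identifier."""
    groups = {}
    for img in images:
        groups.setdefault(img[key], []).append(img["command"])
    return groups

# Recognized images sharing a color form one behavior unit.
images = [
    {"command": "move_forward", "color": "red"},
    {"command": "play_music", "color": "red"},
    {"command": "wave_arm", "color": "blue"},
]
print(group_by_identifier(images))  # {'red': ['move_forward', 'play_music'], 'blue': ['wave_arm']}
```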
Step S106: execute the control commands corresponding to each group of control command images according to preset execution logic; when executing the control commands corresponding to a group of control command images, if the group contains multiple control command images, execute the control commands corresponding to those images simultaneously. The preset execution logic includes at least one of sequential execution, judgment-based execution, loop execution, delayed execution, and conditional execution; it is the execution logic between the groups of control command images, i.e., inter-group logic. Within a group, if the group contains multiple control command images, the control commands corresponding to those images are executed simultaneously.
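A minimal sketch of step S106, assuming the simplest inter-group logic (sequential) and thread-based simultaneous execution within a group; a real robot would dispatch each command to an actuator driver rather than append to a list:

```python
import threading

def execute_group(commands, results):
    # run the group's commands "simultaneously", one thread per command
    threads = [threading.Thread(target=results.append, args=(cmd,))
               for cmd in commands]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # the group finishes only when every command has finished

def execute_groups(groups):
    results = []
    for group in groups:  # inter-group logic: plain sequential execution
        execute_group(group, results)
    return results

done = execute_groups([["move_forward", "play_music"], ["wave_arm"]])
```

Commands within one group may complete in any order, but every command of a group completes before the next group starts, matching the group-by-group execution described above.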
In the robot control method provided by this embodiment of the present application, the captured control command images are first grouped according to preset group identifiers; the control commands corresponding to each group are then executed according to preset execution logic, and when a group contains multiple control command images, the corresponding control commands are executed simultaneously. By grouping control command images, this approach can divide and describe the robot's complex action behaviors so that actions in the same group are executed simultaneously, which simplifies the way the robot is controlled and makes it more convenient for users to operate the robot.
An embodiment of the present application further provides another robot control method, implemented on the basis of the method shown in Fig. 1. Fig. 3 is a schematic diagram of a method for defining robot control logic. This definition method first defines a behavior logic for the robot, dividing and describing the robot's behavior; it then maps the robot's behaviors to physical entities in space, acquires the corresponding information through the robot's camera or other sensors, and has the robot execute the integrated result, thereby realizing complex robot behaviors through simple, spatially embodied interaction.
The definition method specifically includes the following parts:
(1) First, the robot's behavior logic must be established, i.e., the robot's behavior must be defined. This specifically includes classifying the robot's concrete performances and dividing the robot's complex behavior into behavior units, each behavior unit being set as a combination of action units executed simultaneously.
In this embodiment, one behavior unit corresponds to one group of control command images described above; one action unit corresponds to one control command image, or to the action corresponding to one control command image.
(2) Specify the logic and steps concretely executed within the behavior units obtained by dividing the robot's complex behavior; the logic can be defined with reference to programming languages.
(3) Design pictured robot commands, mapping command pictures to the robot's action units, and mapping each group of physical command pictures to a behavior unit and to the execution logic.
(4) Through spatial interaction, complete the process in which the robot acquires the command-picture information, and execute, according to the defined logic, the behavior units composed of the action units corresponding to each group of physical command pictures.
Referring to the schematic diagram of a robot behavior definition shown in Fig. 4: a robot's series of actions is composed of the cooperation of its execution mechanisms, and each execution mechanism can move or perform independently on the timeline as one action unit. For the robot to complete a complex action, the execution mechanisms must act simultaneously during the same time period. A set of actions executed simultaneously by the robot can form a combination, serving as one unit of robot behavior; that is, multiple action units form a behavior unit, and multiple behavior units connected together form a complex robot action.
As shown in Fig. 4, a behavior unit can contain multiple action units at the same time. Depending on the robot, each action unit corresponds to the movement or action of one of the robot's execution mechanisms. Because a robot usually has multiple execution mechanisms whose actions can be executed at the same moment without interfering with each other, multiple action units can be combined into a behavior unit, and the combination of behavior units forms complex behavior. As shown in Fig. 4, the first behavior unit of a complex behavior is composed of action units in which execution mechanisms such as the wheels, speaker, and arms each move or act.
The robot's complex behavior is expressed as a logical combination of behavior units. The logical combination of behavior units can be executed following programming ideas: the simplest logic is the sequential execution of multiple behavior units, but the logic can also be judgment logic, loop logic, delay logic, or condition logic. For example, the next behavior unit may be executed n seconds after one behavior unit finishes (equivalent to delay logic); or, after behavior unit 1 finishes, the robot may wait for user feedback, executing behavior unit 2 when the user feeds back command A and behavior unit 3 when the user feeds back command B (equivalent to condition logic).
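The condition logic just described can be sketched as a simple branch; the unit names and feedback commands ("A", "B") are taken from the example above, while the fallback value is an assumption of this sketch:

```python
def next_unit(feedback):
    """Condition logic: branch after behavior unit 1 on the user's feedback."""
    branches = {"A": "behavior_unit_2", "B": "behavior_unit_3"}
    # keep waiting if the feedback matches neither command
    return branches.get(feedback, "wait_for_feedback")

print(next_unit("A"))  # behavior_unit_2
print(next_unit("B"))  # behavior_unit_3
```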
Fig. 5 is a schematic diagram of the behavior units composing a robot's greeting. When the robot performs a greeting task, it needs to execute nine action units, S1 to S9, which are divided into three behavior units: behavior unit 1, behavior unit 2, and behavior unit 3. S1 to S3 in behavior unit 1 are executed simultaneously, S4 to S6 in behavior unit 2 are executed simultaneously, and S7 to S9 in behavior unit 3 are executed simultaneously. The three action units in each behavior unit are completed at the same time by the wheels, the speaker, and the arms, respectively; since no two of these action units conflict with each other, the robot can execute them smoothly.
In this embodiment, the way behavior units are divided can be set in several ways, described as follows:
(a) Division according to the preset positions of the command pictures
Referring to Fig. 6, a schematic diagram of dividing behavior units according to preset command-picture positions: the command pictures can be attached to a regular board into which multiple rows and columns of tabs carrying command pictures can be inserted. The robot can recognize the rows and columns of the board, and it can be preset that the command pictures of the tabs inserted in one column form one behavior unit, as shown in Fig. 6(a), or that the command pictures of the tabs inserted in one row form one behavior unit; it can also be preset that the command pictures of each 2*3 or 3*2 tab matrix form one behavior unit, as shown in Fig. 6(b) and (c).
After tabs are inserted into a region of the board, the action units corresponding to the command pictures on the tabs in that region will be executed simultaneously, regardless of the specific positions and order of the command pictures within the region. For example, in Fig. 6(a), the three action units corresponding to command pictures 1, 2, and 3 are executed simultaneously; then, following the behavior logic, the two action units corresponding to command pictures 4 and 5 are executed simultaneously, and then the three action units corresponding to 6, 7, and 8. As another example, in Fig. 6(b), 1 and 2 are executed simultaneously first, then 3, 4, 5, and 6 simultaneously, and finally 7.
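The column-wise grouping of Fig. 6(a) can be sketched as follows, assuming board recognition yields (row, column, command) tuples; order within a column is deliberately discarded, since placement inside a region does not matter:

```python
def group_by_column(tabs):
    """tabs: iterable of (row, column, command) tuples from board recognition."""
    columns = {}
    for row, col, command in tabs:
        columns.setdefault(col, set()).add(command)  # order inside a column is irrelevant
    # behavior units are executed column by column, left to right
    return [columns[c] for c in sorted(columns)]

units = group_by_column([(0, 0, "1"), (1, 0, "2"), (2, 0, "3"),
                         (0, 1, "4"), (1, 1, "5")])
print(units)  # [{'1', '2', '3'}, {'4', '5'}]
```

Row-wise or 2*3-matrix grouping only changes the key used in `setdefault`.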
In addition, referring to Fig. 7, another schematic diagram of dividing behavior units according to preset command-picture positions: the image regions can also take irregular forms as shown in Fig. 7, where the three circles are the defined regions and the action units corresponding to the command pictures in each circle are executed simultaneously; the squares can be building blocks carrying command pictures, or the command pictures themselves.
With the above arrangement, the user clearly knows that the commands corresponding to the command pictures in a given region will be executed simultaneously, which makes programming easier. Moreover, once the regions are determined, since all commands within a region are executed simultaneously, the pictures within a region can be placed quite casually, making this way of controlling the robot more entertaining and convenient.
(b) Division according to the preset shapes or colors of the command pictures
For example, the command pictures can take various shapes: all recognized command pictures are grouped by shape, the action units corresponding to all circular command pictures are executed simultaneously as one behavior unit, and the action units corresponding to all square command pictures are then executed simultaneously as another behavior unit. The command pictures can also have various background or border colors: all recognized command pictures are grouped by color, and the action units corresponding to command pictures of the same color are executed simultaneously as one behavior unit. In addition, grouping can be based on the preset lightness (i.e., brightness) of the command pictures or on other display features.
This approach lets the user clearly distinguish the groups by the shape or color of the command pictures, so the pictures can be arranged even more freely, further increasing the entertainment value and convenience of this way of controlling the robot.
In this embodiment, the step of executing the control commands corresponding to each group of control command images can be implemented as follows: (1) identify the control command images in the group to obtain the corresponding control commands; for example, the control command images can be recognized in various ways such as edge detection or feature extraction; (2) drive the corresponding execution mechanisms to execute the control commands according to a preset correspondence between control commands and execution mechanisms.
During interaction, the robot captures the images of multiple command pictures, obtains the control command corresponding to each command picture through image recognition, and, during recognition, distinguishes which command pictures belong to one group. The action units or control commands corresponding to each group of command pictures are treated as one behavior unit and executed simultaneously, and the groups of action units are executed in order according to the behavior logic. The action units corresponding to each group of command pictures are executed simultaneously to compose one complex action, and the complex actions can be further combined according to the behavior logic into complex continuous actions.
Further, to improve the stability of the above robot control method, the method also includes: if a group corresponds to multiple control commands, determining whether at least two of those control commands contradict each other; and if so, handling the contradiction in one of the following ways: stopping execution of the control commands corresponding to the current group and generating an error message; or arranging the contradictory control commands in random order and executing them in that order; or randomly selecting one of the contradictory control commands to execute.
The step of determining whether at least two of the multiple control commands contradict each other can be implemented as follows: determine whether at least two of the control commands correspond to the same execution mechanism; if so, determine that the control commands corresponding to the same execution mechanism are contradictory.
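This same-mechanism check can be sketched in Python; the command-to-mechanism table is a made-up example, not the patent's actual correspondence:

```python
# Example correspondence between control commands and execution mechanisms
# (illustrative values only).
COMMAND_TO_MECHANISM = {
    "move_forward": "wheels",
    "move_backward": "wheels",
    "play_music": "speaker",
    "wave_arm": "arm",
}

def find_conflicts(commands):
    """Group commands by mechanism; any mechanism with two or more commands conflicts."""
    by_mechanism = {}
    for cmd in commands:
        by_mechanism.setdefault(COMMAND_TO_MECHANISM[cmd], []).append(cmd)
    return [cmds for cmds in by_mechanism.values() if len(cmds) > 1]

# Forward and backward motion both target the wheels, so they conflict.
print(find_conflicts(["move_forward", "move_backward", "play_music"]))
```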
Specifically, for the multiple action units within a behavior unit (each action unit corresponding to one control command), cases where the physical pictures are placed in a way that does not conform to the definition and therefore cannot be executed can be handled according to the actual situation. For example, if the same behavior unit contains both forward movement and backward movement, which cannot both be executed, the robot can report an error; alternatively, the action logic of action units that cannot be executed simultaneously can be changed within the behavior unit according to preset logic, for example executing one of them first and then the other, the behavior unit being considered complete once all have finished; or one of them can be chosen for execution while the conflicting other is not executed.
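The three handling strategies can be sketched together; the strategy names ("error", "serialize", "pick_one") are labels invented for this sketch:

```python
import random

def resolve(conflicting, strategy, rng=random):
    """Handle mutually contradictory commands with one of three strategies."""
    if strategy == "error":
        # stop executing the current group and report an error
        raise RuntimeError(f"conflicting commands in group: {conflicting}")
    if strategy == "serialize":
        ordered = list(conflicting)
        rng.shuffle(ordered)  # random order, then execute one after another
        return ordered
    if strategy == "pick_one":
        return [rng.choice(conflicting)]  # execute only one, drop the rest
    raise ValueError(f"unknown strategy: {strategy}")

print(resolve(["move_forward", "move_backward"], "serialize"))
```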
After the robot acquires the meanings of the multiple command pictures and automatically groups and parses them into the commands of each behavior unit, it executes the behavior units according to the preset inter-unit logic, executing the action units contained in each behavior unit simultaneously.
An embodiment of the present application further provides another robot control method, which specifically includes the following steps:
Step (1): define the position region of each behavior unit on the programming board.
Step (2): the user inserts tabs bearing command pictures (a command picture may also be called a control command image) into the programming board, as shown in Fig. 8; the black dots at the corners of the board in Fig. 8 are used by the robot to recognize the board.
Step (3): the user sets on the robot the logical relations between the behavior units (i.e., the action units in each row of Fig. 8), for example: execute the first three behavior units sequentially, then make a judgment based on user feedback and execute behavior unit 4 or behavior unit 5.
Step (4): the robot scans and recognizes the board, thereby recognizing each command picture on it, and automatically divides the pictures into five behavior units.
Step (5): execute the behavior units according to the behavior logic set in step (3), with the action units corresponding to the command pictures in each behavior unit executed simultaneously.
In this embodiment, one behavior unit corresponds to one group of control command images described above; one action unit corresponds to one control command image, or to the action corresponding to one control command image.
Fig. 9 is a flowchart of the steps actually executed by the robot for each behavior unit in Fig. 8. In the above steps, the meaning of each command picture can be preset, and the speech it plays is also pre-generated or pre-recorded. In Fig. 9, behavior units 1 to 3 are executed sequentially, while behavior unit 3 is related to behavior units 4 and 5 by conditional execution: if the user gives voice feedback within 5 seconds after the behavior unit finishes, behavior unit 4 is executed; if the user gives no voice feedback within 5 seconds, behavior unit 5 is executed.
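The 5-second feedback branch of Fig. 9 can be sketched as follows, with the voice-feedback source abstracted as a callable for illustration:

```python
def choose_after_unit_3(wait_for_feedback, timeout_s=5.0):
    """wait_for_feedback(timeout_s) returns True if the user spoke in time."""
    if wait_for_feedback(timeout_s):
        return "behavior_unit_4"
    return "behavior_unit_5"

# Simulated feedback sources, for illustration only:
print(choose_after_unit_3(lambda t: True))   # behavior_unit_4
print(choose_after_unit_3(lambda t: False))  # behavior_unit_5
```

A real implementation would pass a function that blocks on the microphone or speech recognizer for up to `timeout_s` seconds.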
By grouping control command images, the above method can divide and describe the robot's complex action behaviors so that actions in the same group are executed simultaneously, which simplifies the way the robot is controlled and makes it more convenient for users to operate the robot.
Corresponding to the above method embodiments, Fig. 10 is a schematic structural diagram of a robot control device. The device is disposed in a robot and includes:
a capture module 100, configured to capture multiple control command images;
a grouping module 101, configured to group the multiple control command images according to preset group identifiers;
an execution module 102, configured to execute the control commands corresponding to each group of control command images according to preset execution logic; when executing the control commands corresponding to a group of control command images, if the group contains multiple control command images, the control commands corresponding to those images are executed simultaneously.
The execution module is further configured to: identify the control command images in a group to obtain the corresponding control commands; and drive the corresponding execution mechanisms to execute the control commands according to a preset correspondence between control commands and execution mechanisms.
The device further includes: a judging module configured to determine, if a group corresponds to multiple control commands, whether at least two of those control commands contradict each other; and a processing module configured to, if at least two control commands contradict each other, handle the contradiction in one of the following ways: stopping execution of the control commands corresponding to the current group and generating an error message; or arranging the contradictory control commands in random order and executing them in that order; or randomly selecting one of the contradictory control commands to execute.
The judging module is further configured to: determine whether at least two of the multiple control commands correspond to the same execution mechanism, and if so, determine that the control commands corresponding to the same execution mechanism are contradictory.
An embodiment of the present application further provides a robot, including a processor and execution mechanisms; the robot control device described above is disposed in the processor.
The robot provided by this embodiment of the present application has the same technical features as the robot control method and device provided by the above embodiments, so it can solve the same technical problems and achieve the same technical effects.
In the robot control method, device, and robot provided by the embodiments of the present application, one command picture corresponds to the command of one action unit, one or more action units form a behavior unit, and the action units within each behavior unit are executed simultaneously. Through image recognition of the display features of the multiple command pictures, the command pictures are grouped automatically, and the action units corresponding to each group's command pictures are executed simultaneously as one behavior unit; the behavior units are executed according to preset logic; when the action units within a behavior unit conflict, a preset handling mechanism can be used so that the actions can still be executed smoothly.
The robot control method, device, and robot provided by the embodiments of the present application parse the robot's behavior into combinations of behavior units, so that the spatial-interaction approach can control the robot to perform complex behaviors without requiring the user to have professional training. In this approach, behavior units composed of action units are first defined, simplifying the robot's complex behavior; the command-picture interaction method is then adopted, which is simple and easy to understand and requires neither professional training nor deep familiarity with the robot. This approach can quickly input complex commands into the robot, realizing a robot control process with simple command input and complex behavior execution.
The computer program product of the robot control method, device, and robot provided by the embodiments of the present application includes a computer-readable storage medium storing program code; the commands included in the program code can be configured to perform the methods described in the foregoing method embodiments. For the specific implementation, refer to the method embodiments, which are not repeated here.
If the functions are implemented in the form of software functional units and sold or used as standalone products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of commands used to cause a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the various embodiments of the present application. The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the embodiments described above are only specific implementations of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited to them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that anyone familiar with the technical field can still, within the technical scope disclosed by the present application, modify the technical solutions recorded in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

  1. A robot control method, characterized in that the method is applied to a robot and comprises:
    capturing multiple control command images;
    grouping the multiple control command images according to preset group identifiers;
    executing control commands corresponding to each group of control command images according to preset execution logic; and
    when executing the control commands corresponding to each group of control command images, if the group contains multiple control command images, executing the control commands corresponding to the multiple control command images simultaneously.
  2. The method according to claim 1, characterized in that the preset group identifier comprises at least one of the position of the control command images and the shape, color, or brightness of the control command images;
    the preset execution logic comprises at least one of sequential execution, judgment-based execution, loop execution, delayed execution, and conditional execution.
  3. The method according to claim 1, characterized in that the step of executing the control commands corresponding to each group of control command images comprises:
    identifying the control command images in the group to obtain the corresponding control commands; and
    driving the corresponding execution mechanisms to execute the control commands according to a preset correspondence between the control commands and the execution mechanisms.
  4. The method according to claim 3, characterized in that the method further comprises:
    if the group corresponds to multiple control commands, determining whether at least two of the multiple control commands contradict each other;
    if so, handling the contradiction in one of the following ways:
    stopping execution of the control commands corresponding to the current group and generating an error message;
    or, arranging the contradictory control commands in random order and executing the control commands in that order;
    or, randomly selecting one of the contradictory control commands to execute.
  5. The method according to claim 4, characterized in that the step of determining whether at least two of the multiple control commands contradict each other comprises:
    determining whether at least two of the multiple control commands correspond to the same execution mechanism, and if so, determining that the control commands corresponding to the same execution mechanism are contradictory control commands.
  6. A robot control device, characterized in that the device is disposed in a robot and comprises:
    a capture module, configured to capture multiple control command images;
    a grouping module, configured to group the multiple control command images according to preset group identifiers; and
    an execution module, configured to execute control commands corresponding to each group of control command images according to preset execution logic, wherein, when executing the control commands corresponding to each group of control command images, if the group contains multiple control command images, the control commands corresponding to the multiple control command images are executed simultaneously.
  7. The device according to claim 6, characterized in that the execution module is further configured to:
    identify the control command images in the group to obtain the corresponding control commands; and
    drive the corresponding execution mechanisms to execute the control commands according to a preset correspondence between the control commands and the execution mechanisms.
  8. The device according to claim 7, characterized in that the device further comprises:
    a judging module, configured to determine, if the group corresponds to multiple control commands, whether at least two of the multiple control commands contradict each other; and
    a processing module, configured to, if at least two control commands contradict each other, handle the contradiction in one of the following ways:
    stopping execution of the control commands corresponding to the current group and generating an error message;
    or, arranging the contradictory control commands in random order and executing the control commands in that order;
    or, randomly selecting one of the contradictory control commands to execute.
  9. The device according to claim 8, characterized in that the judging module is further configured to:
    determine whether at least two of the multiple control commands correspond to the same execution mechanism, and if so, determine that the control commands corresponding to the same execution mechanism are contradictory control commands.
  10. A robot, characterized by comprising a processor and an execution mechanism, wherein the device according to any one of claims 6 to 9 is disposed in the processor.
PCT/CN2018/124313 2018-03-30 2018-12-27 Robot control method and device, and robot WO2019184487A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810296937.6A CN108415285B (zh) 2018-03-30 2018-03-30 Robot control method and device, and robot
CN201810296937.6 2018-03-30

Publications (1)

Publication Number Publication Date
WO2019184487A1 true WO2019184487A1 (zh) 2019-10-03

Family

ID=63134501

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124313 WO2019184487A1 (zh) 2018-03-30 2018-12-27 Robot control method and device, and robot

Country Status (2)

Country Link
CN (1) CN108415285B (zh)
WO (1) WO2019184487A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023116583A1 (zh) * 2021-12-23 2023-06-29 华为技术有限公司 一种控制方法、读取参数的方法、相关设备及工业网络

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108415285B (zh) 2018-03-30 2020-05-19 北京进化者机器人科技有限公司 Robot control method and device, and robot
CN110253568B (zh) * 2019-05-22 2021-07-20 深圳镁伽科技有限公司 机器人控制方法、系统、设备及存储介质
CN112506349A (zh) * 2020-12-17 2021-03-16 杭州易现先进科技有限公司 一种基于投影的交互方法、装置和一种投影仪

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090032777A (ko) * 2007-09-28 2009-04-01 방현 알에프아이디, 알에프, 아이알디에이통신을 이용한 다국어및 물체인식 교육용 로봇
CN105773633A (zh) * 2016-04-14 2016-07-20 中南大学 基于人脸位置和灵敏度参数的移动机器人人机控制系统
KR101741997B1 (ko) * 2016-10-25 2017-05-31 다래비젼주식회사 로봇 암 정렬 장치 및 이를 이용한 정렬 방법
CN206883648U (zh) * 2017-03-28 2018-01-16 深圳光启合众科技有限公司 机器人
CN107598920A (zh) * 2017-08-23 2018-01-19 深圳果力智能科技有限公司 一种基于视觉控制的机械手
CN207139808U (zh) * 2017-05-27 2018-03-27 深圳光启合众科技有限公司 机器人和机器人控制系统
CN108415285A (zh) 2018-03-30 2018-08-17 北京进化者机器人科技有限公司 Robot control method and device, and robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004025462A2 (en) * 2002-09-12 2004-03-25 Nline Corporation Flow control method for maximizing resource utilization of a remote system
CN104063291A (zh) * 2014-07-14 2014-09-24 河北水木寰虹机器人科技有限公司 编程机器人的控制装置
CN106095096A (zh) * 2016-06-12 2016-11-09 苏州乐派特机器人有限公司 利用实物块编程的方法及其在机器人领域的应用
CN106095438A (zh) * 2016-06-12 2016-11-09 苏州乐派特机器人有限公司 利用拼图及图像摄取分析进行实物化编程的方法及其应用



Also Published As

Publication number Publication date
CN108415285A (zh) 2018-08-17
CN108415285B (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2019184487A1 (zh) Robot control method and device, and robot
CN104217066B (zh) 利用2d视图设计3d建模对象
US9639330B2 (en) Programming interface
CN108279878B (zh) 一种基于增强现实的实物编程方法及系统
JP5087532B2 (ja) 端末装置、表示制御方法および表示制御プログラム
CN103197929A (zh) 一种面向儿童的图形化编程系统和方法
WO2023024442A1 (zh) 检测方法、训练方法、装置、设备、存储介质和程序产品
CN103440033B (zh) 一种基于徒手和单目摄像头实现人机交互的方法和装置
CN105047042A (zh) 一种面向儿童实物编程方法及系统
US20160267700A1 (en) Generating Motion Data Stories
CN108876934A (zh) 关键点标注方法、装置和系统及存储介质
KR20200138074A (ko) 머신-러닝과 크라우드-소싱된 데이터 주석을 통합하기 위한 시스템 및 방법
WO2017116883A1 (en) Gestures visual builder tool
TW201610916A (zh) 根據依時一致性超像素為圖框序列產生分段遮罩之方法和裝置,以及電腦可讀式儲存媒體
CN108234821A (zh) 检测视频中的动作的方法、装置和系统
CN114327064A (zh) 一种基于手势控制的标绘方法、系统、设备及存储介质
CN108582084B (zh) 机器人的控制方法、装置和机器人
CN108582085B (zh) 控制命令的确定方法、装置和机器人
CN108582086B (zh) 机器人的控制方法、装置和机器人
US11823587B2 (en) Virtual reality system with inspecting function of assembling and disassembling and inspection method thereof
CN108509071B (zh) 屏幕上坐标防抖的方法、装置、设备及计算机可读存储介质
JPH11296571A (ja) 干渉チェック装置およびそのプログラム記録媒体
US9898256B2 (en) Translation of gesture to gesture code description using depth camera
CN114327063A (zh) 目标虚拟对象的交互方法、装置、电子设备及存储介质
RU2750278C2 (ru) Способ и аппарат для модификации контура, содержащего последовательность точек, размещенных на изображении

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18912534

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18912534

Country of ref document: EP

Kind code of ref document: A1