CN112917470A - Teaching method, device and system of manipulator, storage medium and equipment - Google Patents

Teaching method, device and system of manipulator, storage medium and equipment

Info

Publication number
CN112917470A
CN112917470A (application CN201911244393.XA)
Authority
CN
China
Prior art keywords
gesture information, teaching, current, fingers, double
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911244393.XA
Other languages
Chinese (zh)
Inventor
何国斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robotics Robotics Shenzhen Ltd
Original Assignee
Robotics Robotics Shenzhen Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robotics Robotics Shenzhen Ltd filed Critical Robotics Robotics Shenzhen Ltd
Priority to CN201911244393.XA
Publication of CN112917470A
Legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1602 - Programme controls characterised by the control system, structure, architecture
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 - Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a robot teaching method, device, system, storage medium, and apparatus. The teaching method of the manipulator comprises the following steps: acquiring a sequence of images of the two-finger teaching activity; recognizing gesture information of the two fingers based on the sequence of images; and generating a control instruction based on a mapping rule in combination with the gesture information. With this technical scheme, teaching of a manipulator that includes an end effector can be completed through simple two-finger teaching behaviors of a teaching object, so that teaching of the manipulator is simple, rapid, and accurate.

Description

Teaching method, device and system of manipulator, storage medium and equipment
Technical Field
The present application relates to the field of manipulator teaching technologies, and in particular, to a method, an apparatus, a system, a storage medium, and a device for teaching a manipulator.
Background
With the advance of technology, society as a whole is developing toward intelligence and automation.
Gesture-based teaching is widely used in many fields, but so far no gesture teaching scheme has been designed specifically for manipulators, so gesture-based teaching of a manipulator cannot yet achieve a good teaching effect.
Disclosure of Invention
In view of the above, the present invention provides a teaching method, apparatus, system, storage medium and device for a manipulator.
The invention provides a manipulator teaching method, which comprises the following steps:
acquiring sequence images of the teaching activities based on the double fingers;
recognizing gesture information of the double fingers based on the sequence images;
and generating a control instruction based on a mapping rule by combining the gesture information.
Further, the gesture information includes: "double or single finger", "0 finger", "double finger merge", "double finger separate", "double finger or single finger rotation" and/or "motion trajectory of double or single finger".
Further, the mapping rule includes: the gesture information corresponds to a control instruction for planning a motion track of the tail end of the manipulator; and/or
The gesture information corresponds to a control instruction for opening and/or closing an end effector of the manipulator; and/or
The gesture information corresponds to a control instruction of rotation of an end effector of the manipulator; and/or
The gesture information corresponds to a control instruction for the start and/or end of the teaching.
Further, the mapping rule further comprises a rule of a scale factor.
The invention provides a manipulator teaching method, which comprises the following steps:
acquiring a current image of the teaching activities based on the double fingers;
identifying current gesture information of the two fingers based on the current image;
judging whether the teaching is started or not based on the current gesture information and a mapping rule;
if so, generating a current control instruction based on the mapping rule by combining the current gesture information and the gesture information at the previous moment; and if not, executing the step of obtaining the current image.
The present invention provides a robot teaching device, including:
the image acquisition module is used for acquiring sequence images of the teaching activities based on the double fingers;
the gesture recognition module is used for recognizing gesture information of the double fingers based on the sequence images;
and the gesture generation module is used for combining the gesture information and generating a control instruction based on the mapping rule.
The present invention provides a robot teaching device, including:
the image acquisition module is used for acquiring a current image of the teaching activity based on the double fingers;
the gesture recognition module is used for recognizing current gesture information of the two fingers based on the current image;
the teaching judgment module is used for judging whether teaching is started or not based on the current gesture information;
the gesture generation module is used for, if the teaching is started, generating a current control instruction based on a mapping rule in combination with the current gesture information and the gesture information at the previous moment; and if not, executing the step of obtaining the current image.
The present invention provides a system, comprising: a manipulator, an image sensor, and a control unit;
the image sensor and the manipulator are respectively in communication connection with the control unit;
the image sensor is used for acquiring the sequence images;
the control unit is used for acquiring sequence images of the teaching activities based on the double fingers; recognizing gesture information of the double fingers based on the sequence images; generating a control instruction of the manipulator based on a mapping rule by combining the gesture information; or
The image sensor is used for acquiring the current image;
the system comprises a display unit, a display unit and a display unit, wherein the display unit is used for acquiring a current image of the double-finger-based teaching activity; identifying current gesture information of the two fingers based on the current image; judging whether the teaching is started or not based on the current gesture information and a mapping rule; if so, generating a current control instruction based on the mapping rule by combining the current gesture information and the gesture information at the previous moment; if not, executing the step of obtaining the current image;
and the manipulator is used for executing corresponding actions according to the control command or the current control command.
The invention provides a computer device comprising a memory storing a computer program and a processor implementing the manipulator teaching method of any of the above when the processor executes the computer program.
The present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the robot teaching method of any of the above.
Teaching activities for a manipulator including an end effector can be completed through simple teaching behaviors based on double fingers of a teaching object, and simple, rapid and accurate teaching for the manipulator is achieved.
Drawings
FIG. 1 is a first block diagram of a system in one embodiment;
FIG. 2 is a second block diagram of the system in one embodiment;
FIG. 3 is a block diagram of a third configuration of a system in one embodiment;
FIG. 4 is a fourth block diagram of a system in one embodiment;
FIG. 5 is a block diagram of a fifth configuration of the system in one embodiment;
FIG. 6 is a block diagram of a sixth configuration of the system in one embodiment;
FIG. 7 is a first diagram of a two-finger teaching in one embodiment;
FIG. 8 is a second diagram of the two-finger teaching in one embodiment;
FIG. 9 is a first flowchart of a robot teaching method in accordance with one embodiment;
FIG. 10 is a second flowchart of a robot teaching method according to one embodiment;
FIG. 11 is a first block diagram of a robot teaching device according to an embodiment;
FIG. 12 is a second block diagram of the robot teaching device according to the embodiment;
FIG. 13 is a block diagram of a first configuration of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, the robot teaching method provided by the present application may be applied to a system (teaching system) shown in fig. 1 to 5, where the system may include a robot 11, an image sensor 13 and a control unit 12, and the robot 11 and the image sensor 13 are in communication connection with the control unit 12 through a wired or wireless manner.
In one embodiment, the robot 11 may include a serial robot or a parallel robot. A serial robot is formed by connecting a plurality of driving units and connecting members in series, such as a four-axis or six-axis manipulator; a parallel robot is formed by connecting a plurality of driving units and connecting members in parallel, such as a Delta robot. The robot 11 further includes an end effector 111 provided at the end of the robot, for example fixed to the output end of the end driving unit through a flange. Specifically, the end effector 111 may be, but is not limited to, a clamping jaw or a suction cup; this embodiment is described in detail taking a clamping jaw as an example. The manipulator executes corresponding actions based on the control commands generated by the control unit.
Specifically, the control unit 12 may be a Programmable Logic Controller (PLC), a Field Programmable Gate Array (FPGA), a Computer (PC), an Industrial Personal Computer (IPC), a server, or the like. The control unit generates program instructions according to a pre-fixed program by combining manually input information, parameters or data collected by an external image sensor. Specific limitations regarding the control unit can be found in the following embodiments regarding the robot teaching method.
Specifically, the image sensor 13 may include, but is not limited to: cameras, video cameras, scanners or other devices with associated functions (e.g., motion sensing devices, mobile phones, computers), and the like. The image data acquired by the image sensor may be, but is not limited to: 2D image data (such as RGB images, black and white images, or grayscale images), 3D image data (such as depth images or point cloud images), or infrared image data. The image sensor 13 sends the acquired image of the two-finger-based teaching activity to the control unit 12.
The image sensor can be arranged at any position as required, for example on the manipulator or outside the manipulator. The present embodiment is described in further detail taking the case where the image sensor 13 is arranged outside the robot 11 as an example.
In one embodiment, the system may further include a display unit (not shown) for displaying gesture information and/or a motion trajectory of the real or virtual manipulator, and the like.
It should be noted that the above-mentioned double fingers, control unit, manipulator and/or image sensor may be real ones in a real environment, or virtual ones in a simulation platform, in which case the simulation environment achieves the same effect as connecting a real manipulator. A control unit that completes behavior training in the virtual environment can then be transplanted to the real environment to control, or be retrained with, the real double fingers, control unit, manipulator and/or image sensor, which saves resources and time during training.
In one embodiment, as shown in fig. 9, a robot teaching method is provided; taking its application to the system in fig. 1-5 as an example, the method includes the following steps:
step S110, acquiring a sequence image of the teaching activities based on the double fingers;
step S120, recognizing gesture information of the two fingers based on the sequence image;
step S130 generates a control command based on the mapping rule in combination with the gesture information.
Teaching activities for a manipulator including an end effector can be completed through simple teaching behaviors based on the two fingers of the teaching object (namely teaching for trajectory planning of the manipulator and grasping, releasing, rotating, etc. of the actuator can be realized through simple teaching activities based on the two fingers), thereby realizing simple, rapid and accurate teaching for the manipulator.
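As a rough, non-authoritative sketch of how steps S110-S130 could be chained together, the following Python fragment processes a recorded image sequence; the helpers recognize_two_finger_gesture and gesture_to_command are hypothetical placeholders for the recognition and mapping steps described below, not functions defined by this disclosure.

```python
# Minimal sketch of steps S110-S130; the two callbacks are hypothetical placeholders.
import cv2


def teach_from_image_sequence(image_paths, recognize_two_finger_gesture, gesture_to_command):
    """Turn a recorded sequence of teaching images into manipulator control commands."""
    commands = []
    prev_gesture = None
    for path in image_paths:                                   # step S110: sequence images
        frame = cv2.imread(path)
        gesture = recognize_two_finger_gesture(frame)          # step S120: gesture information
        command = gesture_to_command(gesture, prev_gesture)    # step S130: mapping rule
        if command is not None:
            commands.append(command)
        prev_gesture = gesture
    return commands
```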
For ease of understanding, the above-described method steps are described in further detail below.
Step S110, acquiring a sequence image of the teaching activities based on the double fingers;
The sequence of images may be obtained directly from the image sensor, or the image sensor may transmit the acquired images to a memory or a server, from which the sequence of images is then obtained.
The sequence of images refers to a set of images acquired by the image sensor in time order at a preset time interval during the two-finger teaching activity.
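A minimal sketch of such acquisition, assuming an OpenCV-accessible camera and a fixed sampling interval (both assumptions for illustration only), might look like this:

```python
import time
import cv2


def capture_sequence(camera_index=0, interval_s=0.1, num_frames=100):
    """Collect frames in time order at a preset interval, forming the sequence of images."""
    cap = cv2.VideoCapture(camera_index)
    frames = []
    try:
        while len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
            time.sleep(interval_s)   # preset time interval between successive samples
    finally:
        cap.release()
    return frames
```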
Note that the "double finger" may be a "double finger" of any teaching target. The teaching object may be a human being or any body having two fingers and capable of teaching activities. This embodiment is described in detail with reference to a human being as a teaching target.
Step S120, recognizing gesture information of the two fingers based on the sequence image;
it should be noted that in some embodiments, gesture information may be directly recognized based on the sequence images; or in other embodiments, the pose information of the double fingers or the single finger is extracted based on the sequence image, and then the gesture information is recognized based on the pose information.
The pose information may be described by the coordinates of the target object in a preset coordinate system: the motion of a rigid body in 3-dimensional space has 6 degrees of freedom in total and can be divided into rotation and translation, each with 3 degrees of freedom. The translation of a rigid body in 3-dimensional space is an ordinary linear transformation, and a 3x1 vector can be used to describe the translation position; the rotational attitude is commonly described in ways including, but not limited to: a rotation matrix, a rotation vector, a quaternion, Euler angles, or the Lie algebra.
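For illustration, the sketch below expresses one 6-degree-of-freedom pose in several of these representations using SciPy's Rotation class; the concrete numbers are arbitrary example values, not values from this disclosure.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# 3 translational degrees of freedom: a 3x1 position vector (here in metres).
translation = np.array([0.10, -0.05, 0.30])

# 3 rotational degrees of freedom, built here from Euler angles.
rotation = Rotation.from_euler("xyz", [0.0, 0.0, 45.0], degrees=True)
R_matrix = rotation.as_matrix()    # rotation matrix (3x3)
quat = rotation.as_quat()          # quaternion (x, y, z, w)
rotvec = rotation.as_rotvec()      # rotation vector (axis * angle), the Lie-algebra form

# Full 6-DOF pose as a 4x4 homogeneous transform.
T = np.eye(4)
T[:3, :3] = R_matrix
T[:3, 3] = translation
```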
Specifically, the gesture information or the pose information of the two fingers/the single finger in each image in the sequence of images can be recognized according to a traditional image-based target recognition method (such as an edge extraction method, a key point/key line extraction method and/or a template matching method and the like), an artificial intelligence method or a gesture analysis method carried by some body sensing devices.
In one embodiment, extracting key points/key lines may mean extracting the 2d coordinates of key points and/or key lines in the image. The key points may be key points on the target object itself, or the vertices of the minimum-volume bounding box that encloses the object; the two cases are described in detail below:
further, in one embodiment, the key points may be 8 vertices of a bounding box enclosing the minimum volume of the target object, i.e. 2d coordinates identifying projection points of the respective key points on the 2d image, or the model may directly output image data after labeling the projection points; in one embodiment, a central point (i.e. a total of 9 key points) of the bounding box may be further calculated in addition to the vertices, and the pose information of the double finger or the single finger is obtained based on some algorithm (e.g. PNP) in combination with the 2d information of the key points.
Further, in one embodiment, key points on the target object itself may be identified based on a probability prediction map: a prediction is made for each pixel of the image, and the color of the probability prediction map carries two meanings. Each pixel predicts the direction of a key point relative to that pixel, with the color of the map representing this direction, and each pixel also predicts the likelihood that it is itself a key point, a higher likelihood giving a higher prediction value; the prediction values of pixels near a key point are usually high. The pose information of the double fingers or the single finger is then obtained from the identified key points or key lines by suitable algorithms.
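A hedged sketch of the bounding-box variant, assuming the 2d projections of the box corners have already been detected in the image and that OpenCV's solvePnP is an acceptable stand-in for "some algorithm (e.g. PnP)":

```python
import numpy as np
import cv2


def pose_from_bounding_box(box_points_3d, box_points_2d, camera_matrix, dist_coeffs=None):
    """Recover a pose from bounding-box keypoints via PnP.

    box_points_3d: Nx3 corner coordinates in the object (finger) frame, from the known box size.
    box_points_2d: Nx2 projections of those corners detected in the image.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(box_points_3d, dtype=np.float64),
        np.asarray(box_points_2d, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec               # pose of the box (i.e. the finger) in the camera frame
```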
In one embodiment, the template matching method may regard the double fingers as a whole: an image library is pre-established containing template images of the double fingers at a plurality of discrete angles, together with the pose information of the double fingers corresponding to each template image.
It should be noted that the image library may be established with the "double fingers" in any state, such as an open-finger or merged-finger state.
Specifically, the purpose of pose recognition is to obtain the pose of the object relative to the camera. Imagine a sphere of arbitrary radius centered on the object; the camera is moved over this sphere and photographs the object, so the pose of the object is determined by the position of the image sensor on the sphere. The sphere is then discretized, each point corresponding to a viewing angle and each viewing angle corresponding to pose information. Through such discretization, the originally continuous pose estimation problem is converted into a classification problem: it is only necessary to estimate to which viewing angle the pose of the object belongs.
Specifically, the template images may be real photographs or images obtained from a 3D model (e.g., a CAD model). The acquired images are matched against the template images to find the template that matches, and the pose information of the double fingers corresponding to that template is obtained; the associated angle is the teaching gesture information of the rotation angle of the double fingers. The accuracy of pose recognition with this method depends on the degree of discretization: the finer the discretization, the higher the recognition accuracy.
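A minimal sketch of this matching step, assuming grayscale template images no larger than the input image and using OpenCV's normalized cross-correlation as the matching score (an assumption, since the disclosure does not fix the matching metric):

```python
import cv2


def match_pose_from_templates(image_gray, template_library):
    """Return the pose associated with the best-matching template.

    template_library: list of (template_gray, pose) pairs, one per discretized viewing angle;
    each template must be no larger than the input image.
    """
    best_score, best_pose = -1.0, None
    for template, pose in template_library:
        result = cv2.matchTemplate(image_gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_score, best_pose = score, pose
    return best_pose, best_score
```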
In one embodiment, the artificial intelligence method means that gesture information, pose information, or the above key points/key lines, etc. are output directly from the input image.
Specifically, the network model may include, but is not limited to, a Convolutional Neural Network (CNN), and common CNN models may include, but are not limited to: LeNet, AlexNet, ZFNET, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, YOLO, SSD, BB8, YOLO-6D, Deep-6dPose, PoseCNN, Hourglass and/or CPN, and other now known or later developed network model structures.
Specifically, the training method may be supervised learning, semi-supervised learning, or unsupervised learning, or a training method developed now or in the future.
It should be noted that the gesture information may include, but is not limited to, "double or single finger", "0 finger", "double finger merge", "double finger separate", "double or single finger rotate", and/or "motion track of double or single finger". Each of which is described in further detail below.
In one embodiment, when a double finger M1 (shown in fig. 1 to 4) or a single finger M2 (shown in fig. 5) extended by the teaching object appears in the image capture field of view of the image sensor 13, the control unit 12 may extract the contour, key points/key lines, etc. of the double finger or single finger in the image using the above-mentioned conventional image processing methods or artificial intelligence methods, and then determine from the extracted information whether the image contains the gesture information "double finger" or "single finger".
As shown in fig. 6, in one embodiment, the two fingers or the single finger of the teaching object are retracted so that a fist M3 appears in the image capture field of view of the image sensor; for example, when no double-finger or single-finger key points/key lines, contours, or the like can be extracted from the image, the gesture information may be determined to be "0 finger".
As shown in fig. 1 or fig. 2, in one embodiment, the gesture information "double finger merge" and "double finger separate" can be recognized, for example, by identifying the key points/key lines of the two fingers in the image respectively. Taking the bounding box as an example, the 2d coordinates of the vertices of two minimum-volume bounding boxes respectively enclosing the two fingers can be identified from the image, the pose information of each finger can be obtained from the 2d coordinates of its bounding-box vertices (for example, as shown in fig. 7 or fig. 8, the center points F1 and F2 of the fingers are the origins of the finger coordinate systems), and the distance between the two fingers can then be calculated to generate the corresponding gesture information: if the distance between the two fingers is greater than a preset value, the gesture information is regarded as "double finger separate"; if the distance is smaller than the preset value, the gesture information is regarded as "double finger merge".
As another example, the two contours of the two fingers can be identified from the image, the pose information of a preset point (such as a fingertip, finger root, or mid-finger point) or line on each contour is taken as the reference, and it is then judged whether the distance between the two fingers is greater than a certain threshold.
Further, in one embodiment, the gesture information "double finger separate" may also include the separation distance of the two fingers (e.g., 3 cm apart), obtained from the pose information of the two fingers.
The recognition can also be realized with an artificial intelligence method: after the image is input into a gesture recognition model, the pose information of the two fingers is output and the "double finger merge" or "double finger separate" gesture information is then judged from this pose information; alternatively, the model directly outputs the "double finger merge" or "double finger separate" gesture information.
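As one possible concrete form of the distance-threshold judgment above (the 2 cm threshold and the use of the finger center points F1/F2 are illustrative assumptions, not values from this disclosure):

```python
import numpy as np


def classify_pinch(center_f1, center_f2, merge_threshold_m=0.02):
    """Classify "double finger merge" vs "double finger separate" from the finger origins F1, F2."""
    distance = float(np.linalg.norm(np.asarray(center_f1) - np.asarray(center_f2)))
    if distance < merge_threshold_m:
        return "double finger merge", distance
    return "double finger separate", distance   # the distance itself can drive the jaw opening
```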
As shown in FIG. 3, in one embodiment, the "double fingers" may be regarded as a whole, and the gesture information "double finger rotation" can be recognized with the template matching method described above. For example: a template library of double-finger template images at a plurality of discrete angles and the pose information corresponding to each image is established in advance; the acquired images are matched against the template images to find the matching template and its corresponding pose information; the rotation angle of the double fingers at the current moment relative to the previous moment is calculated from the poses at the two moments; and the gesture information of the taught rotation trajectory of the double fingers is thus recognized from the sequence of images. Similarly, besides the "double finger rotation" gesture information, gesture information expressed in the same way may be realized by "single finger rotation".
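A small sketch of the relative-rotation computation, assuming the double-finger pose at each moment is available as a 3x3 rotation matrix (for example from the template matching above):

```python
import math
from scipy.spatial.transform import Rotation


def rotation_between_frames(R_prev, R_curr):
    """Angle (degrees) by which the two-finger pose rotated from the previous frame to the current one."""
    delta = Rotation.from_matrix(R_curr) * Rotation.from_matrix(R_prev).inv()
    return math.degrees(delta.magnitude())
```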
As shown in fig. 4 or 5, in one embodiment, the "double fingers" are regarded as a whole, or only a "single finger" appears in the image. To recognize the motion trajectory of the "double fingers" or "single finger", the pose and other motion information of the "double fingers" or "single finger" at each moment is generated by analyzing the image, and the gesture information of the taught motion trajectory of the double fingers or single finger is obtained from the sequence of images. Specifically, the pose information of the "double fingers" or "single finger" can be identified based on the template matching method, or the key points/key lines associated with the double fingers or single finger are identified and the pose information is generated from them.
It should be noted that the recognition method for each kind of gesture information is not limited to the image-based target recognition methods, artificial intelligence methods, or the gesture analysis functions of motion sensing devices listed in the above embodiments; any method, existing or developed in the future, that recognizes gesture information from images, or that recognizes the pose information of the double fingers/single finger from images and generates gesture information from that pose information, falls within the scope of the present invention.
Step S130 generates a control command based on the mapping rule in combination with the gesture information.
Based on the preset mapping rule, the gesture information recognized according to the above embodiment may generate various control commands, which is further described in detail below.
As shown in fig. 1-5, in one embodiment, the gesture information "double finger" or "single finger" may correspond to a mapping rule for the teaching start control command; that is, when it is judged that a "double finger" or "single finger" appears in the image, teaching is considered to have started.
Continuing with FIG. 6, in one embodiment, the gesture information "0 finger" may correspond to a mapping rule of the teaching end instruction. That is, when it is judged that "0 finger" appears in the image, the teaching is terminated.
Continuing with fig. 1 and 2, in one embodiment, the "double finger merge" gesture of the double fingers M1 corresponds to a mapping rule for the control command that closes the clamping jaws 111; the "double finger separate" gesture of the double fingers M1 corresponds to a mapping rule for the control command that opens the clamping jaws 111 by a preset distance; and in one embodiment the gesture information carrying the separation distance of the two fingers corresponds to a mapping rule for the control command that opens the clamping jaws by the corresponding distance.
As shown in fig. 3, in one embodiment, the "double finger rotation" gesture information of the double fingers M1 represents a control command for rotation of the clamping jaw 111 as a whole. Teaching gesture information of the double-finger rotation trajectory can therefore be generated from the sequence of images, and the mapping rule further yields the corresponding control command for jaw rotation (including the rotation angle). In one embodiment, the clamping jaw is flange-mounted at the output of the driving unit at the end of the manipulator and can be driven in rotation by that driving unit. Similarly, the mapping rule may also be applied to the "single finger rotation" gesture information described above.
As further shown in fig. 4 or 5, in one embodiment, the gesture information of the motion trajectory of the double fingers M1 or the single finger M2 represents the motion trajectory plan of the manipulator end 111 (e.g., with the center point of the end effector as the origin of the manipulator end coordinate system). Teaching gesture information of the double-finger or single-finger motion trajectory can therefore be generated from the sequence of images, and the corresponding control instruction for planning the motion trajectory of the manipulator end is further generated.
The control instruction for planning the motion trajectory of the manipulator end may be a control instruction planning the whole motion trajectory of the manipulator end from a starting point to an end point, or the displacement, velocity/angular velocity and/or acceleration/angular acceleration of each joint of the manipulator derived from that trajectory plan.
Further, in one embodiment, the mapping rule may also include a scale factor, for example 1:1 (e.g., the gesture information shows the two fingers rotating 50 degrees, which represents the manipulator jaw rotating 50 degrees) or 1:10 (e.g., the gesture information shows a movement of 5 cm, which represents an actual movement of the manipulator end of 50 cm).
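Putting the mapping rules and the scale factor together, a hypothetical mapping function might look as follows; the gesture dictionary keys (fingers, pinch, gap, rotation_deg, displacement) and the command format are illustrative assumptions, not definitions from this disclosure.

```python
def gesture_to_command(gesture, prev_gesture, scale=10.0):
    """Map recognized gesture information to a manipulator control instruction.

    `scale` is the scale factor of the mapping rule; e.g. 10.0 means a 5 cm finger
    motion commands a 50 cm motion of the manipulator end.
    """
    if gesture.get("fingers", 0) == 0:
        return {"type": "end_teaching"}                       # "0 finger" ends teaching
    if prev_gesture is None:
        return {"type": "start_teaching"}                     # first "double/single finger" frame
    if gesture.get("pinch") == "merge" and prev_gesture.get("pinch") == "separate":
        return {"type": "close_gripper"}                      # double finger merge
    if gesture.get("pinch") == "separate" and prev_gesture.get("pinch") == "merge":
        return {"type": "open_gripper", "gap": gesture.get("gap", 0.0) * scale}
    if "rotation_deg" in gesture:
        return {"type": "rotate_end_effector", "angle_deg": gesture["rotation_deg"]}
    if "displacement" in gesture:
        return {"type": "move_end", "delta": [d * scale for d in gesture["displacement"]]}
    return None
```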
As illustrated in fig. 10, in one embodiment, a method for teaching a robot may comprise the method steps of:
step S210, acquiring a current image of the teaching activity based on the double fingers;
step S220, identifying current gesture information of the two fingers based on the current image;
step S230, judging, based on the current gesture information and the mapping rule, whether the teaching activity has started;
if yes, step S240, generating a current control instruction based on the mapping rule in combination with the current gesture information; if not, returning to step S210.
Teaching activities for a manipulator including an end effector can be completed through simple teaching behaviors based on double fingers of a teaching object, and simple, rapid and accurate teaching for the manipulator is achieved.
For other relevant descriptions of the teaching method of the manipulator, reference is made to the above embodiments, and the description is not repeated here.
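A sketch of the per-frame loop of steps S210-S240, again with hypothetical recognize_gesture, gesture_to_command and send_command callbacks standing in for the recognition, mapping and manipulator interfaces assumed here for illustration:

```python
import cv2


def online_teaching_loop(camera_index, recognize_gesture, gesture_to_command, send_command):
    """Per-frame loop for steps S210-S240: wait for teaching to start, then stream commands."""
    cap = cv2.VideoCapture(camera_index)
    prev_gesture, teaching = None, False
    try:
        while True:
            ok, frame = cap.read()                        # S210: acquire the current image
            if not ok:
                break
            gesture = recognize_gesture(frame)            # S220: current gesture information
            if not teaching:
                # S230: teaching starts when a double or single finger appears
                teaching = gesture is not None and gesture.get("fingers", 0) > 0
            else:
                if gesture is None or gesture.get("fingers", 0) == 0:
                    break                                 # "0 finger" ends the teaching
                command = gesture_to_command(gesture, prev_gesture)   # S240: mapping rule
                if command is not None:
                    send_command(command)
            prev_gesture = gesture
    finally:
        cap.release()
```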
It should be understood that, although the steps in the flowcharts of fig. 9 or 10 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps of fig. 9 or 10 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 11, there is provided a robot teaching device including:
an image acquisition module 110, configured to acquire a sequence of images of a teaching activity based on two fingers;
a gesture recognition module 120, configured to recognize gesture information of two fingers based on the sequence image;
and the gesture generation module 130 is used for generating a control instruction of the manipulator based on the mapping rule by combining the gesture information.
In one embodiment, as shown in fig. 12, there is provided a robot teaching device including:
an image acquisition module 210, configured to acquire a current image of the two-finger teaching activity;
a gesture recognition module 220, configured to recognize current gesture information of the two fingers based on the current image;
a teaching judgment module, configured to determine whether teaching has started based on the current gesture information;
and a gesture generation module 230, configured to, if teaching has started, generate a current control instruction of the manipulator based on the mapping rule in combination with the current gesture information and the gesture information at the previous moment, and otherwise to return to the step of acquiring the current image.
For specific limitations of the robot teaching device, reference may be made to the limitations of the robot teaching method described above, and further description thereof is omitted here. The respective modules in the robot teaching device described above may be entirely or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, as shown in fig. 13, there is provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the robot teaching method described above when the processor executes the computer program.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of the robot teaching method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The terms "first," "second," "third," "S110," "S120," "S130," and the like in the claims and in the description and in the drawings above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances or may occur concurrently in some cases so that the embodiments described herein may be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and any variations thereof, are intended to cover non-exclusive inclusions. For example: a process, method, system, article, or robot that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but includes other steps or modules not explicitly listed or inherent to such process, method, system, article, or system.
It should be noted that the embodiments described in the specification are preferred embodiments, and the structures and modules involved are not necessarily essential to the invention, as will be understood by those skilled in the art.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A robot teaching method, characterized by comprising:
acquiring sequence images of the teaching activities based on the double fingers;
recognizing gesture information of the double fingers based on the sequence images;
and generating a control instruction based on a mapping rule by combining the gesture information.
2. The robot teaching method according to claim 1, wherein the gesture information includes: "double or single finger", "0 finger", "double finger merge", "double finger separate", "double finger or single finger rotation" and/or "motion trajectory of double or single finger".
3. The robot teaching method according to claim 1 or 2, wherein the mapping rule includes: the gesture information corresponds to a control instruction for planning a motion track of the tail end of the manipulator; and/or
The gesture information corresponds to a control instruction for opening and/or closing an end effector of the manipulator; and/or
The gesture information corresponds to a control instruction of rotation of an end effector of the manipulator; and/or
The gesture information corresponds to a control instruction for the start and/or end of the teaching.
4. The robot teaching method according to claim 3, wherein the mapping rule further includes a rule of a scale factor.
5. A robot teaching method, characterized by comprising:
acquiring a current image of the teaching activities based on the double fingers;
identifying current gesture information of the two fingers based on the current image;
judging whether the teaching is started or not based on the current gesture information and a mapping rule;
if so, generating a current control instruction based on the mapping rule by combining the current gesture information and the gesture information at the previous moment; and if not, executing the step of obtaining the current image.
6. A robot teaching device is characterized by comprising:
the image acquisition module is used for acquiring sequence images of the teaching activities based on the double fingers;
the gesture recognition module is used for recognizing gesture information of the double fingers based on the sequence images;
and the gesture generation module is used for combining the gesture information and generating a control instruction based on the mapping rule.
7. A robot teaching device is characterized by comprising:
the image acquisition module is used for acquiring a current image of the teaching activity based on the double fingers;
the gesture recognition module is used for recognizing current gesture information of the two fingers based on the current image;
the teaching judgment module is used for judging whether teaching is started or not based on the current gesture information;
the gesture generation module is used for generating a current control instruction based on a mapping rule by combining the current gesture information and the gesture information at the previous moment if the current control instruction is the current control instruction; and if not, executing the step of obtaining the current image.
8. A system, characterized in that the system comprises: a manipulator, an image sensor, and a control unit;
the image sensor and the manipulator are respectively in communication connection with the control unit;
the image sensor is used for acquiring the sequence images;
the control unit is used for acquiring sequence images of the teaching activities based on the double fingers; recognizing gesture information of the double fingers based on the sequence images; generating a control instruction of the manipulator based on a mapping rule by combining the gesture information; or
The image sensor is used for acquiring the current image;
the system comprises a display unit, a display unit and a display unit, wherein the display unit is used for acquiring a current image of the double-finger-based teaching activity; identifying current gesture information of the two fingers based on the current image; judging whether the teaching is started or not based on the current gesture information and a mapping rule; if so, generating a current control instruction based on the mapping rule by combining the current gesture information and the gesture information at the previous moment; if not, executing the step of obtaining the current image;
and the manipulator is used for executing corresponding actions according to the control command or the current control command.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the robot teaching method of any of claims 1-5 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the robot teaching method according to any one of claims 1 to 5.
CN201911244393.XA 2019-12-06 2019-12-06 Teaching method, device and system of manipulator, storage medium and equipment Pending CN112917470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911244393.XA CN112917470A (en) 2019-12-06 2019-12-06 Teaching method, device and system of manipulator, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911244393.XA CN112917470A (en) 2019-12-06 2019-12-06 Teaching method, device and system of manipulator, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN112917470A true CN112917470A (en) 2021-06-08

Family

ID=76161851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911244393.XA Pending CN112917470A (en) 2019-12-06 2019-12-06 Teaching method, device and system of manipulator, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN112917470A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011065035A1 (en) * 2009-11-24 2011-06-03 株式会社豊田自動織機 Method of creating teaching data for robot, and teaching system for robot
CN105722650A (en) * 2013-09-20 2016-06-29 电装波动株式会社 Robot maneuvering device, robot system, and robot maneuvering program
CN105328700A (en) * 2015-11-12 2016-02-17 东北大学 Data glove for teaching programming of robot dexterous hand
CN105955489A (en) * 2016-05-26 2016-09-21 苏州活力旺机器人科技有限公司 Robot gesture identification teaching apparatus and method
CN106896796A (en) * 2017-02-13 2017-06-27 上海交通大学 Industrial robot master-slave mode teaching programmed method based on data glove
CN107160364A (en) * 2017-06-07 2017-09-15 华南理工大学 A kind of industrial robot teaching system and method based on machine vision
CN109202886A (en) * 2017-06-30 2019-01-15 沈阳新松机器人自动化股份有限公司 Based on the gesture identification method and system under fixed background
CN107578023A (en) * 2017-09-13 2018-01-12 华中师范大学 Man-machine interaction gesture identification method, apparatus and system
CN107813310A (en) * 2017-11-22 2018-03-20 浙江优迈德智能装备有限公司 One kind is based on the more gesture robot control methods of binocular vision
CN108044625A (en) * 2017-12-18 2018-05-18 中南大学 A kind of robot arm control method based on the virtual gesture fusions of more Leapmotion
US20210316449A1 (en) * 2020-04-08 2021-10-14 Fanuc Corporation Robot teaching by human demonstration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈青 (Chen Qing): "Research on Gesture Recognition and Grasping Technology for a Manipulator Teaching System", China Master's Theses Full-text Database (Information Science and Technology Series) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471561A (en) * 2022-11-14 2022-12-13 科大讯飞股份有限公司 Object key point positioning method, cleaning robot control method and related equipment

Similar Documents

Publication Publication Date Title
Zimmermann et al. Learning to estimate 3d hand pose from single rgb images
US10372228B2 (en) Method and system for 3D hand skeleton tracking
WO2021068323A1 (en) Multitask facial action recognition model training method, multitask facial action recognition method and apparatus, computer device, and storage medium
JP7071054B2 (en) Information processing equipment, information processing methods and programs
CN114097004A (en) Autonomous task performance based on visual embedding
Dornaika et al. Simultaneous facial action tracking and expression recognition using a particle filter
CN112518756B (en) Motion trajectory planning method and device for mechanical arm, mechanical arm and storage medium
CN112287730A (en) Gesture recognition method, device, system, storage medium and equipment
CN109444146A (en) A kind of defect inspection method, device and the equipment of industrial processes product
CN113614784A (en) Detecting, tracking and three-dimensional modeling of objects using sparse RGB-D SLAM and interactive perception
WO2023083030A1 (en) Posture recognition method and related device
CN112775967A (en) Mechanical arm grabbing method, device and equipment based on machine vision
CN112017226A (en) Industrial part 6D pose estimation method and computer readable storage medium
EP4290459A1 (en) Augmented reality method and related device thereof
US20230330858A1 (en) Fine-grained industrial robotic assemblies
Zhang et al. Digital twin-enabled grasp outcomes assessment for unknown objects using visual-tactile fusion perception
CN113551661A (en) Pose identification and track planning method, device and system, storage medium and equipment
CN112917470A (en) Teaching method, device and system of manipulator, storage medium and equipment
WO2022120670A1 (en) Movement trajectory planning method and apparatus for mechanical arm, and mechanical arm and storage medium
CN112307799A (en) Gesture recognition method, device, system, storage medium and equipment
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
CN114202554A (en) Mark generation method, model training method, mark generation device, model training device, mark method, mark device, storage medium and equipment
Shah et al. Gesture recognition technique: a review
CN112287955A (en) Image-based processing, training and foreground extraction method, device and system
CN112307801A (en) Posture recognition method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20210608)