WO2019114339A1 - Method and device for correcting robot arm motion - Google Patents

Method and device for correcting robot arm motion

Info

Publication number: WO2019114339A1
Authority: WIPO (PCT)
Prior art keywords: target, camera, rotation matrix, arms, base
Application number: PCT/CN2018/104735
Other languages: English (en), French (fr)
Inventor: 张帆
Original assignee: 南京阿凡达机器人科技有限公司
Application filed by 南京阿凡达机器人科技有限公司
Publication of WO2019114339A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator

Definitions

  • The invention relates to the technical field of robot arm motion control, and in particular to a method and a device for correcting the movement of a robot arm.
  • The embodiments of the present application provide a method and a device for correcting the movement of a robot arm, to solve the technical problem that existing correction methods are costly and inefficient, and to achieve the technical effect of correcting the movement of the robot arm efficiently and accurately.
  • the application provides a method for correcting the movement of a robot arm, comprising:
  • Obtaining a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped;
  • the acquiring a robot arm to capture a target image of the object to be grasped includes:
  • The target identifier is a two-dimensional code image comprising four positioning points.
  • the extracting the multiple target identifiers from the target image includes:
  • determining current pose information of each of the plurality of motion arms according to the plurality of target identifiers and the target image, respectively including:
  • the corresponding relationship between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped is established according to the plurality of target identifiers, including:
  • Determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped, including:
  • determining the base-based rotation matrix of each of the plurality of moving arms.
  • the current pose information of each of the plurality of moving arms includes: each of the plurality of moving arms is based on a current Euler angle of the base.
  • the current arm motion is corrected according to the current pose information of each of the plurality of motion arms, including:
  • the application also provides a device for correcting the movement of a robot arm, comprising:
  • An acquiring module configured to acquire a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively set on each of the plurality of moving arms, the base, and the object to be grasped;
  • An extracting module configured to extract the plurality of target identifiers from the target image
  • a determining module configured to respectively determine current pose information of each of the plurality of moving arms according to the plurality of target identifiers and the target image;
  • a correction module configured to correct current arm motion according to current pose information of each of the plurality of motion arms.
  • The acquiring module includes a camera disposed at a preset position, and the camera is configured to acquire the target image of the robot arm grasping the object to be grasped, wherein the preset position is in an area other than the robot arm.
  • the extraction module comprises:
  • a first extracting unit configured to perform quadrilateral contour detection on the target image to extract a plurality of contour images
  • An acquiring unit configured to perform plane projective transformation on the plurality of contour images to obtain a front view of the plurality of target identifiers
  • a first determining unit configured to determine the multiple target identifiers according to a front view of the multiple target identifiers.
  • the determining module comprises:
  • a first establishing unit configured to establish, according to the plurality of target identifiers, a correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped;
  • a second determining unit configured to respectively determine, according to the target image and the correspondence, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped;
  • a third determining unit configured to determine the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
  • the first establishing unit comprises:
  • a rotation processing subunit configured to respectively perform rotation processing on the plurality of target identifiers to obtain a plurality of rotated identification images;
  • a determining subunit configured to match the plurality of rotated identification images with the identifiers in the identifier database to determine the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped.
  • the determining module comprises:
  • a fourth determining unit configured to determine the object-based rotation matrix and translation vector of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the object to be grasped;
  • a fifth determining unit configured to determine the base-based rotation matrix and translation vector of the object to be grasped according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped;
  • a sixth determining unit configured to determine the base-based rotation matrix of each of the plurality of moving arms according to the object-based rotation matrix and translation vector of each of the plurality of moving arms and the base-based rotation matrix and translation vector of the object to be grasped;
  • a seventh determining unit configured to determine the current pose information of each of the plurality of moving arms according to the base-based rotation matrix of each of the plurality of moving arms.
  • the current pose information of each of the plurality of moving arms includes: each of the plurality of moving arms is based on a current Euler angle of the base.
  • the correction module comprises:
  • a comparison unit configured to compare the current pose information of each of the plurality of moving arms with the motion path corresponding to the current robot arm motion, to determine the current pose deviation of each of the plurality of moving arms;
  • a correcting unit configured to respectively correct each of the plurality of moving arms according to the current pose deviation of each of the plurality of moving arms.
  • In this solution, a target identifier is set in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined according to the target identifiers in the acquired target image and the target image itself, so that the movement of the robot arm can be corrected according to the pose information of each moving arm. This solves the technical problem of high cost and low efficiency in existing methods for correcting robot arm movement, and achieves the technical effect of correcting the movement of the robot arm efficiently and accurately.
  • FIG. 1 is a processing flowchart of a method for correcting the motion of a robot arm according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a front view of a target identifier obtained by applying the method and apparatus for correcting the movement of a robot arm provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the method and apparatus for correcting the movement of a robot arm according to an embodiment of the present application;
  • FIG. 4 is a structural diagram of a device for correcting the movement of a robot arm according to an embodiment of the present application;
  • FIG. 5 is a structural diagram of an electronic device based on the method for correcting the motion of a robot arm provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of the method for correcting the movement of a robot arm provided by an embodiment of the present application in a scene example;
  • FIG. 7 is a schematic diagram of setting a corresponding target identifier on one of the plurality of moving arms before applying the method and apparatus for correcting the movement of a robot arm provided by an embodiment of the present application in a scene example;
  • FIG. 8 is a schematic flowchart of the method and apparatus for correcting the motion of a robot arm provided by an embodiment of the present application in a scene example.
  • Existing methods mostly use high-precision equipment to collect the relative position data between the robot arm's end effector and the object to be grasped, and then re-plan the motion path according to this relative position data.
  • The movement of the robot arm is then corrected with the re-planned motion path. Since the relative position data between the end effector and the object to be grasped provides only limited guidance, the individual moving arms cannot be adjusted separately; the motion path can only be re-planned from the position data to correct the movement of the robot arm as a whole. Therefore, existing methods often suffer from the technical problem of high cost and low efficiency in correcting the movement of the robot arm.
  • In view of this, the present application considers that a target identifier can be set in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined according to the target identifiers in the acquired target image and the target image itself.
  • The movement of the robot arm can then be corrected according to the specific pose information of each moving arm, thereby solving the technical problem of high cost and low efficiency in existing methods for correcting robot arm movement.
  • In this way, the technical effect of correcting the movement of the robot arm efficiently and accurately can be achieved.
  • the embodiment of the present application provides a method for correcting the motion of the robot arm.
  • a method for correcting the motion of the robot arm please refer to the processing flowchart of the method for correcting the motion of the robot arm according to the embodiment of the present application shown in FIG. 1 .
  • the method for correcting the motion of the robot arm provided by the embodiment of the present application may include the following steps.
  • S11: acquiring a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped.
  • the mechanical arm may specifically be a mechanical arm including a plurality of moving arms and a base.
  • The above base can be fixed in the work area, for example on a work plane. In this way, when the robot arm moves, the movement is completed by controlling the coordinated motion of each of the plurality of moving arms.
  • For example, the robot arm can be controlled to grasp the object to be grasped through the coordinated motion of each of the plurality of moving arms.
  • the above mechanical arm may specifically include three moving arms and one base.
  • The base may be denoted L1, and the three moving arms may be denoted L2, L3 and L4, respectively.
  • The specific number of moving arms is not limited in this application.
  • the grasping of the object to be grasped by the mechanical arm may be considered as a mechanical arm movement.
  • the above-described mechanical arm movement forms are merely for the purpose of better describing the embodiments of the present application.
  • other forms of motion may also be introduced according to specific situations and implementation requirements, for example, controlling the mechanical arm to mount components at a specified position, etc., as the mechanical arm motion to be corrected.
  • the application is not limited.
  • the target image may be a target image in which a robot arm moves to grab the object to be grasped.
  • the target image may also be a target image of other mechanical arm movements to grasp the object to be grasped.
  • the target image may specifically include multiple target identifiers.
  • the plurality of target identifiers may be specifically graphic identifiers respectively disposed on each of the plurality of moving arms of the robot arm, the base, and the object to be grasped.
  • the plurality of target identifiers are different from each other, and each target identifier has a unique correspondence with each of the plurality of moving arms, the base, and a target object to be grasped. Therefore, a target object can be uniquely identified by a target identifier.
  • For example, the second moving arm L3 can be uniquely determined from the target identifier M3 without being mistaken for the first moving arm L2.
  • the target identifier may specifically be a two-dimensional code image including four positioning points.
  • the four positioning points in the target identifier may be used to determine location information of the target identifier, and further, the location information of the target object with the target identifier may be determined according to the location information of the target identifier.
  • the two-dimensional code in the above target identifier is used to indicate the target object provided with the target identifier.
  • For example, the two-dimensional code may identify the target object carrying the identifier as the second moving arm L3. That is, the four positioning points of the target identifiers set on different target objects can be the same, but the two-dimensional codes in the target identifiers are different.
  • For example, the target identifier M3 set on the second moving arm L3 and the target identifier M2 set on the first moving arm have the same positioning points but different two-dimensional codes.
  • In addition, the edges of the square contour of a target identifier set on a target object are required to be parallel to the axis of the target object. For example, when the target identifier M3 is set on the second moving arm L3, a side of the square contour of M3 is required to coincide with the axial direction of the second moving arm.
  • the target identifier may be set on a corresponding target object by pasting.
  • The acquiring of the target image of the robot arm grasping the object to be grasped may be performed by a camera disposed at the preset position.
  • The preset position may specifically be a position in an area other than the robot arm, so that the target image acquired from this position can completely include each of the moving arms and the base of the robot arm, as well as the object to be grasped.
  • the preset position may specifically be a position where the camera can acquire a front image of a relatively complete target identifier.
  • the preset position may be above the working area of the arm.
  • For example, the camera may be disposed at the lower left corner above the working area, inclined downward toward the robot arm.
  • the front image of the relatively complete target identifier of all target objects can be acquired by the camera of the preset position.
  • the preset positions listed above are only for better explaining the embodiments of the present application.
  • other suitable locations may also be selected as the preset locations according to specific conditions and construction requirements. In this regard, the application is not limited.
  • The target image needs to be processed to extract a clear front view of each target identifier; the target identifier is then accurately determined from its front view.
  • the specific extraction process can include the following.
  • Quadrilateral contour detection is performed on the target image to extract a plurality of contour images.
  • the following may be included: performing binary segmentation on the target image by using an adaptive threshold method to obtain a corresponding binary image; and extracting the contour image from the binary image.
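As an illustrative sketch (not the application's own implementation), the adaptive-threshold binarization step can be emulated in plain numpy by comparing each pixel with the mean of its local neighborhood; the `block` and `c` parameters are assumed for illustration:

```python
import numpy as np

def adaptive_threshold(img, block=15, c=2):
    """Binarize a grayscale image: a pixel is foreground (255) if it exceeds
    the mean of its block x block neighborhood minus a constant c."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Local means via a summed-area table (integral image) for efficiency.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    window_sum = (ii[block:block + h, block:block + w] - ii[:h, block:block + w]
                  - ii[block:block + h, :w] + ii[:h, :w])
    local_mean = window_sum / (block * block)
    return (img > local_mean - c).astype(np.uint8) * 255
```

A dark marker region surrounded by a brighter background is mapped to 0 while the background stays 255, which is what the subsequent contour extraction operates on.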
  • The method may further include: performing polygon fitting on the contour images to discard those that do not meet the first requirement, and then excluding contour images that do not meet the second requirement. In this way, a plurality of qualifying contour images can be obtained, so that more accurate front views of the target identifiers can be obtained later.
  • Contour images that do not meet the first requirement may include non-convex polygonal contour images, non-quadrilateral contour images, and the like.
  • Contour images that do not meet the second requirement may include contour images in which one side of the quadrilateral is significantly smaller than another (i.e. relatively elongated contours), contour images whose perimeter or area is too small, and the like.
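The two filtering passes described above can be sketched as follows; the thresholds `min_area` and `max_aspect` are illustrative assumptions, not values taken from the application:

```python
import numpy as np

def _cross2(o, a, b):
    """z-component of the cross product of edge vectors (a - o) and (b - a)."""
    return (a[0] - o[0]) * (b[1] - a[1]) - (a[1] - o[1]) * (b[0] - a[0])

def polygon_area(pts):
    """Shoelace area of a closed polygon given as an (N, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def keep_contour(pts, min_area=100.0, max_aspect=5.0):
    """Return True if the fitted polygon passes both requirements above."""
    pts = np.asarray(pts, dtype=float)
    if pts.shape != (4, 2):
        return False                      # first requirement: a quadrilateral
    crosses = [_cross2(pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4]) for i in range(4)]
    if not (all(cr > 0 for cr in crosses) or all(cr < 0 for cr in crosses)):
        return False                      # first requirement: convex
    sides = np.linalg.norm(pts - np.roll(pts, 1, axis=0), axis=1)
    if sides.max() / max(sides.min(), 1e-9) > max_aspect:
        return False                      # second requirement: not elongated
    return polygon_area(pts) >= min_area  # second requirement: not too small
```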
  • Specifically, the coordinates of the four positioning points in a contour image may be extracted first, and the standard coordinates of the four positioning points obtained from the identifier database; a projective transformation matrix is then established from the coordinates of the four positioning points in the contour image and their standard coordinates; finally, the pixel points in the contour image are projectively transformed using this matrix to obtain the front view of the corresponding target identifier.
  • For example, the coordinates of the four positioning points after the (quadrilateral) contour image is converted into a front view by the projective transformation are (0, 0), (0, 100), (100, 100) and (100, 0), respectively (i.e. the standard coordinates of the four positioning points).
  • The coordinates of the four positioning points in the contour image can be determined from the contour image itself.
  • The projective transformation can be written as X' = HX, where: H is the projective transformation matrix; X = (x1, x2, x3)^T is the homogeneous (pixel) coordinate of a positioning point in the contour image; X' = (x1', x2', x3')^T is the homogeneous coordinate of the corresponding positioning point after the projective transformation; x = x1/x3 and y = x2/x3 are the corresponding non-homogeneous (pixel) abscissa and ordinate; and u = x1'/x3' and v = x2'/x3' are the non-homogeneous pixel abscissa and ordinate after the projective transformation.
  • The projective transformation matrix H can be solved from the simultaneous equations; the matrix H can then be used to projectively transform all pixel points in the contour image to obtain the projection result,
  • that is, the front view corresponding to the contour image, i.e. the front view of the target identifier (also called the planar identifier image).
  • The image size of the front view of the target identifier may specifically be 100 × 100.
  • S12-3: determining the multiple target identifiers according to the front views of the multiple target identifiers.
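The four-point solve for the projective transformation matrix H described above can be sketched in numpy (fixing h33 = 1 gives 8 unknowns from 8 equations); the point coordinates below are illustrative:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 projective transformation H mapping each (x, y) in
    src to the corresponding (u, v) in dst, from 4 point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), similarly for v,
        # rearranged into two linear equations per correspondence.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply H to one (x, y) point and dehomogenize."""
    x1, x2, x3 = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x1 / x3, x2 / x3])
```

Applying `warp_point` to every pixel of the contour image (or, in practice, inverse-mapping the 100 × 100 output grid) yields the front view of the target identifier.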
  • the front view of the target identifier can be obtained after the image processing to accurately determine the corresponding target identifier for subsequent analysis and use.
  • S13: determining the current pose information of each of the plurality of moving arms according to the plurality of target identifiers and the target image.
  • the current pose information of each of the plurality of moving arms can be determined efficiently and accurately.
  • the following steps can be performed.
  • S13-1: establishing the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped according to the plurality of target identifiers.
  • The foregoing correspondence may be regarded as an indication relationship between a target identifier and a target object. For example, determining that the target identifier M3 indicates the second moving arm L3 establishes the correspondence between M3 and L3. Thus, in subsequent processing, the positional relationship of the second moving arm L3 can be determined from the positional relationship of the target identifier M3.
  • S2: matching the plurality of rotated identification images with the identifiers in the identifier database to determine the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped.
  • Specifically, each target identifier may be rotated by 90 degrees, 180 degrees and 270 degrees; together with the unrotated target identifier, this gives four images per target identifier, i.e. the rotated identification images described above.
  • Reference may be made to the schematic diagram of the method and apparatus for correcting the motion of the robot arm provided by the embodiment of the present application shown in FIG.
  • The identifier database is configured with preset standard identifiers, and each standard identifier carries corresponding indication information used to indicate the target object corresponding to that standard identifier.
  • By first rotating each target identifier to obtain identification images at several angles, and then searching the identifier database with these images to find the matching standard identifier, the accuracy of the matching can be improved and the search for the standard identifier matching a target identifier can be accelerated.
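The rotate-then-match lookup can be sketched as follows, assuming (for illustration only) that markers are available as binary bitmaps and the identifier database is a simple dict from object name to standard bitmap:

```python
import numpy as np

def match_identifier(marker, database):
    """Compare the extracted marker bitmap against every database entry in
    all four 90-degree rotations; return the matching key, or None."""
    for k in range(4):
        rotated = np.rot90(marker, k)          # 0, 90, 180, 270 degrees
        for key, standard in database.items():
            if np.array_equal(rotated, standard):
                return key
    return None
```

Because every marker is tried at all four right-angle rotations, the correspondence is found regardless of the marker's in-plane orientation in the target image.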
  • S13-2: determining, according to the target image and the correspondence, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
  • Specifically, the coordinate system at the position of the camera may be used as the reference coordinate system, and the positional relationship between the coordinate system of each target object and that of the camera may be determined, i.e. the camera-based rotation matrix and translation vector of each of the plurality of moving arms, of the base, and of the object to be grasped. These positional relationships can then be used to further determine the specific position of each target object.
  • Take the object to be grasped as an example, and let the target identifier of the object to be grasped be M1.
  • The center O1 of M1 may be taken as the origin, with the Z coordinate of points on the plane in which M1 lies equal to 0.
  • If the side length of M1 is 80 mm, the coordinates of the four positioning points in the coordinate system O1 of the target identifier M1 are (-40, -40, 0), (40, -40, 0), (40, 40, 0) and (-40, 40, 0), respectively.
  • For each of the four positioning points on the target identifier M1, the following conversion relationship holds between its pixel coordinates on the acquired target image and its coordinates in its own coordinate system O1: s·(x, y, 1)^T = M·[r1 r2 r3 t]·(X, Y, Z, 1)^T, where:
  • (x, y, 1) represents the homogeneous pixel coordinates of any positioning point in the camera-based image;
  • (X, Y, Z, 1) represents the homogeneous coordinates of the positioning point in the world coordinate system, which (with Z = 0 on the marker plane) simplify to (X, Y, 0, 1); s is an arbitrary scale parameter; M is the internal parameter matrix of the camera; r1, r2 and r3 are the three column vectors of the rotation matrix R1 of the target identifier's coordinate system with respect to the camera coordinate system; and t is the translation vector.
  • the above M can be specifically obtained by calibrating the camera.
  • An equation can be formed from the pixel coordinates of one positioning point of the target identifier M1 and the coordinates of that point in the coordinate system of the target identifier.
  • The four positioning points of the target identifier thus determine four equations.
  • The direct linear transform (DLT) algorithm can be used to solve these simultaneous equations to obtain the vectors r1, r2, r3 and t.
  • Since the rotation matrix R1 is an orthogonal matrix, the column vectors r1, r2 and r3 are mutually orthogonal unit vectors; once r1 and r2 are obtained, r3 can be obtained as the vector (cross) product of r1 and r2.
  • The rotation matrix R1 can then be expressed as (r1, r2, r3), i.e. the camera-based rotation matrix of the object to be grasped, and t is the camera-based translation vector of the object to be grasped.
  • In the same manner, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, and the camera-based rotation matrix and translation vector of the base, can be calculated separately.
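The recovery of R1 = (r1, r2, r3) and t for a planar marker can be sketched as follows, assuming the DLT solve has already produced a matrix G proportional to M·[r1 r2 t] (the function name and scaling convention are illustrative):

```python
import numpy as np

def pose_from_planar_homography(M, G):
    """Recover the camera-based rotation matrix R1 = (r1, r2, r3) and the
    translation vector t of a planar marker, given the camera internal
    parameter matrix M and G = s * M @ [r1 r2 t] for some scale s."""
    A = np.linalg.inv(M) @ G           # proportional to [r1 r2 t]
    scale = 1.0 / np.linalg.norm(A[:, 0])  # scale so that r1 is a unit vector
    r1 = scale * A[:, 0]
    r2 = scale * A[:, 1]
    r3 = np.cross(r1, r2)              # orthogonality of R1 gives r3 = r1 x r2
    t = scale * A[:, 2]
    return np.column_stack([r1, r2, r3]), t
```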
  • Determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped may include the following content.
  • S4: determining the current pose information of each of the plurality of moving arms according to the base-based rotation matrix of each of the plurality of moving arms.
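The chain of camera-based poses described above (arm relative to object, object relative to base, and hence arm relative to base) reduces to composing rigid transforms; a minimal sketch, with the function name assumed for illustration:

```python
import numpy as np

def relative_pose(R_a_cam, t_a_cam, R_b_cam, t_b_cam):
    """Pose of frame a expressed in frame b, given both camera-based poses:
    R_a_b = R_b_cam^T @ R_a_cam,  t_a_b = R_b_cam^T @ (t_a_cam - t_b_cam)."""
    R = R_b_cam.T @ R_a_cam
    t = R_b_cam.T @ (t_a_cam - t_b_cam)
    return R, t
```

Chaining arm-to-object with object-to-base gives the same arm-to-base pose as relating the arm to the base directly, which is why the intermediate object-based quantities in the steps above suffice to determine each arm's base-based rotation matrix.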
  • the current pose information of each of the plurality of moving arms includes: each of the plurality of moving arms is based on a current Euler angle of the base.
  • The current Euler angles of the moving arms specifically refer to the Euler angles of each moving arm's own coordinate system with respect to the base.
  • Since the edges of a target identifier are parallel to the axis of its target object, the coordinate axes of the coordinate system established for the target identifier and the coordinate axes of the target object itself are parallel to each other.
  • For example, the X axis of the coordinate system of the target identifier M2, with O2 as origin, is parallel to the X axis of the coordinate system of the moving arm L2, with the right end point OL2 of the moving arm as origin.
  • Hence the current Euler angles of the target identifier's own coordinate system relative to a reference coordinate system are the same as the current Euler angles of the target object's own coordinate system with respect to the same reference coordinate system. Therefore, the Euler angles of the target identifier can be used as the Euler angles of the target object, i.e. as the pose information of each moving arm.
  • Specifically, the current pose information of each of the plurality of moving arms may be determined according to the following formulas (the standard Euler-angle extraction consistent with the r_ij notation below): θx = atan2(r32, r33), θy = atan2(-r31, sqrt(r32² + r33²)), θz = atan2(r21, r11), where:
  • θx, θy, θz represent the Euler angles of the moving arm about the three coordinate axes of the base coordinate system;
  • r_ij represents the element in the i-th row and j-th column of the base-based rotation matrix of the moving arm;
  • for example, r11 is the element in the first row and first column of the base-based rotation matrix of the moving arm.
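The Euler-angle extraction from a base-based rotation matrix can be sketched as follows (note numpy's 0-based indexing versus the 1-based r_ij in the text; the Z-Y-X convention is an assumption consistent with these formulas):

```python
import numpy as np

def euler_from_rotation(R):
    """Extract (theta_x, theta_y, theta_z) from a rotation matrix
    R = Rz(theta_z) @ Ry(theta_y) @ Rx(theta_x)."""
    theta_x = np.arctan2(R[2, 1], R[2, 2])                       # atan2(r32, r33)
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))   # atan2(-r31, sqrt(r32^2 + r33^2))
    theta_z = np.arctan2(R[1, 0], R[0, 0])                       # atan2(r21, r11)
    return theta_x, theta_y, theta_z
```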
  • The current robot arm motion is corrected according to the current pose information of each of the plurality of moving arms, specifically including the following content:
  • Each moving arm can be corrected individually on the original motion path according to its current specific pose deviation, thereby avoiding re-planning the motion path and achieving the purpose of correcting the robot arm movement in a timely, accurate and rapid manner.
  • In this solution, a target identifier is set in advance on each of the moving arms, the base, and the object to be grasped, and the pose information of each moving arm is determined according to the target identifiers in the acquired target image and the target image itself.
  • The movement of the robot arm can then be corrected according to the pose information of each moving arm, thereby solving the technical problem of high cost and low efficiency in existing methods for correcting robot arm movement.
  • In this way, the technical effect of correcting the movement of the robot arm efficiently and accurately is achieved.
  • Before the specific implementation, the method may further include: calibrating the camera to obtain the internal parameter matrix of the camera, where the internal parameter matrix includes parameters such as the distortion coefficient and focal length of the camera.
  • the above calibration of the camera may specifically include the following:
  • the calibration target may be a plane;
  • S4: determining the internal parameters, external parameters, distortion coefficients, focal length and the like of the camera according to the coordinate positions of the feature points in the world coordinate system and the corresponding pixel coordinates in the image, so as to establish the internal parameter matrix of the camera.
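As a minimal illustration of what such an internal parameter (intrinsic) matrix contains and how it maps camera-frame points to pixel coordinates (the focal lengths and principal point below are illustrative assumptions, not values from the patent; lens distortion and skew are ignored):

```python
# Sketch: the pinhole intrinsic matrix K and its use for projecting a
# 3D point expressed in the camera frame to pixel coordinates.

def make_intrinsic_matrix(fx, fy, cx, cy):
    """Build the 3x3 pinhole intrinsic matrix K (zero skew)."""
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

def project_point(K, point_cam):
    """Project a camera-frame 3D point to pixel coordinates (u, v)."""
    x, y, z = point_cam
    # Homogeneous image coordinates: [u*z, v*z, z]^T = K [x, y, z]^T
    uz = K[0][0] * x + K[0][2] * z
    vz = K[1][1] * y + K[1][2] * z
    return uz / z, vz / z

K = make_intrinsic_matrix(fx=800.0, fy=800.0, cx=320.0, cy=240.0)
u, v = project_point(K, (0.1, -0.05, 1.0))
```

In practice the entries of K come out of the calibration step S4 above; here they are hard-coded for illustration.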
  • the camera may specifically be a monocular camera.
  • it should be noted that the monocular camera listed above is only intended to better illustrate the embodiments of the present application.
  • in specific implementations, other suitable cameras, for example a binocular camera, may also be selected according to the specific situation. The present application is not limited in this regard.
  • the method for correcting robot arm motion provided by the embodiments of the present application works by setting a target marker in advance on each of the moving arms, the base and the object to be grasped; the pose information of each moving arm is then determined from the target markers in the acquired target image and the target image itself, so that the robot arm motion can be corrected in a targeted manner according to the pose information of each moving arm. This solves the technical problems of high cost and low efficiency in correcting robot arm motion in existing methods, and achieves the technical effect of efficiently and accurately correcting the motion of the robot arm. Moreover, by using a camera disposed outside the robot arm to acquire a target image containing the target markers of each moving arm and the base, the current pose information of each specific moving arm is determined, and the corresponding correction is performed based on that current pose information, thereby improving correction efficiency while ensuring correction accuracy.
  • an embodiment of the present invention also provides a device for correcting robot arm motion, as described in the following embodiments. Since the principle by which the device solves the problem is similar to that of the method for correcting robot arm motion, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
  • the term "unit” or "module” may implement a combination of software and/or hardware of a predetermined function.
  • the apparatus described in the following embodiments is preferably implemented in software, hardware, or a combination of software and hardware, is also possible and contemplated. Please refer to FIG.
  • the device may specifically include: an acquisition module 21 , an extraction module 22 , a determination module 23 , and a correction module 24 . This structure will be specifically described.
  • the acquisition module 21 is specifically configured to acquire a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target markers, and the plurality of target markers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped;
  • the extracting module 22 is specifically configured to extract the plurality of target identifiers from the target image
  • the determining module 23 is specifically configured to determine current pose information of each of the plurality of moving arms according to the plurality of target identifiers and the target image;
  • the correction module 24 is specifically configured to correct the current robot arm motion according to the current pose information of each of the plurality of moving arms.
  • the acquisition module 21 may specifically include a camera, which may be disposed at a preset position, the camera being specifically configured to acquire the target image of the robot arm grasping the object to be grasped.
  • the preset position may specifically include an area other than the robot arm.
  • the extraction module 22 may specifically include the following structural units:
  • the first extracting unit may be specifically configured to perform quadrilateral contour detection on the target image to extract a plurality of contour images
  • the acquiring unit may be configured to perform plane projective transformation on the plurality of contour images to obtain a front view of the plurality of target identifiers;
  • the first determining unit may be specifically configured to determine the multiple target identifiers according to a front view of the multiple target identifiers.
  • the determining module 23 may specifically include the following structural units:
  • the first establishing unit may be configured to establish, according to the plurality of target identifiers, a correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped;
  • the second determining unit may be specifically configured to determine, according to the target image and the correspondence, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped;
  • the third determining unit may be specifically configured to determine the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
  • the first establishing unit may specifically include the following structural subunits:
  • the rotation processing subunit may be configured to separately perform rotation processing on the plurality of target markers to obtain a plurality of rotated marker images;
  • the determining subunit may be used to match the plurality of rotated marker images with the markers in a marker database, to determine the correspondence between the plurality of target markers and the plurality of moving arms, the base and the object to be grasped.
  • the determining module 23 may specifically include the following structural units:
  • the fourth determining unit may be specifically configured to determine the rotation matrix and translation vector of each of the plurality of moving arms relative to the object to be grasped, according to the camera-based rotation matrix and translation vector of each moving arm and the camera-based rotation matrix and translation vector of the object to be grasped;
  • the fifth determining unit is specifically configured to determine the rotation matrix and translation vector of the object to be grasped relative to the base, according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped;
  • the sixth determining unit may be specifically configured to determine the rotation matrix of each of the plurality of moving arms relative to the base, according to the rotation matrix and translation vector of each moving arm relative to the object to be grasped and the rotation matrix and translation vector of the object to be grasped relative to the base;
  • the seventh determining unit is specifically configured to determine the current pose information of each of the plurality of moving arms according to the rotation matrix of each moving arm relative to the base.
  • the current pose information of each of the plurality of moving arms may specifically include: the current Euler angles of each of the plurality of moving arms relative to the base.
  • correction module 24 may specifically include the following structural units:
  • the comparison unit may be specifically configured to compare the current pose information of each of the plurality of moving arms with the motion path corresponding to the current robot arm motion, to determine the current pose deviation of each of the plurality of moving arms;
  • the correcting unit may be specifically configured to correct each of the plurality of moving arms separately according to the current pose deviation of each moving arm.
  • system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • for convenience of description, the above devices are described in terms of functions divided into various units.
  • of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
  • the device for correcting robot arm motion provided by the embodiments of the present application sets a target marker on each of the moving arms, the base and the object to be grasped; the determination module then determines the pose information of each moving arm according to the target markers in the target image acquired by the acquisition module and the target image itself, and the correction module corrects the motion of the robot arm according to the pose information of each moving arm. This solves the technical problems of high cost and low efficiency in correcting robot arm motion in existing methods, and achieves the technical effect of efficiently and accurately correcting the motion of the robot arm; moreover, the target image is acquired by a camera disposed outside the robot arm.
  • an embodiment of the present application further provides an electronic device.
  • the electronic device may specifically include an input device 31, a processor 32 and a memory 33.
  • the input device 31 may be specifically configured to input a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target markers, and the plurality of target markers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped.
  • the processor 32 may be specifically configured to extract the plurality of target markers from the target image; determine, according to the plurality of target markers and the target image, the current pose information of each of the plurality of moving arms; and correct the current robot arm motion according to the current pose information of each of the plurality of moving arms.
  • the memory 33 may be specifically used to store the target image input by the input device 31, and various intermediate data and result data generated during the operation of the processor 32.
  • the input device may specifically be one of the main devices for exchanging information between a user and a computer system.
  • the input device may include a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting tablet, a voice input device and the like; the input device is used to input raw data, and the programs that process the data, into the computer.
  • the input device may also acquire data transmitted by other modules, units and devices.
  • the processor can be implemented in any suitable manner.
  • a processor can take the form of, for example, a microprocessor or processor together with a computer readable medium storing computer readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, or an application-specific integrated circuit.
  • the memory may specifically be a memory device used for storing information in modern information technology.
  • the memory can exist at many levels: in a digital system, anything that can save binary data can be a memory; in an integrated circuit, a circuit with a storage function but without a physical form is also called a memory, such as a RAM or a FIFO; in a system, a storage device with a physical form is also called a memory, such as a memory stick or a TF card.
  • the embodiments of the present application further provide a computer storage medium based on the above method for correcting robot arm motion. The computer storage medium stores computer program instructions which, when executed, implement: acquiring a target image of a robot arm grasping an object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target markers, and the plurality of target markers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped; extracting the plurality of target markers from the target image; determining, according to the plurality of target markers and the target image, the current pose information of each of the plurality of moving arms; and correcting the current robot arm motion according to the current pose information of each of the plurality of moving arms.
  • the storage medium includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a cache, a hard disk drive (HDD), or a memory card.
  • the memory can be used to store computer program instructions.
  • the network communication unit may be an interface for performing network connection communication in accordance with a standard stipulated by the communication protocol.
  • in a specific scenario example, the method and device for correcting robot arm motion provided by the present application are used to correct the motion of a robot arm that includes a plurality of moving arms as it grasps an object to be grasped.
  • please refer to FIG. 6, a schematic diagram of the scenario example in which the method for correcting robot arm motion provided by the embodiments of the present application is applied to a robot arm grasping an object to be grasped; the specific process may proceed as follows.
  • first, a unique plane marker (i.e., a target marker) may be separately set on each of the plurality of moving arms of the robot arm, the base and the object to be grasped, making sure when placing each marker that the edges of the plane marker are parallel to the axes of the target object.
  • for how the plane markers are specifically arranged, refer to FIG. 7, a schematic diagram of setting the corresponding target marker on one of the plurality of moving arms before applying the method and device for correcting robot arm motion provided by the embodiments of the present application in a scenario example.
  • take the plane marker M2 on the moving arm L2 as an example.
  • the plane marker M2 may be specifically fixed at a position on the moving arm L2 that can be photographed by the camera.
  • the center O2 of M2 can be used as the origin of the plane marker M2 coordinate system.
  • the right end point OL2 of the moving arm is used as the origin of the moving arm L2 coordinate system.
  • when the plane marker M2 is arranged, the coordinate axes of the plane marker M2 coordinate system O2 must be respectively parallel to those of the moving arm L2 coordinate system OL2 (that is, the edges of the plane marker are parallel to the axes of the target object).
  • the plane marks M1, M3, and M4 are respectively disposed on the object to be grasped, the moving arm L3, and the base L1.
  • the origin of the coordinate system of the plane mark may be set to O1, O3, and O4, respectively.
  • after the plane markers are arranged, the camera needs to be placed at a position outside the robot arm.
  • for example, the camera can be disposed above the working platform, so that it can acquire a target image containing the plane markers M1, M2, M3 and M4.
  • the origin of the coordinate system of the camera may be set to O5.
  • S1: obtain a video stream through the camera, and obtain from the video stream a target image containing the plane markers M1, M2, M3 and M4.
  • S2: perform detection of the markers (i.e., the plane markers, or target markers) on the target image, and determine the ID of each marker (i.e., establish the correspondence between each target marker and its target object).
  • S5: perform pose correction on the motion of the robot arm.
  • specifically, the rotation matrix R and the translation vector t (relative to the base) can be calculated separately according to each marker, and the pose of the corresponding moving arm can then be determined.
  • the calculation of the pose of the moving arm L2 is taken as an example below to illustrate how the pose of each moving arm is specifically determined. The process may include the following.
  • first, the rotation matrices R1 and R2 and the translation vectors t1 and t2 of the coordinate systems O1 and O2 of the plane markers M1 and M2 relative to the camera coordinate system O5 can be calculated.
  • the known matrix [R1, t1] is the transformation matrix of the plane marker M1 coordinate system O1 relative to the camera coordinate system O5, so its inverse [R1, t1]^-1 is the transformation matrix of the camera coordinate system O5 relative to the plane marker M1 coordinate system O1.
  • the known matrix [R2, t2] is the transformation matrix of the plane marker M2 coordinate system O2 relative to the camera coordinate system O5; therefore, the matrix [R2, t2][R1, t1]^-1 is the transformation matrix of the plane marker M2 coordinate system O2 relative to the plane marker M1 coordinate system O1.
  • the conversion relationship between the two coordinate systems is constant; that is, the rotation matrix R0 and the translation vector t0 of the plane marker M1 coordinate system O1 relative to the robot arm base coordinate system O4 are constant values.
  • since the matrix [R0, t0] is the transformation matrix of the plane marker M1 coordinate system O1 relative to the robot arm base coordinate system O4, the transformation matrix of the plane marker M2 coordinate system O2 relative to the robot arm base coordinate system O4 can be expressed as [R2, t2][R1, t1]^-1[R0, t0].
  • the matrix composed of the first three columns of the conversion matrix is the rotation matrix R4 of the plane marker M2 coordinate system O2 with respect to the coordinate system O4 of the robot arm base, and the fourth column is the translation vector t4.
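The chain of transformations just described can be sketched with 4x4 homogeneous matrices; the rotation angles and translations below are illustrative assumptions, not measured values:

```python
# Sketch: composing the marker-to-base pose from camera-relative poses,
# mirroring [R2,t2][R1,t1]^-1[R0,t0] in the text, using homogeneous
# transforms T = [[R, t], [0, 0, 0, 1]].

import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from 3x3 R and 3-vector t."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def invert_T(T):
    """Invert a rigid transform: R' = R^T, t' = -R^T t."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [T[0][3], T[1][3], T[2][3]]
    tp = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return make_T(Rt, tp)

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Illustrative camera-relative marker poses [R1,t1], [R2,t2] and the
# fixed marker-to-base transform [R0,t0].
T1 = make_T(rot_z(0.3), [0.1, 0.0, 1.0])
T2 = make_T(rot_z(0.8), [0.2, 0.1, 1.1])
T0 = make_T(rot_z(0.0), [0.0, 0.0, 0.05])

# [R2,t2][R1,t1]^-1[R0,t0]: marker M2 coordinate system relative to O4.
T_base_M2 = matmul(matmul(T2, invert_T(T1)), T0)
R4 = [row[:3] for row in T_base_M2[:3]]   # rotation part (first 3 columns)
t4 = [row[3] for row in T_base_M2[:3]]    # translation part (4th column)
```

With pure z-rotations the composed rotation is simply Rz(0.8 - 0.3), which makes the result easy to check by hand.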
  • S2: calculate the Euler angles of the plane marker M2 coordinate system O2 relative to the robot arm base coordinate system O4 according to the rotation matrix.
  • these Euler angles can be obtained from the rotation matrix R4 of the plane marker M2 coordinate system O2 relative to the base coordinate system O4.
  • the rotation angles θx, θy, θz about the three coordinate axes X, Y, Z of the coordinate system O4 can be specifically calculated as follows:
  • the sines and cosines of the Euler angles about the three axes can be written as sx, cx, sy, cy, sz, cz; if the rotation is performed sequentially about the x-axis, the y-axis and the z-axis, the resulting transformation matrix can be expressed as:
  • denoting the element in row i and column j of the rotation matrix R4 of the plane marker M2 coordinate system O2 relative to the base coordinate system O4 as r_ij,
  • the values of the Euler angles can be derived using trigonometric functions.
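A minimal sketch of recovering θx, θy, θz from a rotation matrix under the rotate-about-x-then-y-then-z convention described above (the patent's formula images are not reproduced on this page, so the closed forms below are the standard derivation for that convention, assumed rather than quoted):

```python
# Sketch: Euler angles from R = Rz @ Ry @ Rx (rotation applied
# sequentially about the x, y, z axes), valid away from gimbal lock.

import math

def R_from_euler_xyz(tx, ty, tz):
    sx, cx = math.sin(tx), math.cos(tx)
    sy, cy = math.sin(ty), math.cos(ty)
    sz, cz = math.sin(tz), math.cos(tz)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def euler_xyz_from_R(R):
    """Invert the matrix above. Assumes cos(theta_y) != 0."""
    ty = math.asin(-R[2][0])           # r31 = -sin(theta_y)
    tx = math.atan2(R[2][1], R[2][2])  # r32 / r33 = tan(theta_x)
    tz = math.atan2(R[1][0], R[0][0])  # r21 / r11 = tan(theta_z)
    return tx, ty, tz
```

A quick round trip (build R from known angles, then recover them) is the easiest way to validate such formulas.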
  • the above process has calculated the Euler angles θx, θy, θz of the plane marker M2 coordinate system O2 relative to the three coordinate axes of the robot arm base coordinate system O4. Since the coordinate axes of O2 are respectively parallel to those of the moving arm L2 coordinate system OL2, the Euler angles of OL2 relative to the three coordinate axes of the base coordinate system O4 are also θx, θy, θz. Therefore, the pose of the arm L2 in the base coordinate system O4, i.e., the current pose of the arm L2, can be uniquely determined from the Euler angles of the plane marker M2 obtained above.
  • the pose of the moving arm L3 can be calculated in a similar manner. However, based on the pose of L2, it is also possible to determine the pose of L3 more quickly from the relative position of the moving arms L3 and L2. Specifically, since the moving arm L3 is connected to the moving arm L2, and the above process has already calculated the pose of L2 in the robot arm base coordinate system O4, only the pose of L3 in the coordinate system of L2 needs to be calculated; the pose of the moving arm L3 in the base coordinate system O4 can then be determined.
  • the rotation matrix and translation vector of the plane marker M3 coordinate system O3 relative to the plane marker M2 coordinate system O2 may first be calculated as follows: the known matrix [R2, t2] is the transformation matrix of the plane marker M2 coordinate system O2 relative to the camera coordinate system O5, so its inverse [R2, t2]^-1 is the transformation matrix of the camera coordinate system O5 relative to the plane marker M2 coordinate system O2. Since the matrix [R3, t3] is the transformation matrix of the plane marker M3 coordinate system O3 relative to the camera coordinate system O5, the matrix [R3, t3][R2, t2]^-1 is the transformation matrix of the plane marker M3 coordinate system O3 relative to the plane marker M2 coordinate system O2.
  • from this, the Euler angles θx, θy, θz of the plane marker M3 coordinate system O3 relative to the three coordinate axes of the plane marker M2 coordinate system O2 can be calculated.
  • the coordinate axes of the plane marker M3 coordinate system O3 are respectively parallel to those of the moving arm L3 coordinate system OL3, and the coordinate axes of the plane marker M2 coordinate system O2 are respectively parallel to those of the moving arm L2 coordinate system OL2. Therefore, the Euler angles of the coordinate system OL3 of the moving arm L3 relative to the three coordinate axes of the coordinate system OL2 of the moving arm L2 are also θx, θy, θz. In this way, the pose information of each moving arm relative to the base of the robot arm can be determined.
  • finally, the current pose measured in real time as above, i.e., the current Euler angles, can be used to correct the specific execution process of each moving arm in the robot arm in real time, so that the overall execution precision of the robot arm can be improved and execution errors reduced.
  • the above scenario example verifies that the method and device for correcting robot arm motion provided by the embodiments of the present application, by setting a target marker in advance on each of the moving arms, the base and the object to be grasped, then determining the pose information of each moving arm from the target markers in the acquired target image and the target image itself, and correcting the robot arm motion in a targeted manner according to the pose information of each moving arm, solve the technical problems of high cost and low efficiency in correcting robot arm motion in existing methods, and achieve the technical effect of efficiently and accurately correcting the motion of the robot arm.
  • the controller may also be implemented by logically programming the method steps, for example in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers and embedded microcontrollers.
  • the application can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types.
  • the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Abstract

A method and device for correcting robot arm motion. The method includes: acquiring a target image of a robot arm grasping an object to be grasped, the target image including a plurality of target markers; determining the current pose information of each moving arm according to the target markers and the target image; and correcting the current robot arm motion accordingly based on the current pose information of each moving arm.

Description

Method and device for correcting robot arm motion
This application claims priority to Chinese patent application No. 201711305455.4, filed on December 11, 2017 and entitled "Method and device for correcting robot arm motion", the entire contents of which are incorporated herein.
Technical Field
The present invention relates to the technical field of robot arm motion control, and in particular to a method and a device for correcting robot arm motion.
Background
With the development of automation, intelligence and other related technologies, robot arms are used more and more widely in industrial production and daily life. At the same time, as production and living standards rise, people place increasingly high requirements on the precision of robot arms.
At present, in order to improve the precision of the robot arm and reduce errors during its motion, most existing methods use high-precision equipment to collect relative position data between the robot arm's end effector and the object to be grasped, re-plan the motion path according to this relative position data, and then use the re-planned motion path to correct the motion of the robot arm. Therefore, when the existing methods are implemented, there are often technical problems of high cost and low efficiency in correcting robot arm motion.
No effective solution to the above problems has yet been proposed.
Summary of the Invention
Embodiments of the present application provide a method and a device for correcting robot arm motion, to solve the technical problems of high cost and low efficiency of correcting robot arm motion in existing methods, and to achieve the technical effect of efficiently and accurately correcting robot arm motion.
The present application provides a method for correcting robot arm motion, including:
acquiring a target image of a robot arm grasping an object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target markers, and the plurality of target markers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped;
extracting the plurality of target markers from the target image;
determining, according to the plurality of target markers and the target image, the current pose information of each of the plurality of moving arms;
correcting the current robot arm motion according to the current pose information of each of the plurality of moving arms.
In one embodiment, acquiring the target image of the robot arm grasping the object to be grasped includes:
acquiring the target image of the robot arm grasping the object to be grasped through a camera disposed at a preset position, wherein the preset position includes an area other than the robot arm.
In one embodiment, the target marker is a two-dimensional code image including four positioning points.
In one embodiment, extracting the plurality of target markers from the target image includes:
performing quadrilateral contour detection on the target image to extract a plurality of contour images;
performing planar projective transformation on each of the plurality of contour images to obtain front views of the plurality of target markers;
determining the plurality of target markers according to the front views of the plurality of target markers.
In one embodiment, determining, according to the plurality of target markers and the target image, the current pose information of each of the plurality of moving arms includes:
establishing, according to the plurality of target markers, the correspondence between the plurality of target markers and the plurality of moving arms, the base and the object to be grasped;
determining, according to the target image and the correspondence, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped;
determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
In one embodiment, establishing, according to the plurality of target markers, the correspondence between the plurality of target markers and the plurality of moving arms, the base and the object to be grasped includes:
performing rotation processing on each of the plurality of target markers to obtain a plurality of rotated marker images;
matching the plurality of rotated marker images respectively with the markers in a marker database, to determine the correspondence between the plurality of target markers and the plurality of moving arms, the base and the object to be grasped.
In one embodiment, determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped includes:
determining the rotation matrix and translation vector of each of the plurality of moving arms relative to the object to be grasped, according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the object to be grasped;
determining the rotation matrix and translation vector of the object to be grasped relative to the base, according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped;
determining the rotation matrix of each of the plurality of moving arms relative to the base, according to the rotation matrix and translation vector of each of the plurality of moving arms relative to the object to be grasped and the rotation matrix and translation vector of the object to be grasped relative to the base;
determining the current pose information of each of the plurality of moving arms according to the rotation matrix of each of the plurality of moving arms relative to the base.
In one embodiment, the current pose information of each of the plurality of moving arms includes: the current Euler angles of each of the plurality of moving arms relative to the base.
In one embodiment, correcting the current robot arm motion according to the current pose information of each of the plurality of moving arms includes:
comparing the current pose information of each of the plurality of moving arms with the motion path corresponding to the current robot arm motion, to determine the current pose deviation of each of the plurality of moving arms;
correcting each of the plurality of moving arms separately according to the current pose deviation of each of the plurality of moving arms.
The present application also provides a device for correcting robot arm motion, including:
an acquisition module, configured to acquire a target image of a robot arm grasping an object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target markers, and the plurality of target markers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped;
an extraction module, configured to extract the plurality of target markers from the target image;
a determination module, configured to determine, according to the plurality of target markers and the target image, the current pose information of each of the plurality of moving arms;
a correction module, configured to correct the current robot arm motion according to the current pose information of each of the plurality of moving arms.
In one embodiment, the acquisition module includes a camera disposed at a preset position, the camera being configured to acquire the target image of the robot arm grasping the object to be grasped, wherein the preset position includes an area other than the robot arm.
In one embodiment, the extraction module includes:
a first extraction unit, configured to perform quadrilateral contour detection on the target image to extract a plurality of contour images;
an acquisition unit, configured to perform planar projective transformation on each of the plurality of contour images to obtain front views of the plurality of target markers;
a first determining unit, configured to determine the plurality of target markers according to the front views of the plurality of target markers.
In one embodiment, the determination module includes:
a first establishing unit, configured to establish, according to the plurality of target markers, the correspondence between the plurality of target markers and the plurality of moving arms, the base and the object to be grasped;
a second determining unit, configured to determine, according to the target image and the correspondence, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped;
a third determining unit, configured to determine the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
In one embodiment, the first establishing unit includes:
a rotation processing subunit, configured to perform rotation processing on each of the plurality of target markers to obtain a plurality of rotated marker images;
a determining subunit, configured to match the plurality of rotated marker images respectively with the markers in a marker database, to determine the correspondence between the plurality of target markers and the plurality of moving arms, the base and the object to be grasped.
In one embodiment, the determination module includes:
a fourth determining unit, configured to determine the rotation matrix and translation vector of each of the plurality of moving arms relative to the object to be grasped, according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the object to be grasped;
a fifth determining unit, configured to determine the rotation matrix and translation vector of the object to be grasped relative to the base, according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped;
a sixth determining unit, configured to determine the rotation matrix of each of the plurality of moving arms relative to the base, according to the rotation matrix and translation vector of each of the plurality of moving arms relative to the object to be grasped and the rotation matrix and translation vector of the object to be grasped relative to the base;
a seventh determining unit, configured to determine the current pose information of each of the plurality of moving arms according to the rotation matrix of each of the plurality of moving arms relative to the base.
In one embodiment, the current pose information of each of the plurality of moving arms includes: the current Euler angles of each of the plurality of moving arms relative to the base.
In one embodiment, the correction module includes:
a comparison unit, configured to compare the current pose information of each of the plurality of moving arms with the motion path corresponding to the current robot arm motion, to determine the current pose deviation of each of the plurality of moving arms;
a correction unit, configured to correct each of the plurality of moving arms separately according to the current pose deviation of each of the plurality of moving arms.
In the embodiments of the present application, since the solution sets a target marker in advance on each moving arm, the base and the object to be grasped, then determines the pose information of each moving arm according to the target markers in the acquired target image and the target image itself, the robot arm motion can be corrected in a targeted manner according to the pose information of each moving arm. This solves the technical problems of high cost and low efficiency in correcting robot arm motion in existing methods, and achieves the technical effect of efficiently and accurately correcting the motion of the robot arm.
Brief Description of the Drawings
The above characteristics, technical features, advantages and implementations of the method and device for correcting robot arm motion of the present invention are further described below, in a clear and easily understandable manner, in connection with preferred embodiments and with reference to the accompanying drawings.
FIG. 1 is a processing flowchart of the method for correcting robot arm motion provided according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a front view of a target marker obtained by applying the method and device for correcting robot arm motion provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of rotation processing performed on a target marker by applying the method and device for correcting robot arm motion provided by an embodiment of the present application;
FIG. 4 is a structural diagram of the device for correcting robot arm motion provided according to an embodiment of the present application;
FIG. 5 is a structural diagram of an electronic device based on the method for correcting robot arm motion provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a scenario example in which the method and device for correcting robot arm motion provided by an embodiment of the present application are applied to a robot arm grasping an object to be grasped;
FIG. 7 is a schematic diagram of setting a corresponding target marker on one of a plurality of moving arms before applying the method and device for correcting robot arm motion provided by an embodiment of the present application in a scenario example;
FIG. 8 is a schematic flowchart of applying the method and device for correcting robot arm motion provided by an embodiment of the present application in a scenario example.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
Consider that most existing methods use high-precision equipment to collect relative position data between the robot arm's end effector and the object to be grasped, re-plan the motion path according to this relative position data, and then use the re-planned motion path to correct the motion of the robot arm. Since the acquired relative position data with respect to the object to be grasped has limited indicative power, it is impossible to adjust each moving arm individually; the motion path can only be re-planned from this position data to correct the motion of the robot arm as a whole. Therefore, existing methods often suffer from the technical problems of high cost and low efficiency in correcting robot arm motion. Addressing the root cause of these problems, the present application considers setting a target marker in advance on each moving arm, the base and the object to be grasped, then determining the pose information of each moving arm from the target markers in the acquired target image and the target image itself, so that the robot arm motion can be corrected in a targeted manner according to the specific pose information of each moving arm. This solves the technical problems of high cost and low efficiency in correcting robot arm motion in existing methods, and achieves the technical effect of efficiently and accurately correcting the motion of the robot arm.
Based on the above line of thought, an embodiment of the present application provides a method for correcting robot arm motion. Please refer to FIG. 1, the processing flowchart of the method for correcting robot arm motion provided according to an embodiment of the present application. When implemented, the method may include the following steps.
S11: acquire a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target markers, and the plurality of target markers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped.
In one embodiment, the above robot arm may specifically be a robot arm including a plurality of moving arms and a base, where the base may be fixed in the work area, for example on a working plane. In this way, the motion of the robot arm can be accomplished by controlling the coordinated movement of each of the plurality of moving arms; for example, the robot arm can be controlled to grasp the object to be grasped by coordinating the movement of each moving arm. Specifically, the robot arm may, for example, include three moving arms and one base, where the base may be denoted L1 and the three moving arms may be denoted L2, L3 and L4 respectively. Of course, it should be noted that the present application does not limit the specific number of the plurality of moving arms.
In one embodiment, the robot arm grasping the object to be grasped can be regarded as one kind of robot arm motion. Of course, it should be noted that the form of robot arm motion listed above is only intended to better illustrate the embodiments of the present application. In specific implementations, other forms of motion, for example controlling the robot arm to install a part at a designated position, may also be introduced as the robot arm motion to be corrected according to the specific situation and implementation requirements. The present application is not limited in this regard.
In one embodiment, the target image may specifically be a target image of one robot arm motion of grasping the object to be grasped. Of course, depending on the specific situation, the target image may also be a target image of another robot arm motion grasping the object to be grasped.
In one embodiment, the target image may specifically include a plurality of target markers, which may be graphic marks set in advance on each of the plurality of moving arms, the base and the object to be grasped. The plurality of target markers differ from one another, and each target marker has a unique correspondence with one target object among the moving arms, the base and the object to be grasped. Therefore, one target object can be uniquely identified through one target marker. For example, the second moving arm L3 can be uniquely determined from the target marker M3, and will not be mistakenly identified as the first moving arm L2.
In one embodiment, the target marker may specifically be a two-dimensional code image including four positioning points. The four positioning points in the target marker can be used to determine the position information of the target marker, from which the position information of the target object bearing the marker can in turn be determined. The two-dimensional code in the target marker is used to indicate the target object bearing the marker. Specifically, for example, the target object bearing the target marker M3 can be identified as the second moving arm L3 from the two-dimensional code in M3. That is, the four positioning points of the target markers set on different target objects may be the same, but the two-dimensional codes in the markers are different. For example, the target marker M3 on the second moving arm L3 and the target marker M2 on the first moving arm have the same positioning points but different two-dimensional codes.
In one embodiment, it should be added that when setting the corresponding target marker on a target object, in order to facilitate subsequent data analysis and processing, the edges of the square outline of the target marker are required to be parallel to the axes of the target object. Specifically, for example, when setting the target marker M3 on the second moving arm L3, the edges of the square outline of M3 are required to be aligned with the axis direction of the second moving arm.
In one embodiment, the target marker may be attached to the corresponding target object by pasting.
In one embodiment, acquiring the target image of the robot arm grasping the object to be grasped may specifically be done through a camera disposed at a preset position, where the preset position may specifically be a position in an area other than the robot arm. In this way, the target image obtained from this position can completely contain the target objects, i.e., each moving arm and the base of the robot arm as well as the object to be grasped.
In one embodiment, the preset position may specifically be a position from which the camera can acquire relatively complete frontal images of the target markers. Specifically, the preset position may be above the working area of the robot arm; for example, the camera may be placed at the lower-left corner above the working area, angled downward toward the robot arm. In this way, the camera at the preset position can acquire relatively complete frontal images of the target markers of all target objects. Of course, it should be noted that the preset position listed above is only intended to better illustrate the embodiments of the present application. In specific implementations, other suitable positions may also be selected as the preset position according to the specific situation and construction requirements. The present application is not limited in this regard.
S12:从所述目标图像中提取所述多个目标标识。
在一个实施方式中,为了后续能够准确地根据目标标识的图像计算出各个运动臂的当前位姿信息,具体实施时,需要上述目标图像进行处理,以提取得到效果较好的目标标识的正视图,再根据目标标识的正视图准确地确定出目标标识。具体的提取过程可以包括以下内容。
12-1:对所述目标图像进行四边形轮廓检测,以提取多个轮廓图像。
在本实施方式中,具体实施时,可以包括以下内容:利用自适应性阈值法对目标图像进行二值分割,得到对应的二值图像;从所述二值图像中提取轮廓图像。
在本实施方式中,在从所述二值图像中提取轮廓图像后,所述方法具体还可以包括:对轮廓图像进行多边形拟合,以舍弃不符合第一要求的轮廓图像;在通过限制条件剔除不符合第二要求的轮廓图像。如此,可以获得符合要求的多个轮廓图像,以便后续可以得到更加准确的目标标识的正视图。
在本实施方式中,上述不符合第一要求的轮廓图像具体可以包括:非凸多 边形轮廓图像、非四边形轮廓图像等。上述不符合第二要求的轮廓图像具体可以包括:四边形的一边明显小于其余边的轮廓图像、轮廓周长或面积过小的轮廓图像等,例如,形状比较瘦长的的轮廓图像。
12-2:对所述多个轮廓图像分别进行平面射影变换,以获取所述多个目标标识的正视图。
在一个实施方式中，为了获得较为准确的目标标识的正视图，具体实施时，可以先提取轮廓图像中的4个定位点坐标；通过标识数据库获取4个定位点的标准坐标；根据轮廓图像中的4个定位点坐标和4个定位点的标准坐标，建立射影变换矩阵；利用射影变换矩阵对轮廓图像中的像素点分别进行射影变换，以得到对应的目标标识的正视图。具体可以参阅图2所示的应用本申请实施方式提供的机械臂运动的校正方法和装置获得的目标标识的正视图的示意图。其中，图2左侧为轮廓图像，图2右侧为目标标识的正视图。如此，后续可以根据目标标识的正视图准确地识别确定出对应的目标标识。
在本实施方式中，具体的，例如，可以根据标识数据库，确定（四边形）轮廓图像经过射影变换转换成为正视图后的四个定位点（像素）坐标分别为(0,0)、(0,100)、(100,100)、(100,0)（即4个定位点的标准坐标）。根据已有的轮廓图像可以确定轮廓图像中四个定位点的坐标。具体实施时，对于每一个定位点对应有如下平面射影变换方程：
$$X' = HX$$
其中,H为射影变换矩阵,X为轮廓图像中四边形顶点即轮廓图像中四个定位点的齐次(像素)坐标,X'为射影变换后的对应的定位点的齐次坐标,x1、x2、x3为X中的元素,x1'、x2'、x3'为X'中的元素。
$$x = x_1 / x_3, \qquad y = x_2 / x_3$$
其中,x、y为对应的非齐次(像素)横坐标、纵坐标;
$$u = x_1' / x_3', \qquad v = x_2' / x_3'$$
其中u、v为射影变换后的非齐次像素横坐标、纵坐标。
根据上述四个定位点中各个定位点的对应关系，可联立方程组求解出射影变换矩阵H；进而可以对轮廓图像中的所有像素点利用上述射影变换矩阵H进行射影变换，即可以得到该轮廓图像对应的正视图，即该目标标识的正视图（也称平面标识图像）。其中，该目标标识的正视图的图像大小具体可以为100×100。

12-3：根据所述多个目标标识的正视图，确定所述多个目标标识。
在本实施方式中，具体实施时，可以根据图像处理后得到的目标标识的正视图准确地确定出其中对应的目标标识，以便后续分析、使用。
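步骤12-2中由定位点对应关系联立方程求解射影变换矩阵H的过程，可以用如下基于numpy的DLT（直接线性变换）草图示意（其中的角点坐标为示例性假设）：

```python
import numpy as np

def homography_dlt(src, dst):
    # 直接线性变换：每对对应点给出关于H九个元素的两个线性方程，
    # 叠加后取系数矩阵零空间的向量（SVD最小奇异值对应的右奇异向量）
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # 归一化使h33 = 1

def apply_homography(H, pts):
    # 齐次化、变换、再除以第三个分量回到非齐次像素坐标
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]
```

例如，将轮廓图像中检测到的4个定位点映射到标准坐标(0,0)、(0,100)、(100,100)、(100,0)后，再对轮廓图像内的像素逐点应用apply_homography，即可得到100×100的正视图。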
S13:根据所述多个目标标识和所述目标图像,分别确定所述多个运动臂中各个运动臂的当前位姿信息。
在一个实施方式中,为了能够减少计算的复杂度,高效、准确地确定多个运动臂中各个运动臂的当前位姿信息,具体实施时,可以按照以下步骤执行。
S13-1:根据所述多个目标标识,建立所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系。
在一个实施方式中，上述对应关系具体可以认为是目标标识与目标对象的指示关系。例如，确定目标标识M3指示的是第二运动臂L3，即可以认为建立了目标标识M3和第二运动臂L3的对应关系。如此，后续处理时，可以根据目标标识M3的位置关系来确定第二运动臂L3的位置关系。
在一个实施方式中,为了更加准确地建立上述对应关系,具体实施时,可以按照以下方式执行。
S1:对所述多个目标标识分别进行旋转处理,得到多个旋转处理后的标识图像;
S2:将所述多个旋转处理后的标识图像分别与标识数据库中的标识进行匹配,以确定所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系。
在本实施方式中，以一个目标标识的正视图所确定的目标标识为例，可以对该目标标识分别进行90度、180度、270度的旋转，包括没有进行旋转的目标标识，一共可以获得4个目标标识的图像，即上述旋转处理后的标识图像。具体可以参阅图3所示的应用本申请实施方式提供的机械臂运动的校正方法和装置对目标标识进行旋转处理的示意图。再将上述4个旋转处理后的标识图像分别与标识数据库中存储的标识即标准标识进行匹配，如果从数据库中搜索到一个标准标识与上述4个旋转处理后的标识图像中的至少一个匹配，则确认该标识为该目标标识的标准标识，根据该标准标识的指示信息确定该目标标识对应的目标对象，从而建立标准标识与目标标识的对应关系。其中，上述标识数据库存储有预先设置好的标准标识，上述标准标识携带有对应的指示信息，上述指示信息用于指示该标准标识对应的目标对象。
在本实施方式中,需要说明的是,通过先将每一个目标标识进行旋转处理得到多个不同角度的标识图像,利用多个不同角度的标识图像在标识数据库中进行搜索以确定相匹配的标准标识,可以改善匹配的准确度,提高搜索到与目标标识匹配的标准标识的速度。
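上述S1、S2两步的旋转匹配过程可以用如下Python草图示意（基于numpy；其中的标识位图与数据库内容均为示例性假设）：

```python
import numpy as np

def match_marker(marker, database):
    # 将检测到的标识位图旋转0/90/180/270度后，逐一与标识数据库中的
    # 标准标识比对；返回匹配到的目标对象名称，未匹配则返回None
    candidates = [np.rot90(marker, k) for k in range(4)]
    for name, ref in database.items():
        if any(c.shape == ref.shape and np.array_equal(c, ref)
               for c in candidates):
            return name
    return None
```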
S13-2:根据所述目标图像、所述对应关系,分别确定多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、底座基于摄像头的旋转矩阵和平移向量、待抓取物基于摄像头的旋转矩阵和平移向量。
在本实施方式中，由于考虑到摄像头的位置是固定的，具体可以以摄像头所在位置的坐标系为基准坐标系，进而可以分别确定各个目标对象的坐标系与摄像头的坐标系之间的位置关系，即：多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、底座基于摄像头的旋转矩阵和平移向量、待抓取物基于摄像头的旋转矩阵和平移向量，以便后续实施时，可以利用上述位置关系进一步确定各个目标对象的具体位置。
在本实施方式中，具体的，可以以待抓取物为例，待抓取物的目标标识为M1，针对M1自身的坐标系，可以以M1的中心O1为原点，则该目标标识M1所在的平面上的点的Z轴坐标都为0。具体可以设M1的边长为80mm，则其四个定位点在目标标识M1自身的坐标系O1中的坐标分别为(-40,-40,0)、(40,-40,0)、(40,40,0)、(-40,40,0)。目标标识M1上的四个定位点中，每个定位点在目标图像上的像素坐标与其在自身坐标系O1中的坐标存在以下转换关系：
$$s\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = M\,[\,r_1\ \ r_2\ \ r_3\ \ t\,]\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$
其中，(x,y,1)表示目标标识M1中任一定位点在基于摄像头的坐标系中像素坐标的齐次坐标，(X,Y,Z,1)表示上述定位点在世界坐标系中的齐次坐标，具体可以简化为(X,Y,0,1)，s为引入的任意尺度比例参数，M为摄像头内部参数矩阵，r1、r2、r3分别表示目标标识坐标系相对于摄像头的坐标系的旋转矩阵R1中的三个列向量，t为平移向量。其中，上述M具体可以通过对摄像头进行标定获得。
按照上述方式，通过目标标识M1的一个定位点的像素坐标及该点在对应的目标标识坐标系中的坐标可以确定出一组方程。目标标识的四个定位点对应可以确定出四组方程，通过直接线性变换（DLT）算法对上述方程组进行求解，可以得到向量r1、r2、r3与t。其中，旋转矩阵R1是正交矩阵，其列向量r1、r2、r3是相互正交的单位向量，当求出r1、r2时，r3可由r1、r2的向量积得到。旋转矩阵R1=(r1,r2,r3)即为待抓取物基于摄像头的旋转矩阵，t即为待抓取物基于摄像头的平移向量。
同理,可以分别计算出多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、底座基于摄像头的旋转矩阵和平移向量。
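上述由方程组解出r1、r2、t并利用旋转矩阵列向量的正交性补出r3的过程，可以用如下草图示意（基于numpy；假定内参矩阵M（代码中记为K）已通过标定获得，且H为由定位点对应关系解出的、标识平面到图像的单应矩阵）：

```python
import numpy as np

def pose_from_homography(K, H):
    # 标识平面Z=0时，H = s * K * [r1 r2 t]，故K^{-1}H的前两列按比例
    # 等于r1、r2；用||r1|| = 1确定比例系数，再由叉积r3 = r1 x r2补出第三列
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)
    t = lam * A[:, 2]
    return np.column_stack([r1, r2, r3]), t
```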
S13-3:根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定所述多个运动臂中各个运动臂的当前位姿信息。
在一个实施方式中,为了能够准确地计算出各个运动臂的当前位姿信息,具体实施时,上述根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定所述多个运动臂中各个运动臂的当前位姿信息,可以包括以下内容。
S1:根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定多个运动臂中各个运动臂基于待抓取物的旋转矩阵和平移向量;
S2:根据所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定待抓取物基于底座的旋转矩阵和平移向量;
S3:根据所述多个运动臂中各个运动臂基于待抓取物的旋转矩阵和平移向量、所述待抓取物基于底座的旋转矩阵和平移向量,确定多个运动臂中各个运动臂基于底座的旋转矩阵;
S4：根据所述多个运动臂中各个运动臂基于底座的旋转矩阵，确定所述多个运动臂中各个运动臂的当前位姿信息。
在一个实施方式中,上述多个运动臂中各个运动臂的当前位姿信息包括:所述多个运动臂中各个运动臂基于底座的当前欧拉角。其中,上述各个运动臂的当前欧拉角具体可以是指各个运动臂当前基于底座的坐标系的欧拉角。
在本实施方式中,需要说明的是,由于目标标识的边与目标对象的轴线平行,因此所建立的目标标识的自身坐标系的坐标轴与目标对象自身的坐标系的坐标轴分别相互平行。例如,目标标识M2的以O2为原点的坐标系的X轴,与运动臂L2以运动臂右侧端点OL2为原点的坐标系的X轴平行。目标标识的自身坐标系相对于某一基准坐标系的当前欧拉角与目标对象的自身坐标系相对于同一基准坐标系的当前欧拉角相同。因此,可以使用目标标识的欧拉角作为目标对象的欧拉角,作为目标对象例如各个运动臂的位姿信息。
在一个实施方式中,具体实施时,可以按照以下公式分别确定多个运动臂中各个运动臂的当前位姿信息。
$$\theta_x = \operatorname{atan2}(r_{32},\ r_{33})$$
$$\theta_y = \operatorname{atan2}\left(-r_{31},\ \sqrt{r_{32}^2 + r_{33}^2}\right)$$
$$\theta_z = \operatorname{atan2}(r_{21},\ r_{11})$$
其中，上述θx、θy、θz具体可以表示运动臂基于底座坐标系的三个坐标轴的欧拉角，上述r_ij具体可以表示运动臂基于底座的旋转矩阵中第i行第j列的元素，例如，r_11表示运动臂基于底座的旋转矩阵中第1行第1列的元素。
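上述三个公式可以用如下Python函数示意（基于numpy，对应依次绕x、y、z轴旋转即R = Rz·Ry·Rx的约定）：

```python
import numpy as np

def euler_from_rotation(R):
    # 由旋转矩阵元素r_ij反解依次绕x、y、z轴旋转的三个欧拉角
    theta_x = np.arctan2(R[2, 1], R[2, 2])
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    theta_z = np.arctan2(R[1, 0], R[0, 0])
    return theta_x, theta_y, theta_z
```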
S14:根据所述多个运动臂中各个运动臂的当前位姿信息,对当前机械臂运动进行校正。
在一个实施方式中,具体实施时,上述根据所述多个运动臂中各个运动臂的当前位姿信息,对当前机械臂运动进行校正,具体可以包括以下内容:
S1:将所述多个运动臂中各个运动臂的当前位姿信息与所述当前机械臂运动对应的运动路径进行比较,以确定多个运动臂中各个运动臂的当前位姿偏差;
S2:根据所述多个运动臂中各个运动臂的当前位姿偏差对所述多个运动臂中各个运动臂分别进行校正。
如此,可以针对各个运动臂的当前具体的位姿偏差,在原有的运动路径中针对各个运动臂分别进行针对性地校正,避免了重新规划运动路径,达到了可以及时、准确、快速地对机械臂运动进行校正的目的。
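上述S1、S2两步的偏差计算与校正可以用如下极简草图示意（基于numpy；其中角度回绕处理与比例增益gain均为示例性假设，并非本申请限定的控制方式）：

```python
import numpy as np

def pose_deviation(current_euler, planned_euler):
    # 当前欧拉角与运动路径规划值之差，并回绕到[-pi, pi)区间
    d = np.asarray(current_euler, float) - np.asarray(planned_euler, float)
    return (d + np.pi) % (2 * np.pi) - np.pi

def corrected_command(planned_command, deviation, gain=1.0):
    # 在原有运动路径的指令上叠加比例修正，而非重新规划路径
    return np.asarray(planned_command, float) - gain * deviation
```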
在本申请实施例中,相较于现有技术,通过事先在各个运动臂、底座、待抓取物上设置目标标识;再根据所获取的目标图像中的目标标识以及目标图像分别确定各个运动臂的位姿信息,进而可以根据各个运动臂的位姿信息,对机械臂运动进行针对性的校正,从而解决了现有方法中存在的校正机械臂运动成本高、效率低的技术问题,达到能够高效、精确地校正机械臂运动的技术效果。
在一个实施方式中，在获取机械臂抓取待抓取物的目标图像之前，所述方法具体还可以包括：对摄像头进行标定，以获取摄像头的内参矩阵（即内部参数矩阵）以及畸变系数等参数，其中，内参矩阵包含摄像头的焦距等参数。以便后续处理时，可以准确地确定出多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、底座基于摄像头的旋转矩阵和平移向量、待抓取物基于摄像头的旋转矩阵和平移向量。
在一个实施方式中，上述对摄像头进行标定具体可以包括以下内容：
S1：将一个目标标识设置于测试位置，其中，上述测试位置具体可以是一个平面；
S2:从多个不同的角度获取多个关于目标标识的图像;
S3:从所述多个关于目标标识的图像中检测出图像中的特征点;
S4：根据所述特征点在世界坐标系中的坐标位置及其在图像中对应的像素坐标，联立方程组求出摄像头的内参数、外参数及畸变系数、焦距等参数，以建立关于该摄像头的内参矩阵。
在一个实施方式中,上述摄像头具体可以是单目摄像头。当然,需要说明的是,上述所列举的单目摄像头只是为了更好地说明本申请实施方式。具体实施时,也可以根据具体情况选择使用其他合适的摄像头,例如,双目摄像头。对此,本申请不作限定。
从以上的描述中，可以看出，本申请实施例提供的机械臂运动的校正方法，由于通过事先在各个运动臂、底座、待抓取物上设置目标标识；再根据所获取的目标图像中的目标标识以及目标图像分别确定各个运动臂的位姿信息，进而可以根据各个运动臂的位姿信息，对机械臂运动进行针对性的校正，从而解决了现有方法中存在的校正机械臂运动成本高、效率低的技术问题，达到能够高效、精确地校正机械臂运动的技术效果；又通过利用设置在机械臂之外的摄像头获取带有机械臂中各个运动臂、底座的目标标识的目标图像，进而确定出具体的各个运动臂的当前位姿信息，并根据各个运动臂当前的位姿信息，基于原有的运动路径进行对应校正，从而在保证校正精度的同时，提高了校正的效率。
基于同一发明构思,本发明实施例中还提供了一种机械臂运动的校正装置,如下面的实施例所述。由于机械臂运动的校正装置解决问题的原理与机械臂运动的校正方法相似,因此机械臂运动的校正装置的实施可以参见机械臂运动的校正方法的实施,重复之处不再赘述。以下所使用的,术语“单元”或者“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。请参阅图4,是本申请实施例提供的机械臂运动的校正装置的一种组成结构图,该装置具体可以包括:获取模块21、提取模块22、确定模块23和校正模块24,下面对该结构进行具体说明。
获取模块21,具体可以用于获取机械臂抓取待抓取物的目标图像,其中,所述机械臂包括多个运动臂和底座,所述目标图像包括多个目标标识,所述多个目标标识分别设于多个运动臂中的各个运动臂、底座和待抓取物上;
提取模块22,具体可以用于从所述目标图像中提取所述多个目标标识;
确定模块23,具体可以用于根据所述多个目标标识和所述目标图像,分别确定所述多个运动臂中各个运动臂的当前位姿信息;
校正模块24,具体可以用于根据所述多个运动臂中各个运动臂的当前位姿信息,对当前机械臂运动进行校正。
在一个实施方式中,所述获取模块21具体可以包括摄像头,所述摄像头具体可以设置于预设位置处,所述摄像头具体可以用于获取所述机械臂抓取待抓取物的目标图像,其中,所述预设位置具体可以包括除机械臂以外的区域。
在一个实施方式中,所述提取模块22具体可以包括以下结构单元:
第一提取单元,具体可以用于对所述目标图像进行四边形轮廓检测,以提取多个轮廓图像;
获取单元,具体可以用于对所述多个轮廓图像分别进行平面射影变换,以获取所述多个目标标识的正视图;
第一确定单元,具体可以用于根据所述多个目标标识的正视图,确定所述多个目标标识。
在一个实施方式中,所述确定模块23具体可以包括以下结构单元:
第一建立单元,具体可以用于根据所述多个目标标识,建立所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系;
第二确定单元,具体可以用于根据所述目标图像、所述对应关系,分别确定多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、底座基于摄像头的旋转矩阵和平移向量、待抓取物基于摄像头的旋转矩阵和平移向量;
第三确定单元,具体可以用于根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定所述多个运动臂中各个运动臂的当前位姿信息。
在一个实施方式中,所述第一建立单元具体可以包括以下结构子单元:
旋转处理子单元,具体可以用于对所述多个目标标识分别进行旋转处理,得到多个旋转处理后的标识图像;
确定子单元,具体可以用于将所述多个旋转处理后的标识图像分别与标识数据库中的标识进行匹配,以确定所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系。
在一个实施方式中,所述确定模块23具体可以包括以下结构单元:
第四确定单元,具体可以用于根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定多个运动臂中各个运动臂基于待抓取物的旋转矩阵和平移向量;
第五确定单元，具体可以用于根据所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量，确定待抓取物基于底座的旋转矩阵和平移向量；
第六确定单元,具体可以用于根据所述多个运动臂中各个运动臂基于待抓取物的旋转矩阵和平移向量、所述待抓取物基于底座的旋转矩阵和平移向量,确定多个运动臂中各个运动臂基于底座的旋转矩阵;
第七确定单元,具体可以用于根据所述多个运动臂中各个运动臂基于底座的旋转矩阵,确定所述多个运动臂中各个运动臂的当前位姿信息。
在一个实施方式中,所述多个运动臂中各个运动臂的当前位姿信息具体可以包括:所述多个运动臂中各个运动臂基于底座的当前欧拉角。
在一个实施方式中,所述校正模块24具体可以包括以下结构单元:
比较单元,具体可以用于将所述多个运动臂中各个运动臂的当前位姿信息与所述当前机械臂运动对应的运动路径进行比较,以确定多个运动臂中各个运动臂的当前位姿偏差;
校正单元,具体可以用于根据所述多个运动臂中各个运动臂的当前位姿偏差对所述多个运动臂中各个运动臂分别进行校正。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
需要说明的是,上述实施方式阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。为了描述的方便,在本说明书中,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
此外，在本说明书中，诸如第一和第二这样的形容词仅可以用于将一个元素或动作与另一元素或动作进行区分，而不必要求或暗示任何实际的这种关系或顺序。在环境允许的情况下，参照元素或部件或步骤（等）不应解释为局限于仅元素、部件、或步骤中的一个，而可以是元素、部件、或步骤中的一个或多个等。
从以上的描述中，可以看出，本申请实施例提供的机械臂运动的校正方法和装置，由于通过事先在各个运动臂、底座、待抓取物上设置目标标识；再通过获取模块、提取模块和确定模块根据所获取的目标图像中的目标标识以及目标图像分别确定各个运动臂的位姿信息，进而可以根据各个运动臂的位姿信息，通过校正模块对机械臂运动进行针对性的校正，从而解决了现有方法中存在的校正机械臂运动成本高、效率低的技术问题，达到能够高效、精确地校正机械臂运动的技术效果；又通过利用设置在机械臂之外的摄像头获取带有机械臂中各个运动臂、底座的目标标识的目标图像，进而确定出具体的各个运动臂的当前位姿信息，并根据各个运动臂当前的位姿信息基于原有的运动路径进行对应校正，从而在保证校正精度的同时，提高了校正的效率。
本申请实施方式还提供了一种电子设备，具体可以参阅图5所示的基于本申请实施方式提供的机械臂运动的校正方法的电子设备组成结构图，所述电子设备具体可以包括输入设备31、处理器32、存储器33。其中，所述输入设备31具体可以用于输入机械臂抓取待抓取物的目标图像，其中，所述机械臂包括多个运动臂和底座，所述目标图像包括多个目标标识，所述多个目标标识分别设于多个运动臂中的各个运动臂、底座和待抓取物上。所述处理器具体可以用于从所述目标图像中提取所述多个目标标识；根据所述多个目标标识和所述目标图像，分别确定所述多个运动臂中各个运动臂的当前位姿信息；根据所述多个运动臂中各个运动臂的当前位姿信息，对当前机械臂运动进行校正。所述存储器33具体可以用于存储输入设备31输入的目标图像，以及处理器运行过程中产生的各种中间数据和结果数据。
在本实施方式中，所述输入设备具体可以是用户和计算机系统之间进行信息交换的主要装置之一。所述输入设备可以包括键盘、鼠标、摄像头、扫描仪、光笔、手写输入板、语音输入装置等；输入设备用于把原始数据和处理这些数据的程序输入到计算机中。所述输入设备还可以获取接收其他模块、单元、设备传输过来的数据。所述处理器可以按任何适当的方式实现。例如，处理器可以采取例如微处理器或处理器以及存储可由该（微）处理器执行的计算机可读程序代码（例如软件或固件）的计算机可读介质、逻辑门、开关、专用集成电路（Application Specific Integrated Circuit，ASIC）、可编程逻辑控制器和嵌入微控制器的形式等等。所述存储器具体可以是现代信息技术中用于保存信息的记忆设备。所述存储器包括很多层次，在数字系统中，只要能保存二进制数据的都可以是存储器；在集成电路中，一个没有实物形式的具有存储功能的电路也叫存储器，如RAM、FIFO等；在系统中，具有实物形式的存储设备也叫存储器，如内存条、TF卡等。
在本实施方式中,该电子设备具体实现的功能和效果,可以与其它实施方式对照解释,在此不再赘述。
本申请实施方式中还提供了一种基于上述机械臂运动的校正方法的计算机存储介质，所述计算机存储介质存储有计算机程序指令，在所述计算机程序指令被执行时实现：获取机械臂抓取待抓取物的目标图像，其中，所述机械臂包括多个运动臂和底座，所述目标图像包括多个目标标识，所述多个目标标识分别设于多个运动臂中的各个运动臂、底座和待抓取物上；从所述目标图像中提取所述多个目标标识；根据所述多个目标标识和所述目标图像，分别确定所述多个运动臂中各个运动臂的当前位姿信息；根据所述多个运动臂中各个运动臂的当前位姿信息，对当前机械臂运动进行校正。
在本实施方式中，上述存储介质包括但不限于随机存取存储器（Random Access Memory，RAM）、只读存储器（Read-Only Memory，ROM）、缓存（Cache）、硬盘（Hard Disk Drive，HDD）或者存储卡（Memory Card）。所述存储器可以用于存储计算机程序指令。网络通信单元可以是依照通信协议规定的标准设置的，用于进行网络连接通信的接口。
在本实施方式中,该计算机存储介质存储的程序指令具体实现的功能和效果,可以与其它实施方式对照解释,在此不再赘述。
在一个具体实施场景示例中，应用本申请提供的机械臂运动的校正方法和装置对某包含多个运动臂的机械臂抓取待抓取物的运动进行校正。具体实施过程，具体可以参阅图6所示的在一个场景示例中应用本申请实施方式提供的机械臂运动的校正方法和装置对机械臂抓取待抓取物的场景示意图，按照以下内容执行。
在本实施方式中,具体实施前,可以事先分别在机械臂的多个运动臂中的各个运动臂、底座和待抓取物上设置唯一的平面标志(即目标标识),具体设置时要保证平面标志的边与目标对象的轴线平行。
在本实施方式中，具体布设平面标志时可以参阅图7所示的在一个场景示例中，应用本申请实施方式提供的机械臂运动的校正方法和装置之前，在多个运动臂中的一个运动臂设置相应的目标标识的示意图。以在运动臂L2上布设平面标志M2为例。平面标志M2具体可以固定在运动臂L2上能被摄像头拍摄到的位置。其中，可以以M2的中心O2作为平面标志M2坐标系的原点。以运动臂的右侧端点OL2作为运动臂L2坐标系的原点。如图可知在布设平面标识M2时，需要使平面标志M2坐标系O2与运动臂L2坐标系OL2的坐标轴分别平行（即使得平面标志的边与目标对象的轴线平行）。按照相同的方法，分别将平面标志M1、M3、M4分别布设于待抓取物、运动臂L3、底座L1上。其中，上述平面标志的坐标系原点分别可以设为O1、O3、O4。
在本实施方式中，在布设完上述平面标志后，还需要将摄像头布设于机械臂以外的位置。具体的，可以将摄像头布设于工作平台的上方，如此，摄像头可以获取包括上述平面标志M1、M2、M3、M4的目标图像。其中，上述摄像头的坐标系的原点可以设为O5。
布设完上述平面标志和摄像头后可以参阅图8所示的在一个场景示例中应用本申请实施方式提供的机械臂运动的校正方法和装置的流程示意图,按照以下步骤对机械臂抓取待抓取物的运动进行实时校正。
S1:通过摄像头获取视频流,并通过上述视频流,获取包含有平面标志M1、M2、M3、M4的目标图像。
S2:根据目标图像,进行标识物(即平面标志或目标标识)检测,确定标识物的ID(即建立目标标识与目标对象的对应关系)。
S3:根据标识物分别计算(基于底座的)旋转矩阵R和平移向量t。
S4:根据旋转矩阵R和平移向量t计算运动臂的位姿(即位姿信息),进而确定机械臂的位姿。
S5:对机械臂的运动进行位姿校正(或矫正)。
在本实施方式中,可以根据标识物分别计算(基于底座的)旋转矩阵R和平移向量t,进而可以进一步确定出运动臂的位姿。具体的,以计算运动臂L2的位姿为例说明如何具体确定各个运动臂的位姿。具体可以包括以下内容。
S1:计算平面标志物M2坐标系O2相对于底座坐标系O4的旋转矩阵R4及平移向量t4。
具体实施时，可以计算出平面标志物M1、M2的坐标系O1、O2相对于摄像头坐标系O5的旋转矩阵R1、R2及平移向量t1、t2。已知矩阵[R1,t1]即为平面标志物M1坐标系O1相对于摄像头坐标系O5的转换矩阵，求该矩阵的逆矩阵[R1,t1]^(-1)即为摄像头坐标系O5相对于平面标志物M1坐标系O1的转换矩阵。已知矩阵[R2,t2]即为平面标志物M2坐标系O2相对于摄像头坐标系O5的转换矩阵，则矩阵[R2,t2][R1,t1]^(-1)即为平面标志物M2坐标系O2相对于平面标志物M1坐标系O1的转换矩阵。
又由于机械臂的底座与平面标志物M1在工作平面上的位置相对固定，所以两个坐标系之间的转换关系为常量，即平面标志物M1坐标系O1相对于机械臂底座的坐标系O4的旋转矩阵R0与平移向量t0为定值。矩阵[R0,t0]为平面标志物M1坐标系O1相对于机械臂底座坐标系O4的转换矩阵，则平面标志物M2坐标系O2相对于机械臂底座的坐标系O4的转换矩阵可以表示为[R2,t2][R1,t1]^(-1)[R0,t0]。则该转换矩阵的前三列组成的矩阵即为平面标志物M2坐标系O2相对于机械臂底座的坐标系O4的旋转矩阵R4，第四列即为平移向量t4。
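上述坐标系转换矩阵的链式运算可以用4×4齐次变换矩阵示意如下（基于numpy；这里采用“a_T_b表示将b坐标系下的点变换到a坐标系”的约定，该约定属于本示例的假设）：

```python
import numpy as np

def make_T(R, t):
    # 由旋转矩阵R与平移向量t构造4x4齐次变换矩阵[R, t]
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def marker2_in_base(cam_T_m1, cam_T_m2, base_T_m1):
    # 在上述约定下，链式组合为：
    # base_T_m2 = base_T_m1 @ inv(cam_T_m1) @ cam_T_m2
    return base_T_m1 @ np.linalg.inv(cam_T_m1) @ cam_T_m2
```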
S2:根据旋转矩阵计算平面标志物M2坐标系O2相对于机械臂底座的坐标系O4的欧拉角。
具体实施时，由于任何一个旋转都可以表示为依次绕着三个旋转轴旋转三个角度，由平面标志物M2的坐标系O2相对于底座坐标系O4的旋转矩阵R4可以求得坐标系O2相对于坐标系O4的三个坐标轴X-Y-Z的旋转角θx、θy、θz，具体计算可以按照以下方式执行：
单独绕一个坐标轴旋转θ角的三个基本旋转矩阵分别为：

$$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}$$

$$R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}$$

$$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
具体的,可以记三个轴欧拉角的正弦和余弦函数分别为:sx、cx、sy、cy、sz、cz,如果依次绕x轴、y轴、z轴旋转,该变换的旋转矩阵可以表示为:
$$R = R_z(\theta_z)\,R_y(\theta_y)\,R_x(\theta_x) = \begin{pmatrix} c_y c_z & s_x s_y c_z - c_x s_z & c_x s_y c_z + s_x s_z \\ c_y s_z & s_x s_y s_z + c_x c_z & c_x s_y s_z - s_x c_z \\ -s_y & s_x c_y & c_x c_y \end{pmatrix}$$
进而，可以设平面标志物M2坐标系O2相对于底座坐标系O4的旋转矩阵R4的第i行第j列元素为r_ij，根据旋转矩阵的表达式，利用三角函数可以推导出欧拉角的数值为：
$$\theta_x = \operatorname{atan2}(r_{32},\ r_{33})$$
$$\theta_y = \operatorname{atan2}\left(-r_{31},\ \sqrt{r_{32}^2 + r_{33}^2}\right)$$
$$\theta_z = \operatorname{atan2}(r_{21},\ r_{11})$$
进一步，由于平面标志M2固定在运动臂L2上且平面标志M2的坐标系O2与运动臂L2的坐标系OL2的坐标轴分别平行，以上过程已经计算出平面标志M2的坐标系O2相对于机械臂底座坐标系O4的三个坐标轴的欧拉角θx、θy、θz，所以运动臂L2的坐标系OL2相对于机械臂底座坐标系O4的三个坐标轴的欧拉角也为θx、θy、θz。因此，可以根据以上求得的平面标志M2的欧拉角度唯一确定运动臂L2在机械臂底座坐标系O4中的位姿，即为运动臂L2的当前位姿。
按照相似的方法可以计算得到运动臂L3的位姿。但是也可以根据运动臂L3和L2的相对关系,基于L2的位姿更加快速地确定L3的位姿。具体的,由于运动臂L3连接在运动臂L2上,以上过程已经计算出运动臂L2在机械臂底座坐标系O4下的位姿,只需要计算出运动臂L3在运动臂L2坐标系下的姿态,就可以确定运动臂L3在机械臂底座坐标系O4中的位姿。
具体实施时，可以先计算平面标志物M3坐标系O3相对于平面标志物M2坐标系O2的旋转矩阵及平移向量：其中，已知矩阵[R2,t2]即为平面标志物M2坐标系O2相对于摄像机坐标系O5的转换矩阵，求该矩阵的逆矩阵[R2,t2]^(-1)即为摄像机坐标系O5相对于平面标志物M2坐标系O2的转换矩阵。又由于矩阵[R3,t3]即为平面标志物M3坐标系O3相对于摄像机坐标系O5的转换矩阵，则矩阵[R3,t3][R2,t2]^(-1)即为平面标志物M3坐标系O3相对于平面标志物M2坐标系O2的转换矩阵。
与以上过程同理，可以算出平面标志M3的坐标系O3相对于平面标志M2的坐标系O2的三个坐标轴的欧拉角θx、θy、θz。已知平面标志M3的坐标系O3与运动臂L3的坐标系OL3的坐标轴分别平行，平面标志M2的坐标系O2与运动臂L2的坐标系OL2的坐标轴分别平行。所以，运动臂L3的坐标系OL3相对于运动臂L2的坐标系OL2的三个坐标轴的欧拉角也为θx、θy、θz。从而可以确定出各个运动臂基于机械臂底座的位姿信息。
在本实施方式中,在确定出各个运动臂相对于或者基于机械臂底座的位姿,即运动臂的坐标系相对于底座坐标系的欧拉角后,可以利用上述实时测得的当前位姿,即当前欧拉角对机械臂中各个运动臂具体执行过程进行实时矫正,从而能够较好地提高机械臂整体的执行精度,减小执行误差。
通过上述场景示例,验证了本申请实施方式提供的机械臂运动的校正方法和装置,由于通过事先在各个运动臂、底座、待抓取物上设置目标标识;再根据所获取的目标图像中的目标标识以及目标图像分别确定各个运动臂的位姿信息,进而可以根据各个运动臂的位姿信息,对机械臂运动进行针对性的校正,确实解决了现有方法中存在的校正机械臂运动成本高、效率低的技术问题,达到能够高效、精确地校正机械臂运动的技术效果。
尽管本申请内容中提到不同的具体实施方式,但是,本申请并不局限于必须是行业标准或实施例所描述的情况等,某些行业标准或者使用自定义方式或实施例描述的实施基础上略加修改后的实施方案也可以实现上述实施例相同、等同或相近、或变形后可预料的实施效果。应用这些修改或变形后的数据获取、处理、输出、判断方式等的实施例,仍然可以属于本申请的可选实施方案范围之内。
本领域技术人员也知道，除了以纯计算机可读程序代码方式实现控制器以外，完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件，而对其内部包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至，可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构、类等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本说明书中的各个实施例采用递进的方式描述,各个实施例之间相同或相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。本申请可用于众多通用或专用的计算机系统环境或配置中。例如:个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的电子设备、网络PC、小型计算机、大型计算机、包括以上任何系统或设备的分布式计算环境等等。
虽然通过实施例描绘了本申请,本领域普通技术人员知道,本申请有许多变形和变化而不脱离本申请的精神,希望所附的实施方式包括这些变形和变化而不脱离本申请。

Claims (10)

  1. 一种机械臂运动的校正方法,其特征在于,包括:
    获取机械臂抓取待抓取物的目标图像,其中,所述机械臂包括多个运动臂和底座,所述目标图像包括多个目标标识,所述多个目标标识分别设于多个运动臂中的各个运动臂、底座和待抓取物上;
    从所述目标图像中提取多个目标标识;
    根据所述多个目标标识和所述目标图像,分别确定所述多个运动臂中各个运动臂的当前位姿信息;
    根据所述多个运动臂中各个运动臂的当前位姿信息,对当前机械臂运动进行校正。
  2. 根据权利要求1所述的方法,其特征在于,所述获取机械臂抓取待抓取物的目标图像,包括:
    通过设置于预设位置处的摄像头获取所述机械臂抓取待抓取物的目标图像,其中,所述预设位置包括除机械臂以外的区域。
  3. 根据权利要求1所述的方法,其特征在于,所述目标标识为包括4个定位点的二维码图像。
  4. 根据权利要求2所述的方法,其特征在于,从所述目标图像中提取所述多个目标标识,包括:
    对所述目标图像进行四边形轮廓检测,以提取多个轮廓图像;
    对所述多个轮廓图像分别进行平面射影变换,以获取所述多个目标标识的正视图;
    根据所述多个目标标识的正视图,确定所述多个目标标识。
  5. 根据权利要求2所述的方法,其特征在于,根据所述多个目标标识和所述目标图像,分别确定所述多个运动臂中各个运动臂的当前位姿信息,包括:
    根据所述多个目标标识，建立所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系；
    根据所述目标图像、所述对应关系,分别确定多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、底座基于摄像头的旋转矩阵和平移向量、待抓取物基于摄像头的旋转矩阵和平移向量;
    根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定所述多个运动臂中各个运动臂的当前位姿信息。
  6. 根据权利要求5所述的方法,其特征在于,根据所述多个目标标识,建立所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系,包括:
    对所述多个目标标识分别进行旋转处理,得到多个旋转处理后的标识图像;
    将所述多个旋转处理后的标识图像分别与标识数据库中的标识进行匹配,以确定所述多个目标标识与所述多个运动臂、底座、待抓取物的对应关系。
  7. 根据权利要求5所述的方法,其特征在于,所述根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定所述多个运动臂中各个运动臂的当前位姿信息,包括:
    根据所述多个运动臂中各个运动臂基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定多个运动臂中各个运动臂基于待抓取物的旋转矩阵和平移向量;
    根据所述底座基于摄像头的旋转矩阵和平移向量、所述待抓取物基于摄像头的旋转矩阵和平移向量,确定待抓取物基于底座的旋转矩阵和平移向量;
    根据所述多个运动臂中各个运动臂基于待抓取物的旋转矩阵和平移向量、所述待抓取物基于底座的旋转矩阵和平移向量，确定多个运动臂中各个运动臂基于底座的旋转矩阵；
    根据所述多个运动臂中各个运动臂基于底座的旋转矩阵,确定所述多个运动臂中各个运动臂的当前位姿信息。
  8. 根据权利要求7所述的方法,其特征在于,所述多个运动臂中各个运动臂的当前位姿信息包括:所述多个运动臂中各个运动臂基于底座的当前欧拉角。
  9. 根据权利要求1所述的方法,其特征在于,根据所述多个运动臂中各个运动臂的当前位姿信息,对当前机械臂运动进行校正,包括:
    将所述多个运动臂中各个运动臂的当前位姿信息与所述当前机械臂运动对应的运动路径进行比较,以确定多个运动臂中各个运动臂的当前位姿偏差;
    根据所述多个运动臂中各个运动臂的当前位姿偏差对所述多个运动臂中各个运动臂分别进行校正。
  10. 一种机械臂运动的校正装置,其特征在于,包括:
    获取模块,用于获取机械臂抓取待抓取物的目标图像,其中,所述机械臂包括多个运动臂和底座,所述目标图像包括多个目标标识,所述多个目标标识分别设于多个运动臂中的各个运动臂、底座和待抓取物上;
    提取模块,用于从所述目标图像中提取所述多个目标标识;
    确定模块,用于根据所述多个目标标识和所述目标图像,分别确定所述多个运动臂中各个运动臂的当前位姿信息;
    校正模块,用于根据所述多个运动臂中各个运动臂的当前位姿信息,对当前机械臂运动进行校正。
PCT/CN2018/104735 2017-12-11 2018-09-08 机械臂运动的校正方法和装置 WO2019114339A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711305455.4 2017-12-11
CN201711305455.4A CN107813313A (zh) 2017-12-11 2017-12-11 机械臂运动的校正方法和装置

Publications (1)

Publication Number Publication Date
WO2019114339A1 true WO2019114339A1 (zh) 2019-06-20

Family

ID=61606459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/104735 WO2019114339A1 (zh) 2017-12-11 2018-09-08 机械臂运动的校正方法和装置

Country Status (2)

Country Link
CN (1) CN107813313A (zh)
WO (1) WO2019114339A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111702755A (zh) * 2020-05-25 2020-09-25 淮阴工学院 一种基于多目立体视觉的机械臂智能控制系统

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107813313A (zh) * 2017-12-11 2018-03-20 南京阿凡达机器人科技有限公司 机械臂运动的校正方法和装置
CN108297079B (zh) * 2018-03-30 2023-10-13 中山市中科智能制造研究院有限公司 一种蛇形机械臂及其姿态变化的获取方法
WO2019201423A1 (en) * 2018-04-17 2019-10-24 Abb Schweiz Ag Method for controlling a robot arm
CN108994830A (zh) * 2018-07-12 2018-12-14 上海航天设备制造总厂有限公司 用于打磨机器人离线编程的系统标定方法
CN110741413B (zh) * 2018-11-29 2023-06-06 深圳市瑞立视多媒体科技有限公司 一种刚体配置方法及光学动作捕捉方法
CN109397249B (zh) * 2019-01-07 2020-11-06 重庆大学 基于视觉识别的二维码定位抓取机器人系统的方法
CN109829439B (zh) * 2019-02-02 2020-12-29 京东方科技集团股份有限公司 一种对头部运动轨迹预测值的校准方法及装置
CN113613850B (zh) * 2019-06-17 2022-08-12 西门子(中国)有限公司 一种坐标系校准方法、装置和计算机可读介质
KR102400965B1 (ko) * 2019-11-25 2022-05-25 재단법인대구경북과학기술원 로봇 시스템 및 그 보정 방법
CN111002311A (zh) * 2019-12-17 2020-04-14 上海嘉奥信息科技发展有限公司 基于光学定位仪的多段位移修正机械臂定位方法及系统
CN111360832B (zh) * 2020-03-18 2021-04-20 南华大学 提高破拆机器人末端工具远程对接精度的方法
WO2022037356A1 (zh) * 2020-08-19 2022-02-24 北京术锐技术有限公司 机器人系统以及控制方法
CN112164112B (zh) * 2020-09-14 2024-05-17 北京如影智能科技有限公司 一种获取机械臂位姿信息的方法及装置
CN114619441B (zh) * 2020-12-10 2024-03-26 北京极智嘉科技股份有限公司 机器人、二维码位姿检测的方法
CN113240739B (zh) * 2021-04-29 2023-08-11 三一重机有限公司 一种挖掘机、属具的位姿检测方法、装置及存储介质
CN115153855B (zh) * 2022-07-29 2023-05-05 中欧智薇(上海)机器人有限公司 一种微型机械臂的定位对准方法、装置及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003117861A (ja) * 2001-10-15 2003-04-23 Denso Corp ロボットの位置補正システム
CN103400131A (zh) * 2013-08-16 2013-11-20 徐宁 一种图像识别中的校正装置及其方法
CN105844277A (zh) * 2016-03-22 2016-08-10 江苏木盟智能科技有限公司 标签识别方法和装置
CN106945049A (zh) * 2017-05-12 2017-07-14 深圳智能博世科技有限公司 一种仿人机器人关节零位校准的方法
CN107813313A (zh) * 2017-12-11 2018-03-20 南京阿凡达机器人科技有限公司 机械臂运动的校正方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6418774B1 (en) * 2001-04-17 2002-07-16 Abb Ab Device and a method for calibration of an industrial robot
CN101093553A (zh) * 2007-07-19 2007-12-26 成都博古天博科技有限公司 一种二维码系统及其识别方法
CN101430768B (zh) * 2007-11-07 2013-05-15 成都市思博睿科技有限公司 一种二维条码的定位方法
JP2011209064A (ja) * 2010-03-29 2011-10-20 Fuji Xerox Co Ltd 物品認識装置及びこれを用いた物品処理装置
JP5586015B2 (ja) * 2010-06-10 2014-09-10 国立大学法人 東京大学 逆運動学を用いた動作・姿勢生成方法及び装置
CN104778491B (zh) * 2014-10-13 2017-11-07 刘整 用于信息处理的图像码及生成与解析其的装置与方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003117861A (ja) * 2001-10-15 2003-04-23 Denso Corp ロボットの位置補正システム
CN103400131A (zh) * 2013-08-16 2013-11-20 徐宁 一种图像识别中的校正装置及其方法
CN105844277A (zh) * 2016-03-22 2016-08-10 江苏木盟智能科技有限公司 标签识别方法和装置
CN106945049A (zh) * 2017-05-12 2017-07-14 深圳智能博世科技有限公司 一种仿人机器人关节零位校准的方法
CN107813313A (zh) * 2017-12-11 2018-03-20 南京阿凡达机器人科技有限公司 机械臂运动的校正方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111702755A (zh) * 2020-05-25 2020-09-25 淮阴工学院 一种基于多目立体视觉的机械臂智能控制系统
CN111702755B (zh) * 2020-05-25 2021-08-17 淮阴工学院 一种基于多目立体视觉的机械臂智能控制系统

Also Published As

Publication number Publication date
CN107813313A (zh) 2018-03-20

Similar Documents

Publication Publication Date Title
WO2019114339A1 (zh) 机械臂运动的校正方法和装置
US11049280B2 (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
JP6573354B2 (ja) 画像処理装置、画像処理方法、及びプログラム
CN110480637B (zh) 一种基于Kinect传感器的机械臂零件图像识别抓取方法
JP2015090560A (ja) 画像処理装置、画像処理方法
CN111195897B (zh) 用于机械手臂系统的校正方法及装置
CN106845354B (zh) 零件视图库构建方法、零件定位抓取方法及装置
CN104626169A (zh) 基于视觉与机械综合定位的机器人抓取零件的方法
JP6885856B2 (ja) ロボットシステムおよびキャリブレーション方法
CN112476489B (zh) 基于自然特征的柔性机械臂同步测量方法及系统
JP2014029664A (ja) 比較画像範囲生成方法、位置姿勢検出方法、比較画像範囲生成装置、位置姿勢検出装置、ロボット、ロボットシステム、比較画像範囲生成プログラム及び位置姿勢検出プログラム
JP2016170050A (ja) 位置姿勢計測装置、位置姿勢計測方法及びコンピュータプログラム
JP6922348B2 (ja) 情報処理装置、方法、及びプログラム
JP2009216503A (ja) 三次元位置姿勢計測方法および装置
JP2008309595A (ja) オブジェクト認識装置及びそれに用いられるプログラム
CN113172636B (zh) 一种自动手眼标定方法、装置及存储介质
JP2010184300A (ja) 姿勢変更システムおよび姿勢変更方法
US11478922B2 (en) Robot teaching device and robot system
CN114187312A (zh) 目标物的抓取方法、装置、系统、存储介质及设备
JP2021021577A (ja) 画像処理装置及び画像処理方法
CN115797332B (zh) 基于实例分割的目标物抓取方法和设备
JP6766229B2 (ja) 位置姿勢計測装置及び方法
WO2012076979A1 (en) Model-based pose estimation using a non-perspective camera
JP7178802B2 (ja) 2次元位置姿勢推定装置及び2次元位置姿勢推定方法
Pop et al. Robot vision application for bearings identification and sorting

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18887387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18887387

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 09.12.2020)
