WO2019114339A1 - Method and device for correcting robot arm motion - Google Patents
- Publication number
- WO2019114339A1 (PCT/CN2018/104735)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- camera
- rotation matrix
- arms
- base
- Prior art date
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1692—Calibration of manipulator
Definitions
- the invention relates to the technical field of mechanical arm motion control, and in particular to a method and a device for correcting the movement of a mechanical arm.
- The embodiment of the present application provides a method and a device for correcting the movement of a mechanical arm, to solve the technical problem of the high cost and low efficiency of existing methods for correcting mechanical arm movement, and to achieve the technical effect of correcting the movement of the mechanical arm efficiently and accurately.
- the application provides a method for correcting the movement of a robot arm, comprising:
- Obtaining a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped;
- Acquiring the target image of the robot arm grasping the object to be grasped includes:
- the target is identified as a two-dimensional code image comprising four anchor points.
- the extracting the multiple target identifiers from the target image includes:
- determining current pose information of each of the plurality of motion arms according to the plurality of target identifiers and the target image, respectively including:
- the corresponding relationship between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped is established according to the plurality of target identifiers, including:
- determining current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped, including:
- determining a rotation matrix of each of the plurality of moving arms based on the base;
- the current pose information of each of the plurality of moving arms includes: each of the plurality of moving arms is based on a current Euler angle of the base.
- the current arm motion is corrected according to the current pose information of each of the plurality of motion arms, including:
- the application also provides a device for correcting the movement of a robot arm, comprising:
- An acquiring module configured to acquire a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped;
- An extracting module configured to extract the plurality of target identifiers from the target image
- a determining module configured to respectively determine current pose information of each of the plurality of moving arms according to the plurality of target identifiers and the target image;
- a correction module configured to correct current arm motion according to current pose information of each of the plurality of motion arms.
- The acquiring module includes a camera, the camera being disposed at a preset position and configured to acquire the target image of the robot arm grasping the object to be grasped, wherein the preset position lies in an area other than the robot arm.
- the extraction module comprises:
- a first extracting unit configured to perform quadrilateral contour detection on the target image to extract a plurality of contour images
- An acquiring unit configured to perform plane projective transformation on the plurality of contour images to obtain a front view of the plurality of target identifiers
- a first determining unit configured to determine the multiple target identifiers according to a front view of the multiple target identifiers.
- the determining module comprises:
- a first establishing unit configured to establish, according to the plurality of target identifiers, a correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped;
- a second determining unit configured to respectively determine, according to the target image and the correspondence, a camera-based rotation matrix and translation vector of each of the plurality of moving arms, a camera-based rotation matrix and translation vector of the base, and a camera-based rotation matrix and translation vector of the object to be grasped;
- a third determining unit configured to determine current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
- the first establishing unit comprises:
- a rotation processing subunit configured to respectively perform rotation processing on the plurality of target identifiers to obtain a plurality of rotated identification images;
- a determining subunit configured to match the plurality of rotated identification images with the identifiers in the identification database to determine the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped.
- the determining module comprises:
- a fourth determining unit configured to determine, according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the object to be grasped, a rotation matrix and translation vector of each of the plurality of moving arms based on the object to be grasped;
- a fifth determining unit configured to determine, according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped, a rotation matrix and translation vector of the object to be grasped based on the base;
- a sixth determining unit configured to determine, according to the rotation matrix and translation vector of each of the plurality of moving arms based on the object to be grasped and the rotation matrix and translation vector of the object to be grasped based on the base, a rotation matrix of each of the plurality of moving arms based on the base;
- a seventh determining unit configured to determine current pose information of each of the plurality of moving arms based on a rotation matrix of each of the plurality of moving arms based on the base.
- the current pose information of each of the plurality of moving arms includes: each of the plurality of moving arms is based on a current Euler angle of the base.
- the correction module comprises:
- a comparison unit configured to compare current pose information of each of the plurality of motion arms with a motion path corresponding to the current robot arm motion to determine a current pose deviation of each of the plurality of motion arms ;
- a correcting unit configured to respectively correct each of the plurality of moving arms according to a current pose deviation of each of the plurality of moving arms.
- In this solution, a target identifier is set in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined according to the target identifiers in the acquired target image and the target image itself, and the movement of the robot arm can be corrected according to the pose information of each moving arm. This solves the technical problem of the high cost and low efficiency of correcting robot arm movement in existing methods, and achieves the technical effect of correcting the movement of the robot arm efficiently and accurately.
- FIG. 1 is a process flow diagram of a method of correcting the motion of a robot arm according to an embodiment of the present application
- FIG. 2 is a schematic diagram of a front view of a target identifier obtained by applying a method and apparatus for correcting movement of a robot arm provided by an embodiment of the present application;
- FIG. 3 is a schematic diagram of a method and apparatus for correcting movement of a robot arm according to an embodiment of the present application
- FIG. 4 is a structural diagram of a device for correcting the movement of a robot arm according to an embodiment of the present application
- FIG. 5 is a structural diagram of an electronic device according to a method for correcting a motion of a robot arm provided by an embodiment of the present application
- FIG. 6 is a schematic diagram of a method for correcting a movement of a robot arm provided by an embodiment of the present application in a scene example;
- FIG. 7 is a schematic diagram of setting a corresponding target identifier in one of a plurality of moving arms before applying a method and apparatus for correcting movement of a robot arm provided by an embodiment of the present application in a scene example;
- FIG. 8 is a schematic flow chart of a method and apparatus for correcting the motion of a robot arm provided by an embodiment of the present application in a scene example.
- Existing methods mostly use high-precision equipment to collect relative position data between the robot arm's execution terminal and the object to be grasped, and then re-plan the motion path according to that relative position data.
- The movement of the robot arm is corrected with the re-planned motion path. Because the relative position data between the terminal and the object to be grasped offers only limited guidance, the individual moving arms cannot be adjusted separately; the motion path can only be re-planned from the position data to correct the robot arm movement as a whole. Existing methods therefore often suffer from the technical problem of high cost and low efficiency in correcting robot arm movement.
- The present application considers that a target identifier can be set in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined according to the target identifiers in the acquired target image and the target image itself, and the movement of the robot arm can be corrected according to the specific pose information of each moving arm. This solves the technical problem of the high cost and low efficiency of correcting robot arm movement in existing methods, and achieves the technical effect of correcting the movement of the robot arm efficiently and accurately.
- the embodiment of the present application provides a method for correcting the motion of the robot arm.
- For the method, please refer to the processing flowchart of the method for correcting the motion of the robot arm according to the embodiment of the present application shown in FIG. 1.
- the method for correcting the motion of the robot arm provided by the embodiment of the present application may include the following steps.
- S11: acquiring a target image of the robot arm grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped.
- the mechanical arm may specifically be a mechanical arm including a plurality of moving arms and a base.
- the above base can be specifically fixed in the construction area. For example, it can be fixed on a work plane. In this way, when the mechanical arm is specifically moved, the movement of the mechanical arm can be completed by controlling the coordinated motion of each of the plurality of moving arms.
- The object to be grasped can be grasped by controlling the coordinated motion of each of the plurality of moving arms.
- the above mechanical arm may specifically include three moving arms and one base.
- the base may be denoted by L1, and the three movable arms may be specifically denoted as L2, L3 and L4, respectively.
- The specific number of the plurality of moving arms is not limited in this application.
- the grasping of the object to be grasped by the mechanical arm may be considered as a mechanical arm movement.
- the above-described mechanical arm movement forms are merely for the purpose of better describing the embodiments of the present application.
- other forms of motion may also be introduced according to specific situations and implementation requirements, for example, controlling the mechanical arm to mount components at a specified position, etc., as the mechanical arm motion to be corrected.
- the application is not limited.
- the target image may be a target image in which a robot arm moves to grab the object to be grasped.
- the target image may also be a target image of other mechanical arm movements to grasp the object to be grasped.
- the target image may specifically include multiple target identifiers.
- the plurality of target identifiers may be specifically graphic identifiers respectively disposed on each of the plurality of moving arms of the robot arm, the base, and the object to be grasped.
- the plurality of target identifiers are different from each other, and each target identifier has a unique correspondence with each of the plurality of moving arms, the base, and a target object to be grasped. Therefore, a target object can be uniquely identified by a target identifier.
- For example, the second moving arm L3 can be uniquely determined based on the target identifier M3, without being confused with the first moving arm L2.
- the target identifier may specifically be a two-dimensional code image including four positioning points.
- the four positioning points in the target identifier may be used to determine location information of the target identifier, and further, the location information of the target object with the target identifier may be determined according to the location information of the target identifier.
- the two-dimensional code in the above target identifier is used to indicate the target object provided with the target identifier.
- For example, according to the two-dimensional code in the target identifier, the target object provided with the identifier can be recognized as the second moving arm L3. That is, the four positioning points of the target identifiers set on different target objects can be the same, but the two-dimensional codes in the target identifiers are different.
- For example, the four positioning points of the target mark M3 provided on the second moving arm L3 and of the target mark M2 provided on the first moving arm are the same, but the two-dimensional codes are different.
- The edge of the square contour of a target identifier set on a target object is required to be parallel to the axis of the target object. Specifically, for example, when the target mark M3 is set on the second moving arm L3, the side of the square outline of the target mark M3 is required to coincide with the axial direction of the second moving arm.
- the target identifier may be set on a corresponding target object by pasting.
- Acquiring the target image of the robot arm grasping the object to be grasped may be performed by a camera disposed at the preset position.
- The preset position may specifically be a position in a region other than the robot arm. In this way, the target image obtained at this position can completely include each of the moving arms and the base of the robot arm, as well as target objects such as the object to be grasped.
- the preset position may specifically be a position where the camera can acquire a front image of a relatively complete target identifier.
- the preset position may be above the working area of the arm.
- the camera may be disposed at a lower left corner position above the working area, and the camera is inclined downward toward the direction of the robot arm.
- In this way, relatively complete front images of the target identifiers of all target objects can be acquired by the camera at the preset position.
- the preset positions listed above are only for better explaining the embodiments of the present application.
- other suitable locations may also be selected as the preset locations according to specific conditions and construction requirements. In this regard, the application is not limited.
- The target image needs to be processed to extract a clear front view of each target identifier; the target identifier is then accurately determined from its front view.
- the specific extraction process can include the following.
- Quadrilateral contour detection is performed on the target image to extract a plurality of contour images.
- The following may be included: performing binary segmentation on the target image by using an adaptive threshold method to obtain a corresponding binary image; and extracting the contour images from the binary image.
- The method may further include: performing polygon fitting on the contour images to discard contour images that do not meet the first requirement, and excluding contour images that do not meet the second requirement. In this way, a plurality of contour images that meet the requirements can be obtained, so that more accurate front views of the target identifiers can be obtained later.
- the contour image that does not meet the first requirement may specifically include: a non-convex polygonal contour image, a non-quadrilateral contour image, and the like.
- Contour images that do not meet the second requirement may specifically include: quadrilateral contour images in which one side is significantly shorter than another (i.e., relatively elongated contours), and contour images whose perimeter or area is too small.
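The two filtering requirements above can be sketched as a small check on candidate quadrilaterals. A minimal illustration in Python/NumPy, assuming the contour candidates have already been extracted from the binary image as 4×2 vertex arrays; the threshold values are illustrative assumptions, not figures from the patent:

```python
import numpy as np

def meets_requirements(quad, min_perimeter=40.0, max_side_ratio=5.0):
    """Return True if a candidate contour (a (4, 2) vertex array) passes
    both requirements: it is a convex quadrilateral (first requirement),
    and it is neither too small nor too elongated (second requirement)."""
    quad = np.asarray(quad, dtype=float)
    if quad.shape != (4, 2):
        return False  # not a quadrilateral
    sides = np.array([np.linalg.norm(quad[(i + 1) % 4] - quad[i])
                      for i in range(4)])
    if sides.sum() < min_perimeter:
        return False  # contour perimeter too small
    if sides.max() > max_side_ratio * sides.min():
        return False  # one side much shorter than another: elongated contour
    # Convexity: z-components of consecutive edge cross products
    # must all share the same sign.
    def cross_z(i):
        a = quad[(i + 1) % 4] - quad[i]
        b = quad[(i + 2) % 4] - quad[(i + 1) % 4]
        return a[0] * b[1] - a[1] * b[0]
    zs = [cross_z(i) for i in range(4)]
    return all(z > 0 for z in zs) or all(z < 0 for z in zs)
```

A square passes, while an elongated strip or a non-convex quadrilateral is discarded.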
- Specifically, the coordinates of the four positioning points in the contour image may be extracted first; the standard coordinates of the four positioning points are obtained from the identifier database; a projective transformation matrix is established according to the coordinates of the four positioning points in the contour image and their standard coordinates; and the pixel points in the contour image are subjected to projective transformation using the projective transformation matrix to obtain the front view of the corresponding target identifier.
- For example, the coordinates of the four positioning points after the (quadrilateral) contour image is converted into a front view by the projective transformation are (0, 0), (0, 100), (100, 100), (100, 0), respectively (i.e., the standard coordinates of the 4 positioning points).
- The coordinates of the four positioning points in the contour image can be determined from the existing contour image.
- The transformation can be written as X′ = HX, where:
- H is the projective transformation matrix;
- X is the homogeneous (pixel) coordinate vector of a positioning point in the contour image, with elements x1, x2, x3;
- X′ is the homogeneous coordinate vector of the corresponding positioning point after the projective transformation, with elements x1′, x2′, x3′;
- x = x1/x3 and y = x2/x3 are the corresponding non-homogeneous (pixel) abscissa and ordinate before the transformation;
- u = x1′/x3′ and v = x2′/x3′ are the non-homogeneous abscissa and ordinate after the projective transformation.
- The projective transformation matrix H can be solved from the simultaneous equations; the matrix H can then be used to perform projective transformation on all the pixel points in the contour image, giving the front view corresponding to the contour image, that is, the front view of the target identifier (also called the plane identification image).
- The image size of the front view of the target identifier may specifically be 100×100.
- S12-3: determining the multiple target identifiers according to the front views of the multiple target identifiers.
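The projective-transformation step can be illustrated with a direct linear solve for H from the four point correspondences, using the standard coordinates given above. A minimal Python/NumPy sketch; the function names are illustrative:

```python
import numpy as np

# Standard coordinates of the four positioning points in the front view.
STANDARD_POINTS = [(0, 0), (0, 100), (100, 100), (100, 0)]

def solve_homography(src_pts, dst_pts):
    """Solve the projective transformation matrix H (with its bottom-right
    element fixed to 1) such that X' ~ H X for the four point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), similarly for v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, point):
    """Map a pixel (x, y) to (u, v) = (x1'/x3', x2'/x3')."""
    x1p, x2p, x3p = H @ np.array([point[0], point[1], 1.0])
    return np.array([x1p / x3p, x2p / x3p])
```

Applying H to every pixel of the contour image then yields the 100×100 front view of the identifier.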
- the front view of the target identifier can be obtained after the image processing to accurately determine the corresponding target identifier for subsequent analysis and use.
- S13 Determine current pose information of each of the plurality of moving arms according to the plurality of target identifiers and the target image.
- the current pose information of each of the plurality of moving arms can be determined efficiently and accurately.
- the following steps can be performed.
- S13-1 The corresponding relationship between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped is established according to the plurality of target identifiers.
- the foregoing correspondence may be specifically considered as an indication relationship between the target identifier and the target object. For example, it is determined that the target identification M3 indicates the second moving arm L3, that is, the correspondence relationship between the target identification M3 and the second moving arm L3 can be considered. Thus, at the time of subsequent processing, the positional relationship of the second moving arm L3 can be determined based on the positional relationship of the target mark M3.
- S2 Matching the plurality of rotated processed identification images with the identifiers in the identification database to determine a correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped.
- Specifically, each target identifier may be rotated by 90 degrees, 180 degrees, and 270 degrees; together with the unrotated target identifier, this gives a total of four images of the target mark, that is, the rotated identification images described above.
- the schematic diagram of the method and apparatus for correcting the motion of the robot arm provided by the embodiment of the present application shown in FIG.
- the identifier database is configured with a preset standard identifier, and the standard identifier carries the corresponding indication information, where the indication information is used to indicate the target object corresponding to the standard identifier.
- A plurality of identification images at different angles are obtained by first rotating each target identifier, and these images are used to search the identification database for the matching standard identifier. This can improve the accuracy of the matching and the speed of searching for the standard identifier that matches the target identifier.
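The rotate-and-match lookup described above can be sketched as follows; the database keys naming the target objects are illustrative placeholders:

```python
import numpy as np

def match_identifier(front_view, identifier_database):
    """Match a binarized front view (a 2-D 0/1 array) against the standard
    identifiers, trying the unrotated image plus its 90/180/270-degree
    rotations, and return the name of the matched target object."""
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated = np.rot90(front_view, k)
        for name, standard in identifier_database.items():
            if rotated.shape == standard.shape and np.array_equal(rotated, standard):
                return name  # correspondence established
    return None  # no match in the database
```

Once a standard identifier matches, its indication information gives the correspondence between the target identifier and its moving arm, base, or object to be grasped.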
- S13-2 determining, according to the target image and the correspondence relationship, a rotation matrix and a translation vector of each of the plurality of motion arms based on the camera, a rotation matrix and a translation vector of the base based on the camera, and the object to be grasped is based on the camera The rotation matrix and the translation vector.
- Specifically, the coordinate system of the camera's position may be used as the reference coordinate system, and the positional relationship of each target object's coordinate system relative to the camera's coordinate system may be determined: that is, the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped. These positional relationships can then be used to determine the specific location of each target object.
- the object to be grasped is taken as an example, and the target identifier of the object to be grasped is M1.
- The center O1 of M1 may be taken as the origin, and the Z coordinate of points on the plane where the target identifier M1 is located is 0.
- The side length of M1 is 80 mm, and the coordinates of the four positioning points in the coordinate system O1 of the target identifier M1 are (−40, −40, 0), (40, −40, 0), (40, 40, 0), (−40, 40, 0), respectively.
- Each of the four anchor points on the target identifier M1 has the following conversion relationship between the pixel coordinates of the acquired target identifier on the target image and the coordinates in its own coordinate system O1:
- (x, y, 1) represents the homogeneous form of the pixel coordinates of any of the positioning points in the target image;
- (X, Y, Z, 1) represents the homogeneous coordinates of that positioning point in the world coordinate system; since the point lies on the identifier's plane (Z = 0), this can be simplified to (X, Y, 0, 1). The conversion relationship is s·(x, y, 1)ᵀ = M·(r1, r2, r3, t)·(X, Y, Z, 1)ᵀ, which with Z = 0 reduces to s·(x, y, 1)ᵀ = M·(r1, r2, t)·(X, Y, 1)ᵀ. Here s is an arbitrary scale parameter, M is the internal parameter matrix of the camera, r1, r2, r3 are the three column vectors of the rotation matrix R1 of the target identifier's coordinate system relative to the camera's coordinate system, and t is the translation vector.
- the above M can be specifically obtained by calibrating the camera.
- an equation can be determined by the pixel coordinates of an anchor point of the target identifier M1 and the coordinates of the point in the coordinate system of the corresponding target identifier.
- The four positioning points of the target identifier thus determine four such equations.
- The direct linear transformation (DLT) algorithm can be used to solve the above equations to obtain the vectors r1, r2, r3 and t.
- the rotation matrix R1 is an orthogonal matrix, and the column vectors r1, r2, and r3 are mutually orthogonal unit vectors. When r1 and r2 are obtained, r3 can be obtained from the vector product of r1 and r2.
- The rotation matrix R1 can then be expressed as (r1, r2, r3), that is, the camera-based rotation matrix of the object to be grasped, and t is the camera-based translation vector of the object to be grasped.
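The recovery of R1 = (r1, r2, r3) and t, including obtaining r3 from the vector product of r1 and r2, can be sketched as a decomposition of the planar relationship s·(x, y, 1)ᵀ = M·(r1, r2, t)·(X, Y, 1)ᵀ. A minimal Python/NumPy illustration; a practical implementation would additionally re-orthogonalize the rotation matrix to absorb noise:

```python
import numpy as np

def pose_from_homography(M, H):
    """Given the camera's internal parameter matrix M and the homography H
    mapping identifier-plane coordinates (X, Y, 1) to pixel coordinates,
    recover the camera-based rotation matrix R1 = (r1, r2, r3) and the
    translation vector t."""
    A = np.linalg.inv(M) @ H            # proportional to (r1, r2, t)
    scale = 1.0 / np.linalg.norm(A[:, 0])
    if A[2, 2] < 0:                     # keep the target in front of the camera
        scale = -scale
    r1 = scale * A[:, 0]
    r2 = scale * A[:, 1]
    t = scale * A[:, 2]
    r3 = np.cross(r1, r2)               # r3 from the vector product of r1 and r2
    return np.column_stack([r1, r2, r3]), t
```

Here H plays the role of the matrix obtained from the four positioning-point equations solved by DLT.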
- Similarly, the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the base can be calculated separately.
- Determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each moving arm, of the base, and of the object to be grasped may include the following content.
- S4 Determine current pose information of each of the plurality of moving arms based on a rotation matrix of each of the plurality of moving arms based on the base.
- the current pose information of each of the plurality of moving arms includes: each of the plurality of moving arms is based on a current Euler angle of the base.
- the current Euler angles of the respective moving arms may specifically refer to Euler angles of the coordinate systems of the respective moving arms based on the base.
- Since the edge of the target mark is parallel to the axis of the target object, the coordinate axes of the coordinate system established for the target mark and the coordinate axes of the target object's own coordinate system are parallel to each other.
- the X-axis of the coordinate system of the target mark M2 with O2 as the origin is parallel to the X-axis of the coordinate system of the moving arm L2 with the right end point OL2 of the moving arm as the origin.
- The current Euler angles of the target identifier's own coordinate system relative to a reference coordinate system are therefore the same as the current Euler angles of the target object's own coordinate system relative to the same reference coordinate system. Thus the Euler angles of the target mark can be used as the Euler angles of the target object, i.e., as the pose information of each moving arm.
- The current pose information of each of the plurality of moving arms may be determined separately according to the following formulas: θx = atan2(r32, r33), θy = atan2(−r31, √(r32² + r33²)), θz = atan2(r21, r11).
- Here θx, θy, θz represent the Euler angles of the moving arm about the three coordinate axes of the base coordinate system;
- r_ij represents the element in the i-th row and j-th column of the moving arm's base-based rotation matrix;
- for example, r11 is the element in the first row and first column of the moving arm's base-based rotation matrix.
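As a sketch, the Euler-angle extraction can be written as follows. The Z-Y-X decomposition used here is the common convention consistent with the r_ij notation above; the patent does not spell out its exact convention, so this is an assumption:

```python
import numpy as np

def euler_angles(R):
    """Extract (theta_x, theta_y, theta_z) from a base-based rotation
    matrix R, where r_ij denotes the element in row i, column j
    (1-indexed, matching the text). Assumes the common Z-Y-X convention."""
    r11, r21, r31 = R[0, 0], R[1, 0], R[2, 0]
    r32, r33 = R[2, 1], R[2, 2]
    theta_x = np.arctan2(r32, r33)
    theta_y = np.arctan2(-r31, np.hypot(r32, r33))
    theta_z = np.arctan2(r21, r11)
    return np.array([theta_x, theta_y, theta_z])
```

Round-tripping a rotation matrix built from known angles recovers those angles (away from the gimbal-lock singularity at θy = ±90°).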
- The current motion of the robot arm is corrected according to the current pose information of each of the plurality of moving arms, which specifically includes the following content:
- Each moving arm can be specifically corrected within the original motion path according to its current pose deviation, thereby avoiding re-planning of the motion path and achieving the purpose of correcting the robot arm movement in a timely, accurate, and rapid manner.
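The per-arm correction step can be sketched as computing each arm's pose deviation directly, assuming the planned motion path supplies target Euler angles for each arm at the current instant (the arm names are illustrative):

```python
import numpy as np

def pose_deviations(current_eulers, planned_eulers):
    """For each moving arm, the current pose deviation is the difference
    between the Euler angles prescribed by the planned motion path and
    the arm's current base-based Euler angles; each arm is then corrected
    by its own deviation instead of re-planning the whole path."""
    return {arm: np.asarray(planned_eulers[arm], float)
                 - np.asarray(current_eulers[arm], float)
            for arm in current_eulers}
```

A deviation of zero for an arm means that arm is already on the planned path and needs no individual correction.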
- In this embodiment, a target identifier is set in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined according to the target identifiers in the acquired target image and the target image itself, and the movement of the robot arm can be corrected according to the pose information of each moving arm. This solves the technical problem of the high cost and low efficiency of correcting robot arm movement in existing methods, and achieves the technical effect of correcting the movement of the robot arm efficiently and accurately.
- The method may further include: calibrating the camera to obtain the internal parameter matrix of the camera (i.e., the internal reference matrix), wherein the internal parameter matrix includes parameters such as the distortion coefficients and focal length of the camera.
- The above calibration of the camera may specifically include the following:
- the calibration target may be a plane;
- S4: determining, according to the coordinate positions of the feature points in the world coordinate system and the corresponding pixel coordinates in the image, the internal parameters, external parameters, distortion coefficients, focal length and the like of the camera, to establish the internal parameter matrix of the camera.
- The camera may specifically be a monocular camera. It should be noted that the monocular camera listed above is only intended to better illustrate the embodiments of the present application. In specific implementation, other suitable cameras, for example a binocular camera, may also be selected according to the specific situation. The present application is not limited in this regard.
- The method for correcting robot arm motion provided by the embodiment of the present application works by setting a target identifier in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined from the target identifiers in the acquired target image together with the target image itself, and the robot arm motion can be corrected specifically according to the pose information of each moving arm. This solves the technical problem of the high cost and low efficiency of correcting robot arm motion in existing methods and achieves the technical effect of correcting the robot arm motion efficiently and accurately. Moreover, by using a camera disposed outside the robot arm to acquire a target image containing the target identifiers on each moving arm and the base, the current pose information of each specific moving arm is determined and the corresponding correction is performed on that basis, which improves correction efficiency while ensuring correction accuracy.
- A correction device for robot arm motion is also provided in the embodiment of the present application, as described in the following embodiments. Since the principle by which the device solves the problem is similar to that of the method for correcting robot arm motion, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
- As used below, the term "unit" or "module" may be a combination of software and/or hardware implementing a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated. Please refer to FIG. 2. The device may specifically include an acquisition module 21, an extraction module 22, a determining module 23, and a correction module 24. This structure is described in detail below.
- the acquiring module 21 is specifically configured to acquire a target image of the robot arm for grasping the object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, and the target image includes a plurality of target identifiers, and the plurality of targets The identifiers are respectively disposed on each of the plurality of moving arms, the base and the object to be grasped;
- the extracting module 22 is specifically configured to extract the plurality of target identifiers from the target image;
- the determining module 23 is specifically configured to determine current pose information of each of the plurality of moving arms according to the plurality of target identifiers and the target image;
- the correction module 24 is specifically configured to correct the current arm motion according to the current pose information of each of the plurality of motion arms.
- the acquiring module 21 may specifically include a camera, and the camera may be specifically disposed at a preset position, where the camera may be specifically configured to acquire a target image of the mechanical arm to grasp the object to be grasped.
- the preset position may specifically include an area other than the mechanical arm.
- The extraction module 22 may specifically include the following structural units:
- the first extracting unit, which may be specifically configured to perform quadrilateral contour detection on the target image to extract a plurality of contour images;
- the acquiring unit, which may be configured to perform plane projective transformation on the plurality of contour images to obtain front views of the plurality of target identifiers;
- the first determining unit, which may be specifically configured to determine the plurality of target identifiers according to the front views of the plurality of target identifiers.
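- The plane projective transformation that maps a detected quadrilateral contour to a front view can be computed as a 3×3 homography from the four corner correspondences. A minimal direct-linear-transform sketch in pure Python, with a small Gaussian-elimination solver (the corner coordinates below are illustrative pixel values, not from the embodiment):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Homography H (with h33 fixed to 1) mapping the 4 src corners onto dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp(H, pt):
    """Apply the homography to one point (perspective division included)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Skewed marker contour in the image -> 100x100 front view.
quad = [(120, 80), (260, 95), (270, 230), (110, 210)]
front = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography(quad, front)
```

Warping every pixel of the contour region with this H yields the rectified front view of the marker.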
- the determining module 23 may specifically include the following structural units:
- the first establishing unit may be configured to establish, according to the plurality of target identifiers, a correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped;
- the second determining unit may be specifically configured to determine, according to the target image and the correspondence, a camera-based rotation matrix and translation vector of each of the plurality of moving arms, a camera-based rotation matrix and translation vector of the base, and a camera-based rotation matrix and translation vector of the object to be grasped;
- the third determining unit may be specifically configured to determine the current pose information of each of the plurality of moving arms based on the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
- the first establishing unit may specifically include the following structural subunits:
- the rotation processing sub-unit, which may be configured to perform rotation processing on the plurality of target identifiers respectively to obtain a plurality of rotated identification images;
- the determining sub-unit, which may be used to match the plurality of rotated identification images with the identifiers in an identification database, so as to determine the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped.
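- Because a marker may appear in any of four orientations in the rectified front view, its rotated copies are matched against the identification database. A toy sketch with binary grids standing in for the two-dimensional code images (the database patterns and IDs are invented for illustration):

```python
def rot90(grid):
    """Rotate a square binary grid 90 degrees clockwise."""
    n = len(grid)
    return [[grid[n - 1 - r][c] for r in range(n)] for c in range(n)]

def match_marker(grid, database):
    """Try all 4 rotations of grid against the database.

    Returns (marker_id, quarter_turns) on success, else (None, None).
    """
    g = grid
    for turns in range(4):
        for marker_id, ref in database.items():
            if g == ref:
                return marker_id, turns
        g = rot90(g)
    return None, None

# Invented 3x3 patterns standing in for the plane markers.
db = {
    "M1": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "M2": [[1, 1, 0], [0, 1, 0], [0, 0, 0]],
}
observed = rot90(rot90(db["M2"]))  # marker M2 seen upside-down
result = match_marker(observed, db)
```

The matched ID establishes which moving arm, the base, or the object to be grasped the detected marker belongs to, and the rotation count removes the orientation ambiguity.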
- the determining module 23 may specifically include the following structural units:
- the fourth determining unit may be specifically configured to determine a rotation matrix and translation vector of each of the plurality of moving arms based on the object to be grasped, according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the object to be grasped;
- the fifth determining unit may be specifically configured to determine a rotation matrix and translation vector of the object to be grasped based on the base, according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped;
- the sixth determining unit may be specifically configured to determine a rotation matrix of each of the plurality of moving arms based on the base, according to the rotation matrix and translation vector of each of the plurality of moving arms based on the object to be grasped and the rotation matrix and translation vector of the object to be grasped based on the base;
- the seventh determining unit may be specifically configured to determine the current pose information of each of the plurality of moving arms according to the rotation matrix of each of the plurality of moving arms based on the base.
- the current pose information of each of the plurality of moving arms may specifically include: each of the plurality of moving arms is based on a current Euler angle of the base.
- The correction module 24 may specifically include the following structural units:
- the comparison unit may be specifically configured to compare the current pose information of each of the plurality of moving arms with the movement path corresponding to the current robot arm motion, so as to determine the current pose deviation of each of the plurality of moving arms;
- the correcting unit may be specifically configured to respectively correct each of the plurality of moving arms according to a current pose deviation of each of the plurality of moving arms.
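- The comparison-and-correction step can be sketched as simple per-arm arithmetic on Euler angles: subtract the planned pose on the movement path from the measured current pose, then command the negative of the deviation back to each arm. All angle values and the tolerance below are illustrative assumptions:

```python
def pose_deviation(current, planned):
    """Per-axis Euler-angle deviation (current minus planned), in degrees."""
    return tuple(c - p for c, p in zip(current, planned))

def correction_command(deviation, tolerance=0.5):
    """Command the opposite of the deviation, but only on axes beyond tolerance."""
    return tuple(-d if abs(d) > tolerance else 0.0 for d in deviation)

# Illustrative measured vs. planned (path) poses for two moving arms.
measured = {"L2": (10.2, -0.1, 45.9), "L3": (29.0, 0.0, 0.4)}
planned = {"L2": (10.0, 0.0, 45.0), "L3": (30.0, 0.0, 0.0)}
commands = {arm: correction_command(pose_deviation(measured[arm], planned[arm]))
            for arm in measured}
```

Correcting each arm on the original path this way avoids re-planning the whole movement path.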
- The system, device, module, or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. For convenience of description, the above device is described in terms of functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
- In the apparatus for correcting robot arm motion provided by the embodiment of the present application, a target identifier is set on each of the moving arms, the base, and the object to be grasped; the determining module then determines the pose information of each moving arm according to the target identifiers in the acquired target image and the target image itself, and the correction module corrects the robot arm motion according to the pose information of each moving arm. This solves the technical problem of the high cost and low efficiency of correcting robot arm motion in existing methods and achieves the technical effect of correcting the robot arm motion efficiently and accurately; furthermore, the target image is acquired by a camera disposed outside the robot arm.
- The embodiment of the present application further provides an electronic device. The electronic device may specifically include the input device 31, the processor 32, and the memory 33.
- the input device 31 may be specifically configured to input a target image of the robot arm to grasp the object to be grasped, wherein the mechanical arm includes a plurality of moving arms and a base, and the target image includes a plurality of target identifiers, A plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped.
- The processor 32 may be specifically configured to: extract the plurality of target identifiers from the target image; determine, according to the plurality of target identifiers and the target image, the current pose information of each of the plurality of moving arms; and correct the current robot arm motion according to the current pose information of each of the plurality of moving arms.
- The memory 33 may specifically be used to store the target image input by the input device 31, as well as various intermediate data and result data generated during the operation of the processor 32.
- The input device may specifically be one of the main devices for exchanging information between the user and the computer system.
- the input device may include a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting input pad, a voice input device, etc.; the input device is used to input raw data and a program for processing the numbers into the computer.
- the input device can also acquire data transmitted by other modules, units, and devices.
- the processor can be implemented in any suitable manner.
- For example, the processor may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, a logic gate, a switch, or an application-specific integrated circuit (ASIC).
- the memory may specifically be a memory device for storing information in modern information technology.
- The memory includes many levels. In a digital system, anything that can store binary data may be called a memory; in an integrated circuit, a circuit with a storage function but without a physical form is also called a memory, such as a RAM or a FIFO; a storage device with a physical form is also called a memory, such as a memory stick or a TF card.
- An embodiment of the present application further provides a computer storage medium based on the above method for correcting robot arm motion. The computer storage medium stores computer program instructions that, when executed, implement: acquiring a target image of a robot arm grasping an object to be grasped, wherein the robot arm includes a plurality of moving arms and a base, the target image includes a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped; extracting the plurality of target identifiers from the target image; determining, according to the plurality of target identifiers and the target image, the current pose information of each of the plurality of moving arms; and correcting the current robot arm motion according to the current pose information of each of the plurality of moving arms.
- The storage medium includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a cache, a hard disk drive (HDD), or a memory card.
- the memory can be used to store computer program instructions.
- the network communication unit may be an interface for performing network connection communication in accordance with a standard stipulated by the communication protocol.
- The method and apparatus for correcting robot arm motion provided by the present application can be used to correct the motion of a robot arm that includes a plurality of moving arms when grasping an object to be grasped. Reference may be made to FIG. 6, a schematic diagram of the robot arm grasping the object to be grasped in a scenario example to which the method provided by the embodiment of the present application is applied. Specifically, the following may be included.
- First, a unique plane marker (i.e., a target identifier) may be set on each of the plurality of moving arms of the robot arm, the base, and the object to be grasped. During the specific arrangement, it is ensured that the edges of the plane marker are parallel to the axes of the target object.
- When the plane markers are specifically arranged, reference may be made to FIG. 7, a schematic diagram in a scenario example showing the corresponding target identifier set on one of the plurality of moving arms before the method and device for correcting robot arm motion provided by the embodiment of the present application are applied. Take the plane marker M2 on the moving arm L2 as an example.
- the plane mark M2 can be specifically fixed to a position on the moving arm L2 that can be photographed by the camera.
- the center O2 of M2 can be used as the origin of the plane marker M2 coordinate system.
- the right end point O L2 of the moving arm is used as the origin of the moving arm L2 coordinate system.
- When the plane mark M2 is arranged, the coordinate axes of the plane mark M2 coordinate system O2 and the moving arm L2 coordinate system O L2 need to be kept parallel to each other (that is, the edges of the plane mark are parallel to the axes of the target object).
- the plane marks M1, M3, and M4 are respectively disposed on the object to be grasped, the moving arm L3, and the base L1.
- the origin of the coordinate system of the plane mark may be set to O1, O3, and O4, respectively.
- After the plane markers are laid out, the camera needs to be arranged at a position outside the robot arm.
- the camera can be disposed above the working platform, so that the camera can acquire the target image including the plane markers M1, M2, M3, and M4.
- the origin of the coordinate system of the camera may be set to O5.
- S1 Obtain a video stream by using a camera, and obtain a target image including plane markers M1, M2, M3, and M4 through the video stream.
- S2 Perform detection of the identifier (i.e., the plane marker or target identifier) according to the target image, and determine the ID of the identifier (i.e., establish the correspondence between the target identifier and the target object).
- S5 Perform pose correction (or correction) on the motion of the robot arm.
- Specifically, the rotation matrix R and the translation vector t (based on the base) can be calculated separately according to each identifier, and the pose of each moving arm can be further determined. The calculation of the pose of the moving arm L2 is taken below as an example to illustrate how the pose of each moving arm is specifically determined; this may include the following.
- First, the rotation matrices R1 and R2 and the translation vectors t1 and t2 of the coordinate systems O1 and O2 of the plane markers M1 and M2 relative to the camera coordinate system O5 can be calculated.
- The known matrix [R1, t1] is the transformation matrix of the plane marker M1 coordinate system O1 with respect to the camera coordinate system O5, so its inverse [R1, t1] -1 is the transformation matrix of the camera coordinate system O5 relative to the plane marker M1 coordinate system O1. The known matrix [R2, t2] is the transformation matrix of the plane marker M2 coordinate system O2 with respect to the camera coordinate system O5; the matrix [R2, t2][R1, t1] -1 is then the transformation matrix of the plane marker M2 coordinate system O2 with respect to the plane marker M1 coordinate system O1.
- Since the conversion relationship between the two coordinate systems is constant, the rotation matrix R0 and the translation vector t0 of the plane marker M1 coordinate system O1 relative to the robot arm base coordinate system O4 are constant values. The matrix [R0, t0] is the transformation matrix of the plane marker M1 coordinate system O1 with respect to the robot arm base coordinate system O4, so the transformation matrix of the plane marker M2 coordinate system O2 with respect to the robot arm base coordinate system O4 can be expressed as [R2,t2][R1,t1] -1 [R0,t0].
- The matrix composed of the first three columns of this transformation matrix is the rotation matrix R4 of the plane marker M2 coordinate system O2 with respect to the robot arm base coordinate system O4, and the fourth column is the translation vector t4.
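- The chain [R2,t2][R1,t1] -1 [R0,t0] and the extraction of R4 and t4 can be evaluated with 4×4 homogeneous matrices, using the closed-form rigid-transform inverse [R, t] -1 = [Rᵀ, −Rᵀt]. A pure-Python sketch; the marker poses below are invented illustrative values (z-axis rotations only), not measured ones:

```python
import math

def rot_z(theta):
    """3x3 rotation about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from R (3x3) and t (length 3)."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_T(T):
    """Rigid-transform inverse: [R, t]^-1 = [R^T, -R^T t]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(Rt[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return make_T(Rt, t)

# Illustrative poses: T1 = [R1,t1], T2 = [R2,t2] (markers relative to the
# camera) and T0 = [R0,t0] (marker M1 relative to the base).
T1 = make_T(rot_z(0.3), [0.1, 0.2, 1.0])
T2 = make_T(rot_z(0.5), [0.15, 0.1, 1.1])
T0 = make_T(rot_z(0.0), [0.0, 0.0, 0.2])
# Transformation of the marker M2 frame with respect to the base frame:
T4 = mat_mul(mat_mul(T2, invert_T(T1)), T0)
R4 = [row[:3] for row in T4[:3]]  # rotation part (upper-left 3x3)
t4 = [row[3] for row in T4[:3]]   # translation part (last column)
```

With pure z-rotations the composed rotation reduces to rot_z(0.5 − 0.3), which makes the sketch easy to check by hand.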
- Next, the Euler angles of the plane marker M2 coordinate system O2 with respect to the robot arm base coordinate system O4 are calculated from the rotation matrix. From the rotation matrix R4 of the plane marker M2 coordinate system O2 with respect to the base coordinate system O4, the rotation angles θx, θy, θz of O2 about the three coordinate axes X, Y, Z of the coordinate system O4 can be specifically calculated as follows:
- The sine and cosine functions of the Euler angles about the three axes may be written as sx, cx, sy, cy, sz, cz. If the rotation is performed sequentially about the x-axis, the y-axis, and the z-axis, the rotation matrix of the combined transformation can be expressed as:

  R = | cy·cz, sx·sy·cz − cx·sz, cx·sy·cz + sx·sz |
      | cy·sz, sx·sy·sz + cx·cz, cx·sy·sz − sx·cz |
      | −sy, sx·cy, cx·cy |

- Writing the element in row i, column j of the rotation matrix R4 of the plane marker M2 coordinate system O2 with respect to the base coordinate system O4 as r ij, the values of the Euler angles can be derived using trigonometric functions: θy = asin(−r31), θx = atan2(r32, r33), θz = atan2(r21, r11).
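- Assuming the rotation order stated above (about the x-, then y-, then z-axis in fixed axes, i.e. R = Rz·Ry·Rx), the Euler angles can be recovered from the rotation-matrix elements by trigonometric functions. A pure-Python round-trip sketch; the angle values are illustrative, and the gimbal-lock case cos θy = 0 is not handled:

```python
import math

def rotation_from_euler(tx, ty, tz):
    """R = Rz(tz) * Ry(ty) * Rx(tx): rotate about x, then y, then z (fixed axes)."""
    sx, cx = math.sin(tx), math.cos(tx)
    sy, cy = math.sin(ty), math.cos(ty)
    sz, cz = math.sin(tz), math.cos(tz)
    return [
        [cy * cz, sx * sy * cz - cx * sz, cx * sy * cz + sx * sz],
        [cy * sz, sx * sy * sz + cx * cz, cx * sy * sz - sx * cz],
        [-sy, sx * cy, cx * cy],
    ]

def euler_from_rotation(R):
    """Recover (tx, ty, tz); assumes cos(ty) != 0 (no gimbal lock)."""
    ty = math.asin(-R[2][0])           # r31 = -sin(ty)
    tx = math.atan2(R[2][1], R[2][2])  # r32 = sx*cy, r33 = cx*cy
    tz = math.atan2(R[1][0], R[0][0])  # r21 = cy*sz, r11 = cy*cz
    return tx, ty, tz

angles = (0.2, -0.4, 1.1)
recovered = euler_from_rotation(rotation_from_euler(*angles))
```

The round trip (angles → matrix → angles) is a quick way to check that the recovery formulas match the chosen rotation order.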
- The above process has calculated the Euler angles θx, θy, and θz of the plane mark M2 coordinate system O2 about the three coordinate axes of the robot arm base coordinate system O4. Since the coordinate axes of O2 are parallel to those of the moving arm L2 coordinate system OL2, the Euler angles of OL2 about the three coordinate axes of the base coordinate system O4 are also θx, θy, and θz. Therefore, the pose of the moving arm L2 in the base coordinate system O4, that is, the current pose of L2, can be uniquely determined from the Euler angles of the plane mark M2 obtained above.
- The pose of the moving arm L3 can be calculated in a similar manner. However, based on the pose of L2, the pose of L3 can also be determined more quickly from the relative position of the moving arms L3 and L2. Specifically, since the moving arm L3 is connected to the moving arm L2, and the above process has already calculated the pose of the moving arm L2 in the robot arm base coordinate system O4, it is only necessary to calculate the pose of the moving arm L3 in the coordinate system of the moving arm L2 in order to determine the pose of the moving arm L3 in the robot arm base coordinate system O4.
- The rotation matrix and the translation vector of the plane marker M3 coordinate system O3 relative to the plane marker M2 coordinate system O2 may first be calculated: the known matrix [R2, t2] is the transformation matrix of the plane marker M2 coordinate system O2 with respect to the camera coordinate system O5, so its inverse [R2, t2] -1 is the transformation matrix of the camera coordinate system O5 with respect to the plane marker M2 coordinate system O2. Since the matrix [R3, t3] is the transformation matrix of the plane marker M3 coordinate system O3 with respect to the camera coordinate system O5, the matrix [R3, t3][R2, t2] -1 is the transformation matrix of the plane marker M3 coordinate system O3 relative to the plane marker M2 coordinate system O2.
- From this, the Euler angles θx, θy, and θz of the plane marker M3 coordinate system O3 about the three coordinate axes of the plane marker M2 coordinate system O2 can be calculated. Since the coordinate axes of the plane mark M3 coordinate system O3 are parallel to those of the moving arm L3 coordinate system OL3, and the coordinate axes of the plane mark M2 coordinate system O2 are parallel to those of the moving arm L2 coordinate system OL2, the Euler angles of the moving arm L3 coordinate system OL3 about the three coordinate axes of the moving arm L2 coordinate system OL2 are also θx, θy, and θz. In this way, the pose information of each moving arm based on the base of the robot arm can be determined.
- The current pose measured in real time as above, that is, the current Euler angles, can be used to correct the specific execution process of each moving arm of the robot arm in real time, thereby improving the overall execution precision of the robot arm and reducing execution error.
- As verified by the above scenario example, the method and apparatus for correcting robot arm motion provided by the embodiment of the present application set a target identifier in advance on each of the moving arms, the base, and the object to be grasped; the pose information of each moving arm is then determined from the target identifiers in the acquired target image together with the target image itself, and the robot arm motion can be corrected specifically according to the pose information of each moving arm. This solves the technical problem of the high cost and low efficiency of correcting robot arm motion in existing methods and achieves the technical effect of correcting the robot arm motion efficiently and accurately.
- The controller can be logically programmed so as to implement the same functions by means of logic gates, switches, ASICs, programmable logic controllers, embedded microcontrollers, and the like.
- the application can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
- program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types.
- the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
- program modules can be located in both local and remote computer storage media including storage devices.
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Claims (10)
- 1. A method for correcting robot arm motion, characterized by comprising: acquiring a target image of a robot arm grasping an object to be grasped, wherein the robot arm comprises a plurality of moving arms and a base, the target image comprises a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped; extracting the plurality of target identifiers from the target image; determining, according to the plurality of target identifiers and the target image, current pose information of each of the plurality of moving arms; and correcting the current robot arm motion according to the current pose information of each of the plurality of moving arms.
- 2. The method according to claim 1, characterized in that acquiring the target image of the robot arm grasping the object to be grasped comprises: acquiring, by a camera disposed at a preset position, the target image of the robot arm grasping the object to be grasped, wherein the preset position comprises an area other than the robot arm.
- 3. The method according to claim 1, characterized in that the target identifier is a two-dimensional code image comprising four anchor points.
- 4. The method according to claim 2, characterized in that extracting the plurality of target identifiers from the target image comprises: performing quadrilateral contour detection on the target image to extract a plurality of contour images; performing plane projective transformation on the plurality of contour images respectively to obtain front views of the plurality of target identifiers; and determining the plurality of target identifiers according to the front views of the plurality of target identifiers.
- 5. The method according to claim 2, characterized in that determining, according to the plurality of target identifiers and the target image, the current pose information of each of the plurality of moving arms comprises: establishing, according to the plurality of target identifiers, a correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped; determining, according to the target image and the correspondence, a camera-based rotation matrix and translation vector of each of the plurality of moving arms, a camera-based rotation matrix and translation vector of the base, and a camera-based rotation matrix and translation vector of the object to be grasped; and determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped.
- 6. The method according to claim 5, characterized in that establishing, according to the plurality of target identifiers, the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped comprises: performing rotation processing on the plurality of target identifiers respectively to obtain a plurality of rotated identification images; and matching the plurality of rotated identification images with identifiers in an identification database respectively, so as to determine the correspondence between the plurality of target identifiers and the plurality of moving arms, the base, and the object to be grasped.
- 7. The method according to claim 5, characterized in that determining the current pose information of each of the plurality of moving arms according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms, the camera-based rotation matrix and translation vector of the base, and the camera-based rotation matrix and translation vector of the object to be grasped comprises: determining a rotation matrix and translation vector of each of the plurality of moving arms based on the object to be grasped, according to the camera-based rotation matrix and translation vector of each of the plurality of moving arms and the camera-based rotation matrix and translation vector of the object to be grasped; determining a rotation matrix and translation vector of the object to be grasped based on the base, according to the camera-based rotation matrix and translation vector of the base and the camera-based rotation matrix and translation vector of the object to be grasped; determining a rotation matrix of each of the plurality of moving arms based on the base, according to the rotation matrix and translation vector of each of the plurality of moving arms based on the object to be grasped and the rotation matrix and translation vector of the object to be grasped based on the base; and determining the current pose information of each of the plurality of moving arms according to the rotation matrix of each of the plurality of moving arms based on the base.
- 8. The method according to claim 7, characterized in that the current pose information of each of the plurality of moving arms comprises: a current Euler angle of each of the plurality of moving arms based on the base.
- 9. The method according to claim 1, characterized in that correcting the current robot arm motion according to the current pose information of each of the plurality of moving arms comprises: comparing the current pose information of each of the plurality of moving arms with a movement path corresponding to the current robot arm motion, so as to determine a current pose deviation of each of the plurality of moving arms; and correcting each of the plurality of moving arms respectively according to the current pose deviation of each of the plurality of moving arms.
- 10. A device for correcting robot arm motion, characterized by comprising: an acquisition module configured to acquire a target image of a robot arm grasping an object to be grasped, wherein the robot arm comprises a plurality of moving arms and a base, the target image comprises a plurality of target identifiers, and the plurality of target identifiers are respectively disposed on each of the plurality of moving arms, the base, and the object to be grasped; an extraction module configured to extract the plurality of target identifiers from the target image; a determining module configured to determine, according to the plurality of target identifiers and the target image, current pose information of each of the plurality of moving arms; and a correction module configured to correct the current robot arm motion according to the current pose information of each of the plurality of moving arms.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711305455.4 | 2017-12-11 | ||
CN201711305455.4A CN107813313A (zh) | 2017-12-11 | 2017-12-11 | 机械臂运动的校正方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019114339A1 true WO2019114339A1 (zh) | 2019-06-20 |
Family
ID=61606459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/104735 WO2019114339A1 (zh) | 2017-12-11 | 2018-09-08 | 机械臂运动的校正方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107813313A (zh) |
WO (1) | WO2019114339A1 (zh) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107813313A (zh) * | 2017-12-11 | 2018-03-20 | 南京阿凡达机器人科技有限公司 | 机械臂运动的校正方法和装置 |
CN108297079B (zh) * | 2018-03-30 | 2023-10-13 | 中山市中科智能制造研究院有限公司 | 一种蛇形机械臂及其姿态变化的获取方法 |
WO2019201423A1 (en) * | 2018-04-17 | 2019-10-24 | Abb Schweiz Ag | Method for controlling a robot arm |
CN108994830A (zh) * | 2018-07-12 | 2018-12-14 | 上海航天设备制造总厂有限公司 | 用于打磨机器人离线编程的系统标定方法 |
CN110741413B (zh) * | 2018-11-29 | 2023-06-06 | 深圳市瑞立视多媒体科技有限公司 | 一种刚体配置方法及光学动作捕捉方法 |
CN109397249B (zh) * | 2019-01-07 | 2020-11-06 | 重庆大学 | 基于视觉识别的二维码定位抓取机器人系统的方法 |
CN109829439B (zh) * | 2019-02-02 | 2020-12-29 | 京东方科技集团股份有限公司 | 一种对头部运动轨迹预测值的校准方法及装置 |
CN113613850B (zh) * | 2019-06-17 | 2022-08-12 | 西门子(中国)有限公司 | 一种坐标系校准方法、装置和计算机可读介质 |
KR102400965B1 (ko) * | 2019-11-25 | 2022-05-25 | 재단법인대구경북과학기술원 | 로봇 시스템 및 그 보정 방법 |
CN111002311A (zh) * | 2019-12-17 | 2020-04-14 | 上海嘉奥信息科技发展有限公司 | 基于光学定位仪的多段位移修正机械臂定位方法及系统 |
CN111360832B (zh) * | 2020-03-18 | 2021-04-20 | 南华大学 | 提高破拆机器人末端工具远程对接精度的方法 |
WO2022037356A1 (zh) * | 2020-08-19 | 2022-02-24 | 北京术锐技术有限公司 | 机器人系统以及控制方法 |
CN112164112B (zh) * | 2020-09-14 | 2024-05-17 | 北京如影智能科技有限公司 | 一种获取机械臂位姿信息的方法及装置 |
CN114619441B (zh) * | 2020-12-10 | 2024-03-26 | 北京极智嘉科技股份有限公司 | 机器人、二维码位姿检测的方法 |
CN113240739B (zh) * | 2021-04-29 | 2023-08-11 | 三一重机有限公司 | 一种挖掘机、属具的位姿检测方法、装置及存储介质 |
CN115153855B (zh) * | 2022-07-29 | 2023-05-05 | 中欧智薇(上海)机器人有限公司 | 一种微型机械臂的定位对准方法、装置及电子设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003117861A (ja) * | 2001-10-15 | 2003-04-23 | Denso Corp | ロボットの位置補正システム |
CN103400131A (zh) * | 2013-08-16 | 2013-11-20 | 徐宁 | 一种图像识别中的校正装置及其方法 |
CN105844277A (zh) * | 2016-03-22 | 2016-08-10 | 江苏木盟智能科技有限公司 | 标签识别方法和装置 |
CN106945049A (zh) * | 2017-05-12 | 2017-07-14 | 深圳智能博世科技有限公司 | 一种仿人机器人关节零位校准的方法 |
CN107813313A (zh) * | 2017-12-11 | 2018-03-20 | 南京阿凡达机器人科技有限公司 | 机械臂运动的校正方法和装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6418774B1 (en) * | 2001-04-17 | 2002-07-16 | Abb Ab | Device and a method for calibration of an industrial robot |
CN101093553A (zh) * | 2007-07-19 | 2007-12-26 | 成都博古天博科技有限公司 | 一种二维码系统及其识别方法 |
CN101430768B (zh) * | 2007-11-07 | 2013-05-15 | 成都市思博睿科技有限公司 | 一种二维条码的定位方法 |
JP2011209064A (ja) * | 2010-03-29 | 2011-10-20 | Fuji Xerox Co Ltd | 物品認識装置及びこれを用いた物品処理装置 |
JP5586015B2 (ja) * | 2010-06-10 | 2014-09-10 | 国立大学法人 東京大学 | 逆運動学を用いた動作・姿勢生成方法及び装置 |
CN104778491B (zh) * | 2014-10-13 | 2017-11-07 | 刘整 | 用于信息处理的图像码及生成与解析其的装置与方法 |
- 2017-12-11 CN CN201711305455.4A patent/CN107813313A/zh active Pending
- 2018-09-08 WO PCT/CN2018/104735 patent/WO2019114339A1/zh active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111702755A (zh) * | 2020-05-25 | 2020-09-25 | 淮阴工学院 | 一种基于多目立体视觉的机械臂智能控制系统 |
CN111702755B (zh) * | 2020-05-25 | 2021-08-17 | 淮阴工学院 | 一种基于多目立体视觉的机械臂智能控制系统 |
Also Published As
Publication number | Publication date |
---|---|
CN107813313A (zh) | 2018-03-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18887387 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18887387 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 09.12.2020) |
|