CN113781558A - Robot vision locating method with decoupled posture and position - Google Patents

Robot vision locating method with decoupled posture and position

Info

Publication number
CN113781558A
CN113781558A
Authority
CN
China
Prior art keywords
robot
target object
depth
target
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111010510.3A
Other languages
Chinese (zh)
Other versions
CN113781558B (en)
Inventor
陶波
徐锐
曹志宏
赵兴炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Beijing Institute of Electronic System Engineering
Original Assignee
Huazhong University of Science and Technology
Beijing Institute of Electronic System Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology, Beijing Institute of Electronic System Engineering filed Critical Huazhong University of Science and Technology
Priority to CN202111010510.3A priority Critical patent/CN113781558B/en
Publication of CN113781558A publication Critical patent/CN113781558A/en
Application granted granted Critical
Publication of CN113781558B publication Critical patent/CN113781558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/64: Analysis of geometric attributes of convexity or concavity
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of robot vision and discloses a robot vision locating method with decoupled posture and position, comprising the following steps: S1, performing position calibration with a target to obtain the target position depth d0 and calibrating the actual offset produced by moving one pixel at that depth; S2, adjusting the robot posture according to the normal vector of the plane where the target object lies, so that the robot end is parallel to that plane; S3, acquiring a camera image, recognizing the pixel coordinates of the target object at the current depth and the depth value d1, obtaining a centering compensation vector, and adjusting the robot so that the target object lies at the center of the camera image; then moving the robot along the depth direction to the calibrated height d0; and S4, acquiring the camera image again to obtain a locating compensation vector, and adjusting the robot according to it so that the robot reaches the target object. The calibration procedure of the invention is simple to operate and achieves high positioning accuracy, improving the operability and accuracy of robot visual locating.

Description

Robot vision locating method with decoupled posture and position
Technical Field
The invention belongs to the field of robot vision, and particularly relates to a robot vision locating method with decoupled posture and position.
Background
With the development of the intelligent manufacturing concept, robots are widely applied in systems such as automatic assembly and digital flexible manufacturing. Robot visual locating is an important part of such systems: the quality of the visual locating directly determines how accurately the subsequent working positions are reached, and thus affects the soundness of the overall scheme. Robot visual locating has many application scenarios, such as automatic drilling and riveting systems for aircraft skins, robots that recognize and grasp targets on industrial production lines, and visual navigation and mapping for mobile robots, and therefore has very broad application prospects.
According to how the camera is mounted, robot visual positioning can be divided into two main types: eye-in-hand, where the camera is mounted at the end of the manipulator and moves with it; and eye-to-hand, where the camera is mounted off the manipulator, fixed relative to its base, and does not move with it. Before visual positioning, the vision system usually has to be calibrated. Mainstream calibration methods use a checkerboard as the calibration tool, but the checkerboard occupies space and usually cannot be placed in the camera field of view together with the workpiece, so calibration can only be performed before production. Existing robot hand-eye calibration, especially high-precision calibration, often suffers from cumbersome calibration operations, complicated calculation principles, and excessive time cost.
Disclosure of Invention
Aiming at the defects or the improvement requirements of the prior art, the invention provides a robot vision locating method with decoupled posture and position, aiming at simplifying the robot vision calibration process, improving the accuracy and the reliability of robot vision location and solving the problems of low calibration efficiency and low location precision at present.
In order to achieve the purpose, the invention provides a robot vision locating method with decoupled posture and position, which comprises the following steps:
S1, performing position calibration with a target to obtain the depth of the target position as the calibration depth d0, and the actual offset (k_x0, k_y0) produced by moving one pixel at the calibration depth;
S2, teaching the robot, enabling a camera at the tail end of the robot to face a target object, obtaining a normal vector of a plane where the target object is located according to point cloud information of the target object obtained by the camera, and adjusting the posture of the robot according to the normal vector to enable the tail end of the robot to be parallel to the plane where the target object is located;
S3, acquiring a camera image, recognizing the pixel coordinates (U, V) of the target object at the current depth and the depth value d1 from the target object plane to the camera; computing, from the calibration depth d0 and the calibrated offset (k_x0, k_y0), the actual offset (k_x1, k_y1) produced by moving one pixel at depth d1; obtaining a centering compensation vector from (k_x1, k_y1) and the pixel coordinates (U, V) of the target object, and adjusting the robot in the horizontal direction according to the centering compensation vector so that the target object lies at the center of the camera image; then moving the robot along the depth direction to the calibrated height d0;
And S4, acquiring the camera image again, identifying the pixel coordinates (U', V') of the target object at the current depth to obtain a locating compensation vector, and adjusting the robot according to the locating compensation vector so that the robot reaches the target object, completing the robot visual locating.
Further preferably, in step S2, obtaining the normal vector of the plane where the target object lies specifically comprises the following steps: pointing the camera at the robot end toward the target object, segmenting a point cloud convex hull from the point cloud information of the target object acquired by the camera, moving the robot end so that the convex hull covers the target workpiece plane, and fitting a plane equation to the points segmented by the convex hull, thereby obtaining the normal vector n of the plane where the target object lies.
More preferably, in step S2, adjusting the robot posture according to the normal vector n specifically comprises the following steps:

First, the normal vector is transformed into the robot base coordinate system:

    n_base = R_be · R_ec · n_cam

where R_be is the rotation matrix from the end coordinate system to the base coordinate system and R_ec is the rotation matrix from the camera coordinate system to the end coordinate system;

then, from n_base and the vector (0, 0, 1), the robot rotation axis r and rotation angle θ are obtained, and the rotation (r, θ) is converted into a quaternion to obtain the end posture adjustment parameters of the robot, which are input to the robot to adjust its posture.
More preferably, in step S3, the actual offset (k_x1, k_y1) is calculated as:

    k_x1 = k_x0 · d1 / d0,  k_y1 = k_y0 · d1 / d0
further preferably, in step S3, the compensation vector is found
Figure BDA00032387560000000311
The specific calculation formula is as follows:
Figure BDA00032387560000000312
wherein width and depth represent the image width and height, respectively.
Further preferably, in step S4, the locating compensation vector P_locate is calculated as:

    P_locate = ( k_x0 · (U' - u0),  k_y0 · (V' - v0),  0 )

where (u0, v0) are the target pixel coordinates in the initial state recorded at position calibration.
Preferably, in step S1, performing position calibration with the target specifically comprises the following steps:

S11, mounting a camera and a test feeler rod at the robot end so that the tip of the feeler rod is visible in the camera field of view; moving the robot in teaching mode so that the tip of the feeler rod coincides with the target, then removing the feeler rod and obtaining the pixel coordinates (u0, v0) of the target and the depth value d0 of the target position;

S12, keeping the robot posture and height unchanged, translating the robot horizontally, and calibrating the actual offsets (k_x0, k_y0) in the x and y directions produced by moving one pixel.
More preferably, in step S12, acquiring the actual offset (k_x0, k_y0) comprises the following steps: keeping the robot posture and height unchanged, translating the robot horizontally, and reading the robot end coordinates (x1, y1, z1) and the recognized target pixel coordinates (u1, v1) at a first position, then the end coordinates (x2, y2, z2) and the recognized target pixel coordinates (u2, v2) at a second position; the actual offsets in the x and y directions produced by moving one pixel at that depth are

    k_x0 = (x2 - x1) / (u2 - u1),  k_y0 = (y2 - y1) / (v2 - v1)
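By way of illustration, the calibration in step S12 can be sketched in a few lines of Python; the function name calibrate_pixel_offset and the argument layout are assumptions introduced here, not part of the patent:

```python
# Sketch of step S12: per-pixel metric offset at the calibration depth d0.
# Assumption: robot end coordinates and pixel coordinates refer to the same
# target recognized at two horizontally shifted positions.
def calibrate_pixel_offset(p1, uv1, p2, uv2):
    """p1, p2: (x, y, z) robot end coordinates; uv1, uv2: (u, v) target pixels."""
    (x1, y1, _), (u1, v1) = p1, uv1
    (x2, y2, _), (u2, v2) = p2, uv2
    k_x0 = (x2 - x1) / (u2 - u1)   # metric shift per pixel along x
    k_y0 = (y2 - y1) / (v2 - v1)   # metric shift per pixel along y
    return k_x0, k_y0

# Example usage with made-up readings:
# k_x0, k_y0 = calibrate_pixel_offset((500.0, 120.0, 300.0), (640, 360),
#                                     (520.0, 135.0, 300.0), (690, 398))
```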
Generally, compared with the prior art, the above technical solution conceived by the present invention mainly has the following technical advantages:
1. The invention calibrates only the actual offset produced by one pixel in a plane of fixed depth, decouples the robot posture from its position and handles them in separate steps, and achieves accurate locating of the target through centering compensation, depth compensation and locating compensation. The calibration has low time cost and is simple to operate, suits a variety of locating and operating scenarios, and greatly improves the accuracy and reliability of robot visual locating.
2. When adjusting the robot posture, a point cloud convex hull is segmented from the point cloud information, the robot end is moved so that the convex hull covers the target workpiece plane, and a plane equation is then fitted to the points segmented by the convex hull. This quickly and accurately yields the normal vector perpendicular to the plane of the target object, which is then converted into end posture parameters for adjusting the robot posture.
Drawings
FIG. 1 is a schematic diagram of an environmental application of a robot vision locating method with decoupled posture and position according to an embodiment of the present invention;
FIG. 2 is a schematic view of the integrated fixture for the depth camera and the test feeler rod according to an embodiment of the invention;
fig. 3 is a flowchart of a robot vision locating method for decoupling posture and position according to an embodiment of the present invention.
The same reference numbers are used throughout the drawings to refer to the same or similar elements or structures, wherein: 1 - robot body, 2 - depth camera, 3 - normal vector, 4 - plane to be processed, 5 - integrated fixture, 6 - test feeler rod.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The robot vision locating method with the decoupled posture and position, provided by the embodiment of the invention, as shown in fig. 3, comprises the following steps:
s1, position calibration is carried out, as shown in figures 1 and 2, an integrated clamp 5 is installed at the tail end of the robot body 1, a depth camera 2 and a test touch rod 6 are installed on the integrated clamp 5, the tail end of the test touch rod can be seen in the field of view of the depth camera, and calibration is carried outPixel coordinate (u) of the end of the feeler lever0,v0) Reading the depth value of the target position as d0(ii) a The method specifically comprises the following steps:
s11, designing an integrated clamp of the depth camera and the trial touch device, wherein the plane of the depth camera is parallel to the plane of the tail end of the robot, and the center of the trial touch rod is horizontally deviated from the tail end dpoleThe length of the other direction is lpoleInstalling a clamp to control the tail end of the robot to be vertical to a horizontal plane;
s12, placing a calibration plate on the horizontal plane, attaching a positioning target on the calibration plate, moving the robot in a teaching mode without changing the tail end posture of the robot, enabling the tail end of the test touch rod to be just overlapped with the target, moving the test touch rod away, reading and obtaining a target pixel coordinate record (u & ltu & gt)0,v0) And the depth value of the target position is recorded as d0
S2, keeping the robot posture and height unchanged, translating the robot horizontally, calibrating the actual offsets (k_x0, k_y0) in the x and y directions produced by moving one pixel at depth d0, and calculating the camera field-of-view angles. This specifically comprises the following steps:

S21: keeping the robot posture and height unchanged, translate the robot horizontally and read the robot end coordinates (x1, y1, z1) and the recognized target pixel coordinates (u1, v1) at a first position, then the end coordinates (x2, y2, z2) and the recognized target pixel coordinates (u2, v2) at a second position; the actual offsets in the x and y directions produced by moving one pixel at that depth are

    k_x0 = (x2 - x1) / (u2 - u1),  k_y0 = (y2 - y1) / (v2 - v1)
S22, the camera field-of-view angles are calculated from the calibrated offsets: the horizontal field-of-view angle θ_x from k_x0, the image width and the depth d0, and the vertical field-of-view angle θ_y from k_y0, the image height and d0, where width and height denote the image width and height in pixels, respectively.
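A minimal sketch of the field-of-view calculation in step S22, assuming the standard pinhole relation between the calibrated per-pixel offsets, the calibration depth d0 and the image size; the patent's own equations are rendered as images, so the arctan form below is an assumption:

```python
import math

# Sketch of step S22: camera field-of-view angles from the calibrated
# per-pixel offsets (k_x0, k_y0) at depth d0. width/height are in pixels;
# k_x0, k_y0 and d0 share the same metric unit.
def field_of_view(k_x0, k_y0, d0, width, height):
    theta_x = 2.0 * math.atan(k_x0 * width  / (2.0 * d0))  # horizontal FOV (rad)
    theta_y = 2.0 * math.atan(k_y0 * height / (2.0 * d0))  # vertical FOV (rad)
    return theta_x, theta_y
```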
S3, after the position calibration is completed, the test feeler rod is removed; the center of the tool end is offset from the robot end by (t_x, t_y, t_z), a value determined by the dimensions of the tool-end fixture. The robot is taught so that the camera faces the workpiece (i.e. the target object), a point cloud convex hull is segmented from the point cloud information acquired by the camera, and the robot end is moved so that the region inside the convex hull contains only point cloud data of the target workpiece plane; a plane equation is then fitted to the points segmented by the convex hull to obtain the normal vector n_cam. Specifically, the segmented point cloud convex hull may be a polygon, a circle or another two-dimensional shape, and its area should be smaller than the imaged point cloud region of the target workpiece plane.
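The plane fit described above can be sketched as follows, assuming the convex hull has already been used to crop the point cloud to an N x 3 array of workpiece-plane points; the SVD-based least-squares fit shown here is one common choice, as the patent does not name a specific fitting algorithm:

```python
import numpy as np

# Fit a plane to the convex-hull-cropped point cloud and return its unit
# normal in the camera frame. points: (N, 3) array of 3-D points.
def fit_plane_normal(points):
    centroid = points.mean(axis=0)
    # Right singular vector with the smallest singular value = direction of
    # least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    # Orient the normal toward the camera (camera looks along +z).
    if normal[2] > 0:
        normal = -normal
    return normal / np.linalg.norm(normal)
```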
The normal vector n_cam is then converted into end posture parameters of the robot, which are input to the robot to adjust its posture so that the robot end is parallel to the workpiece plane. This specifically comprises the following steps:

The normal vector is transformed into the robot base coordinate system as

    n_base = R_be · R_ec · n_cam

where R_be is the rotation matrix from the end coordinate system to the base coordinate system, derived from the robot pose, and R_ec is the rotation matrix from the camera coordinate system to the end coordinate system, determined by the way the camera is mounted.

After normalizing n_base, its dot product and cross product with the vector (0, 0, 1) are computed to obtain the robot rotation axis r and rotation angle θ; the rotation (r, θ) is converted into a quaternion q0 = (qx, qy, qz, qw), which represents the robot end being parallel to the workpiece plane and is input to the robot as the end posture adjustment parameter.
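A sketch of this posture adjustment, under the assumptions that the reference axis is (0, 0, 1) and the quaternion is ordered (qx, qy, qz, qw); R_be and R_ec are the rotation matrices defined above:

```python
import numpy as np

# Sketch of the posture adjustment: transform the camera-frame normal into
# the base frame, compute the rotation axis/angle against the reference
# axis (0, 0, 1), and convert to a quaternion (qx, qy, qz, qw).
def posture_quaternion(n_cam, R_be, R_ec):
    n_base = R_be @ R_ec @ np.asarray(n_cam, dtype=float)
    n_base /= np.linalg.norm(n_base)
    z_ref = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z_ref, n_base)                               # rotation axis
    angle = np.arccos(np.clip(np.dot(z_ref, n_base), -1.0, 1.0)) # rotation angle
    if np.linalg.norm(axis) < 1e-9:        # already aligned: identity rotation
        return np.array([0.0, 0.0, 0.0, 1.0])
    axis /= np.linalg.norm(axis)
    half = angle / 2.0
    qx, qy, qz = axis * np.sin(half)
    qw = np.cos(half)
    return np.array([qx, qy, qz, qw])
```

If the transformed normal is already parallel to the reference axis, the cross product vanishes and the sketch simply returns the identity quaternion.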
S4, a camera image is acquired, the pixel coordinates (U, V) of the target object at the current depth are identified, and the depth value d1 from the workpiece plane to the camera is obtained. At this depth d1 the actual offset produced by moving one pixel is obtained from the following two equations:

    x offset: k_x1 = k_x0 · d1 / d0
    y offset: k_y1 = k_y0 · d1 / d0

From this position information the centering compensation vector is calculated so that the target object can be brought to the center of the camera image. Specifically, the centering compensation vector

    P_center = ( k_x1 · (U - width/2),  k_y1 · (V - height/2),  0 )

is converted to base coordinates and added to the current robot position to obtain the coordinates of the centered position in the robot base frame; the robot is adjusted in the horizontal direction according to these coordinates so that the target object lies at the center of the camera image. The robot is then moved a distance d1 - d0 along the depth direction to the calibrated height d0, completing the centering compensation and the depth compensation.
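The centering and depth compensation of this step can be sketched as follows; the sign convention and the camera-to-base rotation R_bc are illustrative assumptions:

```python
import numpy as np

# Sketch of step S4: scale the calibrated offsets to the current depth d1,
# build the centering compensation vector from the pixel error to the image
# centre, express it in the base frame, and compute the depth move back to d0.
def centering_and_depth_move(U, V, d1, d0, k_x0, k_y0, width, height, R_bc):
    k_x1 = k_x0 * d1 / d0            # per-pixel offset at depth d1
    k_y1 = k_y0 * d1 / d0
    p_center_cam = np.array([k_x1 * (U - width / 2.0),
                             k_y1 * (V - height / 2.0),
                             0.0])   # centering compensation, camera frame
    # R_bc: rotation from camera frame to base frame (R_be @ R_ec in the
    # notation of the description).
    p_center_base = R_bc @ p_center_cam
    depth_move = d1 - d0             # move along the depth direction back to d0
    return p_center_base, depth_move
```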
S5, the target pixel coordinates (U', V') are recognized again to obtain the locating compensation vector of the robot,

    P_locate = ( k_x0 · (U' - u0),  k_y0 · (V' - v0),  0 ),

together with the tool end offset (t_x, t_y, t_z). Both are converted to base coordinates and combined with the current robot position to obtain the coordinates of the target point in the robot base frame; the robot is then controlled to reach the target point, completing the robot visual locating.
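Likewise, the locating compensation of step S5 can be sketched as below; the helper function, the way the tool offset is added, and the signs are illustrative assumptions:

```python
import numpy as np

# Sketch of step S5: locating compensation from the re-recognized target
# pixel (U', V') against the calibrated pixel (u0, v0), expressed in the
# base frame and combined with the current position and tool offset.
def locate_target(Up, Vp, u0, v0, k_x0, k_y0, p_cur_base, t_tool_base, R_bc):
    p_locate_cam = np.array([k_x0 * (Up - u0),
                             k_y0 * (Vp - v0),
                             0.0])              # locating compensation, camera frame
    # p_cur_base: current robot position; t_tool_base: tool end offset in base frame.
    p_target_base = p_cur_base + R_bc @ p_locate_cam + t_tool_base
    return p_target_base                        # commanded robot target point
```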
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A robot vision locating method with decoupled posture and position is characterized by comprising the following steps:
S1, performing position calibration with a target to obtain the depth of the target position as the calibration depth d0, and the actual offset (k_x0, k_y0) produced by moving one pixel at the calibration depth;
S2, teaching the robot, enabling a camera at the tail end of the robot to face a target object, obtaining a normal vector of a plane where the target object is located according to point cloud information of the target object obtained by the camera, and adjusting the posture of the robot according to the normal vector to enable the tail end of the robot to be parallel to the plane where the target object is located;
S3, acquiring a camera image, recognizing the pixel coordinates (U, V) of the target object at the current depth and the depth value d1 from the target object plane to the camera; computing, from the calibration depth d0 and the calibrated offset (k_x0, k_y0), the actual offset (k_x1, k_y1) produced by moving one pixel at depth d1; obtaining a centering compensation vector from (k_x1, k_y1) and the pixel coordinates (U, V) of the target object, and adjusting the robot in the horizontal direction according to the centering compensation vector so that the target object lies at the center of the camera image; then moving the robot along the depth direction to the calibrated height d0;
And S4, acquiring the camera image again, identifying the pixel coordinates (U', V') of the target object at the current depth to obtain a locating compensation vector, and adjusting the robot according to the locating compensation vector so that the robot reaches the target object, completing the robot visual locating.
2. The posture- and position-decoupled robot vision locating method according to claim 1, wherein in step S2, obtaining the normal vector of the plane where the target object lies specifically comprises the following steps: pointing the camera at the robot end toward the target object, segmenting a point cloud convex hull from the point cloud information of the target object acquired by the camera, moving the robot end so that the convex hull covers the target workpiece plane, and fitting a plane equation to the points segmented by the convex hull, thereby obtaining the normal vector n of the plane where the target object lies.
3. The posture- and position-decoupled robot vision locating method according to claim 2, wherein in step S2, adjusting the robot posture according to the normal vector n specifically comprises the following steps:

First, the normal vector is transformed into the robot base coordinate system:

    n_base = R_be · R_ec · n_cam

where R_be is the rotation matrix from the end coordinate system to the base coordinate system and R_ec is the rotation matrix from the camera coordinate system to the end coordinate system;

then, from n_base and the vector (0, 0, 1), the robot rotation axis r and rotation angle θ are obtained, and the rotation (r, θ) is converted into a quaternion to obtain the end posture adjustment parameters of the robot, which are input to the robot to adjust its posture.
4. The posture- and position-decoupled robot vision locating method according to claim 1, wherein in step S3, the actual offset (k_x1, k_y1) is calculated as:

    k_x1 = k_x0 · d1 / d0,  k_y1 = k_y0 · d1 / d0
5. The posture- and position-decoupled robot vision locating method according to claim 4, wherein in step S3, the centering compensation vector P_center is calculated as:

    P_center = ( k_x1 · (U - width/2),  k_y1 · (V - height/2),  0 )

where width and height denote the image width and height, respectively.
6. The posture- and position-decoupled robot vision locating method according to claim 1, wherein in step S4, the locating compensation vector P_locate is calculated as:

    P_locate = ( k_x0 · (U' - u0),  k_y0 · (V' - v0),  0 )

where (u0, v0) are the target pixel coordinates in the initial state recorded at position calibration.
7. The posture- and position-decoupled robot vision locating method according to any one of claims 1 to 6, wherein in step S1, performing position calibration with the target specifically comprises the following steps:

S11, mounting a camera and a test feeler rod at the robot end so that the tip of the feeler rod is visible in the camera field of view; moving the robot in teaching mode so that the tip of the feeler rod coincides with the target, then removing the feeler rod and obtaining the pixel coordinates (u0, v0) of the target and the depth value d0 of the target position;

S12, keeping the robot posture and height unchanged, translating the robot horizontally, and calibrating the actual offsets (k_x0, k_y0) in the x and y directions produced by moving one pixel.
8. The posture- and position-decoupled robot vision locating method according to claim 7, wherein in step S12, acquiring the actual offset (k_x0, k_y0) comprises the following steps: keeping the robot posture and height unchanged, translating the robot horizontally, and reading the robot end coordinates (x1, y1, z1) and the recognized target pixel coordinates (u1, v1) at a first position, then the end coordinates (x2, y2, z2) and the recognized target pixel coordinates (u2, v2) at a second position; the actual offsets in the x and y directions produced by moving one pixel at that depth are

    k_x0 = (x2 - x1) / (u2 - u1),  k_y0 = (y2 - y1) / (v2 - v1)
CN202111010510.3A 2021-08-31 2021-08-31 Robot vision locating method with decoupling gesture and position Active CN113781558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111010510.3A CN113781558B (en) 2021-08-31 2021-08-31 Robot vision locating method with decoupling gesture and position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111010510.3A CN113781558B (en) 2021-08-31 2021-08-31 Robot vision locating method with decoupling gesture and position

Publications (2)

Publication Number Publication Date
CN113781558A (en) 2021-12-10
CN113781558B CN113781558B (en) 2024-03-19

Family

ID=78840198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111010510.3A Active CN113781558B (en) 2021-08-31 2021-08-31 Robot vision locating method with decoupling gesture and position

Country Status (1)

Country Link
CN (1) CN113781558B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783068A (en) * 2022-06-16 2022-07-22 深圳市信润富联数字科技有限公司 Gesture recognition method, gesture recognition device, electronic device and storage medium
CN116117800A (en) * 2022-12-19 2023-05-16 广东建石科技有限公司 Machine vision processing method for compensating height difference, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104647387A (en) * 2013-11-25 2015-05-27 佳能株式会社 Robot control method, system and device
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision
CN110146099A (en) * 2019-05-31 2019-08-20 西安工程大学 A kind of synchronous superposition method based on deep learning
US20210023694A1 (en) * 2019-07-23 2021-01-28 Qingdao university of technology System and method for robot teaching based on rgb-d images and teach pendant

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104647387A (en) * 2013-11-25 2015-05-27 佳能株式会社 Robot control method, system and device
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision
CN110146099A (en) * 2019-05-31 2019-08-20 西安工程大学 A kind of synchronous superposition method based on deep learning
US20210023694A1 (en) * 2019-07-23 2021-01-28 Qingdao university of technology System and method for robot teaching based on rgb-d images and teach pendant

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783068A (en) * 2022-06-16 2022-07-22 深圳市信润富联数字科技有限公司 Gesture recognition method, gesture recognition device, electronic device and storage medium
CN114783068B (en) * 2022-06-16 2022-11-15 深圳市信润富联数字科技有限公司 Gesture recognition method, gesture recognition device, electronic device and storage medium
CN116117800A (en) * 2022-12-19 2023-05-16 广东建石科技有限公司 Machine vision processing method for compensating height difference, electronic device and storage medium
CN116117800B (en) * 2022-12-19 2023-08-01 广东建石科技有限公司 Machine vision processing method for compensating height difference, electronic device and storage medium

Also Published As

Publication number Publication date
CN113781558B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN108717715B (en) Automatic calibration method for linear structured light vision system of arc welding robot
CN109550649B (en) Dispensing positioning method and device based on machine vision
JP4021413B2 (en) Measuring device
CN110146038B (en) Distributed monocular camera laser measuring device and method for assembly corner of cylindrical part
CN111775146A (en) Visual alignment method under industrial mechanical arm multi-station operation
CN109781164B (en) Static calibration method of line laser sensor
US10871366B2 (en) Supplementary metrology position coordinates determination system for use with a robot
CN110136204B (en) Sound film dome assembly system based on calibration of machine tool position of bilateral telecentric lens camera
CN101539397B (en) Method for measuring three-dimensional attitude of object on precision-optical basis
CN113781558A (en) Robot vision locating method with decoupled posture and position
US10913156B2 (en) Robot system with end tool metrology position coordinates determination system
US20200262080A1 (en) Comprehensive model-based method for gantry robot calibration via a dual camera vision system
CN112792814B (en) Mechanical arm zero calibration method based on visual marks
JP2015062991A (en) Coordinate system calibration method, robot system, program, and recording medium
CN113510708B (en) Contact industrial robot automatic calibration system based on binocular vision
JP6855491B2 (en) Robot system, robot system control device, and robot system control method
US20220230348A1 (en) Method and apparatus for determining a three-dimensional position and pose of a fiducial marker
CN110695982A (en) Mechanical arm hand-eye calibration method and device based on three-dimensional vision
CN109737871B (en) Calibration method for relative position of three-dimensional sensor and mechanical arm
JP2019052983A (en) Calibration method and calibrator
KR20170087996A (en) Calibration apparatus and the method for robot
CN112958960A (en) Robot hand-eye calibration device based on optical target
JP2007533963A (en) Non-contact optical measuring method and measuring apparatus for 3D position of object
CN110595374A (en) Large structural part real-time deformation monitoring method based on image transmission machine
CN112762822B (en) Mechanical arm calibration method and system based on laser tracker

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant