CN110842918B - Robot mobile processing autonomous locating method based on point cloud servo - Google Patents

Robot mobile processing autonomous locating method based on point cloud servo

Info

Publication number
CN110842918B
CN110842918B (application CN201911015315.2A)
Authority
CN
China
Prior art keywords
point cloud
mobile
trolley
depth camera
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911015315.2A
Other languages
Chinese (zh)
Other versions
CN110842918A (en)
Inventor
龚泽宇
张士羿
陶波
尹周平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201911015315.2A
Publication of CN110842918A
Application granted
Publication of CN110842918B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/005 Manipulators for mechanical processing tasks
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the technical field of robot positioning and servo control, and discloses a point cloud servo-based robot mobile processing autonomous locating method, which comprises the following steps: (1) providing a large component to be processed and a mobile robot platform, wherein the mobile robot platform comprises a mobile trolley and an RGBD depth camera fixedly connected to the mobile trolley, and constructing a world coordinate system, a camera coordinate system and a trolley coordinate system; (2) calculating the virtual point cloud image of the large component as seen when the mobile trolley is at the ideal processing position; (3) acquiring an actual point cloud image, and then obtaining the velocity of the RGBD depth camera through point cloud servo control; (4) calculating the velocity that moves the mobile trolley to the ideal position; (5) after the mobile trolley moves for a preset time, executing steps (3) to (4) in a loop until the mobile robot platform moves to the ideal position, thereby realizing autonomous locating. The invention reduces cost and improves accuracy.

Description

Robot mobile processing autonomous locating method based on point cloud servo
Technical Field
The invention belongs to the technical field related to robot positioning and servo control, and particularly relates to a robot mobile processing autonomous locating method based on point cloud servo.
Background
Large complex components, such as wind power blades, aviation components and high-speed rail shells, are key functional components of wind power generation, large airplanes and high-speed rail motor cars, and are widely applied in the fields of energy, traffic and aviation. At present, such large complex components are still processed mainly by hand, which generally suffers from a harsh working environment and low processing efficiency; abroad, a small number of gantry-type special-purpose numerical control machines are used for blade processing, but they offer poor flexibility and high manufacturing cost.
Compared with manual machining, robot machining offers safe operation, high machining efficiency and high machining precision; compared with machining on a fixed guide rail or a numerical control machine tool, mobile robot machining offers strong flexibility and a wide working range. Robot mobile machining is therefore the trend for machining large complex components. The difficulty of robot mobile machining lies in determining the accurate relative pose between the robot and the surface of the large component to be machined; accurately acquiring this relative pose in real time is the key to successful machining, so the locating technology of the mobile robot is an urgent problem to be solved.
Disclosure of Invention
Aiming at the defects and improvement needs of the prior art, the invention provides a point cloud servo-based robot mobile machining autonomous locating method, researched and designed around the manufacturing characteristics of existing large complex components to achieve better accuracy. The method observes the surface of the large component in real time through an RGBD depth camera fixedly connected to the moving trolley, and constructs a servo controller based on the point cloud image of the component surface obtained by real-time observation together with the point cloud image of the component surface as observed from the robot's ideal machining position, obtained by virtual imaging. The controller drives the mobile machining robot to the ideal position, achieving the purpose of locating, with the advantages of low cost, real-time feedback and high locating control precision.
In order to achieve the above object, according to one aspect of the present invention, there is provided a point cloud servo-based robot mobile processing autonomous locating method, suitable for locating when a large member is processed in a robot mobile processing mode, which mainly includes the following steps:
(1) providing a large-scale component and a mobile robot platform, wherein the mobile robot platform comprises a mobile trolley and an RGBD depth camera fixedly connected to the mobile trolley, and constructing a world coordinate system, a camera coordinate system of the RGBD depth camera and a trolley coordinate system of the mobile trolley;
(2) determining the rotation matrix ${}^{V}R_{C}$ of the RGBD depth camera relative to the trolley coordinate system and the translation vector ${}^{V}t_{C}$ of the RGBD depth camera relative to the mobile trolley, and further constructing the homogeneous transformation matrix ${}^{V}T_{C}$ of the RGBD depth camera relative to the mobile trolley;
(3) scanning the surface of the large member to obtain the three-dimensional coordinates of each point on the surface of the large member in the world coordinate system, recorded as $P_W$; and determining the ideal pose of the mobile processing robot platform relative to the large member when it processes the large member, wherein the ideal pose is the pose relation of the trolley coordinate system relative to the world coordinate system, with corresponding homogeneous transformation matrix ${}^{W}T_{V}$;
(4) according to the homogeneous transformation matrix ${}^{V}T_{C}$ and the homogeneous transformation matrix ${}^{W}T_{V}$, determining the pose relation ${}^{C}T_{W}$ between the camera coordinate system and the world coordinate system when the mobile trolley is at the ideal processing position; and further, according to the three-dimensional coordinates $P_W$ and the pose relation ${}^{C}T_{W}$, determining the position $P_C$ of each point on the surface of the large member in the camera coordinate system when the mobile trolley is at the ideal processing position;
(5) according to the positions $P_C$ of the points on the surface of the large member in the camera coordinate system and the pinhole imaging principle of the camera, calculating the virtual point cloud image $P_d$ of the large member when the mobile trolley is at the ideal processing position;
(6) observing the surface of the large member in real time with the RGBD depth camera to obtain an actual point cloud image of the large member, and, from the virtual point cloud image and the actual point cloud image, obtaining the velocity $V_C$ of the RGBD depth camera through point cloud servo control;
(7) according to the obtained pose relation between the RGBD depth camera and the mobile trolley and the velocity $V_C$ of the RGBD depth camera, calculating the velocity $V_V$ that moves the mobile trolley to the ideal position;
(8) after the mobile trolley moves for a preset time, executing steps (6) to (7) in a loop until the mobile robot platform moves to the ideal position, thereby realizing autonomous locating.
Further, the mobile robot platform further comprises a frame, and the frame is fixedly connected with the mobile trolley and the RGBD depth camera; the surface of the frame that carries the RGBD depth camera is parallel to the surface of the mobile trolley that carries the frame, and the lens plane of the RGBD depth camera is perpendicular to the surface of the frame that carries the RGBD depth camera.
Further, the rotation matrix ${}^{V}R_{C}$ is the constant matrix determined by the parallel alignment of the camera and trolley axes, and the translation vector ${}^{V}t_{C}$ of the RGBD depth camera 1 relative to the mobile trolley is obtained through parameter calibration.
Further, the pose relation ${}^{C}T_{W}$ is calculated by the formula:
$${}^{C}T_{W} = \left({}^{W}T_{V}\,{}^{V}T_{C}\right)^{-1}$$
Further, the velocity $V_C$ of the RGBD depth camera is calculated by the formula:
$$V_C = -\lambda\,L_{ZN}^{+}\,(Z - Z^{*})$$
where $\lambda$ is a scale parameter; $Z - Z^{*}$ is the error between the actual point cloud image and the ideal point cloud image; and $L_{ZN}^{+}$ is the pseudo-inverse of the point cloud image interaction matrix $L_{ZN}$.
Further, Z-Z*Expressing the error value between the actual point cloud image and the ideal point cloud image, and the calculation formula is as follows:
Figure BDA0002245511030000041
in the formula, ZiThe depth value of the ith point in the actual point cloud image shot by the RGBD depth camera is represented on the actual position of the mobile trolley;
Figure BDA0002245511030000042
representing the depth value of the ith point in an ideal point cloud image obtained by virtual imaging of an RGBD depth camera on the ideal position of the mobile trolley; wherein, i is 1,2.. n, n represents that each point cloud image has n pixel points.
Further, the air conditioner is provided with a fan,
Figure BDA0002245511030000043
the calculation formula of (2) is as follows:
Figure BDA0002245511030000044
in the formula, LZNIs the interaction moment of n pixel points.
Further, LZNThe calculation formula of (2) is as follows:
Figure BDA0002245511030000045
wherein L isZiAnd (i ═ 1,2,3.. n) is an interaction matrix of the ith pixel point in the point cloud image.
Further, an interaction matrix L of individual pixelsZThe calculation formula of (2) is as follows:
Figure BDA0002245511030000046
wherein the content of the first and second substances,
Figure BDA0002245511030000047
respectively representing the gradients of the depth values in the horizontal and vertical directions of the image pixel coordinate system; l isu,Lv,LPZCalculated using the following formula, respectively:
Figure BDA0002245511030000051
wherein (F)x,Fy,u0,v0) Are internal parameters of the RGBD depth camera; (u, v) are pixel coordinates in the point cloud image; and Z is the depth value of each pixel point of the point cloud image.
Generally, compared with the prior art, the point cloud servo-based robot mobile processing autonomous locating method provided by the invention has the following beneficial effects:
1. The invention provides an autonomous locating method combining RGBD vision technology, suitable for locating when a large member is processed in a robot mobile processing mode; it helps to determine the relative pose accurately, improves precision and efficiency, and has strong applicability.
2. The only detection instrument required by the invention is the RGBD depth camera, which suffices to control the mobile robot platform to move to the ideal position and thereby realize autonomous locating, reducing cost; the applicability is strong.
3. The method observes the surface of the large component in real time through an RGBD depth camera fixedly connected to the moving trolley, and constructs a servo controller based on the point cloud image of the component surface obtained by real-time observation together with the point cloud image of the component surface at the robot's ideal processing position obtained by virtual imaging, so as to control the mobile processing robot to move to the ideal position and achieve the purpose of locating, with the advantages of low cost, real-time feedback and high locating control precision.
4. The method is easy to implement, has good flexibility, and is conducive to popularization and application.
Drawings
FIG. 1 is a schematic flow chart of an autonomous locating method for mobile processing of a robot based on a point cloud servo according to the present invention;
fig. 2 is a schematic structural diagram of a mobile processing robot platform involved in the point cloud servo-based robot mobile processing autonomous locating method in fig. 1;
fig. 3 is a schematic diagram of the respective structural coordinate systems of the mobile machining robot platform in fig. 2.
The same reference numbers are used throughout the drawings to refer to the same or similar elements or structures, wherein: 1: RGBD depth camera; 2: frame; 3: moving trolley; 4: large component.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, the point cloud servo-based robot mobile processing autonomous locating method provided by the invention is suitable for locating when a large complex component is processed in a robot mobile processing mode; the only sensor adopted is an RGBD depth camera, so the method has the advantages of low cost, real-time feedback and high locating control precision.
Referring to fig. 2 and 3, the point cloud servo-based robot mobile processing autonomous locating method mainly includes the following steps:
the method comprises the steps of firstly, providing a large component and a mobile robot platform, wherein the mobile robot platform comprises a mobile trolley and an RGBD depth camera fixedly connected to the mobile trolley, and constructing a world coordinate system, a camera coordinate system of the RGBD depth camera and a trolley coordinate system of the mobile trolley.
Specifically, the mobile robot platform comprises an RGBD depth camera 1, a frame 2 and a moving trolley 3, wherein the frame 2 is arranged on the moving trolley 3 and the RGBD depth camera 1 is arranged on the frame 2, so that the RGBD depth camera 1 is fixed on the moving trolley 3 through the frame 2. The surface of the frame 2 that carries the RGBD depth camera 1 is parallel to the surface of the moving trolley 3 that carries the frame 2; the internal parameters of the RGBD depth camera 1 are calibrated and known; and the moving trolley 3 can carry equipment such as a mechanical arm for processing the large component 4.
In the present embodiment, the moving trolley 3 may be an unmanned ground vehicle (UGV), an automated guided vehicle (AGV) or the like; the RGBD depth camera 1 may be any RGBD depth camera that can acquire depth images in real time, such as a Kinect2 or RealSense; and the relative pose between the moving trolley 3 and the RGBD depth camera 1 does not change.
The coordinate origin of the world coordinate system is located at the lower-left corner of the large member 4, with the $X_w$ axis horizontal along the long axis of the large member and the $Z_w$ axis vertically upward; the direction of the $Y_w$ axis is determined by $Z_w \times X_w$. The trolley coordinate system takes as its coordinate origin the intersection of the vertical line through the centre of mass of the moving trolley 3 with the surface of the trolley that carries the frame 2; the $X_V$ axis points from the origin to the right side of the trolley, parallel to the line connecting its two front wheels; the $Y_V$ axis points from the origin to the front of the trolley, lying in the upper surface of the trolley and perpendicular to the $X_V$ axis; and the $Z_V$ axis is perpendicular to the upper surface of the trolley and to both the $X_V$ and $Y_V$ axes.
The lens plane of the RGBD depth camera 1 is perpendicular to the surface of the frame 2 that carries it; the $X_C$, $Z_C$ and $Y_C$ axes of the camera coordinate system are respectively parallel to the $X_V$, $Y_V$ and $Z_V$ axes of the trolley coordinate system.
Step two, determining the rotation matrix ${}^{V}R_{C}$ of the RGBD depth camera relative to the trolley coordinate system and the translation vector ${}^{V}t_{C}$ of the RGBD depth camera relative to the mobile trolley, and further constructing the homogeneous transformation matrix ${}^{V}T_{C}$ of the RGBD depth camera relative to the mobile trolley.
Specifically, the rotation matrix ${}^{V}R_{C}$ of the RGBD depth camera 1 relative to the trolley coordinate system is determined by the axis alignment described above, and the translation vector ${}^{V}t_{C}$ of the RGBD depth camera 1 relative to the moving trolley 3 is obtained through parameter calibration. The homogeneous transformation matrix of the RGBD depth camera 1 relative to the moving trolley 3 is then constructed as
$${}^{V}T_{C} = \begin{bmatrix} {}^{V}R_{C} & {}^{V}t_{C} \\ 0 & 1 \end{bmatrix}$$
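By way of illustration only, the following Python/NumPy sketch assembles ${}^{V}T_{C}$ from a rotation and a translation. The numerical values are hypothetical placeholders for the calibrated quantities, and the sign convention of the mounting (camera $Y_C$ pointing opposite to $Z_V$) is an assumption, not taken from the patent:

```python
import numpy as np

def homogeneous_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed mounting: X_C parallel to X_V, Z_C (optical axis) parallel to Y_V,
# Y_C antiparallel to Z_V. Columns are the camera axes expressed in the trolley frame.
R_VC = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, -1.0, 0.0]])
t_VC = np.array([0.0, 0.20, 0.50])        # placeholder offset in metres; use the calibrated value
T_VC = homogeneous_transform(R_VC, t_VC)  # ^V T_C
```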
Step three, scanning the surface of the large member to obtain the three-dimensional coordinates of each point on its surface in the world coordinate system, recorded as $P_W$; and determining the ideal pose of the mobile processing robot platform relative to the large member when it processes the large member, wherein the ideal pose is the pose relation of the trolley coordinate system relative to the world coordinate system, with corresponding homogeneous transformation matrix ${}^{W}T_{V}$.
Specifically, the surface of the large component 4 is scanned to obtain the three-dimensional coordinates of each point on the surface of the large component 4 in the world coordinate system $O_w$, denoted $P_W$. According to the processing requirements, the ideal pose of the mobile processing robot platform relative to the large member 4 during processing is determined; this ideal pose is the pose relation of the trolley coordinate system $O_v$ relative to the world coordinate system $O_w$, and the corresponding homogeneous transformation matrix is recorded as ${}^{W}T_{V}$.
Step four, according to the homogeneous transformation matrix ${}^{V}T_{C}$ and the homogeneous transformation matrix ${}^{W}T_{V}$, determining the pose relation ${}^{C}T_{W}$ between the camera coordinate system and the world coordinate system when the mobile trolley is at the ideal processing position; and further, according to the three-dimensional coordinates $P_W$ and the pose relation ${}^{C}T_{W}$, determining the position $P_C$ of each point on the surface of the large member in the camera coordinate system when the mobile trolley is at the ideal processing position.
Specifically, from the homogeneous transformation matrices ${}^{W}T_{V}$ and ${}^{V}T_{C}$, the pose relation between the camera coordinate system and the world coordinate system when the moving trolley 3 is at the ideal processing position is calculated and recorded as ${}^{C}T_{W}$, which can be calculated by the following formula:
$${}^{C}T_{W} = \left({}^{W}T_{V}\,{}^{V}T_{C}\right)^{-1}$$
The relation between the camera coordinate system and the world coordinate system is now known, as are the positions of the points of the large member 4 in the world coordinate system, so the position coordinates of the points on the surface of the large member 4 in the camera coordinate system can be obtained by solving. The coordinates of a point $i$ of the large component 4 in the world coordinate system are recorded as ${}^{W}P_i = ({}^{W}X_i,\ {}^{W}Y_i,\ {}^{W}Z_i)^T$, and its coordinates in the camera coordinate system as ${}^{C}P_i = ({}^{C}X_i,\ {}^{C}Y_i,\ {}^{C}Z_i)^T$.
The homogeneous transformation matrix ${}^{C}T_{W}$ is decomposed into a rotation matrix ${}^{C}R_{W}$ and a translation vector ${}^{C}t_{W}$:
$${}^{C}T_{W} = \begin{bmatrix} {}^{C}R_{W} & {}^{C}t_{W} \\ 0 & 1 \end{bmatrix}$$
The coordinates of each point on the surface of the large member 4 in the camera coordinate system are then calculated using the following formula:
$${}^{C}P_i = {}^{C}R_{W}\,{}^{W}P_i + {}^{C}t_{W}$$
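A minimal sketch of this step in Python/NumPy, assuming `T_WV` holds the ideal trolley pose ${}^{W}T_{V}$, `T_VC` is built as in the earlier sketch, and `P_W` is an N×3 array of scanned surface points (all three are assumed inputs):

```python
import numpy as np

def transform_points(T, P):
    """Apply a 4x4 homogeneous transform T to an Nx3 array of points."""
    P_h = np.hstack([P, np.ones((P.shape[0], 1))])  # homogeneous coordinates
    return (T @ P_h.T).T[:, :3]

# ^C T_W = (^W T_V  ^V T_C)^{-1}: map world-frame points into the camera frame
T_CW = np.linalg.inv(T_WV @ T_VC)
P_C = transform_points(T_CW, P_W)
```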
Step five, according to the positions $P_C$ of the points on the surface of the large member in the camera coordinate system and the pinhole imaging principle of the camera, calculating the virtual point cloud image $P_d$ of the large member when the mobile trolley is at the ideal processing position.
Specifically, with the coordinates of each point on the surface of the large member 4 in the camera coordinate system known, the pixel position of each such point in the RGBD depth camera imaging plane is calculated through the pinhole imaging principle of the camera. According to the projection principle of the RGBD depth camera:
$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} F_x & 0 & u_0 \\ 0 & F_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix}$$
With the internal parameter matrix known after calibration, the coordinates $(X_C, Y_C, Z_C)^T$ of the surface points of the large member 4 in the camera coordinate system are projected to the RGBD depth camera imaging plane $(u, v)$ to obtain the virtual point cloud image.
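The projection can be sketched as follows in Python/NumPy. Keeping the nearest depth per pixel is an implementation assumption for handling points that project onto the same pixel; the image size and intrinsics are assumed inputs:

```python
import numpy as np

def render_virtual_depth(P_C, Fx, Fy, u0, v0, width, height):
    """Project camera-frame points through the pinhole model and keep the nearest
    depth per pixel, yielding a virtual depth (point cloud) image."""
    depth = np.full((height, width), np.inf)
    X, Y, Z = P_C[:, 0], P_C[:, 1], P_C[:, 2]
    front = Z > 0                                   # only points in front of the camera
    u = np.round(Fx * X[front] / Z[front] + u0).astype(int)
    v = np.round(Fy * Y[front] / Z[front] + v0).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    np.minimum.at(depth, (v[inside], u[inside]), Z[front][inside])
    return depth
```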
Step six, observing the surface of the large member in real time with the RGBD depth camera to obtain an actual point cloud image of the large member; and, from the virtual point cloud image and the actual point cloud image, obtaining the velocity $V_C$ of the RGBD depth camera through point cloud servo control.
Specifically, the mobile processing robot platform observes the surface of the large component 4 using the RGBD depth camera 1, and the observed point cloud image is recorded as $P_a$, the actual point cloud image obtained by the RGBD depth camera 1.
From the virtual point cloud image and the actual point cloud image, the velocity $V_C$ of the RGBD depth camera 1 is obtained through point cloud servo control; this velocity drives the RGBD depth camera 1 toward the desired position.
The velocity $V_C$ of the RGBD depth camera is calculated using the following formula:
$$V_C = -\lambda\,L_{ZN}^{+}\,(Z - Z^{*})$$
where $\lambda$ is a scale parameter and $Z - Z^{*}$ is the error between the actual point cloud image and the ideal point cloud image, given by:
$$Z - Z^{*} = \left(Z_1 - Z_1^{*},\ Z_2 - Z_2^{*},\ \ldots,\ Z_n - Z_n^{*}\right)^{T}$$
where $Z_i$ is the depth value of the $i$-th point in the actual point cloud image captured by the RGBD depth camera at the actual position of the mobile trolley; $Z_i^{*}$ is the depth value of the $i$-th point in the ideal point cloud image obtained by virtual imaging of the RGBD depth camera at the ideal position of the mobile trolley; and $i = 1, 2, \ldots, n$, where $n$ is the number of pixel points in each point cloud image.
$L_{ZN}^{+}$ is the pseudo-inverse of the point cloud image interaction matrix $L_{ZN}$, calculated by the following formula:
$$L_{ZN}^{+} = \left(L_{ZN}^{T} L_{ZN}\right)^{-1} L_{ZN}^{T}$$
A point cloud image has $n$ pixel points, and $L_{ZN}$ is the interaction matrix of the $n$ pixel points, which can be expressed as follows:
$$L_{ZN} = \begin{bmatrix} L_{Z1} \\ L_{Z2} \\ \vdots \\ L_{Zn} \end{bmatrix}$$
where $L_{Zi}$ ($i = 1, 2, \ldots, n$) is the interaction matrix of the $i$-th pixel point in the point cloud image.
The interaction matrix $L_Z$ of a single pixel is solved by:
$$L_Z = L_{P_Z} - \nabla Z_u\,L_u - \nabla Z_v\,L_v$$
where $\nabla Z_u$ and $\nabla Z_v$ are the gradients of the depth value along the horizontal and vertical directions of the image pixel coordinate system, and $L_u$, $L_v$ and $L_{P_Z}$ are calculated respectively by:
$$L_u = \left[-\frac{F_x}{Z},\ 0,\ \frac{u - u_0}{Z},\ \frac{(u - u_0)(v - v_0)}{F_y},\ -\left(F_x + \frac{(u - u_0)^2}{F_x}\right),\ \frac{F_x (v - v_0)}{F_y}\right]$$
$$L_v = \left[0,\ -\frac{F_y}{Z},\ \frac{v - v_0}{Z},\ F_y + \frac{(v - v_0)^2}{F_y},\ -\frac{(u - u_0)(v - v_0)}{F_x},\ -\frac{F_y (u - u_0)}{F_x}\right]$$
$$L_{P_Z} = \left[0,\ 0,\ -1,\ -\frac{(v - v_0) Z}{F_y},\ \frac{(u - u_0) Z}{F_x},\ 0\right]$$
where $(F_x, F_y, u_0, v_0)$ are the internal parameters of the RGBD depth camera, known from calibration; $(u, v)$ are the pixel coordinates in the point cloud image; and $Z$ is the depth value of each pixel point of the point cloud image.
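A sketch of the per-pixel interaction row and the servo law in Python/NumPy, under the depth-feature model written above (the sign conventions of $L_u$, $L_v$, $L_{P_Z}$ follow the standard formulation and are an assumption where the patent's equation images are not recoverable):

```python
import numpy as np

def depth_interaction_row(u, v, Z, dZu, dZv, Fx, Fy, u0, v0):
    """One row L_Zi of the point cloud image interaction matrix."""
    x, y = u - u0, v - v0
    L_u = np.array([-Fx / Z, 0.0, x / Z,
                    x * y / Fy, -(Fx + x * x / Fx), Fx * y / Fy])
    L_v = np.array([0.0, -Fy / Z, y / Z,
                    Fy + y * y / Fy, -x * y / Fx, -Fy * x / Fx])
    L_PZ = np.array([0.0, 0.0, -1.0, -y * Z / Fy, x * Z / Fx, 0.0])
    return L_PZ - dZu * L_u - dZv * L_v

def camera_velocity(L_ZN, Z, Z_star, lam=0.5):
    """Point cloud servo law: V_C = -lambda * pinv(L_ZN) @ (Z - Z*)."""
    return -lam * np.linalg.pinv(L_ZN) @ (Z - Z_star)
```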
Step seven, according to the obtained pose relation between the RGBD depth camera and the mobile trolley and the velocity $V_C$ of the RGBD depth camera, calculating the velocity $V_V$ that moves the mobile trolley to the ideal position.
Specifically, the RGBD depth camera 1 is fixedly connected to the frame 2, the frame 2 is fixedly connected to the moving trolley 3, and the three form one rigid body. The RGBD depth camera 1 has no degrees of freedom of its own; its motion must be driven by the moving trolley 3, which has only three degrees of freedom: translation along the $X_V$ and $Y_V$ axes and rotation about the $Z_V$ axis. For the RGBD depth camera 1 these correspond to translation along the $X_C$ and $Z_C$ axes and rotation about the $Y_C$ axis, so only the $(v_X, v_Z, \omega_Y)$ components of the camera velocity $V_C$ are retained, the other degrees of freedom being redundant.
According to the relation between the camera coordinate system and the trolley coordinate system, the motion of the RGBD depth camera 1 is converted into the motion of the moving trolley 3 through the velocity-twist transformation:
$$V_V = \begin{bmatrix} {}^{V}R_{C} & [{}^{V}t_{C}]_{\times}\,{}^{V}R_{C} \\ 0 & {}^{V}R_{C} \end{bmatrix} V_C$$
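A sketch of this conversion in Python/NumPy, using the standard velocity-twist transformation between rigidly connected frames (the patent's exact matrix is not recoverable from the image, so the generic form is assumed). `R_VC` and `t_VC` are the calibrated quantities from step two:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that skew(t) @ w == np.cross(t, w)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def camera_to_trolley_velocity(V_C, R_VC, t_VC):
    """Map a 6-DOF camera twist (v, w) into the trolley frame, then keep the
    trolley's three free components (v_XV, v_YV, w_ZV)."""
    W = np.block([[R_VC, skew(t_VC) @ R_VC],
                  [np.zeros((3, 3)), R_VC]])
    V = W @ V_C
    return np.array([V[0], V[1], V[5]])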
and step eight, after the movable trolley moves for a preset time, circularly executing the step six to the step seven until the mobile robot platform moves to an ideal position, thereby realizing the automatic position finding.
Specifically, the actual point cloud image shot by the moving trolley 3 at the actual position and the ideal point cloud image obtained by virtual imaging at the ideal position are subjected to servo control processing to obtain the movement speed of the RGBD depth camera 1, the movement speed is converted into the movement speed of the moving trolley 3, the time t is set, the point cloud image of the surface of the large component 4 shot at present is captured again after the moving trolley 3 moves for the time t, and the speed V is cycledCAnd velocity VVUntil the error term e ═ Z-Z is calculated*Decrease to approach 0, said movement being smallThe speed of the vehicle 3 is gradually reduced to be less than a threshold value, and at the moment, the mobile processing robot platform reaches an ideal position to complete a servo task.
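Pulling the steps together, the closed loop can be outlined as below, building on the helper sketches above. `robot` and `camera` are hypothetical driver objects, `depth_star` is the virtual depth image from step five, and `build_L` stacks the per-pixel rows from the earlier sketch; only the loop structure follows the method described here:

```python
import time
import numpy as np

def autonomous_locating(robot, camera, depth_star, build_L,
                        R_VC, t_VC, lam=0.5, t_move=0.2, v_eps=1e-3):
    """Loop of steps six to eight: grab, servo, move, until the speed drops below a threshold."""
    Z_star = depth_star.ravel()
    while True:
        Z = camera.grab_depth().ravel()        # actual point cloud image (step six)
        L_ZN = build_L(Z)                      # n x 6 interaction matrix
        V_C = camera_velocity(L_ZN, Z, Z_star, lam)
        V_V = camera_to_trolley_velocity(V_C, R_VC, t_VC)   # step seven
        if np.linalg.norm(V_V) < v_eps:        # ideal position reached (step eight)
            robot.stop()
            return
        robot.set_velocity(V_V)
        time.sleep(t_move)                     # move for the preset time t
```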
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A robot mobile processing autonomous locating method based on point cloud servo, suitable for locating when a robot mobile processing mode is adopted to process a large member, characterized by comprising the following steps:
(1) providing a large component (4) and a mobile robot platform, wherein the mobile robot platform comprises a mobile trolley (3) and an RGBD depth camera (1) fixedly connected to the mobile trolley (3), and constructing a world coordinate system, a camera coordinate system of the RGBD depth camera (1) and a trolley coordinate system of the mobile trolley (3);
(2) determining the rotation matrix ${}^{V}R_{C}$ of the RGBD depth camera (1) relative to the trolley coordinate system and the translation vector ${}^{V}t_{C}$ of the RGBD depth camera (1) relative to the mobile trolley (3), and further constructing the homogeneous transformation matrix ${}^{V}T_{C}$ of the RGBD depth camera (1) relative to the mobile trolley (3);
(3) scanning the surface of the large member (4) to obtain the three-dimensional coordinates of each point on the surface of the large member (4) in the world coordinate system, recorded as $P_W$; and determining the ideal pose of the mobile processing robot platform relative to the large member (4) when it processes the large member (4), wherein the ideal pose is the pose relation of the trolley coordinate system relative to the world coordinate system, with corresponding homogeneous transformation matrix ${}^{W}T_{V}$;
(4) according to the homogeneous transformation matrix ${}^{V}T_{C}$ and the homogeneous transformation matrix ${}^{W}T_{V}$, determining the pose relation ${}^{C}T_{W}$ between the camera coordinate system and the world coordinate system when the mobile trolley (3) is at the ideal processing position; and further, according to the three-dimensional coordinates $P_W$ and the pose relation ${}^{C}T_{W}$, determining the position $P_C$ of each point on the surface of the large member (4) in the camera coordinate system when the mobile trolley (3) is at the ideal processing position;
(5) according to the positions $P_C$ of the points on the surface of the large member (4) in the camera coordinate system and the pinhole imaging principle of the camera, calculating the virtual point cloud image $P_d$ of the large member (4) when the mobile trolley (3) is at the ideal processing position;
(6) observing the surface of the large member (4) in real time with the RGBD depth camera (1) to obtain an actual point cloud image of the large member (4), and, from the virtual point cloud image and the actual point cloud image, obtaining the velocity $V_C$ of the RGBD depth camera (1) through point cloud servo control;
(7) according to the obtained pose relation between the RGBD depth camera (1) and the mobile trolley (3) and the velocity $V_C$ of the RGBD depth camera (1), calculating the velocity $V_V$ that moves the mobile trolley (3) to the ideal position;
(8) after the mobile trolley moves for a preset time, executing steps (6) to (7) in a loop until the mobile robot platform moves to the ideal position, thereby realizing autonomous locating.
2. The point cloud servo-based robot mobile processing autonomous locating method according to claim 1, characterized in that: the mobile robot platform further comprises a frame (2), and the frame (2) is fixedly connected with the mobile trolley (3) and the RGBD depth camera (1); the surface of the frame (2) that carries the RGBD depth camera (1) is parallel to the surface of the mobile trolley (3) that carries the frame (2), and the lens plane of the RGBD depth camera (1) is perpendicular to the surface of the frame (2) that carries the RGBD depth camera (1).
3. The point cloud servo-based robot mobile processing autonomous locating method according to claim 2, characterized in that: the rotation matrix ${}^{V}R_{C}$ is the constant matrix determined by the parallel alignment of the camera and trolley axes, and the translation vector ${}^{V}t_{C}$ of the RGBD depth camera (1) relative to the mobile trolley (3) is obtained through parameter calibration.
4. The point cloud servo-based robot mobile processing autonomous locating method according to claim 1, characterized in that: the pose relation ${}^{C}T_{W}$ is calculated by the formula:
$${}^{C}T_{W} = \left({}^{W}T_{V}\,{}^{V}T_{C}\right)^{-1}$$
5. The point cloud servo-based robot mobile processing autonomous locating method according to claim 1, characterized in that: the velocity $V_C$ of the RGBD depth camera (1) is calculated by the formula:
$$V_C = -\lambda\,L_{ZN}^{+}\,(Z - Z^{*})$$
where $\lambda$ is a scale parameter; $Z - Z^{*}$ is the error between the actual point cloud image and the ideal point cloud image; and $L_{ZN}^{+}$ is the pseudo-inverse of the point cloud image interaction matrix $L_{ZN}$.
6. The point cloud servo-based robot mobile processing autonomous locating method according to claim 5, characterized in that: $Z - Z^{*}$, the error between the actual point cloud image and the ideal point cloud image, is calculated by the formula:
$$Z - Z^{*} = \left(Z_1 - Z_1^{*},\ Z_2 - Z_2^{*},\ \ldots,\ Z_n - Z_n^{*}\right)^{T}$$
where $Z_i$ is the depth value of the $i$-th point in the actual point cloud image captured by the RGBD depth camera at the actual position of the mobile trolley; $Z_i^{*}$ is the depth value of the $i$-th point in the ideal point cloud image obtained by virtual imaging of the RGBD depth camera at the ideal position of the mobile trolley; and $i = 1, 2, \ldots, n$, where $n$ is the number of pixel points in each point cloud image.
7. The point cloud servo-based robot mobile processing autonomous locating method according to claim 5, characterized in that: $L_{ZN}^{+}$ is calculated by the formula:
$$L_{ZN}^{+} = \left(L_{ZN}^{T} L_{ZN}\right)^{-1} L_{ZN}^{T}$$
where $L_{ZN}$ is the interaction matrix of the $n$ pixel points.
8. The point cloud servo-based robot mobile processing autonomous locating method according to claim 7, characterized in that: $L_{ZN}$ is calculated by the formula:
$$L_{ZN} = \begin{bmatrix} L_{Z1} \\ L_{Z2} \\ \vdots \\ L_{Zn} \end{bmatrix}$$
where $L_{Zi}$ ($i = 1, 2, \ldots, n$) is the interaction matrix of the $i$-th pixel point in the point cloud image.
9. The point cloud servo-based robot mobile processing autonomous locating method according to claim 8, characterized in that: the interaction matrix $L_Z$ of a single pixel is calculated by the formula:
$$L_Z = L_{P_Z} - \nabla Z_u\,L_u - \nabla Z_v\,L_v$$
where $\nabla Z_u$ and $\nabla Z_v$ are the gradients of the depth value along the horizontal and vertical directions of the image pixel coordinate system, and $L_u$, $L_v$ and $L_{P_Z}$ are calculated respectively by:
$$L_u = \left[-\frac{F_x}{Z},\ 0,\ \frac{u - u_0}{Z},\ \frac{(u - u_0)(v - v_0)}{F_y},\ -\left(F_x + \frac{(u - u_0)^2}{F_x}\right),\ \frac{F_x (v - v_0)}{F_y}\right]$$
$$L_v = \left[0,\ -\frac{F_y}{Z},\ \frac{v - v_0}{Z},\ F_y + \frac{(v - v_0)^2}{F_y},\ -\frac{(u - u_0)(v - v_0)}{F_x},\ -\frac{F_y (u - u_0)}{F_x}\right]$$
$$L_{P_Z} = \left[0,\ 0,\ -1,\ -\frac{(v - v_0) Z}{F_y},\ \frac{(u - u_0) Z}{F_x},\ 0\right]$$
where $(F_x, F_y, u_0, v_0)$ are the internal parameters of the RGBD depth camera; $(u, v)$ are the pixel coordinates in the point cloud image; and $Z$ is the depth value of each pixel point of the point cloud image.
CN201911015315.2A 2019-10-24 2019-10-24 Robot mobile processing autonomous locating method based on point cloud servo Active CN110842918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911015315.2A CN110842918B (en) 2019-10-24 2019-10-24 Robot mobile processing autonomous locating method based on point cloud servo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911015315.2A CN110842918B (en) 2019-10-24 2019-10-24 Robot mobile processing autonomous locating method based on point cloud servo

Publications (2)

Publication Number Publication Date
CN110842918A CN110842918A (en) 2020-02-28
CN110842918B (en) 2020-12-08

Family

ID=69596817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911015315.2A Active CN110842918B (en) 2019-10-24 2019-10-24 Robot mobile processing autonomous locating method based on point cloud servo

Country Status (1)

Country Link
CN (1) CN110842918B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986086B (en) * 2020-08-27 2021-11-09 贝壳找房(北京)科技有限公司 Three-dimensional image optimization generation method and system
CN112372631B (en) * 2020-10-05 2022-03-15 华中科技大学 Rapid collision detection method and device for robot machining of large complex component
CN112894752B (en) * 2021-01-20 2022-04-01 华中科技大学 Position searching method of mobile processing robot
CN114918928B (en) * 2022-07-22 2022-10-28 杭州柳叶刀机器人有限公司 Method and device for accurately positioning surgical mechanical arm, control terminal and storage medium
CN115284297B (en) * 2022-08-31 2023-12-12 深圳前海瑞集科技有限公司 Workpiece positioning method, robot, and robot working method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102763138B (en) * 2009-11-18 2016-02-17 皇家飞利浦电子股份有限公司 Motion correction in radiation therapy
KR20110095700A (en) * 2010-02-19 2011-08-25 현대중공업 주식회사 Industrial robot control method for workpiece object pickup
CN104537687B (en) * 2014-12-22 2017-06-16 中国民航大学 A kind of sequential swooping template action method for expressing with velocity information
CN105598600B (en) * 2016-02-02 2018-06-19 上海理工大学 A kind of box part weld seam independently seeks position and track automatic generation method
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN108393523A (en) * 2018-03-23 2018-08-14 珠海格力智能装备有限公司 Searching agency and flange process equipment with it
CN110223345B (en) * 2019-05-14 2022-08-09 南京理工大学 Point cloud-based distribution line operation object pose estimation method
CN110269623A (en) * 2019-06-24 2019-09-24 京东方科技集团股份有限公司 Method for determining speed and device, virtual reality display methods and device

Also Published As

Publication number Publication date
CN110842918A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN110842918B (en) Robot mobile processing autonomous locating method based on point cloud servo
CN108286949B (en) Movable three-dimensional detection robot system
CN109605371B (en) Mobile hybrid robot processing integrated system
CN110815180B (en) Six-degree-of-freedom parallel robot motion analysis modeling and quick solving method
CN108871209B (en) Large-size workpiece moving measurement robot system and method
CN108801142B (en) Double-movement measuring robot system and method for super-large-size workpiece
CN108692688B (en) Automatic calibration method for coordinate system of scanner of robot measuring-processing system
CN108827155A (en) A kind of robot vision measuring system and method
CN109794963B (en) Robot rapid positioning method facing curved surface component
CN107538508A (en) The robot automatic assembly method and system of view-based access control model positioning
Guo et al. Vision based navigation for Omni-directional mobile industrial robot
CN109895099A (en) A kind of flight mechanical arm visual servo grasping means based on physical feature
CN111426270B (en) Industrial robot pose measurement target device and joint position sensitive error calibration method
CN113146620A (en) Binocular vision-based double-arm cooperative robot system and control method
CN104298244A (en) Industrial robot three-dimensional real-time and high-precision positioning device and method
CN114378827B (en) Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm
CN112947569A (en) Visual servo target tracking control method for quad-rotor unmanned aerial vehicle based on preset performance
Peng et al. Development of an integrated laser sensors based measurement system for large-scale components automated assembly application
CN110928311A (en) Indoor mobile robot navigation method based on linear features under panoramic camera
CN114092552A (en) Method for carrying out butt joint on large rigid body member based on fixed end image
CN113051767A (en) AGV sliding mode control method based on visual servo
CN109822575B (en) Robot system and method for performing mobile processing by using projection characteristic image
CN116079732A (en) Cabin assembly method based on laser tracker and binocular vision mixed guidance
CN113000263B (en) Method for adjusting angle of spray gun of automatic paint spraying equipment for automobile repair
CN115619877A (en) Method for calibrating position relation between monocular laser sensor and two-axis machine tool system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant