CN114882108A - Method for estimating grabbing pose of automobile engine cover under two-dimensional image - Google Patents

Method for estimating grabbing pose of automobile engine cover under two-dimensional image

Info

Publication number
CN114882108A
Authority
CN
China
Prior art keywords
engine cover
mechanical arm
estimated
point
center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210386421.7A
Other languages
Chinese (zh)
Inventor
李扬 (Li Yang)
黄曦 (Huang Xi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202210386421.7A
Publication of CN114882108A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/12 Edge-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for estimating the grabbing pose of an automobile engine cover under a two-dimensional image, relating to the technical field of industrial vision. First, two-dimensional hand-eye calibration is performed to determine the coordinate transformation relation between the camera pixel coordinate system and the mechanical arm base coordinate system, yielding a conversion matrix; a template workpiece of the engine cover is then made and its pose information is obtained. In the real-time online stage, the online pose information of the engine cover to be estimated is acquired in real time, the position and attitude direction angle of the engine cover to be estimated in the mechanical arm coordinate system are derived from the template workpiece pose information and the conversion matrix, and the mechanical arm corrects its grab based on that position and angle. Because the camera and the mechanical arm are decoupled for correction and grabbing, a single camera shot taken when the engine cover enters the camera field of view suffices in the early offline stage, with no need for synchronized operation with the mechanical arm. The implementation process is simple, the mechanical arm can be corrected for the real-time pose of each engine cover, and grabbing estimation accuracy is improved.

Description

Method for estimating grabbing pose of automobile engine cover under two-dimensional image
Technical Field
The invention relates to the technical field of industrial vision, in particular to an estimation method of a grabbing pose of an automobile engine cover under a two-dimensional image.
Background
With the development of modern industrial automation, industrial vision is widely applied. In an automated automobile production flow, a mechanical arm is required to grab finished engine covers for subsequent inspection and processing.
The prior art discloses an eye-in-hand servo robot: a camera mounted on the mechanical arm captures the target workpiece and judges whether the target is in the camera field of view; that is, the camera shoots an image, information about the object is extracted from the image, the relative pose between the camera and the workpiece is then calculated, and grabbing is finally realized by moving the mechanical arm. The camera and the mechanical arm must be highly coordinated during grabbing, and because the camera is installed on the mechanical arm, pose estimation must be carried out continuously while the arm moves. The implementation process is therefore complex and real-time performance is poor, and if such a robot is applied to grabbing an automobile engine cover, the cover is easily damaged during grabbing. In addition, the clamp used for grabbing an automobile engine cover with a mechanical arm is very complicated, and a camera cannot be installed on it.
Disclosure of Invention
In order to solve the problems of complex implementation and poor real-time performance in existing pose estimation methods, the invention provides a method for estimating the grabbing pose of an automobile engine cover under a two-dimensional image. No camera needs to be installed on the mechanical arm, operation and implementation are simple, the mechanical arm can be corrected for the real-time pose of each engine cover, and grabbing-pose estimation accuracy is improved.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
A method for estimating the grabbing pose of an automobile engine cover under a two-dimensional image comprises the following steps:
S1, performing two-dimensional hand-eye calibration by using the nine-point calibration method, and establishing the coordinate transformation relation between the camera pixel coordinate system and the mechanical arm base coordinate system to obtain a conversion matrix;
S2, obtaining a template workpiece image of the engine cover, down-sampling the template workpiece image and performing rough positioning, obtaining the centroid point of the feature contour through rough positioning, computing the corresponding center point, and obtaining a fine-positioning region picture based on the center point;
S3, performing fine positioning based on the fine-positioning region picture to obtain the positioning point of the template workpiece image and the center point of the template workpiece image;
S4, connecting the positioning point of the template workpiece image and the center point of the template workpiece image to obtain the direction vector of the template workpiece;
S5, directly grabbing an engine cover under the template workpiece pose by using the mechanical arm, and determining the mechanical arm coordinate point under the corresponding template workpiece pose and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane;
S6, shooting a two-dimensional image of the engine cover to be estimated at its real-time position, and determining the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated in the manner of S2-S4;
S7, calculating and acquiring in real time the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated, based on the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated;
S8, sending the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated to the mechanical arm for correction, and grabbing the engine cover with the mechanical arm according to the correction.
In this technical scheme, the method mainly comprises an offline stage (S1 to S5) and a real-time online stage (S6 to S8). The offline stage is the offline template-workpiece manufacturing stage: two-dimensional hand-eye calibration is first performed, the coordinate transformation relation between the camera pixel coordinate system and the mechanical arm base coordinate system is determined, and a conversion matrix is obtained; a template workpiece of the engine cover is then made, its pose information is obtained, and that information is retained together with the conversion matrix. In the real-time online stage, a two-dimensional image of the engine cover to be estimated is shot at its real-time position to acquire online pose information; the position and attitude direction angle of the engine cover to be estimated in the mechanical arm coordinate system are obtained from the template workpiece pose information and the conversion matrix, and the mechanical arm corrects and grabs the engine cover accordingly. Throughout this process the camera and the mechanical arm are decoupled for correction and grabbing: in the early offline stage, camera shooting is finished in a single shot once the engine cover enters the camera field of view, without synchronized cooperation with the mechanical arm, so the implementation process is simple; in the subsequent real-time online stage, mechanical arm correction can be carried out for the real-time pose of each engine cover, improving the estimation accuracy of the grabbing pose.
Preferably, in step S1, suppose the camera acquires the coordinates of nine points in the camera pixel coordinate system: $(u_1, v_1), \ldots, (u_9, v_9)$, together with the coordinates of the same nine points in the mechanical arm base coordinate system: $(x_1, y_1), \ldots, (x_9, y_9)$. For any one of the nine points, with coordinates $(u, v)$ in the camera pixel coordinate system and $(x, y)$ in the mechanical arm base coordinate system, the coordinate conversion relationship satisfies:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where

$$\begin{bmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{bmatrix}$$

represents the conversion matrix, and A, B, C, D, E, F are all elements of the conversion matrix.
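As a minimal sketch of how such a conversion matrix could be estimated and applied in practice (the function names, the use of NumPy, and the least-squares formulation are illustrative assumptions, not part of the patent's disclosure):

```python
# Illustrative sketch of nine-point hand-eye calibration (assumed helpers):
# fit the affine conversion matrix [[A,B,C],[D,E,F],[0,0,1]] mapping camera
# pixel coordinates (u, v) to mechanical arm base coordinates (x, y).
import numpy as np

def fit_conversion_matrix(pixel_pts, base_pts):
    """pixel_pts, base_pts: (9, 2) arrays of corresponding (u,v) and (x,y)."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    base_pts = np.asarray(base_pts, dtype=float)
    # Each point pair gives two equations: x = A*u + B*v + C, y = D*u + E*v + F.
    G = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])  # rows: [u, v, 1]
    coef_x, *_ = np.linalg.lstsq(G, base_pts[:, 0], rcond=None)  # A, B, C
    coef_y, *_ = np.linalg.lstsq(G, base_pts[:, 1], rcond=None)  # D, E, F
    return np.vstack([coef_x, coef_y, [0.0, 0.0, 1.0]])

def pixel_to_base(M, uv):
    """Apply the conversion matrix to one pixel point (u, v) -> (x, y)."""
    x, y, _ = M @ np.array([uv[0], uv[1], 1.0])
    return x, y
```

Since each point pair contributes two linear equations in the six unknowns A to F, three non-collinear points already determine the matrix; using nine points over-determines the system and lets least squares average out measurement noise.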
Preferably, in step S2, the process of down-sampling the template workpiece image, performing rough positioning, obtaining the centroid point of the feature contour through rough positioning, computing the center, and obtaining the fine-positioning region picture based on the center point is as follows:
S21, denote the acquired template workpiece image by temp_img(u,v), and obtain the down-sampled picture sub_img(u,v);
S22, obtain the binary map fuzzy(u,v) of the picture sub_img(u,v) using the OTSU threshold segmentation algorithm, and perform binary morphological operations on fuzzy(u,v) to obtain the contour edge(u,v);
S23, based on the contour shape characteristics of the top of the engine cover, screen the feature contour feature_edge(u,v) out of the contour edge(u,v) by the number of polygon sides;
S24, calculate the first-order moments M_{1,0}, M_{0,1} and the zero-order moment M_{0,0} of the feature contour feature_edge(u,v) to obtain the centroid point edge_centroid(u,v) of the feature contour feature_edge(u,v); the calculation formulas are:
u_edge_centroid = M_{1,0} / M_{0,0}
v_edge_centroid = M_{0,1} / M_{0,0}
where u_edge_centroid represents the u-axis coordinate of the centroid point edge_centroid(u,v) and v_edge_centroid represents the v-axis coordinate of the centroid point edge_centroid(u,v);
S25, convert the centroid point edge_centroid(u,v) in sub_img(u,v) to the center point roi_center(u,v) in the picture temp_img(u,v); the conversion relationship satisfies:
u_roi_center = u_edge_centroid * cols / h1
v_roi_center = v_edge_centroid * rows / w1
where cols represents the pixel length of the picture temp_img(u,v), rows represents the pixel width of the picture temp_img(u,v), and h1, w1 are the pixel length and pixel width after down-sampling;
S26, taking roi_center(u,v) as the center point, extract the fine-positioning region picture new_img(u,v) with pixel length h2 and pixel width w2.
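A minimal OpenCV sketch of the rough-positioning flow S21 to S26 follows, assuming a grayscale input image; the morphology kernel, the polygon approximation tolerance, and the centered crop are illustrative assumptions rather than values fixed by the patent:

```python
# Illustrative sketch of rough positioning (S21-S26); parameter values are
# assumptions for illustration, not prescribed by the patent.
import cv2
import numpy as np

def rough_positioning(temp_img, h1=500, w1=500, h2=1600, w2=1600, sides=5):
    rows, cols = temp_img.shape[:2]
    sub_img = cv2.resize(temp_img, (h1, w1))                       # S21
    # S22: OTSU threshold segmentation followed by a binary opening.
    _, fuzzy = cv2.threshold(sub_img, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edge = cv2.morphologyEx(fuzzy, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edge, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # S23: screen the feature contour by polygon side count (pentagonal top).
    feature_edge = None
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == sides:
            feature_edge = c
            break
    if feature_edge is None:
        raise ValueError("no feature contour with the expected side count")
    # S24: centroid from the first- and zero-order moments.
    m = cv2.moments(feature_edge)
    u_c, v_c = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # S25: scale the centroid back into temp_img coordinates.
    u_roi, v_roi = int(u_c * cols / h1), int(v_c * rows / w1)
    # S26: crop the fine-positioning region around roi_center (a centered
    # crop of half-extents h2, w2 is one possible reading of the patent).
    u0, v0 = max(u_roi - h2, 0), max(v_roi - w2, 0)
    new_img = temp_img[v0:v_roi + w2, u0:u_roi + h2]
    return (u_roi, v_roi), new_img
```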
Preferably, in step S3, the process of performing fine positioning based on the fine-positioning region picture to obtain the positioning point of the template workpiece image and the center point of the template workpiece image includes:
S31, in the manner of steps S22-S24, operate on the fine-positioning region picture new_img(u,v) to obtain the contour new_edge(u,v), and screen out the feature contour new_feature_edge(u,v);
S32, calculate the roundness of each contour and its distance from new_feature_edge(u,v), and according to the calculated distance values screen an unordered series of feature contours feature_circle(u,v) out of the contour new_edge(u,v);
S33, perform least-squares circle fitting on the unordered series of feature contours feature_circle(u,v) to obtain the center point circle_center(u,v) of the fitted circle of each feature contour feature_circle(u,v);
S34, sort the unordered series of center points circle_center(u,v) to obtain an ordered series of center points order_circle_center(u,v);
S35, connect the series of center points order_circle_center(u,v) end to end in order to form a new closed feature contour feature_contour(u,v);
S36, in the manner of step S24, obtain the centroid point contour_centroid(u,v) of the closed feature contour feature_contour(u,v); find, among the series of center points order_circle_center(u,v), the center point final_circle_1(u,v) farthest from the centroid point contour_centroid(u,v), and then find, among the series of center points order_circle_center(u,v), the center point final_circle_2(u,v) farthest from the center point final_circle_1(u,v);
S37, take the midpoint middle_center(u,v) of the center point final_circle_1(u,v) and the center point final_circle_2(u,v), and obtain the template workpiece image positioning point temp_center(u,v) and center point temp_circle(u,v) through coordinate transformation; the transformation relations are:
u_temp_center = u_roi_center - h2 + u_middle_center
v_temp_center = v_roi_center - w2 + v_middle_center
u_temp_circle = u_roi_center - h2 + u_final_circle_1
v_temp_circle = v_roi_center - w2 + v_final_circle_1
where u_temp_center and v_temp_center respectively represent the u-coordinate value and v-coordinate value of the template workpiece image positioning point temp_center(u,v); u_temp_circle and v_temp_circle respectively represent the u-coordinate value and v-coordinate value of the template workpiece image center point temp_circle(u,v).
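The fine-positioning flow S31 to S37 can be sketched as follows; the circularity measure, the polar-angle sorting rule, and the simplified centroid are assumptions for illustration, and the S32 distance-to-feature-contour filter is omitted for brevity:

```python
# Illustrative sketch of fine positioning (S31-S37); thresholds and the
# sorting rule are assumptions, and the S32 distance filter is omitted.
import cv2
import numpy as np

def fit_circle_center(pts):
    """Algebraic least-squares circle fit (Kasa method) returning (uc, vc)."""
    # Solve 2*uc*x + 2*vc*y + c = x^2 + y^2 in the least-squares sense.
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[0], sol[1]

def fine_positioning(new_img, u_roi, v_roi, h2=1600, w2=1600):
    _, fuzzy = cv2.threshold(new_img, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edge = cv2.morphologyEx(fuzzy, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edge, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # S32: keep near-circular contours (roundness = 4*pi*area / perimeter^2).
    circles = [c for c in contours
               if cv2.arcLength(c, True) > 0
               and 4 * np.pi * cv2.contourArea(c)
                   / cv2.arcLength(c, True) ** 2 > 0.8]
    # S33: least-squares circle fit gives the center of each feature circle.
    centers = [fit_circle_center(c.reshape(-1, 2).astype(float))
               for c in circles]
    # S34-S35: order the centers (here by polar angle about their mean) so
    # that connecting them end to end forms one closed feature contour.
    mean = np.mean(centers, axis=0)
    centers = np.array(sorted(
        centers, key=lambda p: np.arctan2(p[1] - mean[1], p[0] - mean[0])))
    # S36: centroid of the closed contour (mean of the ordered centers used
    # as a simple stand-in), then the two extreme center points.
    centroid = centers.mean(axis=0)
    f1 = centers[np.argmax(np.linalg.norm(centers - centroid, axis=1))]
    f2 = centers[np.argmax(np.linalg.norm(centers - f1, axis=1))]
    # S37: midpoint -> positioning point; map both back to temp_img.
    middle = (f1 + f2) / 2.0
    temp_center = (u_roi - h2 + middle[0], v_roi - w2 + middle[1])
    temp_circle = (u_roi - h2 + f1[0], v_roi - w2 + f1[1])
    return temp_center, temp_circle
```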
Preferably, in step S4, connecting the positioning point of the template workpiece image and the center point of the template workpiece image gives the direction vector of the template workpiece, denoted here temp_vector(u,v), whose expression satisfies:
temp_vector(u,v) = temp_circle(u,v) - temp_center(u,v)
where temp_vector(u,v) represents the template workpiece direction vector, temp_center(u,v) represents the template workpiece image positioning point, and temp_circle(u,v) represents the template workpiece image center point.
Preferably, in step S5, the mechanical arm directly grabs the engine cover in the template workpiece pose, and the mechanical arm coordinate point under the corresponding template workpiece pose and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane are obtained as temp_robot(x,y) and temp_angle respectively;
in step S7, the process of calculating in real time the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated, based on the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated, includes:
S71, calculating the translation amount of the engine cover to be estimated in the camera pixel coordinate system, and translating the direction vector of the template workpiece so that it shares a starting point with the direction vector of the engine cover to be estimated, obtaining the translated direction vector;
S72, calculating the rotation angle of the engine cover to be estimated relative to the template workpiece based on the translated direction vector and the direction vector of the engine cover to be estimated;
S73, converting the template workpiece image positioning point and the image positioning point of the engine cover to be estimated from the camera pixel coordinate system to the mechanical arm base coordinate system using the conversion matrix, obtaining the template workpiece image positioning point and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system;
S74, calculating the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system based on the template workpiece image positioning point and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system;
S75, calculating the translated new mechanical arm coordinate point in the mechanical arm base coordinate system based on the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system and the mechanical arm coordinate point under the corresponding template workpiece pose;
S76, calculating the final positioning point of the engine cover to be estimated based on the new mechanical arm coordinate point in the mechanical arm base coordinate system, the rotation angle of the engine cover to be estimated relative to the template workpiece, and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system;
S77, calculating the final attitude direction angle of the engine cover to be estimated based on the rotation angle of the engine cover to be estimated relative to the template workpiece and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane.
Preferably, the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated, determined in the manner of S2-S4, are denoted check_center(u,v) and check_vector(u,v) respectively;
in step S71, the translation amount pixel_T(u,v) of the engine cover to be estimated in the camera pixel coordinate system is:
pixel_T(u,v) = check_center(u,v) - temp_center(u,v)
The direction vector temp_vector(u,v) of the template workpiece is translated by pixel_T(u,v) so that it shares its starting point with the direction vector check_vector(u,v) of the engine cover to be estimated, giving the translated direction vector trans_vector(u,v); the calculation expression is:
trans_vector(u,v) = (temp_circle(u,v) + pixel_T(u,v)) - check_center(u,v)
Preferably, in step S72, the process of calculating the angle of rotation of the engine cover to be estimated relative to the template workpiece, based on the translated direction vector and the direction vector of the engine cover to be estimated, satisfies:

$$\cos(check\_angle) = \frac{check\_vector(u,v) \cdot trans\_vector(u,v)}{\lVert check\_vector(u,v) \rVert \, \lVert trans\_vector(u,v) \rVert}$$

check_angle = cos^{-1}(cos(check_angle))

where cos(check_angle) represents the cosine of the angle between the direction vector check_vector(u,v) of the engine cover to be estimated and the translated direction vector trans_vector(u,v), and check_angle represents the angle of rotation of the engine cover to be estimated relative to the template workpiece.
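In code, this angle computation is only a few lines; the clamp before the arccos is an added numerical safeguard, not something the patent specifies:

```python
# Illustrative sketch of S72: angle between the translated template direction
# vector and the direction vector of the engine cover to be estimated.
import numpy as np

def rotation_angle(trans_vector, check_vector):
    cos_a = np.dot(check_vector, trans_vector) / (
        np.linalg.norm(check_vector) * np.linalg.norm(trans_vector))
    # Clamp to [-1, 1] to guard against floating-point drift before arccos.
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

Note that arccos alone yields an unsigned angle in [0, 180] degrees; if the sign of the rotation were needed, it could be recovered from the sign of the 2D cross product of the two vectors, a refinement the source does not describe.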
Preferably, the template workpiece image positioning point and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system, obtained in step S73, are temp_center(x,y) and check_center(x,y) respectively, and the formula for calculating the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system, based on the template workpiece image positioning point and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system, is:
base_T(x,y) = check_center(x,y) - temp_center(x,y)
where base_T(x,y) represents the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system;
when the translated new mechanical arm coordinate point in the mechanical arm base coordinate system is calculated, based on the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system and the mechanical arm coordinate point under the corresponding template workpiece pose, the expression is:
check_robot(x,y) = temp_robot(x,y) + base_T(x,y)
where check_robot(x,y) represents the translated new mechanical arm coordinate point in the mechanical arm base coordinate system, and temp_robot(x,y) represents the mechanical arm coordinate point under the corresponding template workpiece pose.
Preferably, based on the mechanical arm coordinate point check_robot(x,y) in the new mechanical arm base coordinate system, the angle check_angle of rotation of the engine cover to be estimated relative to the template workpiece, and the image positioning point check_center(x,y) of the engine cover to be estimated in the mechanical arm base coordinate system, the expression for calculating the final positioning point of the engine cover to be estimated is:

$$\begin{bmatrix} x_{position} \\ y_{position} \\ 1 \end{bmatrix} = T2_{3\times 3}\, R_{3\times 3}\, T1_{3\times 3} \begin{bmatrix} x_{check\_robot} \\ y_{check\_robot} \\ 1 \end{bmatrix}$$

where

$$T1_{3\times 3} = \begin{bmatrix} 1 & 0 & -x_{check\_center} \\ 0 & 1 & -y_{check\_center} \\ 0 & 0 & 1 \end{bmatrix}, \quad R_{3\times 3} = \begin{bmatrix} \cos(check\_angle) & -\sin(check\_angle) & 0 \\ \sin(check\_angle) & \cos(check\_angle) & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T2_{3\times 3} = \begin{bmatrix} 1 & 0 & x_{check\_center} \\ 0 & 1 & y_{check\_center} \\ 0 & 0 & 1 \end{bmatrix}$$

and where position(x,y) represents the final positioning point of the engine cover to be estimated; x_position represents the x-axis coordinate of the final positioning point of the engine cover to be estimated in the mechanical arm base coordinate system; y_position represents the y-axis coordinate of the final positioning point of the engine cover to be estimated in the mechanical arm base coordinate system; x_check_robot and y_check_robot represent the x-axis and y-axis coordinates of the mechanical arm coordinate point check_robot(x,y) in the mechanical arm base coordinate system; T1_3x3 represents the first translation transformation matrix, built from the image positioning point check_center(x,y) of the engine cover to be estimated in the mechanical arm base coordinate system; R_3x3 represents the rotation transformation matrix about the origin of the mechanical arm base coordinate system; and T2_3x3 represents the second translation transformation matrix, likewise built from the image positioning point check_center(x,y) of the engine cover to be estimated in the mechanical arm base coordinate system.
The final attitude direction angle of the engine cover to be estimated, calculated from the angle check_angle of rotation of the engine cover to be estimated relative to the template workpiece and the mechanical arm attitude direction angle temp_angle in the mechanical arm base coordinate system plane, is:
angle = temp_angle + check_angle.
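A compact sketch of the online correction computation S74 to S77, using the homogeneous matrices written out above (degrees and counterclockwise-positive rotation are assumed conventions):

```python
# Illustrative sketch of S74-S77: final positioning point and final attitude
# direction angle; conventions (degrees, CCW-positive) are assumptions.
import numpy as np

def final_pose(temp_robot, temp_angle, temp_center_xy, check_center_xy,
               check_angle):
    base_T = np.subtract(check_center_xy, temp_center_xy)       # S74
    check_robot = np.add(temp_robot, base_T)                    # S75
    a = np.radians(check_angle)
    cx, cy = check_center_xy
    T1 = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]])      # shift to origin
    R = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0,          0,         1.0]])                # rotate
    T2 = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1.0]])        # shift back
    x, y, _ = T2 @ R @ T1 @ np.array([*check_robot, 1.0])       # S76
    return (x, y), temp_angle + check_angle                     # S77
```

The T2 R T1 composition rotates check_robot(x,y) about check_center(x,y) by check_angle, which is exactly the correction needed when the workpiece has both shifted and turned relative to the template.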
compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides an estimation method of a grabbing pose of an automobile engine cover under a two-dimensional image, which mainly comprises an off-line template workpiece manufacturing stage and a real-time on-line estimation stage, wherein in the off-line stage, two-dimensional hand-eye calibration is firstly carried out, a coordinate transformation relation between a camera pixel coordinate system and a mechanical arm base coordinate system is determined, a transformation matrix is obtained, then the template workpiece of the engine cover is manufactured, the pose information of the template workpiece is obtained, and the pose information and the obtained transformation matrix are reserved together; in the real-time online stage, the position of the engine cover to be estimated is acquired in real time to shoot a two-dimensional image, so that online pose information is acquired, the position and the attitude direction angle of the engine cover to be estimated under a mechanical arm coordinate system are acquired based on the pose information of a template workpiece and an acquired conversion matrix, the mechanical arm is combined with the position and the attitude direction angle to be estimated to correct and grab the engine cover, in the above processes, the camera and the mechanical arm are separated in real time online grabbing and correcting, in the early off-line stage, the camera shooting can be finished once when the engine cover enters the camera visual field, the camera shooting does not need to be matched with the mechanical arm to be synchronously carried out, the implementation process is simple, the mechanical arm correction can be carried out on the real-time pose of each engine cover, and the grabbing pose estimation precision is improved.
Drawings
Fig. 1 is a schematic flow chart of a method for estimating a grasping pose of an automobile engine cover under a two-dimensional image according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the down-sampled picture sub_img(u,v) according to embodiment 1 of the present invention;
Fig. 3 is a flowchart, according to embodiment 1 of the present invention, of the process of down-sampling the template workpiece image, performing rough positioning, obtaining the centroid point of the feature contour through rough positioning, computing the center, and obtaining the fine-positioning region picture based on the center point;
FIG. 4 is a schematic diagram of the feature contour feature_edge(u,v) at rough positioning according to embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of the fine-positioning region picture new_img(u,v) extracted after rough positioning in embodiment 1 of the present invention;
Fig. 6 is a flowchart of the process of performing fine positioning based on the fine-positioning region picture to obtain the template workpiece image positioning point and the template workpiece image center point according to embodiment 1 of the present invention;
FIG. 7 is a schematic diagram of the nine feature circles feature_circle(u,v) screened in embodiment 1 of the present invention;
FIG. 8 is a schematic view showing a closed contour consisting of nine characteristic circles proposed in embodiment 1 of the present invention;
FIG. 9 is a diagram illustrating the positioning points of the template workpiece image and the direction vectors of the template workpiece obtained in embodiment 1 of the present invention;
fig. 10 shows a flow chart of hand-eye calibration proposed in embodiment 2 of the present invention;
fig. 11 is a flowchart of a process for obtaining a final positioning point of an engine cover to be estimated and a final attitude and heading angle of the engine cover to be estimated by on-line real-time calculation in embodiment 3 of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for better illustration of the present embodiment, certain parts of the drawings may be omitted, enlarged or reduced, and do not represent actual dimensions;
it will be understood by those skilled in the art that certain well-known descriptions of the figures may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
example 1
As shown in fig. 1, the present embodiment provides a method for estimating a grasping pose of an automobile engine cover under a two-dimensional image, and with reference to fig. 1, the method specifically includes the following steps:
S1, performing two-dimensional hand-eye calibration by using the nine-point calibration method, and establishing the coordinate transformation relation between the camera pixel coordinate system and the mechanical arm base coordinate system to obtain a conversion matrix;
S2, obtaining a template workpiece image of the engine cover, down-sampling the template workpiece image and performing rough positioning, obtaining the centroid point of the feature contour through rough positioning, computing the corresponding center point, and obtaining a fine-positioning region picture based on the center point;
in this embodiment, a Haokang MV-CE200-10GM industrial camera is used to obtain a picture temp _ img with 5472 pixels and 3648 pixels perpendicular to the plane of the engine cover (u,v) And picture temp _ img (u,v) Down-sampling to 500x500 pixel length and width to obtain sub _ img picture (u,v) FIG. 2 shows a downsampled picture sub _ img (u,v) A schematic diagram of (a); on the whole, carry out coarse positioning after down sampling template work piece image to coarse positioning obtains the centre of mass point of characteristic profile, and calculates the center, obtains the process of accurate positioning region picture based on the central point and refers to fig. 3, specifically is:
S21, denote the acquired template workpiece image by temp_img(u,v), and obtain the down-sampled picture sub_img(u,v);
S22, obtain the binary map fuzzy(u,v) of the picture sub_img(u,v) using the OTSU threshold segmentation algorithm, and perform binary morphological operations on fuzzy(u,v) to obtain the contour edge(u,v);
S23, based on the contour shape characteristics of the top of the engine cover, screen the feature contour feature_edge(u,v) out of the contour edge(u,v) by the number of polygon sides; specifically, according to the pentagonal contour shape of the top of the engine cover, the feature contour feature_edge(u,v) is screened out of the contour edge(u,v); the feature contour feature_edge(u,v) is shown in fig. 4.
S24, calculate the first-order moments M_{1,0}, M_{0,1} and the zero-order moment M_{0,0} of the feature contour feature_edge(u,v) to obtain the centroid point edge_centroid(u,v) of the feature contour feature_edge(u,v); the calculation formulas are:
u_edge_centroid = M_{1,0} / M_{0,0}
v_edge_centroid = M_{0,1} / M_{0,0}
where u_edge_centroid represents the u-axis coordinate of the centroid point edge_centroid(u,v) and v_edge_centroid represents the v-axis coordinate of the centroid point edge_centroid(u,v);
S25, convert the centroid point edge_centroid(u,v) in sub_img(u,v) to the center point roi_center(u,v) in the picture temp_img(u,v); the conversion relationship satisfies:
u_roi_center = u_edge_centroid * cols / h1
v_roi_center = v_edge_centroid * rows / w1
where cols represents the pixel length of the picture temp_img(u,v), rows represents the pixel width of the picture temp_img(u,v), and h1, w1 are the pixel length and pixel width after down-sampling; in this example, cols is 5472, rows is 3648, and h1 and w1 are both 500.
S26, taking roi_center(u,v) as the center point, extract the fine-positioning region picture new_img(u,v) with pixel length h2 and pixel width w2; in this embodiment, the extracted pixel length h2 and pixel width w2 are both 1600, and the fine-positioning region picture new_img(u,v) is shown in fig. 5.
Then, step S3 is executed:
S3, performing fine positioning based on the fine-positioning region picture to obtain the positioning point of the template workpiece image and the center point of the template workpiece image; referring to fig. 6, a flowchart of the step S3 process, the step specifically includes:
S31, in the manner of steps S22-S24 (OTSU threshold segmentation algorithm, binary morphological operations and feature-contour screening), operate on the fine-positioning region picture new_img(u,v) to obtain the contour new_edge(u,v), and screen out the feature contour new_feature_edge(u,v);
S32, calculate the roundness of each contour and its distance from new_feature_edge(u,v), and according to the calculated distance values screen an unordered series of feature contours feature_circle(u,v) out of the contour new_edge(u,v); specifically, whether the roundness of each contour is greater than 0.8 and whether its distance from new_feature_edge(u,v) is greater than 20 pixels are calculated, and nine feature circles feature_circle(u,v) are screened out of the contour new_edge(u,v); a schematic diagram can be seen in fig. 7.
S33, perform least-squares circle fitting on the unordered series of feature contours feature_circle(u,v) to obtain the center point circle_center(u,v) of the fitted circle of each feature contour feature_circle(u,v); specifically, least-squares circle fitting is performed on the 9 unordered feature circles feature_circle(u,v) to obtain the center point circle_center(u,v) of the fitted circle of each feature circle feature_circle(u,v);
S34, sort the unordered series of center points circle_center(u,v) to obtain an ordered series of center points order_circle_center(u,v); specifically, the 9 unordered center points circle_center(u,v) are sorted to obtain 9 ordered center points order_circle_center(u,v);
S35, connect the series of center points order_circle_center(u,v) end to end in order to form a new closed feature contour feature_contour(u,v); specifically, the 9 center points order_circle_center(u,v) are connected to form a new closed feature contour feature_contour(u,v), as shown in fig. 8.
S36, in the manner of step S24, obtain the centroid point contour_centroid(u,v) of the closed feature contour feature_contour(u,v); find, among the 9 center points order_circle_center(u,v), the center point final_circle_1(u,v) farthest from the centroid point contour_centroid(u,v), and then find, among the 9 center points order_circle_center(u,v), the center point final_circle_2(u,v) farthest from the center point final_circle_1(u,v);
S37, take the midpoint middle_center(u,v) of the center point final_circle_1(u,v) and the center point final_circle_2(u,v), and obtain the template workpiece image positioning point temp_center(u,v) and center point temp_circle(u,v) through coordinate transformation; the transformation relations are:
u_temp_center = u_roi_center - 1600 + u_middle_center
v_temp_center = v_roi_center - 1600 + v_middle_center
u_temp_circle = u_roi_center - 1600 + u_final_circle_1
v_temp_circle = v_roi_center - 1600 + v_final_circle_1
where u_temp_center and v_temp_center respectively represent the u-coordinate value and v-coordinate value of the template workpiece image positioning point temp_center(u,v); u_temp_circle and v_temp_circle respectively represent the u-coordinate value and v-coordinate value of the template workpiece image center point temp_circle(u,v).
Then, step S4 is executed:
S4, connecting the positioning point of the template workpiece image and the center point of the template workpiece image to obtain the direction vector of the template workpiece;
in step S4, connecting the positioning point of the template workpiece image and the center point of the template workpiece image gives the direction vector of the template workpiece, denoted here temp_vector(u,v), whose expression satisfies:
temp_vector(u,v) = temp_circle(u,v) - temp_center(u,v)
where temp_vector(u,v) represents the template workpiece direction vector, temp_center(u,v) represents the template workpiece image positioning point, and temp_circle(u,v) represents the template workpiece image center point; FIG. 9 is a schematic diagram of the template workpiece image positioning point and the template workpiece direction vector.
Step S5 is then executed:
S5, directly grabbing an engine cover under the template workpiece pose by using the mechanical arm, and determining the mechanical arm coordinate point under the corresponding template workpiece pose and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane; in actual operation, when the mechanical arm grabs, the corresponding controller displays the coordinate point and the attitude direction angle, so grabbing the engine cover with the mechanical arm yields the template workpiece mechanical arm coordinate point temp_robot(x,y) and the template workpiece mechanical arm x-y plane attitude temp_angle under the corresponding template pose.
The offline stage stores the template workpiece image positioning point temp_center(u,v), the template workpiece direction vector temp_vector(u,v), the template workpiece mechanical arm coordinate point temp_robot(x,y), and the template workpiece mechanical arm x-y plane attitude temp_angle.
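These four stored quantities can be kept together as a single template record; a minimal sketch, with the dataclass layout being an assumption rather than part of the patent:

```python
# Minimal sketch of the template record retained by the offline stage; the
# container layout is an illustrative assumption.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TemplateWorkpiece:
    temp_center: Tuple[float, float]  # image positioning point (u, v)
    temp_vector: Tuple[float, float]  # direction vector in pixel coordinates
    temp_robot: Tuple[float, float]   # mechanical arm coordinate point (x, y)
    temp_angle: float                 # arm x-y plane attitude direction angle
```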
S6, acquiring the position of the to-be-estimated engine cover in real time to shoot a two-dimensional image, and determining the image positioning point of the to-be-estimated engine cover and the direction vector of the to-be-estimated engine cover in a mode of S2-S4;
s7, calculating and acquiring a final positioning point of the engine cover to be estimated and a final attitude direction angle of the engine cover to be estimated in real time based on the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated;
and S8, sending the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated to the mechanical arm for correction, and grabbing the engine cover by the mechanical arm according to the correction.
Overall, the method mainly comprises an offline stage (S1 to S5) and a real-time online stage (S6 to S8). The offline stage is the offline template-workpiece manufacturing stage: two-dimensional hand-eye calibration is first performed, the coordinate transformation relation between the camera pixel coordinate system and the mechanical arm base coordinate system is determined, and a conversion matrix is obtained; a template workpiece of the engine cover is then made, its pose information is obtained, and that information is retained together with the conversion matrix. In the real-time online stage, a two-dimensional image of the engine cover to be estimated is shot at its real-time position to acquire online pose information; the position and attitude direction angle of the engine cover to be estimated in the mechanical arm coordinate system are obtained from the template workpiece pose information and the conversion matrix, and the mechanical arm corrects and grabs the engine cover accordingly. Throughout this process the camera and the mechanical arm are decoupled for real-time online grabbing correction: in the early offline stage, camera shooting is finished in a single shot once the engine cover enters the camera field of view, without synchronized cooperation with the mechanical arm, so the implementation process is simple; mechanical arm correction can be carried out for the real-time pose of each engine cover, improving grabbing-pose estimation accuracy.
Example 2
In this embodiment, the hand-eye calibration process is specifically described; fig. 10 shows the hand-eye calibration flowchart. In the implementation, suppose the camera acquires the coordinates of nine points in the camera pixel coordinate system: $(u_1, v_1), \ldots, (u_9, v_9)$, together with the coordinates of the nine points in the mechanical arm base coordinate system corresponding to the camera pixel coordinates: $(x_1, y_1), \ldots, (x_9, y_9)$. For any one of the nine points, with coordinates $(u, v)$ in the camera pixel coordinate system and $(x, y)$ in the mechanical arm base coordinate system, the coordinate conversion relationship satisfies:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

The above is the nine-point calibration process. The camera knows only the camera pixel coordinate system, while the mechanical arm works in a spatial coordinate system, so hand-eye calibration obtains the coordinate conversion relationship between the camera pixel coordinate system and the spatial mechanical arm coordinate system, establishing the relationship between the camera coordinate system and the mechanical arm coordinate system, i.e., fitting the mechanical arm with eyes that tell it where to move. The nine-point calibration method directly establishes the coordinate conversion relationship between the camera and the mechanical arm: the end of the mechanical arm is moved to the 9 points to obtain their coordinates in the robot coordinate system, and at the same time the camera identifies the 9 points to obtain their pixel coordinates, so 9 groups of corresponding coordinates are obtained.
Here

$$\begin{bmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{bmatrix}$$

represents the conversion matrix, and A, B, C, D, E, F are all elements of the conversion matrix.
Example 3
Building on the specific offline-stage operations described in embodiment 1 and embodiment 2, the real-time online stage of embodiment 3 shoots a two-dimensional image of the engine cover to be estimated at its real-time position to acquire online pose information, and derives the position and attitude direction angle of the engine cover to be estimated in the mechanical arm coordinate system from the pose information of the template workpiece and the obtained conversion matrix; specifically:
in step S7, a flowchart of the process of calculating in real time the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated, based on the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated, can be seen in fig. 11; the process includes:
S71, calculate the translation amount of the engine cover to be estimated in the camera pixel coordinate system, and translate the direction vector of the template workpiece so that it shares a starting point with the direction vector of the engine cover to be estimated, obtaining the translated direction vector;
suppose the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated, determined in the manner of S2-S4, are check_center(u,v) and check_vector(u,v) respectively;
in step S71, the translation amount pixel_T(u,v) of the engine cover to be estimated in the camera pixel coordinate system is:
pixel_T(u,v) = check_center(u,v) - temp_center(u,v)
The direction vector temp_vector(u,v) of the template workpiece is translated by pixel_T(u,v) so that it shares its starting point with the direction vector check_vector(u,v) of the engine cover to be estimated, giving the translated direction vector trans_vector(u,v); the calculation expression is:
trans_vector(u,v) = (temp_circle(u,v) + pixel_T(u,v)) - check_center(u,v)
S72, calculate the rotation angle of the engine cover to be estimated relative to the template workpiece based on the translated direction vector and the direction vector of the engine cover to be estimated;
the process of calculating the angle of rotation of the engine cover to be estimated relative to the template workpiece, based on the translated direction vector and the direction vector of the engine cover to be estimated, satisfies:

$$\cos(check\_angle) = \frac{check\_vector(u,v) \cdot trans\_vector(u,v)}{\lVert check\_vector(u,v) \rVert \, \lVert trans\_vector(u,v) \rVert}$$

check_angle = cos^{-1}(cos(check_angle))

where cos(check_angle) represents the cosine of the angle between the direction vector check_vector(u,v) of the engine cover to be estimated and the translated direction vector trans_vector(u,v), and check_angle represents the angle of rotation of the engine cover to be estimated relative to the template workpiece.
S73, converting the image positioning points of the template workpiece and the image positioning points of the engine cover to be estimated from a camera pixel coordinate system to a mechanical arm base coordinate system by using the conversion matrix to obtain the mechanical armThe method comprises the following steps of (1) positioning points of a template workpiece image and image positioning points of an engine cover to be estimated under a base coordinate system; by transforming matrices
Figure BDA0003595139950000139
Figure BDA0003595139950000141
Converting a template workpiece image positioning point temp _ center (u, v) and an image positioning point to be detected check _ center (u, v) from a pixel coordinate system to a mechanical arm base coordinate system to obtain temp _ center (x, y) and check _ center (x, y), wherein the calculation formula is as follows:
Figure BDA0003595139950000142
Figure BDA0003595139950000143
setting the template workpiece image positioning point and the image positioning point of the to-be-estimated engine cover in the mechanical arm base coordinate system obtained in the step S73 as temp _ center (x, y) and check _ center (x, y), respectively, and calculating the translation amount of the to-be-estimated engine cover in the mechanical arm base coordinate system based on the template workpiece image positioning point and the image positioning point of the to-be-estimated engine cover in the mechanical arm base coordinate system according to the following formula:
base_T(x,y)=check_center(x,y)-temp_center(x,y)
wherein, base _ T (x, y) represents the translation amount of the engine cover to be estimated under the base coordinate of the mechanical arm;
S74, calculate the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system based on the template workpiece image positioning point and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system;
S75, calculate the translated new mechanical arm coordinate point in the mechanical arm base coordinate system based on the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system and the mechanical arm coordinate point under the corresponding template workpiece pose; the expression is:
check_robot(x,y)=temp_robot(x,y)+base_T(x,y)
where check_robot(x,y) represents the translated new mechanical arm coordinate point in the mechanical arm base coordinate system, and temp_robot(x,y) represents the mechanical arm coordinate point under the corresponding template workpiece pose.
S76, calculating a final positioning point of the engine cover to be estimated based on a mechanical arm coordinate point under a new mechanical arm base coordinate, the rotation angle of the engine cover to be estimated relative to the template workpiece and an image positioning point of the engine cover to be estimated under a mechanical arm base coordinate system;
Based on the mechanical arm coordinate point check_robot(x,y) in the new mechanical arm base coordinate system, the angle check_angle of rotation of the engine cover to be estimated relative to the template workpiece, and the image positioning point check_center(x,y) of the engine cover to be estimated in the mechanical arm base coordinate system, the expression for calculating the final positioning point of the engine cover to be estimated is:

$$\begin{bmatrix} x_{position} \\ y_{position} \\ 1 \end{bmatrix} = T2_{3\times 3}\, R_{3\times 3}\, T1_{3\times 3} \begin{bmatrix} x_{check\_robot} \\ y_{check\_robot} \\ 1 \end{bmatrix}$$

where

$$T1_{3\times 3} = \begin{bmatrix} 1 & 0 & -x_{check\_center} \\ 0 & 1 & -y_{check\_center} \\ 0 & 0 & 1 \end{bmatrix}, \quad R_{3\times 3} = \begin{bmatrix} \cos(check\_angle) & -\sin(check\_angle) & 0 \\ \sin(check\_angle) & \cos(check\_angle) & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T2_{3\times 3} = \begin{bmatrix} 1 & 0 & x_{check\_center} \\ 0 & 1 & y_{check\_center} \\ 0 & 0 & 1 \end{bmatrix}$$

and where position(x,y) represents the final positioning point of the engine cover to be estimated; x_position represents the x-axis coordinate of the final positioning point of the engine cover to be estimated in the mechanical arm base coordinate system; y_position represents the y-axis coordinate of the final positioning point of the engine cover to be estimated in the mechanical arm base coordinate system; x_check_robot and y_check_robot represent the x-axis and y-axis coordinates of the mechanical arm coordinate point check_robot(x,y) in the mechanical arm base coordinate system; T1_3x3 represents the first translation transformation matrix, built from the image positioning point check_center(x,y) of the engine cover to be estimated in the mechanical arm base coordinate system; R_3x3 represents the rotation transformation matrix about the origin of the mechanical arm base coordinate system; T2_3x3 represents the second translation transformation matrix, likewise built from check_center(x,y). More specifically, T1_3x3 first shifts check_robot(x,y) by (-x_check_center, -y_check_center), then R_3x3 rotates the result by check_angle degrees about the origin, and T2_3x3 shifts it back by (x_check_center, y_check_center).
S77, calculate the final attitude direction angle of the engine cover to be estimated based on the rotation angle of the engine cover to be estimated relative to the template workpiece and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane.
The final attitude direction angle of the engine cover to be estimated, calculated from the angle check_angle of rotation of the engine cover to be estimated relative to the template workpiece and the mechanical arm attitude direction angle temp_angle in the mechanical arm base coordinate system plane, is:
angle = temp_angle + check_angle.
and finally giving position (x, y) and angle to the mechanical arm to correct the pose of the mechanical arm and grab the engine cover.
The examples are given solely for the purpose of clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. A method for estimating the grabbing pose of an automobile engine cover under a two-dimensional image is characterized by comprising the following steps of:
S1, performing two-dimensional hand-eye calibration by using the nine-point calibration method, and establishing the coordinate transformation relation between the camera pixel coordinate system and the mechanical arm base coordinate system to obtain a conversion matrix;
S2, obtaining a template workpiece image of the engine cover, down-sampling the template workpiece image and performing rough positioning, obtaining the centroid point of the feature contour through rough positioning, computing the corresponding center point, and obtaining a fine-positioning region picture based on the center point;
S3, performing fine positioning based on the fine-positioning region picture to obtain the positioning point of the template workpiece image and the center point of the template workpiece image;
S4, connecting the positioning point of the template workpiece image and the center point of the template workpiece image to obtain the direction vector of the template workpiece;
S5, directly grabbing an engine cover under the template workpiece pose by using the mechanical arm, and determining the mechanical arm coordinate point under the corresponding template workpiece pose and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane;
S6, shooting a two-dimensional image of the engine cover to be estimated at its real-time position, and determining the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated in the manner of S2-S4;
S7, calculating and acquiring in real time the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated, based on the image positioning point of the engine cover to be estimated and the direction vector of the engine cover to be estimated;
S8, sending the final positioning point of the engine cover to be estimated and the final attitude direction angle of the engine cover to be estimated to the mechanical arm for correction, and grabbing the engine cover with the mechanical arm according to the correction.
2. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 1, wherein in step S1, the coordinates of nine points acquired by the camera in the camera pixel coordinate system are denoted (u_1, v_1), ..., (u_9, v_9), and the coordinates of the same nine points in the mechanical arm base coordinate system are denoted (x_1, y_1), ..., (x_9, y_9); for the coordinates (u, v) of any one of the nine points in the camera pixel coordinate system and its coordinates (x, y) in the mechanical arm base coordinate system, the coordinate conversion relation satisfies:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} A & B & C \\ D & E & F \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where the 3x3 matrix represents the conversion matrix, and A, B, C, D, E, F are all elements of the conversion matrix.
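As a minimal sketch of how the conversion matrix of step S1 can be fitted, assuming the affine form reconstructed above; the helper names fit_nine_point_calibration and pixel_to_base are illustrative, not part of the patent:

```python
import numpy as np

def fit_nine_point_calibration(pixel_pts, base_pts):
    """Least-squares fit of the affine conversion matrix from nine point pairs.

    pixel_pts: (9, 2) array of (u, v) camera pixel coordinates.
    base_pts:  (9, 2) array of (x, y) mechanical arm base coordinates.
    """
    u, v = pixel_pts[:, 0], pixel_pts[:, 1]
    G = np.column_stack([u, v, np.ones(len(pixel_pts))])
    # Solve G @ [A, B, C]^T ~= x and G @ [D, E, F]^T ~= y in the least-squares sense.
    abc, *_ = np.linalg.lstsq(G, base_pts[:, 0], rcond=None)
    def_, *_ = np.linalg.lstsq(G, base_pts[:, 1], rcond=None)
    return np.vstack([abc, def_, [0.0, 0.0, 1.0]])  # 3x3 conversion matrix

def pixel_to_base(M, u, v):
    """Map a pixel coordinate (u, v) to base coordinates (x, y) with matrix M."""
    x, y, _ = M @ np.array([u, v, 1.0])
    return x, y
```

With nine well-spread calibration points the system is overdetermined, so the least-squares solution also averages out small measurement noise.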
3. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 2, wherein in step S2 the template workpiece image is down-sampled and then roughly positioned, the centroid point of the feature contour is obtained from the rough positioning, the corresponding center point is calculated, and the fine-positioning region picture is obtained based on that center point, as follows:
S21, denote the acquired template workpiece image as temp_img(u, v), and denote the picture obtained after down-sampling as sub_img(u, v);
S22, obtain the binary map binary_fuzzy(u, v) of the picture sub_img(u, v) with the OTSU threshold segmentation algorithm, and perform binary morphological operations on binary_fuzzy(u, v) to obtain the contour edge(u, v);
S23, based on the contour shape characteristics of the top of the engine cover, screen the feature contour feature_edge(u, v) out of the contour edge(u, v) by the number of sides of the fitted polygon;
S24, calculate the first-order moments M_{1,0}, M_{0,1} and the zero-order moment M_{0,0} of the feature contour feature_edge(u, v) to obtain its centroid point edge_centroid(u, v); the calculation formulas are:
u_edge_centroid = M_{1,0} / M_{0,0}
v_edge_centroid = M_{0,1} / M_{0,0}
where u_edge_centroid represents the u-axis coordinate of the centroid point edge_centroid(u, v), and v_edge_centroid represents its v-axis coordinate;
S25, convert the centroid point edge_centroid(u, v) in sub_img(u, v) into the center point roi_center(u, v) in the picture temp_img(u, v); the conversion relation satisfies:
u_roi_center = u_edge_centroid * cols / h1
v_roi_center = v_edge_centroid * rows / w1
where cols represents the pixel length of the picture temp_img(u, v), rows represents its pixel width, and h1, w1 are the pixel length and pixel width after down-sampling;
S26, with roi_center(u, v) as the center point, extract the fine-positioning region picture new_img(u, v) with pixel length h2 and pixel width w2.
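The following OpenCV sketch walks through S21-S26 under stated assumptions: the polygon side-count threshold min_sides is hypothetical (the claim only names the screening criterion), and the crop here is centered on roi_center, whereas the claim leaves the exact crop origin open.

```python
import cv2
import numpy as np

def coarse_locate(temp_img, h1, w1, h2, w2, min_sides=8):
    """Rough positioning (S21-S26); min_sides and the centered crop are assumptions."""
    # S21: down-sample to h1 x w1 pixels (cv2.resize takes dsize as (width, height),
    # reading h1 as the length along u and w1 as the width along v)
    sub_img = cv2.resize(temp_img, (h1, w1))
    gray = cv2.cvtColor(sub_img, cv2.COLOR_BGR2GRAY)
    # S22: OTSU threshold segmentation, then a morphological closing on the binary map
    _, binary_fuzzy = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    edge = cv2.morphologyEx(binary_fuzzy, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(edge, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # S23: screen the feature contour by the side count of its fitted polygon
    feature_edge = None
    for c in contours:
        poly = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
        if len(poly) >= min_sides:
            feature_edge = c
            break
    if feature_edge is None:
        raise ValueError("no contour matched the side-count criterion")
    # S24: centroid from the zero- and first-order image moments
    m = cv2.moments(feature_edge)
    u_c, v_c = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # S25: map the centroid back to the full-resolution image
    rows, cols = temp_img.shape[:2]
    u_roi, v_roi = u_c * cols / h1, v_c * rows / w1
    # S26: crop an h2 x w2 fine-positioning region around roi_center
    u0, v0 = int(round(u_roi - h2 / 2)), int(round(v_roi - w2 / 2))
    new_img = temp_img[v0:v0 + w2, u0:u0 + h2]
    return (u_roi, v_roi), new_img
```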
4. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 3, wherein step S3 performs fine positioning based on the fine-positioning region picture, and the process of obtaining the positioning point of the template workpiece image and the center point of the template workpiece image comprises the following steps:
S31, operate on the fine-positioning region picture new_img(u, v) in the manner of steps S22-S24 to obtain the contour new_edge(u, v), and screen out the feature contour new_feature_edge(u, v);
S32, calculate the roundness of each contour and its distance to new_feature_edge(u, v), and, according to the calculated distance values, screen an unordered series of feature contours feature_circle(u, v) out of the contour new_edge(u, v);
S33, perform least-squares circle fitting on the unordered series of feature contours feature_circle(u, v) to obtain the center point circle_center(u, v) of the circle fitted to each feature contour feature_circle(u, v);
S34, sort the unordered series of center points circle_center(u, v) to obtain an ordered series of center points order_circle_center(u, v);
S35, connect the series of center points order_circle_center(u, v) end to end in sequence to form a new closed feature contour feature_contour(u, v);
S36, in the manner of step S24, obtain the centroid point contour_centroid(u, v) of the closed feature contour feature_contour(u, v); among the series of center points order_circle_center(u, v), find the center point final_circle_1(u, v) farthest from the centroid point contour_centroid(u, v), and then find the center point final_circle_2(u, v) farthest from final_circle_1(u, v);
S37, take the midpoint middle_center(u, v) of the center points final_circle_1(u, v) and final_circle_2(u, v), and obtain the template workpiece image positioning point temp_center(u, v) and center point temp_circle(u, v) through coordinate transformation; the transformation relations are:
u_temp_center = u_roi_center - h2 + u_middle_center
v_temp_center = v_roi_center - w2 + v_middle_center
u_temp_circle = u_roi_center - h2 + u_final_circle_1
v_temp_circle = v_roi_center - w2 + v_final_circle_1
where u_temp_center, v_temp_center respectively represent the u- and v-coordinate values of the template workpiece image positioning point temp_center(u, v), and u_temp_circle, v_temp_circle respectively represent the u- and v-coordinate values of the template workpiece image center point temp_circle(u, v).
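The least-squares circle fitting of S33 can be realized with the algebraic (Kåsa) fit sketched below; the claim does not specify the sorting rule of S34, so the angular sort around the mean point is one plausible choice, not necessarily the patent's method.

```python
import numpy as np

def fit_circle_least_squares(pts):
    """Algebraic (Kåsa) least-squares circle fit; pts is an (N, 2) array of (u, v)."""
    u, v = pts[:, 0], pts[:, 1]
    # The circle (u-cu)^2 + (v-cv)^2 = r^2 rearranges to a*u + b*v + c = -(u^2 + v^2)
    # with a = -2*cu, b = -2*cv, c = cu^2 + cv^2 - r^2.
    G = np.column_stack([u, v, np.ones(len(pts))])
    d = -(u ** 2 + v ** 2)
    (a, b, c), *_ = np.linalg.lstsq(G, d, rcond=None)
    cu, cv = -a / 2.0, -b / 2.0                  # fitted circle center
    r = np.sqrt(cu ** 2 + cv ** 2 - c)           # fitted circle radius
    return (cu, cv), r

def order_centers_by_angle(centers):
    """One possible S34 ordering (assumption): sort by polar angle about the mean."""
    centers = np.asarray(centers, dtype=float)
    mean = centers.mean(axis=0)
    ang = np.arctan2(centers[:, 1] - mean[1], centers[:, 0] - mean[0])
    return centers[np.argsort(ang)]
```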
5. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 4, wherein in step S4, the positioning point of the template workpiece image and the center point of the template workpiece image are connected, and the resulting template workpiece direction vector satisfies:

$$\vec{V}_{temp} = temp\_circle(u, v) - temp\_center(u, v)$$

where $\vec{V}_{temp}$ represents the template workpiece direction vector, temp_center(u, v) represents the template workpiece image positioning point, and temp_circle(u, v) represents the template workpiece image center point.
6. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 5, wherein in step S5, the mechanical arm directly grabs the engine cover in the template workpiece pose, and the recorded mechanical arm coordinate point for that pose and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane are temp_robot(x, y) and temp_angle respectively;
in step S7, the process of calculating in real time the final positioning point and the final attitude direction angle of the engine cover to be estimated, based on its image positioning point and direction vector, comprises:
S71, calculate the translation amount of the engine cover to be estimated in the camera pixel coordinate system, and translate the template workpiece direction vector so that it shares a starting point with the direction vector of the engine cover to be estimated, obtaining the translated direction vector;
S72, calculate the rotation angle of the engine cover to be estimated relative to the template workpiece, based on the translated direction vector and the direction vector of the engine cover to be estimated;
S73, convert the template workpiece image positioning point and the image positioning point of the engine cover to be estimated from the camera pixel coordinate system to the mechanical arm base coordinate system with the conversion matrix, obtaining both positioning points in the mechanical arm base coordinate system;
S74, calculate the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system, based on the two positioning points obtained in S73;
S75, calculate the new mechanical arm coordinate point in the mechanical arm base coordinate system after translation, based on the translation amount from S74 and the mechanical arm coordinate point for the template workpiece pose;
S76, calculate the final positioning point of the engine cover to be estimated, based on the new mechanical arm coordinate point, the rotation angle of the engine cover to be estimated relative to the template workpiece, and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system;
S77, calculate the final attitude direction angle of the engine cover to be estimated, based on the rotation angle of the engine cover to be estimated relative to the template workpiece and the mechanical arm attitude direction angle in the mechanical arm base coordinate system plane.
7. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 6, wherein the image positioning point and the direction vector of the engine cover to be estimated, determined in the manner of S2-S4, are check_center(u, v) and $\vec{V}_{check}$ respectively;
in step S71, the translation amount pixel_T(u, v) of the engine cover to be estimated in the camera pixel coordinate system is:

pixel_T(u, v) = check_center(u, v) - temp_center(u, v)

the template workpiece direction vector $\vec{V}_{temp}$ is translated so that it shares a starting point with the direction vector $\vec{V}_{check}$ of the engine cover to be estimated, giving the translated direction vector $\vec{V}'_{temp}$; the calculation expression is:

$$\vec{V}'_{temp} = \big(temp\_circle(u, v) + pixel\_T(u, v)\big) - check\_center(u, v)$$
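A small numeric sketch of S71 under the expression above; all point values are illustrative, not taken from the patent. It also makes explicit that translating a vector leaves its components unchanged.

```python
import numpy as np

# Illustrative 2-D pixel coordinates (assumptions, not patent data)
temp_center = np.array([512.0, 384.0])   # template positioning point
temp_circle = np.array([620.0, 300.0])   # template center point
check_center = np.array([540.0, 410.0])  # positioning point of the cover to estimate

pixel_T = check_center - temp_center                 # translation in the pixel frame
v_temp = temp_circle - temp_center                   # template direction vector
v_temp_translated = (temp_circle + pixel_T) - check_center
# Translation moves both endpoints, so the vector components stay the same:
assert np.allclose(v_temp, v_temp_translated)
```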
8. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 7, wherein in step S72, the rotation angle of the engine cover to be estimated relative to the template workpiece, calculated from the translated direction vector and the direction vector of the engine cover to be estimated, satisfies:

$$\cos(check\_angle) = \frac{\vec{V}_{check} \cdot \vec{V}'_{temp}}{\lVert\vec{V}_{check}\rVert \, \lVert\vec{V}'_{temp}\rVert}$$

$$check\_angle = \cos^{-1}\big(\cos(check\_angle)\big)$$

where cos(check_angle) represents the cosine of the angle between the direction vector $\vec{V}_{check}$ of the engine cover to be estimated and the translated direction vector $\vec{V}'_{temp}$, and check_angle represents the angle through which the engine cover to be estimated has rotated relative to the template workpiece.
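S72 then reduces to a normalized dot product followed by arccos; a minimal sketch:

```python
import numpy as np

def rotation_angle(v_check, v_temp_translated):
    """S72: unsigned angle between the two direction vectors, in radians."""
    cos_a = np.dot(v_check, v_temp_translated) / (
        np.linalg.norm(v_check) * np.linalg.norm(v_temp_translated)
    )
    cos_a = np.clip(cos_a, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.arccos(cos_a)
```

Note that arccos alone yields an unsigned angle in [0, π]; if the rotation direction matters in practice, a sign test such as np.sign(np.cross(v_temp_translated, v_check)) would be needed, which the claim's formula does not include.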
9. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 8, wherein the template workpiece image positioning point and the image positioning point of the engine cover to be estimated in the mechanical arm base coordinate system, obtained in step S73, are temp_center(x, y) and check_center(x, y) respectively, and the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system is calculated from them by the formula:

base_T(x, y) = check_center(x, y) - temp_center(x, y)

where base_T(x, y) represents the translation amount of the engine cover to be estimated in the mechanical arm base coordinate system;
the new mechanical arm coordinate point in the mechanical arm base coordinate system after translation is calculated, from the translation amount of the engine cover to be estimated and the mechanical arm coordinate point for the template workpiece pose, by the expression:

check_robot(x, y) = temp_robot(x, y) + base_T(x, y)

where check_robot(x, y) represents the new mechanical arm coordinate point after translation, and temp_robot(x, y) represents the mechanical arm coordinate point for the template workpiece pose.
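A sketch of S73-S75, reusing the hypothetical pixel_to_base helper from the sketch after claim 2; M is the conversion matrix of step S1.

```python
import numpy as np

def translate_robot_point(M, temp_center_uv, check_center_uv, temp_robot_xy):
    """S73-S75: move the recorded grab point by the base-frame translation amount."""
    # S73: convert both positioning points from pixel to base coordinates with M
    temp_center_xy = np.array(pixel_to_base(M, *temp_center_uv))
    check_center_xy = np.array(pixel_to_base(M, *check_center_uv))
    # S74: translation amount base_T(x, y) of the engine cover in the base frame
    base_T = check_center_xy - temp_center_xy
    # S75: translated mechanical arm coordinate point check_robot(x, y)
    check_robot = np.asarray(temp_robot_xy, dtype=float) + base_T
    return check_robot, check_center_xy
```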
10. The method for estimating the grabbing pose of the automobile engine cover under a two-dimensional image according to claim 9, wherein, based on the new mechanical arm coordinate point check_robot(x, y), the angle check_angle through which the engine cover to be estimated has rotated relative to the template workpiece, and the image positioning point check_center(x, y) of the engine cover to be estimated in the mechanical arm base coordinate system, the final positioning point of the engine cover to be estimated is calculated by the expression:

$$\begin{bmatrix} x_{position} \\ y_{position} \\ 1 \end{bmatrix} = T2_{3\times 3}\, R_{3\times 3}\, T1_{3\times 3} \begin{bmatrix} x_{check\_robot} \\ y_{check\_robot} \\ 1 \end{bmatrix}$$

where

$$T1_{3\times 3} = \begin{bmatrix} 1 & 0 & -x_{check\_center} \\ 0 & 1 & -y_{check\_center} \\ 0 & 0 & 1 \end{bmatrix}, \quad R_{3\times 3} = \begin{bmatrix} \cos(check\_angle) & -\sin(check\_angle) & 0 \\ \sin(check\_angle) & \cos(check\_angle) & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T2_{3\times 3} = \begin{bmatrix} 1 & 0 & x_{check\_center} \\ 0 & 1 & y_{check\_center} \\ 0 & 0 & 1 \end{bmatrix}$$

in which position(x, y) represents the final positioning point of the engine cover to be estimated; x_position and y_position represent its x- and y-axis coordinates in the mechanical arm base coordinate system; x_check_robot and y_check_robot represent the x- and y-axis coordinates of the mechanical arm coordinate point check_robot(x, y) in the mechanical arm base coordinate system; T1_{3x3} represents the first translation transformation matrix, which translates the image positioning point check_center(x, y) of the engine cover to be estimated to the origin; R_{3x3} represents the rotation transformation matrix, which rotates the point check_robot(x, y) by check_angle; and T2_{3x3} represents the second translation transformation matrix, which translates the result back to check_center(x, y);
the final attitude direction angle of the engine cover to be estimated is calculated, based on the angle check_angle through which the engine cover to be estimated has rotated relative to the template workpiece and the mechanical arm attitude direction angle temp_angle in the mechanical arm base coordinate system plane, as:

angle = temp_angle + check_angle.
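Under the T1/R/T2 reconstruction above, S76-S77 compose into a few lines; this is a sketch, with final_grab_pose a hypothetical name.

```python
import numpy as np

def final_grab_pose(check_robot, check_center, check_angle, temp_angle):
    """S76: rotate check_robot about check_center by check_angle; S77: add angles."""
    xc, yc = check_center
    T1 = np.array([[1.0, 0.0, -xc], [0.0, 1.0, -yc], [0.0, 0.0, 1.0]])
    c, s = np.cos(check_angle), np.sin(check_angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T2 = np.array([[1.0, 0.0, xc], [0.0, 1.0, yc], [0.0, 0.0, 1.0]])
    x, y, _ = T2 @ R @ T1 @ np.array([check_robot[0], check_robot[1], 1.0])
    return (x, y), temp_angle + check_angle  # position(x, y) and final angle
```

The returned position and angle are what step S8 sends to the mechanical arm.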
CN202210386421.7A 2022-04-13 2022-04-13 Method for estimating grabbing pose of automobile engine cover under two-dimensional image Pending CN114882108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210386421.7A CN114882108A (en) 2022-04-13 2022-04-13 Method for estimating grabbing pose of automobile engine cover under two-dimensional image

Publications (1)

Publication Number Publication Date
CN114882108A true CN114882108A (en) 2022-08-09

Family

ID=82669044

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116148259A (en) * 2022-12-28 2023-05-23 广州市斯睿特智能科技有限公司 Vehicle defect positioning system, method, device and storage medium
CN116148259B (en) * 2022-12-28 2024-03-22 广州市斯睿特智能科技有限公司 Vehicle defect positioning system, method, device and storage medium

Similar Documents

Publication Publication Date Title
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
CN113524194B (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN110315525A A robot workpiece grabbing method based on vision guidance
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN111311679B (en) Free floating target pose estimation method based on depth camera
CN109886124B (en) Non-texture metal part grabbing method based on wire harness description subimage matching
CN111784655B Underwater robot recovery and positioning method
CN111645074A (en) Robot grabbing and positioning method
CN112529858A (en) Welding seam image processing method based on machine vision
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN112419429B (en) Large-scale workpiece surface defect detection calibration method based on multiple viewing angles
CN108907526A A highly robust weld image feature recognition method
CN113781561B (en) Target pose estimation method based on self-adaptive Gaussian weight quick point feature histogram
CN112365439B (en) Method for synchronously detecting forming characteristics of GMAW welding seam of galvanized steel and direction of welding gun in real time
CN113146172A (en) Multi-vision-based detection and assembly system and method
CN113500593B (en) Method for grabbing designated part of shaft workpiece for feeding
CN114140439A (en) Laser welding seam feature point identification method and device based on deep learning
CN109035214A An industrial robot material shape recognition method
CN115830018B (en) Carbon block detection method and system based on deep learning and binocular vision
CN114882108A (en) Method for estimating grabbing pose of automobile engine cover under two-dimensional image
CN112381783A (en) Weld track extraction method based on red line laser
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN115958605A (en) Monocular infrared thermal imaging vision manipulator object grabbing posture estimation device and method
CN112588621B (en) Agricultural product sorting method and system based on visual servo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination