CN112658643A - Connector assembly method - Google Patents

Connector assembly method

Info

Publication number
CN112658643A
CN112658643A (application CN202011606810.3A)
Authority
CN
China
Prior art keywords
image
ellipse
female head
expected
head
Prior art date
Legal status
Granted
Application number
CN202011606810.3A
Other languages
Chinese (zh)
Other versions
CN112658643B (en)
Inventor
陶显
严少华
徐德
马文治
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202011606810.3A
Publication of CN112658643A
Application granted
Publication of CN112658643B
Legal status: Active
Anticipated expiration

Landscapes

  • Manipulator (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of precision assembly and aims to solve the problem that existing plug connector assembly methods cannot simultaneously meet the requirements of high precision and high degree of freedom. It provides a plug connector assembly method comprising: acquiring an image of the female head of the part to be assembled as the input image; obtaining, through a trained network, the image of the elliptical ring area containing the grooves, extracting the elliptical contour and performing ellipse fitting to obtain the three-dimensional coordinates of the ellipse center point and of N interior points; performing plane fitting together with the expected center point of the female head to obtain a unit normal vector and a displacement, calculating the expected position, rotation axis and rotation angle of the end of the mechanical arm, and adjusting the image acquisition device into alignment with the female head; then acquiring an image while directly facing the female head, obtaining the pose information of the groove feature points, and combining it with the expected pose information of the female head to obtain the pose deviation value, with which the male and female heads are finely aligned. The invention achieves high-precision, high-degree-of-freedom assembly of plug connectors.

Description

Connector assembly method
Technical Field
The invention belongs to the technical field of precision assembly, and particularly relates to a plug connector assembly method.
Background
With the development of high-precision cameras and mechanical arms, demand for high-precision, multi-degree-of-freedom automatic assembly technology is increasing in industrial manufacturing, military, space exploration and other fields. In on-orbit maintenance tasks in the aerospace field, a mechanical arm must automatically assemble male and female parts that start from different initial positions and angles, with assembly precision at millimeter level or better. Automatic assembly of connectors with a mechanical arm requires solving key technologies such as automatic recognition and feature extraction of the connector against different backgrounds, vision-based measurement of the connector pose, and alignment.
Existing assembly measurement methods in the field of precision assembly often cannot combine high precision with high degree of freedom. A common high-precision approach performs segmented measurement with two or more cameras together with ranging sensors, but the objects handled in such assembly tasks are usually simple, e.g. symmetric rectangular parts, and high-precision asymmetric parts cannot be measured. Other methods collect images with a high-precision camera for accurate measurement, but they address peg-in-hole assembly, measure the attitude only in some degrees of freedom, are limited in operating space, and cannot perform large-range six-degree-of-freedom assembly.
The usual assembly control method measures the pose of the target part, then drives the end of the mechanical arm to the corresponding pose, and finally completes the assembly action. However, this requires high-precision calibration of the camera's intrinsic and extrinsic parameters, as well as hand-eye calibration of the camera on the mechanical arm. The calibration results contain errors; in particular, when the camera is far from the end of the mechanical arm, the hand-eye calibration error is large, and the position error between the arm end and the camera degrades the accuracy of the actual assembly.
Accordingly, there is a need in the art for a new part pose measurement and assembly control method that solves the above problems.
Disclosure of Invention
In order to solve the problems that existing part assembly measurement methods cannot simultaneously achieve high precision and high degree of freedom, and that calibration errors in existing assembly control methods degrade assembly precision, the invention provides a plug connector assembly method comprising the following steps:
s100, acquiring a female head image of a part to be assembled as an input image;
s200, acquiring an elliptical ring area image containing a groove in the input image through a trained image segmentation network, extracting an elliptical contour based on the elliptical ring area image, performing ellipse fitting, and acquiring three-dimensional coordinates of an elliptical central point and N internal points;
step S300, performing plane fitting to obtain a unit normal vector and a displacement amount based on the ellipse central point, the three-dimensional coordinates of the N internal points and an expected central point preset by the female head; calculating an expected position, an expected rotating shaft and an expected rotating angle of the tail end of the mechanical arm based on the unit normal vector, the displacement and the central point of the ellipse so as to control the mechanical arm to realize the alignment of the image acquisition device and the female head;
step S400, collecting a female head image again, and obtaining the pose information of the groove feature points as first pose information by the method of step S200; obtaining a pose deviation value based on the first pose information and the expected pose information preset for the female head, and aligning the male head with the female head according to the pose deviation value and the preset relative position of the male head and the image acquisition device.
In some preferred embodiments, step S200 specifically includes the following steps:
step S210, extracting contours from the elliptical ring area image, the two largest contours being the inner and outer ellipses of the female part surface;
step S220, ellipse fitting is carried out by using a least square method, and parameter equations of an inner ellipse and an outer ellipse are as follows:
u = u_0 + a_in·cos(θ + θ_0), v = v_0 + b_in·sin(θ + θ_0)  (inner ellipse)
u = u_0 + a_out·cos(θ + θ_0), v = v_0 + b_out·sin(θ + θ_0)  (outer ellipse)
where the ellipse center point P_ac has coordinates (u_0, v_0); a_in and b_in are the major-axis and minor-axis lengths of the inner ellipse; θ_0 is the initial angle of the ellipse; θ is the parameter, θ ∈ (0, 2π); a_out and b_out are the major-axis and minor-axis lengths of the outer ellipse;
step S230, obtaining a similar ellipse equation passing through the groove area according to the parameter equation of the inner ellipse and the outer ellipse:
u_e = u_0 + [(1 − k)·a_in + k·a_out]·cos(θ + θ_0), v_e = v_0 + [(1 − k)·b_in + k·b_out]·sin(θ + θ_0)
where the parameter k, k ∈ (0, 1), represents the degree to which the similar ellipse approaches the outer ellipse, and (u_e, v_e) are the image coordinates corresponding to the feature points;
step S240, gradually increasing the parameter angle θ according to the similar ellipse equation, and searching for the sets of parameter angles corresponding to runs of consecutive points whose pixel values differ strongly from the ring area;
step S250, extracting the 5 sets spanning the largest continuous angles; the mean of all angles in each set is the parameter angle θ_i of the corresponding feature point;
step S260, according to the similar ellipse equation, when k > 0.5 all points of the similar ellipse lie on the elliptical ring; substituting the parameter angles θ_i yields the image coordinates (u_ei, v_ei) of the feature points P_si.
In some preferred embodiments, k in step S260 is 0.7.
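Steps S220–S260 above can be sketched as follows — a minimal Python illustration assuming the interpolated ("similar ellipse") parametric form described in steps S220–S230; the function names and the run-grouping details are illustrative, not taken from the patent.

```python
import numpy as np

def similar_ellipse_point(u0, v0, a_in, b_in, a_out, b_out, theta0, theta, k=0.7):
    """Point on the 'similar ellipse' interpolated between the fitted inner
    and outer ellipses; k in (0, 1) moves it toward the outer ellipse."""
    a = (1 - k) * a_in + k * a_out
    b = (1 - k) * b_in + k * b_out
    u = u0 + a * np.cos(theta + theta0)
    v = v0 + b * np.sin(theta + theta0)
    return u, v

def groove_angles(theta_samples, is_groove, n_sets=5):
    """Group consecutive sampled angles flagged as groove pixels (pixel value
    differing strongly from the ring) into runs, keep the n_sets longest runs,
    and return the mean angle of each run (steps S240-S250)."""
    runs, current = [], []
    for theta, flag in zip(theta_samples, is_groove):
        if flag:
            current.append(theta)
        elif current:
            runs.append(current)
            current = []
    if current:
        runs.append(current)
    runs.sort(key=len, reverse=True)
    return [float(np.mean(r)) for r in runs[:n_sets]]
```

In use, θ would be swept over (0, 2π), each sampled point compared against the ring's gray level to build the `is_groove` flags, and each returned mean angle substituted back into `similar_ellipse_point` to recover the feature-point image coordinates (u_ei, v_ei).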
In some preferred embodiments, step S300 specifically includes the following: z_cd = |P_ac − P_ad|, where z_cd is the displacement of the end of the mechanical arm from the initial position to the position aligned with the female head, P_ad is the preset expected center point of the female head, and P_ac is the ellipse center point;
P_ed = P_ac + n_c·z_cd, where P_ed is the desired position of the end of the mechanical arm and n_c is the unit normal vector;
f = z × (−n_c) = [n_cy, −n_cx, 0]^T, where f is the desired rotation axis of the end of the mechanical arm;
θ = arccos(n_cz), where θ is the desired rotation angle of the end of the mechanical arm.
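The coarse-alignment quantities above can be sketched directly; a minimal NumPy version, assuming n_c·z_cd denotes scaling the unit normal by the displacement (the function name is illustrative):

```python
import numpy as np

def approach_pose(P_ac, P_ad, n_c):
    """Desired end pose for the coarse-alignment (approach) stage.
    P_ac -- fitted ellipse center (3-D), P_ad -- preset expected center,
    n_c  -- unit normal of the fitted female-head plane."""
    z_cd = np.linalg.norm(P_ac - P_ad)      # displacement to the aligned position
    P_ed = P_ac + n_c * z_cd                # desired end position
    f = np.array([n_c[1], -n_c[0], 0.0])    # rotation axis: z x (-n_c)
    theta = np.arccos(n_c[2])               # angle between n_c and the z-axis
    return P_ed, f, theta
```

When the camera already faces the female head squarely (n_c = [0, 0, 1]^T), the axis degenerates to zero and the rotation angle is 0, as expected.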
In some preferred embodiments, step S400 specifically includes the following:
step S410, fitting a plane using the desired positions P_sdi of the plurality of groove feature points P_si to obtain the normal vector n_d = [n_dx, n_dy, n_dz]^T of the desired plane;
Step S420, calculating a desired attitude angle theta of an image space by using the image coordinates of the ith notch corner pointmdz=arc tan 2(vei-vad,uei-uad) Wherein (v)ad,uad) Is the expected coordinate of the groove feature point;
step S430, calculating the desired attitude angles θ_dx, θ_dy and θ_dz from the obtained desired-plane normal vector and the expected coordinates of the groove feature points, so as to adjust the image acquisition device parallel to the plane of the female head to be assembled;
[Equation image in the original patent: θ_dx, θ_dy and θ_dz computed from the desired-plane normal vector n_d and θ_mdz]
step S440, carrying out accurate alignment of the image acquisition device and the female head by adopting a hybrid vision servo control method according to the following formula;
[Equation image in the original patent: hybrid visual servo control law with gains k_1 and k_2, driven by the image-coordinate error, the distance error z_ac − z_ad, and the attitude-angle errors]
where k_1 and k_2 are coefficients; (u_ac, v_ac) are the current image coordinates; z_ac and z_ad are the current and desired distances between the female part and the image acquisition device; (θ_cx, θ_cy) are the current attitude angles about the x- and y-axes calculated from the three-dimensional coordinates; θ_mcz and θ_mdz are the current and desired attitude angles about the z-axis calculated from the image coordinates.
In some preferred embodiments, k_1 = k_2 = 0.6.
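Steps S410–S420 can be sketched as follows — a hedged NumPy illustration in which the plane is fitted by SVD (one common least-squares choice; the patent names only plane fitting, not the algorithm) and the z-axis attitude angle follows the arctan2 expression of step S420:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane through the groove feature points (step S410);
    returns the unit normal n_d = [n_dx, n_dy, n_dz]^T, oriented so n_dz >= 0."""
    P = np.asarray(points, dtype=float)
    centered = P - P.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n if n[2] >= 0 else -n

def image_attitude_angle(u_ei, v_ei, u_ad, v_ad):
    """Desired z-axis attitude angle in image space from the i-th notch
    corner point (step S420): theta_mdz = atan2(v_ei - v_ad, u_ei - u_ad)."""
    return np.arctan2(v_ei - v_ad, u_ei - u_ad)
```

The normal n_d then gives the out-of-plane attitude angles, while θ_mdz supplies the in-plane rotation target for the servo loop of step S440.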
In some preferred embodiments, when the image acquisition device is aligned with the female head, the coincidence of the ellipse center point with the expected center point lies within a first threshold range; here the ellipse center point is taken from the image collected while the image acquisition device is still far from the female head.
When the female head image is collected again, the image acquisition device and the female head lie on the same horizontal axis.
The alignment process of the male and female heads further includes adjusting, with the image acquisition device aligned with the female head, the coincidence of the ellipse center point with the expected center point to within a second threshold range; the second threshold range is of higher precision than the first threshold range.
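A minimal sketch of the two-stage coincidence check described above; the pixel tolerances are illustrative placeholders, since the patent does not state numerical values for the first and second threshold ranges:

```python
def center_coincidence_ok(center, expected, tol):
    """Check that the fitted ellipse center coincides with the expected
    center to within a threshold range (Euclidean distance in pixels)."""
    du = center[0] - expected[0]
    dv = center[1] - expected[1]
    return (du * du + dv * dv) ** 0.5 <= tol

# Illustrative values only: the coarse (first) threshold is looser,
# the fine (second) threshold is tighter, hence of higher precision.
COARSE_TOL_PX = 10.0
FINE_TOL_PX = 2.0
```

The approach stage would accept a pose once the coarse check passes, while the fine-alignment stage repeats the check against the tighter tolerance before the grooves are aligned.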
In some preferred embodiments, the obtaining of the image segmentation network model in step S200 specifically includes the following steps:
acquiring gray-scale images of the female head at different angles and distances with the image acquisition device;
annotating the collected female head gray-scale images, taking the ring area containing the groove information on the female head surface as the label information;
and training the marked data by using the image segmentation network to obtain a trained image segmentation network model.
In some preferred embodiments, the manner of obtaining the desired central point preset by the female head specifically includes the following:
manually controlling the mechanical arm to finish the alignment action of the part;
and changing only the translations along the Z, X and Y axes of the mechanical arm in the end coordinate system so that the male head is withdrawn from the female head until the image acquisition device directly faces the female head; recording the translation of the withdrawal motion and the translation of the image-acquisition motion in the end coordinate system; taking the image collected while the image acquisition device directly faces the female head as the expected image, and performing least-squares ellipse fitting on it to calculate the expected position of the center point.
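The least-squares ellipse fit used to compute the expected center point can be sketched as follows — an assumed direct conic fit (the patent names only "least squares", not the specific formulation); the helper name is hypothetical:

```python
import numpy as np

def ellipse_center_lsq(us, vs):
    """Least-squares conic fit a*u^2 + b*u*v + c*v^2 + d*u + e*v + f = 0
    through the sampled contour points; returns the ellipse center, which
    serves as the expected center point of the female head."""
    us, vs = np.asarray(us, float), np.asarray(vs, float)
    D = np.column_stack([us**2, us * vs, vs**2, us, vs, np.ones_like(us)])
    # Null-space vector of the design matrix = conic coefficients.
    _, _, vt = np.linalg.svd(D)
    a, b, c, d, e, _ = vt[-1]
    # Center: point where the gradient of the conic vanishes.
    M = np.array([[2 * a, b], [b, 2 * c]])
    uc, vc = np.linalg.solve(M, [-d, -e])
    return uc, vc
```

On a clean synthetic ellipse the recovered center matches the true one to numerical precision; on a real contour the fit averages out pixel noise.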
In some preferred embodiments, N ≥ 5.
The plug connector assembly method, designed for device assembly requirements of high precision and high degree of freedom, comprises a part pose measurement method based on 2D and 3D image information and a part assembly control method based on a pre-assembly-approach-alignment strategy. After the surface features of the complex part are accurately extracted by the image segmentation network, the three-dimensional coordinates of the corresponding feature points are read with a structured-light depth camera, achieving accurate pose measurement of complex parts. Meanwhile, the pre-assembly-approach-alignment strategy removes the error introduced by hand-eye calibration, further improves the assembly success rate, and suits large-range, multi-degree-of-freedom assembly scenarios. With the development of high-precision cameras and mechanical arms, this high-precision, multi-degree-of-freedom automatic assembly technology can be widely applied in industrial manufacturing, military, space exploration and other fields.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a perspective view of a component assembly according to an embodiment of the present invention;
FIG. 3 is a schematic view of a surface feature of a part to be assembled in accordance with an embodiment of the present invention;
FIG. 4 is a surface data annotation view of a part to be assembled in accordance with an embodiment of the invention;
FIG. 5 is a graph of a feature extraction result of a desired image of a part to be assembled in one embodiment of the invention;
FIG. 6 is a diagram illustrating a feature extraction result of a current image of a part to be assembled in an approach phase according to an embodiment of the present invention;
FIG. 7 is a system block diagram of a hybrid servo control method of the parts assembling apparatus according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating feature extraction results for a current image of a part to be assembled during an alignment phase in accordance with an embodiment of the present invention;
fig. 9 is a diagram showing a variation in deviation of the current posture from the desired posture of the part to be assembled at the alignment stage in one embodiment of the present invention.
The description of the reference numbers follows in order:
1. a first robot arm; 2. a second mechanical arm; 3. a male plug-in; 4. a plug-in female head; 5. structured light depth camera.
Detailed Description
In order to make the embodiments, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
The invention provides a plug connector assembly method comprising the following steps. Step S100, acquiring a female head image of the part to be assembled as the input image; since the method targets high-precision, high-degree-of-freedom assembly, the position of the female head to be assembled differs from run to run, and an image of the female head's mating surface acquired from an arbitrary position determines the relative position of the female head in space, providing the basis for coarse alignment of the female and male heads.
Step S200, obtaining, through the trained U-NET network, the image of the elliptical ring area containing the grooves in the input image, extracting the elliptical contour from it and performing ellipse fitting, and obtaining the three-dimensional coordinates of the ellipse center point and of N interior points, i.e. points in the actually acquired image, where N ≥ 5. This specifically includes the following steps: step S210, extracting contours from the elliptical ring area image with the findContours function, the two largest contours being the inner and outer ellipses of the female part surface;
step S220, ellipse fitting is carried out by using a least square method, and parameter equations of an inner ellipse and an outer ellipse are as follows:
u = u_0 + a_in·cos(θ + θ_0), v = v_0 + b_in·sin(θ + θ_0)  (inner ellipse)
u = u_0 + a_out·cos(θ + θ_0), v = v_0 + b_out·sin(θ + θ_0)  (outer ellipse)
where the ellipse center point P_ac has coordinates (u_0, v_0); a_in and b_in are the major-axis and minor-axis lengths of the inner ellipse; θ_0 is the initial angle of the ellipse; θ is the parameter, θ ∈ (0, 2π); a_out and b_out are the major-axis and minor-axis lengths of the outer ellipse;
step S230, obtaining a similar ellipse equation passing through the groove area according to the parameter equation of the inner ellipse and the outer ellipse:
u_e = u_0 + [(1 − k)·a_in + k·a_out]·cos(θ + θ_0), v_e = v_0 + [(1 − k)·b_in + k·b_out]·sin(θ + θ_0)
where the parameter k, k ∈ (0, 1), represents the degree to which the similar ellipse approaches the outer ellipse, and (u_e, v_e) are the image coordinates corresponding to the feature points;
step S240, gradually increasing the parameter angle θ according to the similar ellipse equation, and searching for the sets of parameter angles corresponding to runs of consecutive points whose pixel values differ strongly from the ring area;
step S250: extracting 5 sets with the largest continuous angle, wherein the average value of all angles in each set corresponds to the parameter angle theta of the characteristic pointi
step S260, according to the similar ellipse equation, when k > 0.5 all points of the similar ellipse lie on the elliptical ring; substituting the parameter angles θ_i yields the image coordinates (u_ei, v_ei) of the feature points P_si.
The U-NET network model in step S200 is obtained as follows: acquiring gray-scale images of the female head at different angles and distances with the image acquisition device; annotating the collected female head gray-scale images, taking the ring area containing the groove information on the female head surface as the label information; and training on the annotated data with the U-NET network to obtain the trained U-NET network model.
Furthermore, the U-NET network structure comprises a contracting path for capturing semantics and a symmetric expanding path for precise localization; the contracting path comprises four blocks of convolution layers with pooling layers for down-sampling, and the expanding path comprises four blocks of convolution layers with up-sampling convolution layers.
Step S300, performing plane fitting based on the ellipse center point, the three-dimensional coordinates of the N interior points, and the preset expected center point of the female head to obtain the unit normal vector and the displacement; based on the unit normal vector, displacement and ellipse center point, calculating the desired position, desired rotation axis and desired rotation angle of the end of the mechanical arm, so as to control the arm to align the image acquisition device with the female head, at which point the coincidence of the ellipse center point with the preset expected center point of the female head lies within the first threshold range, i.e. coarse alignment of the female and male heads.
Step S300 specifically includes the following: z_cd = |P_ac − P_ad|, where z_cd is the displacement of the end of the mechanical arm from the initial position to the position aligned with the female head, P_ad is the preset expected center point of the female head, and P_ac is the ellipse center point; P_ed = P_ac + n_c·z_cd, where P_ed is the desired position of the end of the mechanical arm and n_c is the unit normal vector; f = z × (−n_c) = [n_cy, −n_cx, 0]^T, where f is the desired rotation axis of the end of the mechanical arm; θ = arccos(n_cz), where θ is the desired rotation angle of the end of the mechanical arm.
Further, the preset expected center point of the female head is obtained as follows: manually controlling the mechanical arm to complete the alignment action of the part; then changing only the translations along the Z, X and Y axes of the mechanical arm in the end coordinate system so that the male head is withdrawn from the female head until the image acquisition device directly faces the female head; recording the translation of the withdrawal motion and the translation of the image-acquisition motion in the end coordinate system; taking the image collected while directly facing the female head as the expected image, and performing least-squares ellipse fitting on it to calculate the expected position of the center point.
Step S400, collecting a female head image again (with the image acquisition device now directly facing the female head) as the input image, and processing it by the method of step S200. Specifically, the elliptical ring area image containing the grooves is obtained through the trained U-NET network, the elliptical contour is extracted from it and ellipse fitting performed, and the three-dimensional coordinates of the ellipse center point and of more than 5 interior points (points in the actually acquired image) are obtained. In detail: contours are extracted from the elliptical ring area image with the findContours function, the two largest contours being the inner and outer ellipses of the female part surface; least-squares ellipse fitting yields the parameter equations of the inner and outer ellipses; from these, the similar ellipse equation passing through the groove area is obtained; the parameter angle θ is gradually increased according to the similar ellipse equation, and the sets of parameter angles corresponding to runs of consecutive points whose pixel values differ strongly from the ring area are searched; the 5 sets spanning the largest continuous angles are extracted, the mean of all angles in each set being the parameter angle θ_i of the corresponding feature point; according to the similar ellipse equation, when k > 0.5 all points of the similar ellipse lie on the elliptical ring, and substituting the parameter angles θ_i yields the image coordinates (u_ei, v_ei) of the feature points P_si.
A plane is fitted using the desired positions P_sdi of the plurality of groove feature points P_si, giving the normal vector n_d = [n_dx, n_dy, n_dz]^T of the desired plane; the desired attitude angle of image space is calculated from the image coordinates of the i-th notch corner point, θ_mdz = arctan2(v_ei − v_ad, u_ei − u_ad), where (u_ad, v_ad) are the expected coordinates of the groove feature point; the desired attitude angles θ_dx, θ_dy and θ_dz are then calculated from the obtained desired-plane normal vector and the expected coordinates of the groove feature points, so as to adjust the image acquisition device parallel to the plane of the female head to be assembled;
[Equation image in the original patent: θ_dx, θ_dy and θ_dz computed from the desired-plane normal vector n_d and θ_mdz]
accurately aligning the image acquisition device with the female head by adopting a hybrid vision servo control method according to the following formula;
[Equation image in the original patent: hybrid visual servo control law with gains k_1 and k_2, driven by the image-coordinate error, the distance error z_ac − z_ad, and the attitude-angle errors]
where k_1 and k_2 are coefficients; (u_ac, v_ac) are the current image coordinates; z_ac and z_ad are the current and desired distances between the female part and the image acquisition device; (θ_cx, θ_cy) are the current attitude angles about the x- and y-axes calculated from the three-dimensional coordinates; θ_mcz and θ_mdz are the current and desired attitude angles about the z-axis calculated from the image coordinates.
Preferably, k_1 = k_2 = 0.6, which ensures a faster convergence rate.
The pose information of the groove feature points in this female head image is acquired as the first pose information; the pose deviation value is obtained from the first pose information and the preset expected pose information of the female head, and the male and female heads are aligned according to the pose deviation value and the preset relative position of the male head and the image acquisition device. In this step, a plane is first fitted through the expected positions of the several groove feature points, and the pose of the image acquisition device is adjusted so that its plane is parallel to the surface of the female head to be assembled. Second, the coincidence of the center points is adjusted using the acquired ellipse center point and the preset expected center point until it lies within the second threshold range, whose precision exceeds that of the first threshold range, further improving assembly precision. Then the grooves are aligned according to the acquired position information of the groove feature points and the preset expected position information: because the grooves on the part are relatively small, long-distance image acquisition cannot guarantee positional accuracy, whereas with the image acquisition device aligned with the female head the accurate groove positions can be acquired, and the deviation of the actual groove positions from the expected ones can be effectively determined from the images collected in this state, realizing accurate groove alignment. Since the relative position of the image acquisition device and the male head on the mechanical arm is known, combining it with the acquired pose deviation value allows high-precision assembly of the male and female heads.
The invention is further described with reference to the following detailed description of embodiments with reference to the accompanying drawings.
Referring to fig. 1, fig. 2 and fig. 3: fig. 1 is a flow chart of the method of an embodiment of the invention, fig. 2 is a schematic perspective view of the part assembly of an embodiment, and fig. 3 is a schematic view of the surface features of the female part to be assembled in an embodiment. In fig. 2, the first mechanical arm 1 is a seven-degree-of-freedom arm to which the male connector clamping-and-tightening mechanism and the male plug 3 are attached, with the structured-light depth camera 5 fixed at the end of the arm; the second mechanical arm 2 is a UR16 arm to which the female connector gripping device and the female plug 4 are attached. In fig. 3, the upper surface of the female head is ring-shaped with a plurality of grooves on the inner side of the ring; the outer side of the male plug carries a plurality of protrusions corresponding to the grooves, so that, in addition to aligning the centers of the male and female plugs, the protrusions must be accurately fitted into the grooves of the female plug.
Preferably, the structured-light depth camera 5 is an LMI Gocator 3210 binocular snapshot sensor with an XY resolution of 60-90 μm, a field of view of 71 × 98 mm to 100 × 154 mm, and a working distance of 164 mm. It should be noted that the models and structures of the first and second mechanical arms, the structure of the part to be assembled, and the model of the vision measuring device are exemplary and should not be construed as limiting the invention.
The invention provides a plug connector assembly method that specifically comprises the following steps: step S100, acquiring a female head image of the part to be assembled as the input image; step S200, obtaining, through the trained U-NET network, the image of the elliptical ring area containing the grooves in the input image, extracting the elliptical contour from it and performing ellipse fitting, and obtaining the three-dimensional coordinates of the ellipse center point and of more than 5 interior points; step S300, performing plane fitting based on the ellipse center point, the three-dimensional coordinates of the N interior points and the preset expected center point of the female head to obtain the unit normal vector and the displacement, then calculating the desired position, desired rotation axis and desired rotation angle of the end of the mechanical arm so as to control the arm to align the image acquisition device with the female head; step S400, collecting a female head image again and obtaining the pose information of the groove feature points as the first pose information by the method of step S200, then obtaining the pose deviation value from the first pose information and the preset expected pose information of the female head, and aligning the male and female heads according to the pose deviation value and the preset relative position of the male head and the image acquisition device.
In the approach stage the image acquisition device is far from the female head and cannot accurately acquire the notch feature points, so coarse alignment, i.e. alignment of the approximate positions of the male head and the female head, is required first. The three-dimensional coordinates of the ellipse center point and of more than 5 interior points are obtained from the acquired grayscale image and the desired center point; plane fitting then yields the unit normal vector, the desired position of the robot arm end, and the rotation axis and rotation angle of the desired attitude adjustment, and moving along the unit normal vector achieves coarse alignment. Step S400 operates with the image acquisition device and the female head in the aligned state: the current image and position features are extracted and alignment is achieved with a hybrid visual servo control method; the male head is then aligned with the female head using the preset relative position of the image acquisition device and the male head, and the connector finally advances along the central axis of the female head to complete the precise alignment and assembly.
Preferably, contours are extracted from the acquired female head image using the findContours function; the two largest contours are the inner and outer ellipses of the female part surface, respectively.
Preferably, ellipse fitting is performed with the least squares method, and the parametric equations of the inner and outer ellipses are:

u = u_0 + a_in·cos(θ + θ_0), v = v_0 + b_in·sin(θ + θ_0)

u = u_0 + a_out·cos(θ + θ_0), v = v_0 + b_out·sin(θ + θ_0)

wherein the ellipse center point P_ac has coordinates (u_0, v_0); a_in and b_in are respectively the major-axis and minor-axis lengths of the inner ellipse; θ_0 is the initial angle of the ellipse; θ is the parameter, θ ∈ (0, 2π); a_out and b_out are respectively the major-axis and minor-axis lengths of the outer ellipse.
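The least-squares fit can be illustrated with a direct conic fit; the sketch below is a minimal NumPy stand-in (the patent does not name a solver, and `fit_ellipse_center` is a hypothetical helper), fitting A·u² + B·uv + C·v² + D·u + E·v = 1 to contour points and recovering the center (u_0, v_0) analytically from the fitted coefficients.

```python
import numpy as np

def fit_ellipse_center(pts):
    """Least-squares conic fit A u^2 + B uv + C v^2 + D u + E v = 1,
    then recover the ellipse center from the zero-gradient condition."""
    u, v = pts[:, 0], pts[:, 1]
    M = np.column_stack([u**2, u * v, v**2, u, v])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones(len(u)), rcond=None)[0]
    det = 4 * A * C - B**2          # > 0 for an ellipse
    u0 = (B * E - 2 * C * D) / det  # solves 2A u0 + B v0 + D = 0
    v0 = (B * D - 2 * A * E) / det  # and    B u0 + 2C v0 + E = 0
    return u0, v0
```

In practice `pts` would be the pixel coordinates of one contour returned by findContours; the fit is exact for noise-free ellipse points.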
The similar ellipse equation passing through the groove region is specifically:

u_e = u_0 + [(1 − k)·a_in + k·a_out]·cos(θ + θ_0), v_e = v_0 + [(1 − k)·b_in + k·b_out]·sin(θ + θ_0)

wherein the parameter k represents the degree to which the similar ellipse approaches the outer ellipse, k ∈ (0, 1); (u_e, v_e) are the image coordinates corresponding to the feature points.
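Under the assumption that k linearly interpolates the fitted axis lengths (so k = 0 gives the inner ellipse and k = 1 the outer one, matching k's definition as the degree of approaching the outer ellipse), feature-point coordinates on a similar ellipse can be sampled as follows; `similar_ellipse` is an illustrative helper, not the patent's own code.

```python
import numpy as np

def similar_ellipse(u0, v0, a_in, b_in, a_out, b_out, theta0, k, theta):
    """Point on the 'similar ellipse' between the inner (k=0) and outer
    (k=1) fitted ellipses; linear interpolation of the axis lengths is
    an assumed reading of the degree parameter k."""
    a = (1 - k) * a_in + k * a_out
    b = (1 - k) * b_in + k * b_out
    ue = u0 + a * np.cos(theta + theta0)
    ve = v0 + b * np.sin(theta + theta0)
    return ue, ve
```

With k = 0.3 the sampled points run through the groove band; with k = 0.7 they stay on the elliptical ring, as used later in the description.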
Referring to fig. 4 and 5, fig. 4 is a surface data labeling diagram of the part to be assembled in an embodiment of the present invention, and fig. 5 is a feature extraction result diagram of a desired image of the part to be assembled in an embodiment of the present invention. Fig. 4 shows the labeling of a sampled female head picture: the part surface region is segmented and labeled so that the groove information on the surface is preserved to the maximum extent, and the labeled data are then used as network input for training. The U-NET network training process specifically comprises: collecting female head grayscale images at different angles and distances with the structured light depth camera; labeling the collected grayscale images with labelme software, taking the annular region containing the groove information on the female head surface as the label information; training the U-NET network on the labeled data; and performing ellipse fitting on the network output and extracting the groove features of the part along the elliptical contour.
Fig. 5 shows the feature extraction result for a female head picture in the desired alignment state input to the U-NET network, where the line segment combination 100 represents the extraction result of the 5 groove features, ellipse 200 is the inner ellipse extraction result, and ellipse 300 is the outer ellipse extraction result. The measuring method of the part assembling device mainly comprises the following steps: inputting a female head picture in the desired alignment state to the trained U-NET network to obtain a picture containing the elliptical ring area with the grooves; extracting contours of the network output picture with the findContours operator, the two largest contours being the inner and outer ellipses of the part surface; fitting the inner and outer ellipse parametric equations with the least squares method, the fitting result being shown as the ellipses in fig. 5; with the degree parameter of the similar ellipse approaching the outer ellipse set to 0.3, obtaining from the inner and outer ellipse equations a similar ellipse equation passing through the groove area; gradually increasing the parameter angle along this similar ellipse and searching for the sets of parameter angles corresponding to continuous points whose pixel values differ markedly from the ring area; extracting the 5 sets with the largest continuous angle ranges, the mean of all angles in each set corresponding to the parameter angle of a feature point; and, with the degree parameter set to 0.7 so that all points of the similar ellipse lie on the elliptical ring, substituting the parameter angles to obtain the image coordinates corresponding to the feature points of the desired female head image.
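The angle-set search described above (sampling pixel values along the k = 0.3 similar ellipse and keeping the 5 longest runs that differ from the ring) can be sketched as below; `groove_angles` is a hypothetical helper and, for brevity, it ignores runs that wrap across 0/2π.

```python
import numpy as np

def groove_angles(sample, ring_value, tol, n_grooves=5):
    """sample: pixel values read along the similar ellipse at evenly
    increasing parameter angles over (0, 2*pi). Returns the mean angle
    of the n_grooves longest runs whose value differs from the ring
    intensity by more than tol."""
    angles = np.linspace(0, 2 * np.pi, len(sample), endpoint=False)
    mask = np.abs(np.asarray(sample) - ring_value) > tol
    runs, start = [], None
    for i, m in enumerate(mask):          # collect contiguous True runs
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    runs.sort(key=lambda r: r[1] - r[0], reverse=True)
    return sorted(angles[s:e].mean() for s, e in runs[:n_grooves])
```

Each returned angle would then be substituted into the k = 0.7 similar ellipse equation to obtain the feature-point image coordinates.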
Further, the method for acquiring the desired image features and the displacement adjustments required by the pre-assembly stage specifically comprises the following steps: manually controlling the robot arm to complete the part alignment action; changing only the translations along the Z, X, and Y axes of the robot arm in the end coordinate system so that the male head is withdrawn from the female head and the camera can capture an image facing the female head, and recording both the withdrawal translation and the image-acquisition translation in the end coordinate system; acquiring a female head image as the desired image; extracting the desired image feature of the ellipse center point according to the similar ellipse equation, and recording the desired position of the female head center point measured by the depth camera in the vision coordinate system; with the degree parameter of the similar ellipse approaching the outer ellipse set to 0.7, obtaining the desired image features and recording the desired positions of the feature points; and fitting a plane to the desired positions of the feature points to obtain the desired plane normal vector and the desired attitude angles, the desired attitude angle in image space being calculated from the image coordinates of the i-th notch corner point.
Preferably, step S300 specifically includes the following: randomly setting the initial positions of the female head and the male head within a certain range and acquiring a real-time grayscale image; obtaining, with the trained U-NET network, the three-dimensional coordinates of the ellipse center point and of more than 5 interior points, and obtaining by plane fitting the unit normal vector n_c, the desired position P_ed of the robot arm end (i.e. the position of the image acquisition device), the desired rotation axis f of the robot arm end, and the desired rotation angle θ of the robot arm end. The displacement of the robot arm end from the initial position to the position aligned with the female head is z_cd, z_cd = |P_ac − P_ad|, wherein P_ad is the preset desired center point of the female head and P_ac is the ellipse center point;

desired position P_ed = P_ac + n_c · z_cd;

desired attitude adjustment axis f = z × (−n_c) = [n_cy, −n_cx, 0]^T;

rotation angle θ = arccos(n_cz).
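The formulas above can be combined into one small routine. The SVD plane fit below is an illustrative choice (the patent only says "plane fitting"), and `coarse_alignment` is a hypothetical helper name; the P_ed, f, and θ expressions follow the text directly.

```python
import numpy as np

def coarse_alignment(points, P_ac, P_ad):
    """points: (N,3) coords of the ellipse center and interior points.
    P_ac: measured ellipse center; P_ad: preset desired center.
    Returns desired end position, rotation axis and rotation angle."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n_c = vt[-1]                       # unit normal of the fitted plane
    if n_c[2] < 0:                     # fix the sign convention
        n_c = -n_c
    z_cd = np.linalg.norm(P_ac - P_ad)          # z_cd = |P_ac - P_ad|
    P_ed = P_ac + n_c * z_cd                    # desired end position
    f = np.cross([0.0, 0.0, 1.0], -n_c)         # f = z x (-n_c) = [n_cy, -n_cx, 0]^T
    theta = np.arccos(np.clip(n_c[2], -1, 1))   # theta = arccos(n_cz)
    return P_ed, f, theta
```

Moving the arm end to P_ed while rotating by θ about f then brings the camera face-on to the female head for the finer alignment stage.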
Further, referring to fig. 6, fig. 6 is a feature extraction result diagram of the current image of the part to be assembled in the approach stage in an embodiment of the present invention, i.e. the result of extracting ellipses from a grayscale image acquired by the camera in the approach stage, where ellipse 200 is the inner ellipse extraction result and ellipse 300 is the outer ellipse extraction result. It can be seen that the extraction is precise and the interior points can be acquired accurately to complete coarse alignment, so that the structured light depth camera can then capture a clear female head image for the finer alignment.
Further, referring to fig. 7 to 9, fig. 7 is a system block diagram of the hybrid servo control method of the part assembling apparatus according to an embodiment of the present invention, fig. 8 is a feature extraction result diagram of the current image of the part to be assembled in the alignment stage, and fig. 9 shows the variation of the deviation between the current pose and the desired pose of the part to be assembled in the alignment stage. In fig. 7, the visual feedback is the current pose of the female head and consists of two parts, 2D image acquisition and 3D pose measurement; the two measurement results are compared with the desired female head pose according to a preset formula to obtain the pose deviation, translation along the X and Y axes and rotation around the Z axis are controlled with image features, and position-based control is adopted for the other motions, realizing the conversion from image space deviation to Cartesian space deviation. In fig. 8, the line segment combination 100 represents the extraction result of the 5 groove features, ellipse 200 is the inner ellipse extraction result, and ellipse 300 is the outer ellipse extraction result. Fig. 9 shows that after 6 control iterations the error is essentially zero, i.e. convergence is fast.
The preset formula is as follows:
[Δx, Δy, Δz, Δθ_x, Δθ_y, Δθ_z]^T = [k_1(u_ad − u_ac), k_1(v_ad − v_ac), z_ad − z_ac, θ_dx − θ_cx, θ_dy − θ_cy, k_2(θ_mdz − θ_mcz)]^T
wherein k_1 and k_2 are coefficients; (u_ac, v_ac) are the current image coordinates; z_ac and z_ad are respectively the current and desired distances between the female part and the image acquisition device; (θ_cx, θ_cy) are the current attitude angles in the x- and y-axis directions calculated from the three-dimensional coordinates; θ_mcz and θ_mdz are respectively the current and desired attitude angles in the z-axis direction calculated from the image coordinates; and θ_dx, θ_dy, and θ_dz are the desired attitude angles calculated from the desired coordinates of the groove feature points;
θ_dx = arctan(n_dy / n_dz), θ_dy = arctan(n_dx / n_dz), θ_dz = θ_mdz
wherein n_dx, n_dy, n_dz are obtained from the desired plane normal vector; the desired plane normal vector n_d = [n_dx, n_dy, n_dz]^T is obtained by fitting a plane to the desired positions P_sdi of the groove feature points P_si; θ_mdz is the desired attitude angle, θ_mdz = arctan2(v_ei − v_ad, u_ei − u_ad), wherein (u_ad, v_ad) are the desired coordinates of the groove feature point and u_ei, v_ei can be obtained from the similar ellipse equation.
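Of the quantities above, only the θ_mdz expression is stated explicitly; the 6-component deviation in the sketch below is a hypothetical arrangement of the hybrid scheme (image coordinates drive the x/y translations and the z rotation with gains k_1 and k_2, 3-D measurements drive the rest), not the patent's exact preset formula, and both function names are illustrative.

```python
import numpy as np

def theta_mdz(u_ei, v_ei, u_ad, v_ad):
    # Desired z attitude angle from a groove feature point, as stated:
    # theta_mdz = arctan2(v_ei - v_ad, u_ei - u_ad)
    return np.arctan2(v_ei - v_ad, u_ei - u_ad)

def pose_deviation(k1, k2, cur, des):
    """Hypothetical 6-DOF deviation: image features weighted by k1/k2
    for x, y and the z rotation; position-based terms for the rest.
    cur/des: dicts with image coords u, v; distance z; attitude tx, ty
    from 3-D data and tz from image coordinates."""
    return np.array([
        k1 * (des["u"] - cur["u"]),
        k1 * (des["v"] - cur["v"]),
        des["z"] - cur["z"],
        des["tx"] - cur["tx"],
        des["ty"] - cur["ty"],
        k2 * (des["tz"] - cur["tz"]),
    ])
```

Iterating the controller with this deviation as feedback corresponds to the loop of fig. 7, where the error shrinks over successive iterations.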
As can be seen from the above description and the result figures, in a preferred embodiment of the present invention the measurement and control method of the component mounting apparatus mainly comprises the following steps: randomly setting the initial positions of the female head and the male head within a certain range, acquiring images, extracting the three-dimensional coordinates of the interior points of the female head, and fitting a plane to compute the normal vector to realize coarse position alignment; after coarse alignment, extracting the current image and position features and achieving accurate alignment with the hybrid visual servo control method; and finally completing the part alignment operation according to the recorded camera alignment state and the robot arm end displacement in the part alignment state. Because the U-NET network can accurately extract the surface features of the complex part, and the structured light depth camera then reads the three-dimensional coordinates of the corresponding feature points, the pose of the complex part can be measured accurately; and the pre-assembly, approach, and alignment strategy avoids the errors introduced by hand-eye calibration, further improving the assembly success rate, which makes the method suitable for large-range, multi-degree-of-freedom assembly scenarios.
It should be noted that in the description of the present invention, the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicating the directions or positional relationships are based on the directions or positional relationships shown in the drawings, which are only for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Furthermore, it should be noted that, in the description of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A method of assembling a connector, comprising:
step S100, acquiring a female head image of a part to be assembled as an input image;
step S200, acquiring an elliptical ring area image containing a groove in the input image through a trained image segmentation network, extracting an elliptical contour based on the elliptical ring area image, performing ellipse fitting, and acquiring three-dimensional coordinates of an elliptical central point and N internal points;
step S300, performing plane fitting to obtain a unit normal vector and a displacement amount based on the ellipse central point, the three-dimensional coordinates of the N internal points and an expected central point preset by the female head; calculating an expected position, an expected rotating shaft and an expected rotating angle of the tail end of the mechanical arm based on the unit normal vector, the displacement and the central point of the ellipse so as to control the mechanical arm to realize the alignment of the image acquisition device and the female head;
step S400, collecting a female head image again, and obtaining the pose information of the groove feature points as first pose information through the method of step S200; and acquiring a pose deviation value based on the first pose information and desired pose information preset for the female head, and aligning the male head and the female head according to the pose deviation value and the preset relative position of the male head and the image acquisition device.
2. The connector assembly method according to claim 1, wherein step S200 specifically comprises the steps of:
step S210, extracting outlines of the elliptical ring area image, wherein the two largest outlines are an inner ellipse and an outer ellipse of the surface of the female part respectively;
step S220, performing ellipse fitting with the least squares method, the parametric equations of the inner and outer ellipses being:

u = u_0 + a_in·cos(θ + θ_0), v = v_0 + b_in·sin(θ + θ_0)

u = u_0 + a_out·cos(θ + θ_0), v = v_0 + b_out·sin(θ + θ_0)

wherein the ellipse center point P_ac has coordinates (u_0, v_0); a_in and b_in are respectively the major-axis and minor-axis lengths of the inner ellipse; θ_0 is the initial angle of the ellipse; θ is the parameter, θ ∈ (0, 2π); a_out and b_out are respectively the major-axis and minor-axis lengths of the outer ellipse;
step S230, obtaining, from the parametric equations of the inner and outer ellipses, a similar ellipse equation passing through the groove area:

u_e = u_0 + [(1 − k)·a_in + k·a_out]·cos(θ + θ_0), v_e = v_0 + [(1 − k)·b_in + k·b_out]·sin(θ + θ_0)

wherein the parameter k represents the degree to which the similar ellipse approaches the outer ellipse, k ∈ (0, 1); (u_e, v_e) are the image coordinates corresponding to the feature points;
step S240, gradually increasing the parameter angle θ according to the similar ellipse equation, and searching for the sets of parameter angles corresponding to continuous points whose pixel values differ markedly from the ring area;
step S250, extracting the 5 sets with the largest continuous angle ranges, the mean of all angles in each set corresponding to the parameter angle θ_i of a feature point;
step S260, according to the similar ellipse equation, when k > 0.5 all points of the similar ellipse lie on the elliptical ring, and substituting the parameter angles θ_i yields the image coordinates (u_ei, v_ei) corresponding to the plurality of feature points P_si.
3. The connector assembling method according to claim 2, wherein k in step S260 is 0.7.
4. The connector assembly method according to claim 2, wherein step S300 specifically includes the following: z_cd = |P_ac − P_ad|, wherein z_cd is the displacement of the robot arm end from the initial position to the position aligned with the female head; P_ad is the preset desired center point of the female head; P_ac is the ellipse center point;
P_ed = P_ac + n_c · z_cd, wherein P_ed is the desired position of the robot arm end and n_c is the unit normal vector;
f = z × (−n_c) = [n_cy, −n_cx, 0]^T, wherein f is the desired rotation axis of the robot arm end;
θ = arccos(n_cz), wherein θ is the desired rotation angle of the robot arm end.
5. The connector assembly method according to claim 2, wherein step S400 specifically includes the following:
step S410, fitting a plane to the desired positions P_sdi of the plurality of groove feature points P_si to obtain the desired plane normal vector n_d = [n_dx, n_dy, n_dz]^T;
step S420, calculating the desired attitude angle in image space from the image coordinates of the i-th notch corner point, θ_mdz = arctan2(v_ei − v_ad, u_ei − u_ad), wherein (u_ad, v_ad) are the desired coordinates of the groove feature point;
step S430, calculating the desired attitude angles θ_dx, θ_dy, and θ_dz according to the obtained desired plane normal vector and the desired coordinates of the groove feature points, so as to adjust the image acquisition device parallel to the plane of the female head to be assembled:

θ_dx = arctan(n_dy / n_dz), θ_dy = arctan(n_dx / n_dz), θ_dz = θ_mdz
step S440, carrying out accurate alignment of the image acquisition device and the female head by adopting a hybrid vision servo control method according to the following formula;
[Δx, Δy, Δz, Δθ_x, Δθ_y, Δθ_z]^T = [k_1(u_ad − u_ac), k_1(v_ad − v_ac), z_ad − z_ac, θ_dx − θ_cx, θ_dy − θ_cy, k_2(θ_mdz − θ_mcz)]^T
wherein k_1 and k_2 are coefficients; (u_ac, v_ac) are the current image coordinates; z_ac and z_ad are respectively the current and desired distances between the female part and the image acquisition device; (θ_cx, θ_cy) are the current attitude angles in the x- and y-axis directions calculated from the three-dimensional coordinates; θ_mcz and θ_mdz are respectively the current and desired attitude angles in the z-axis direction calculated from the image coordinates.
6. The connector assembly method of claim 5, wherein k_1 = k_2 = 0.6.
7. The connector assembly method according to claim 1, wherein when the image acquisition device is aligned with the female head, the degree of coincidence between the ellipse center point and the desired center point is within a first threshold range, the ellipse center point being the center point of the image acquired by the image acquisition device when it is far from the female head;
when the female head image is acquired again, the image acquisition device and the female head lie on the same horizontal axis;
the alignment process of the male head and the female head further comprises adjusting the degree of coincidence between the ellipse center point and the desired center point, with the image acquisition device aligned with the female head, to within a second threshold range; the second threshold range is of higher precision than the first threshold range.
8. The connector assembly method according to claim 1, wherein the step S200 of obtaining the image segmentation network model specifically comprises the steps of:
acquiring female head grayscale images at different angles and distances with an image acquisition device;
labeling the acquired female head grayscale images, taking the annular region containing the groove information on the female head surface as the label information;
and training the marked data by using the image segmentation network to obtain a trained image segmentation network model.
9. The connector assembly method according to claim 8, wherein the desired center point preset by the female connector is obtained in a manner that includes:
manually controlling the mechanical arm to finish the alignment action of the part;
and changing only the translations along the Z, X, and Y axes of the robot arm in the end coordinate system so that the male head is withdrawn from the female head to a position where the image acquisition device faces the female head; recording the withdrawal translation and the image-acquisition translation in the end coordinate system; taking the image acquired by the image acquisition device facing the female head as the desired image; and performing ellipse fitting with the least squares method to calculate the desired position of the center point.
10. The connector assembly method according to claim 1, wherein N ≥ 5.
CN202011606810.3A 2020-12-30 2020-12-30 Connector assembly method Active CN112658643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011606810.3A CN112658643B (en) 2020-12-30 2020-12-30 Connector assembly method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011606810.3A CN112658643B (en) 2020-12-30 2020-12-30 Connector assembly method

Publications (2)

Publication Number Publication Date
CN112658643A true CN112658643A (en) 2021-04-16
CN112658643B CN112658643B (en) 2022-07-01

Family

ID=75410870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011606810.3A Active CN112658643B (en) 2020-12-30 2020-12-30 Connector assembly method

Country Status (1)

Country Link
CN (1) CN112658643B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113751981A (en) * 2021-08-19 2021-12-07 哈尔滨工业大学(深圳) Space high-precision assembling method and system based on binocular vision servo
CN114178832A (en) * 2021-11-27 2022-03-15 南京埃斯顿机器人工程有限公司 Robot guide assembly robot method based on vision
CN115411464A (en) * 2022-09-15 2022-11-29 大连中比动力电池有限公司 Welding method, system and control device for full-lug cylindrical battery cell current collecting disc

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0449281A1 (en) * 1990-03-30 1991-10-02 KUKA Schweissanlagen GmbH Method and device for inserting parts in holes
JPH0628456A (en) * 1991-01-21 1994-02-04 Amadasonoike Co Ltd Image processor of work robot
US5305427A (en) * 1991-05-21 1994-04-19 Sony Corporation Robot with virtual arm positioning based on sensed camera image
US20070094868A1 (en) * 2005-10-31 2007-05-03 Hitachi High-Tech Instruments Co., Ltd. Electronic component mounting apparatus
US20180207755A1 (en) * 2015-05-25 2018-07-26 Kawasaki Jukogyo Kabushiki Kaisha Gear mechanism assembly apparatus and assembly method
US20180222049A1 (en) * 2017-02-09 2018-08-09 Canon Kabushiki Kaisha Method of controlling robot, method of teaching robot, and robot system
CN108534679A (en) * 2018-05-14 2018-09-14 西安电子科技大学 A kind of cylindrical member axis pose without target self-operated measuring unit and method
CN109926817A (en) * 2018-12-20 2019-06-25 南京理工大学 Transformer automatic assembly method based on machine vision
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN111322967A (en) * 2020-03-04 2020-06-23 西北工业大学 Centering method for assembly process of stepped shaft and hole
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts
CN111798413A (en) * 2020-06-10 2020-10-20 郑徵羽 Tire tangent plane positioning method and system

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0449281A1 (en) * 1990-03-30 1991-10-02 KUKA Schweissanlagen GmbH Method and device for inserting parts in holes
JPH0628456A (en) * 1991-01-21 1994-02-04 Amadasonoike Co Ltd Image processor of work robot
US5305427A (en) * 1991-05-21 1994-04-19 Sony Corporation Robot with virtual arm positioning based on sensed camera image
US20070094868A1 (en) * 2005-10-31 2007-05-03 Hitachi High-Tech Instruments Co., Ltd. Electronic component mounting apparatus
US20180207755A1 (en) * 2015-05-25 2018-07-26 Kawasaki Jukogyo Kabushiki Kaisha Gear mechanism assembly apparatus and assembly method
US20180222049A1 (en) * 2017-02-09 2018-08-09 Canon Kabushiki Kaisha Method of controlling robot, method of teaching robot, and robot system
CN108534679A (en) * 2018-05-14 2018-09-14 西安电子科技大学 A kind of cylindrical member axis pose without target self-operated measuring unit and method
CN109926817A (en) * 2018-12-20 2019-06-25 南京理工大学 Transformer automatic assembly method based on machine vision
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN111322967A (en) * 2020-03-04 2020-06-23 西北工业大学 Centering method for assembly process of stepped shaft and hole
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts
CN111798413A (en) * 2020-06-10 2020-10-20 郑徵羽 Tire tangent plane positioning method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Hui et al.: "A vision measurement method for fork-ear type wing-body docking", Aeronautical Manufacturing Technology *
QU Jiwang et al.: "Precision assembly of microspheres and micropipettes based on microscopic vision", High Technology Letters *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113751981A (en) * 2021-08-19 2021-12-07 哈尔滨工业大学(深圳) Space high-precision assembling method and system based on binocular vision servo
CN113751981B (en) * 2021-08-19 2022-08-19 哈尔滨工业大学(深圳) Space high-precision assembling method and system based on binocular vision servo
CN114178832A (en) * 2021-11-27 2022-03-15 南京埃斯顿机器人工程有限公司 Robot guide assembly robot method based on vision
CN115411464A (en) * 2022-09-15 2022-11-29 大连中比动力电池有限公司 Welding method, system and control device for full-lug cylindrical battery cell current collecting disc
CN115411464B (en) * 2022-09-15 2023-10-31 大连中比动力电池有限公司 Method, system and control device for welding full-lug cylindrical cell current collecting disc

Also Published As

Publication number Publication date
CN112658643B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN112658643B (en) Connector assembly method
CN111775146B (en) Visual alignment method under industrial mechanical arm multi-station operation
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN112396664B (en) Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN109658457B (en) Method for calibrating arbitrary relative pose relationship between laser and camera
CN111369630A (en) Method for calibrating multi-line laser radar and camera
CN110116407A (en) Flexible robot's pose measuring method and device
CN113674345B (en) Two-dimensional pixel-level three-dimensional positioning system and positioning method
CN110017852B (en) Navigation positioning error measuring method
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
CN114001651B (en) Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data
CN113500593B (en) Method for grabbing designated part of shaft workpiece for feeding
CN116749198A (en) Binocular stereoscopic vision-based mechanical arm grabbing method
US20220080597A1 (en) Device and method for calibrating coordinate system of 3d camera and robotic arm
CN113221953A (en) Target attitude identification system and method based on example segmentation and binocular depth estimation
CN116619350A (en) Robot error calibration method based on binocular vision measurement
CN112958960A (en) Robot hand-eye calibration device based on optical target
CN116766194A (en) Binocular vision-based disc workpiece positioning and grabbing system and method
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
CN114998422B (en) High-precision rapid three-dimensional positioning system based on error compensation model
CN114998444B (en) Robot high-precision pose measurement system based on two-channel network
CN116372938A (en) Surface sampling mechanical arm fine adjustment method and device based on binocular stereoscopic vision three-dimensional reconstruction
CN114046889B (en) Automatic calibration method for infrared camera
CN115383740A (en) Mechanical arm target object grabbing method based on binocular vision
CN106153012B (en) The spatial attitude parameter measurement method of specified target and its application

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant