CN112598752A - Calibration method based on visual identification and operation method - Google Patents

Calibration method based on visual identification and operation method

Info

Publication number
CN112598752A
CN112598752A (application CN202011545117.XA)
Authority
CN
China
Prior art keywords
camera
end flange
center
tool
alpha
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011545117.XA
Other languages
Chinese (zh)
Other versions
CN112598752B (en)
Inventor
李家清
石金博
陈晓聪
邬荣飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Liqun Automation Technology Co ltd
Original Assignee
Dongguan Liqun Automation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Liqun Automation Technology Co ltd filed Critical Dongguan Liqun Automation Technology Co ltd
Priority to CN202011545117.XA
Publication of CN112598752A
Application granted
Publication of CN112598752B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Abstract

The invention discloses a calibration method based on visual identification and an operation method. The calibration method comprises: acquiring the camera TOOL_C; determining, with the end flange in a first fixed angular attitude α1, the conversion relation between the pixel coordinate system and the camera coordinate system; shooting at least one feature point on the marker; obtaining the positional relation between the feature point and the camera center at the time of shooting; obtaining the angle ROLL-V1 of the marker at calibration; acquiring the position P1 of the end flange when the camera center is aligned with the feature point, or with the set relative position between the feature points; acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture; and obtaining the transformation relation offset from P1, α1, P_T and α_T. The invention enables working equipment with a visual identification function to meet working requirements even when the photographing position and the working position do not coincide.

Description

Calibration method based on visual identification and operation method
Technical Field
The invention relates to the technical field of industrial automation, and in particular to a calibration method and an operation method based on visual identification.
Background
In industrial automation equipment, a robot or other motion mechanism with a visual recognition function often carries a camera and a work tool fixed to a rotating shaft so that both rotate with it. It is frequently necessary to photograph a product or working point in a plane with the camera, obtain the position of the product or the working position, and then move the work tool to that position to perform work such as grabbing, placing or welding the product. In common applications the photographing position and the working position of the product or working point sometimes coincide, but in many cases, owing to the particular conditions of the product or working point, they do not.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a calibration method based on visual identification, so that during operation, operation equipment with a visual identification function can adapt to operation requirements under the condition that a photographing position and an operation position are not coincident.
The invention also provides an operation method applying the visual identification calibration method.
The calibration method based on visual recognition according to the first aspect of the invention is applied to working equipment comprising an end effector and a motion mechanism. The end effector comprises an end flange, a camera and a work tool, the camera and the work tool being mounted on the end flange; the motion mechanism can drive the end flange to move freely in the XY plane and rotate about the end flange's own axis. The calibration method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C; according to the camera TOOL_C, determining, with the end flange in a first fixed angular attitude α1, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center; with the end flange in the first fixed angular attitude α1, photographing at least one feature point on the marker, the relative position between the marker and the working position being fixed, and recording the current end-flange position when each feature point is photographed; acquiring, from the image obtained for each feature point and the conversion relation T_PC, the positional relation between each feature point and the camera center at the time of shooting; acquiring, from that positional relation, the current end-flange position when each feature point was photographed, and the camera TOOL_C, the position P1 of the end flange when the camera center is aligned with the feature point or with the set relative position between the feature points; acquiring, from the images of the feature points, the angle ROLL-V1 of the marker at calibration; acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture; and obtaining, from P1, α1, P_T and α_T, the transformation relation offset between (P1, α1) and (P_T, α_T).
The calibration method based on visual identification according to the embodiment of the first aspect of the invention has at least the following beneficial effects:
the embodiment of the invention obtains the TOOL of the camera firstlyCThen according to the camera TOOLCObtaining a first fixed angle attitude alpha of any one end flange1The relative position relation between the feature point and the camera center can be obtained according to the coordinate of the feature point in the pixel coordinate system, and then the camera TOOL is used for photographing the feature pointCAnd the relative position relation between the acquired feature point and the camera center is known to keep the first fixed angle posture alpha1When the center of the camera is aligned with the feature points or the set relative positions between the feature points, the position P of the center of the end of the flange1Meanwhile, the angle ROLL-V when the marking article is calibrated can be obtained according to the image obtained by shooting the characteristic points1And then the position P of the end flange when the work tool reaches the work position in the desired posture is obtainedTAnd end flange angular attitude alphaTI.e. 
according to P1、α1、PT、αTCan obtain (P)1、α1) And (P)T、αT) Converting the relation offset so as to complete the calibration process; conversion relation offset obtained according to calibration time and angle ROLL-V of marker calibration time1In the subsequent actual operation process, after the characteristic points with fixed relative positions to the markers are photographed, the position and the angle posture of the tail end flange can be known according to the images obtained by photographing when the operation tool reaches the operation position in the expected posture, and then the operation tool can be driven to come to the operation position to complete the operation in the proper posture; therefore, after the operation equipment is calibrated according to the calibration method based on the visual recognition, the operation equipment can adapt to the operation requirement under the condition that the photographing position and the operation position are not coincident.
According to some embodiments of the invention, determining the positional relationship between the camera center and the end flange, i.e. obtaining the camera TOOL_C, comprises the following steps:
With the end flange in angular attitude α3, photograph the calibration object from at least three different positions; from the end-flange position and the pixel coordinates of the calibration object in the image at each shot, obtain the transformation relation between pixel coordinates and end-flange position under attitude α3, and from that relation obtain the position P3 that the end flange must reach for the calibration object to sit at the image center.
With the end flange in angular attitude α4, photograph the calibration object from at least three different positions; from the end-flange position and the pixel coordinates of the calibration object in the image at each shot, obtain the transformation relation between pixel coordinates and end-flange position under attitude α4, and from that relation obtain the position P4 that the end flange must reach for the calibration object to sit at the image center, where α4 is not equal to α3.
From α3 and P3, and α4 and P4, calculate the camera TOOL_C.
According to some embodiments of the invention, calculating the camera TOOL_C from α3, P3, α4 and P4 comprises the following step: substitute α3, P3, α4 and P4 into the mapping relation f_TOOL to compute the camera TOOL_C; consistent with the fixed-camera-center constraint of the two attitudes, the mapping takes the form

TOOL_C = (R(α3) − R(α4))^(-1) · (P4 − P3), with R(α) = [[cos α, −sin α], [sin α, cos α]],

where (X3, Y3) are the position coordinates of P3 and (X4, Y4) are the position coordinates of P4.
According to some embodiments of the invention, calculating the camera TOOL_C from α3, P3, α4 and P4 comprises the following steps:
Obtain, with the end flange at position P3 and angular attitude α3, the expression T3 for the camera-center coordinates in the motion-mechanism coordinate system in terms of the camera TOOL_C parameters.
Obtain, with the end flange at position P4 and angular attitude α4, the corresponding expression T4.
Since the camera-center position in the motion-mechanism coordinate system is unchanged between the two states, establish the equation T3 = T4 and solve it to obtain the camera TOOL_C. Writing TOOL_C = (TCx, TCy),

T3 = (X3 + TCx·cos α3 − TCy·sin α3, Y3 + TCx·sin α3 + TCy·cos α3)
T4 = (X4 + TCx·cos α4 − TCy·sin α4, Y4 + TCx·sin α4 + TCy·cos α4)

where (X3, Y3) are the position coordinates of P3 and (X4, Y4) are the position coordinates of P4.
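The equation T3 = T4 above is linear in the TOOL_C components (TCx, TCy) and can be solved in closed form. A minimal sketch in Python with NumPy; the numerical poses below are hypothetical illustrations for checking the algebra, not values from the patent:

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def solve_tool_c(p3, a3, p4, a4):
    """Solve P3 + R(a3) @ t = P4 + R(a4) @ t for t = TOOL_C.

    Both poses put the calibration object at the image center, so the
    camera center occupies the same world point in both states."""
    A = rot(a3) - rot(a4)  # singular iff a3 == a4 (mod 2*pi)
    b = np.asarray(p4, float) - np.asarray(p3, float)
    return np.linalg.solve(A, b)

# hypothetical check: with a known offset, generated poses recover it
t_true = np.array([12.0, -5.0])
a3, a4 = np.deg2rad(0.0), np.deg2rad(90.0)
cam = np.array([100.0, 200.0])   # fixed camera-center world position
p3 = cam - rot(a3) @ t_true      # flange position in state 3
p4 = cam - rot(a4) @ t_true      # flange position in state 4
t = solve_tool_c(p3, a3, p4, a4)
```

With two distinct attitudes α3 ≠ α4, the matrix R(α3) − R(α4) is invertible, so the two scalar equations determine TOOL_C uniquely.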
According to some embodiments of the invention, determining, according to the camera TOOL_C, the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center, with the end flange in the first fixed angular attitude α1, comprises the following steps: with the end flange in the first fixed angular attitude α1, photograph the calibration point from at least three different positions, such that the corresponding figures of the calibration point in the images yield at least three non-collinear points in the pixel coordinate system; record the current end-flange position each time the calibration point is photographed; from each recorded end-flange position and the camera TOOL_C, convert to obtain the camera-center position at each shot; and from the pixel coordinates of the calibration point at each shooting position and the corresponding camera-center positions, obtain the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center.
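With at least three non-collinear correspondences between pixel coordinates and camera-center-referenced physical positions, T_PC can be estimated as an affine map by least squares. A sketch in Python with NumPy; the pixel scale and coordinates below are hypothetical illustrations:

```python
import numpy as np

def fit_affine(pix, phys):
    """Least-squares affine map T_PC: pixel (u, v) -> physical (x, y),
    from >= 3 non-collinear correspondences.

    Solves [u v 1] @ M = [x y] for the 3x2 parameter matrix M."""
    pix = np.asarray(pix, float)
    phys = np.asarray(phys, float)
    A = np.hstack([pix, np.ones((len(pix), 1))])
    M, *_ = np.linalg.lstsq(A, phys, rcond=None)
    return M

def apply_affine(M, uv):
    """Map one pixel coordinate through the fitted transform."""
    u, v = uv
    return np.array([u, v, 1.0]) @ M

# hypothetical data: 0.05 mm/pixel, camera center at pixel (320, 240)
pix = [(320, 240), (420, 240), (320, 340)]
phys = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
M = fit_affine(pix, phys)
```

Three exact correspondences determine the six affine parameters uniquely; extra shots simply over-determine the fit and average out measurement noise.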
According to some embodiments of the invention, if there are two feature points, acquiring the position P1 of the end flange when the camera center is aligned with the set relative position between the feature points, from the positional relation between each feature point and the camera center at shooting, the current end-flange position when each feature point was photographed, and the camera TOOL_C, comprises the following steps: from the positional relation between each feature point and the camera center at shooting, the camera TOOL_C, and the current end-flange position for each feature point, obtain the camera-center positions P_O1 and P_O2 at which the camera center is aligned with each of the two feature points; from the obtained P_O1 and P_O2, obtain the position of the midpoint O of the line P_O1 P_O2; and from the position of O and the camera TOOL_C, obtain the end-flange position P1 at which the camera center is aligned with the midpoint O.
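These steps can be sketched in a few lines; the routine below also computes the marker angle ROLL-V1 used in the next step. Python with NumPy; the feature-point coordinates and TOOL_C values are hypothetical illustrations:

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def flange_for_camera_at(target, tool_c, alpha):
    """End-flange position that puts the camera center on `target`
    (camera center = flange position + R(alpha) @ tool_c)."""
    return np.asarray(target, float) - rot(alpha) @ np.asarray(tool_c, float)

def p1_and_roll(feat1, feat2, tool_c, alpha1):
    """P1: flange position with the camera center on the midpoint O of
    two feature points; ROLL_V1: angle of the line feat1 -> feat2 (rad)."""
    f1, f2 = np.asarray(feat1, float), np.asarray(feat2, float)
    o = (f1 + f2) / 2.0
    p1 = flange_for_camera_at(o, tool_c, alpha1)
    roll_v1 = np.arctan2(f2[1] - f1[1], f2[0] - f1[0])
    return p1, roll_v1

p1, roll_v1 = p1_and_roll((10.0, 0.0), (0.0, 10.0), (1.0, 2.0), 0.0)
```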
According to some embodiments of the invention, if there are a plurality of feature points, obtaining the angle ROLL-V1 of the marker at calibration from the images of the feature points comprises the following step: from the obtained P_O1 and P_O2, obtain the angle of the line P_O1 P_O2 to the horizontal center line of the camera, which is the angle ROLL-V1 of the marker at calibration.
According to some embodiments of the invention, if there is one feature point, acquiring the position P1 of the end flange when the camera center is aligned with the feature point, from the positional relation between the feature point and the camera center at shooting, the current end-flange position, and the camera TOOL_C, comprises the following steps: from the camera TOOL_C, acquire the position of the camera center when it is aligned with the feature point; and from that camera-center position and the camera TOOL_C, acquire the position P1 of the end flange when the camera center is aligned with the feature point.
According to some embodiments of the invention, acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture comprises the following step: by manual teaching, control the motion mechanism to move the end flange so that the work tool reaches the working position in the desired posture, and record the current end-flange position P_T and angular attitude α_T.
According to some embodiments of the invention, if the marker is a product or a carrier carrying a product, acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture comprises the following steps: with the end flange in the first fixed angular attitude α1, photograph the product with the camera to obtain a product image; from the product image, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, and the camera TOOL_C, acquire the working position of the product; and from the working position of the product, acquire the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture.
According to some embodiments of the invention, if the marker is a carrier that is to receive a product and a placement structure for the product is provided on the carrier, acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture comprises the following steps: with the end flange in the first fixed angular attitude α1, photograph the placement structure with the camera to obtain an image of the placement structure; from the image of the placement structure, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, and the camera TOOL_C, acquire the working position of the product; and from the working position of the product, acquire the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture.
According to some embodiments of the invention, the marker corresponds to N working positions, N ≥ 1; acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture then comprises the following step: for each working position, acquire the position P_Tn and angular attitude α_Tn of the end flange 20 when the work tool 40 reaches that working position in the desired posture, where N ≥ n ≥ 1.
According to some embodiments of the invention, obtaining the transformation relation offset between (P1, α1) and (P_T, α_T) from P1, α1, P_T and α_T comprises the following step: from P1, α1 and each recorded pair of end-flange position P_Tn and angular attitude α_Tn, obtain the transformation relation offset-n between (P1, α1) and each (P_Tn, α_Tn).
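One natural encoding of the transformation relation offset is the rigid displacement of the work pose expressed in the frame of the reference pose, which can then be re-applied at any later reference pose. A sketch in Python with NumPy; this encoding and the numbers are illustrative assumptions, not the patent's exact parameterization:

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def make_offset(p1, a1, pt, at):
    """Offset of the taught work pose (P_T, a_T) relative to the
    reference pose (P1, a1), expressed in the reference pose's frame."""
    d = rot(-a1) @ (np.asarray(pt, float) - np.asarray(p1, float))
    return d, at - a1

def apply_offset(pose, offset):
    """Re-apply the calibrated offset at a new reference pose (P, a)."""
    (p, a), (d, da) = pose, offset
    return np.asarray(p, float) + rot(a) @ d, a + da

# hypothetical calibration pair, then re-application at a rotated pose
off = make_offset((0.0, 0.0), 0.0, (3.0, 4.0), 0.5)
p7, a7 = apply_offset(((10.0, 0.0), np.pi / 2), off)
```

Because the translation part is stored in the reference frame, rotating the whole scene rotates the applied displacement with it, which is exactly what the run-time correction needs.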
The working method based on visual recognition according to the second aspect of the invention is applied to working equipment comprising an end effector and a motion mechanism. The end effector comprises an end flange, a camera and a work tool, the camera and the work tool being mounted on the end flange; the motion mechanism can drive the end flange to move freely in the XY plane and rotate about the end flange's own axis. The working method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C; according to the camera TOOL_C, determining, with the end flange in a first fixed angular attitude α1, the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center; with the end flange in the first fixed angular attitude α1, photographing a single feature point on the marker, the relative position between the marker and the working position being fixed, and recording the current end-flange position when the feature point is photographed; acquiring, from the image of the feature point and the conversion relation T_PC, the positional relation between the feature point and the camera center at the time of shooting; acquiring, from that positional relation, the current end-flange position, and the camera TOOL_C, the position P1 of the end flange when the camera center is aligned with the feature point; acquiring, from the image of the feature point, the angle ROLL-V1 of the marker at calibration; acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture; and obtaining, from P1, α1, P_T and α_T, the transformation relation offset between (P1, α1) and (P_T, α_T);
with the end flange in the first fixed angular attitude α1, photographing the feature point on the marker again and recording the current end-flange position coordinates; obtaining, from the newly captured image and the conversion relation T_PC, the positional relation between the feature point and the camera center at this new shot; acquiring, from that positional relation, the current end-flange position, and the camera TOOL_C, the position P5 of the camera center when it is again aligned with the feature point; obtaining, from the newly captured image, the angle ROLL-V2 of the marker during operation; acquiring, from the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, the end-flange position P6 at which the camera center is at P5 with the end flange in angular attitude α1 + (ROLL-V2 − ROLL-V1); obtaining, from P6, the angular attitude α1 + (ROLL-V2 − ROLL-V1), and the transformation relation offset, the end-flange position P7 and angular attitude α7 at the working position; and moving the end flange to P7, rotating it to angular attitude α7, and performing the work at the working position with the work tool.
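The run-time correction above chains the quantities already computed at calibration. A compact sketch in Python with NumPy; names and numbers are hypothetical, and `offset` is assumed to be encoded as a translation in the reference pose's frame plus an angle increment:

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def work_pose(p5, roll_v2, roll_v1, alpha1, tool_c, offset):
    """From the re-shot feature point (camera center aligned at P5, marker
    angle ROLL_V2) recover the end-flange work pose (P7, a7)."""
    a6 = alpha1 + (roll_v2 - roll_v1)   # marker rotation since calibration
    p6 = np.asarray(p5, float) - rot(a6) @ np.asarray(tool_c, float)
    d, da = offset                      # calibrated (translation, angle) offset
    p7 = p6 + rot(a6) @ np.asarray(d, float)
    a7 = a6 + da
    return p7, a7

# hypothetical case: no marker rotation, zero camera offset, 1 unit offset in x
p7, a7 = work_pose((2.0, 3.0), 0.0, 0.0, 0.0, (0.0, 0.0), ((1.0, 0.0), 0.0))
```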
The working method based on visual recognition according to the embodiment of the second aspect of the invention has at least the following advantages: the working method comprises the calibration method of the embodiment of the first aspect for the case of a single feature point, together with the corresponding working steps. In the working step, the position P6 and angular attitude α1 + (ROLL-V2 − ROLL-V1) of the end flange are found for the state in which the camera center is again aligned with the feature point and the angle of the marker in the camera field of view 60 matches its angle during calibration; then, by means of the transformation relation offset obtained during calibration, the position P7 and angular attitude α7 of the end flange when the work tool comes to the actual working position in the desired posture are obtained, so the motion mechanism can drive the end flange to that position and angular attitude and the work tool can complete the work smoothly. The working method based on visual recognition according to the second aspect of the invention can therefore complete the work when the photographing position and the working position do not coincide.
The working method based on visual recognition according to the third aspect of the invention is applied to working equipment comprising an end effector and a motion mechanism. The end effector comprises an end flange, a camera and a work tool, the camera and the work tool being mounted on the end flange; the motion mechanism can drive the end flange to move freely in the XY plane and rotate about the end flange's own axis. The working method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C; according to the camera TOOL_C, determining, with the end flange in a first fixed angular attitude α1, the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center; with the end flange in the first fixed angular attitude α1, photographing a plurality of feature points on the marker, the relative position between the marker and the working position being fixed, and recording the current end-flange position when each feature point is photographed; acquiring, from the image obtained for each feature point and the conversion relation T_PC, the positional relation between each feature point and the camera center at the time of shooting; acquiring, from those positional relations, the current end-flange position when each feature point was photographed, and the camera TOOL_C, the position P1 of the end flange when the camera center is aligned with the set relative position between the feature points; acquiring, from the images of the feature points, the angle ROLL-V1 of the marker at calibration; acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the working position in the desired posture; and obtaining, from P1, α1, P_T and α_T, the transformation relation offset between (P1, α1) and (P_T, α_T);
with the end flange in the first fixed angular attitude α1, photographing each feature point on the marker again and recording the current end-flange position when each feature point is photographed; obtaining, from the image obtained for each feature point and the conversion relation T_PC, the positional relation between each feature point and the camera center at this new shot; acquiring, from those positional relations, the current end-flange position for each feature point, and the camera TOOL_C, the position P5 of the camera center when it is again aligned with the set relative position between the feature points; obtaining, from the newly captured images of the feature points, the angle ROLL-V2 of the marker during operation; acquiring, from the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, the end-flange position P6 at which the camera center is at P5 with the end flange in angular attitude α1 + (ROLL-V2 − ROLL-V1); obtaining, from P6, the angular attitude α1 + (ROLL-V2 − ROLL-V1), and the transformation relation offset, the end-flange position P7 and angular attitude α7 at the working position; and moving the end flange to P7, rotating it to angular attitude α7, and performing the work at the working position with the work tool.
The working method based on visual recognition according to the third aspect of the invention has at least the following advantages: the working method comprises the calibration method of the embodiment of the first aspect for the case of a plurality of feature points, together with the corresponding working steps. In the working step, the position P6 and angular attitude α1 + (ROLL-V2 − ROLL-V1) of the end flange are found for the state in which the camera center is aligned with the feature points and the angle of the marker in the camera field of view matches its angle during calibration; then, by means of the transformation relation offset obtained during calibration, the position P7 and angular attitude α7 of the end flange when the work tool comes to the actual working position in the desired posture are obtained, so the motion mechanism can drive the end flange to that position and angular attitude and the work tool is ensured to complete the work smoothly. The working method based on visual recognition according to the third aspect of the invention can therefore complete the work when the photographing position and the working position do not coincide.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a calibration method based on visual recognition according to an embodiment of the first aspect of the present invention;
FIG. 2 is a schematic structural diagram of a working apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of nine corresponding photo-taking points of the camera center and the moving path of the camera center in step S210;
fig. 4 is a schematic diagram of a 3 × 3 lattice formed by arranging the corresponding graphics of the designated point in the pixel coordinate system in step S210;
fig. 5 is a process diagram of steps S300 to S700 when there are two feature points;
FIG. 6 is a process diagram of steps S300 to S700 when the feature point is single;
FIG. 7 is a flowchart of a method for performing a task based on visual recognition according to an embodiment of the second aspect;
FIG. 8 is a process diagram from S900b to S1400b of FIG. 7;
FIG. 9 is a flowchart of a method for performing a task based on visual recognition according to an embodiment of the third aspect;
fig. 10 is a process diagram of S900b through S1400b in fig. 9.
Reference numerals:
motion mechanism 10, end flange 20, camera 30, work tool 40, calibration point 50, camera field of view 60, marker 70, working position 710, product 80, feature point 90.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, left, right, front, rear, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
The following describes a calibration method based on visual recognition according to an embodiment of the first aspect of the present invention, which is mainly applied to the following working equipment, with reference to fig. 1 to 6.
Referring to fig. 1 and 2, the working device includes an end effector and a movement mechanism 10, the end effector includes an end flange 20, a camera 30 and a working tool 40, the camera 30 and the working tool 40 are disposed on the end flange 20, and the movement mechanism 10 can drive the end flange 20 to move freely on an XY plane and can drive the end flange 20 to rotate around its axis.
Depending on the type of the work tool 40, the work tool 40 may be, for example, a chuck, a welding torch, a detection device, or the like; correspondingly, the work equipment may be, for example, loading/unloading or material-handling equipment for picking and placing the product 80, welding equipment for welding, detection equipment for detection, or the like. The work tool 40 and the work equipment are not particularly limited here.
Similarly, the moving mechanism 10 may be of various types, such as an articulated robot, or other automated device having a rotary driving module and a plurality of linear driving modules, and the moving mechanism 10 is not particularly limited as long as it can drive the end flange 20 to move freely on the XY plane and can drive the end flange 20 to rotate around its axis.
The calibration method based on visual identification comprises the following steps:
S100, determining the positional relation between the camera center and the end flange 20, i.e., acquiring the camera TOOL_C;
S200, according to the camera TOOL_C, determining the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center when the end flange 20 is in the first fixed angular attitude α_1;
S300, with the end flange 20 in the first fixed angular attitude α_1, photographing at least one feature point 90 on the marker 70, the relative position between the marker 70 and the work position 710 being fixed, and recording the current end-flange position when each feature point 90 is photographed;
S400, according to the image obtained by photographing each feature point 90 and the conversion relation T_PC, acquiring the positional relation between each feature point 90 and the camera center at the time of shooting;
S500, according to the positional relation between each feature point 90 and the camera center at the time of shooting, the end-flange position recorded when each feature point 90 was photographed, and the camera TOOL_C, acquiring the position P_1 of the end flange 20 when the camera center is aligned with the feature point 90 or with a set relative position between the feature points 90;
S600, acquiring the angle ROLL-V_1 at which the marker 70 is calibrated, according to the image obtained by photographing the feature points 90;
S700, acquiring the position P_T and the angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired posture;
S800, according to P_1, α_1, P_T, α_T, obtaining the conversion relation offset between (P_1, α_1) and (P_T, α_T).
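As a numerical sketch of what step S800 computes: representing each pair of position and angular attitude as a planar homogeneous matrix, the conversion relation offset can be taken as the rigid transform that carries the photographing pose (P_1, α_1) to the working pose (P_T, α_T). The representation and the names `pose_mat` and `offset_between` are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def pose_mat(x, y, theta):
    """Planar pose (x, y, theta) as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def offset_between(P1, a1, PT, aT):
    """Transform 'offset' such that pose(P1, a1) @ offset == pose(PT, aT)."""
    T1 = pose_mat(P1[0], P1[1], a1)
    TT = pose_mat(PT[0], PT[1], aT)
    return np.linalg.inv(T1) @ TT

# Stored at calibration time; right-multiplying a photographing pose by it
# recovers the corresponding working pose.
offset = offset_between((10.0, 5.0), 0.0, (14.0, 8.0), np.pi / 2)
target = pose_mat(10.0, 5.0, 0.0) @ offset
```

Under this assumed representation, the same offset applied to a new photographing pose of the same marker yields the matching working pose, which is how the second and third aspects use it.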
It is understood that the marker 70 may be a product 80, a carrier already carrying a product 80, a carrier waiting to receive a product 80, or the like. For example, when the marker 70 is a product 80, the work position 710 is located on the product 80 and may be, for example, a welding position, a detection position, or a grasping position on the product 80; when the marker 70 is a carrier already carrying a product 80, the work position 710 is located on the product 80 in the carrier and may likewise be a welding, detection, or grasping position; and when the marker 70 is a carrier waiting to receive a product 80, the work position 710 may be the placement position for the product 80 on the carrier, the corresponding operation being to place the product 80 onto the carrier.
It will be appreciated that the angle ROLL-V_1 at which the marker 70 is calibrated and the angle ROLL-V_2 at which the marker 70 is operated (described later) serve, during operation, to obtain the change in the corresponding angle of the marker 70, namely ROLL-V_2 − ROLL-V_1. Thus ROLL-V_1 and ROLL-V_2 may each be the angle of the marker 70 in the image coordinate system, the angle of the marker 70 in the motion-mechanism coordinate system, or the angle of the marker 70 in the camera coordinate system with the end flange 20 in the first fixed angular attitude α_1; for example, the angle between a straight line of fixed relative position on the marker 70 and a coordinate axis of the pixel coordinate system, the angle between such a line and a coordinate axis of the motion-mechanism coordinate system, the angle between such a line and the horizontal center line of the camera, or the like.
It is understood that the feature point 90 may be a MARK point artificially set on the product 80 or the carrier, or may be an existing structural feature on the product 80 or the carrier, such as an existing hole, a boss, or a boundary or vertex, provided that its corresponding figure is convenient to recognize in the captured image.
The calibration method of this embodiment first obtains the camera TOOL_C and then, according to the camera TOOL_C, obtains the conversion relation T_PC under an arbitrary first fixed angular attitude α_1 of the end flange. The feature point 90 is then photographed; the relative positional relation between the feature point 90 and the camera center can be obtained from the coordinates of the feature point 90 in the pixel coordinate system, and then, using the camera TOOL_C and the acquired relative positional relation, the position P_1 of the end-flange center is obtained for the state in which, with the first fixed angular attitude α_1 maintained, the camera center is aligned with the feature point 90 or with a set relative position between the feature points 90. Meanwhile, the angle ROLL-V_1 at which the marker is calibrated can be obtained from the image of the feature point 90, and the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired posture are then obtained.
From P_1, α_1, P_T, α_T, the conversion relation offset between (P_1, α_1) and (P_T, α_T) can be obtained, completing the calibration process. With the conversion relation offset and the angle ROLL-V_1 obtained at calibration, in subsequent actual operation, after a feature point 90 of fixed position relative to the marker 70 is photographed, the position and angular attitude of the end flange 20 at which the work tool 40 reaches the work position 710 in the desired posture can be derived from the captured image, and the work tool 40 can be driven to the work position 710 in the proper posture to complete the operation. Therefore, once the work equipment has been calibrated by this vision-based calibration method, it can meet operating requirements in which the photographing position and the work position 710 do not coincide. How the work tool 40 is brought to the work position 710 in the proper posture according to the conversion relation offset and the angle ROLL-V_1 obtained at calibration will be described in detail in the second and third aspects of the present invention.
It is understood that, unless otherwise specified, the position of the end flange 20 refers to the coordinates of the end flange 20 in the kinematic mechanism coordinate system; it is understood that, unless otherwise specified, the position of the camera center refers to the coordinates of the camera center in the motion mechanism coordinate system; the motion mechanism coordinate system may be established with any one of the fixed points on the base of the motion mechanism 10 as the origin, and the XY plane in the motion mechanism 10 is parallel to the plane of the markers 70.
It is to be understood that step S100, determining the positional relation between the camera center and the end flange 20, i.e., obtaining the camera TOOL_C, comprises the following steps:
S110, with the end flange 20 in a certain angular attitude α_3, photographing the calibration object from at least three different position points, obtaining, from the end-flange position at each shot and the pixel coordinates of the calibration object in each image, the transformation relation between pixel coordinates and end-flange position under the angular attitude α_3, and obtaining from this transformation relation the position P_3 that the end flange 20 must reach for the calibration object to be at the image center;
It will be understood that α_3 may be any angle, for example 0 degrees, 45 degrees, 90 degrees, 180 degrees, or the like.
Specifically, the end flange 20 is first driven by the movement mechanism 10 to the preset angular attitude α_3; before photographing, the movement mechanism 10 moves the end flange 20 so as to bring the camera 30 near the calibration object. At least three points (three, four, or more) near the current position are then selected, and the movement mechanism 10 drives the equipment to each point in turn; after each move, a picture is taken with the camera 30, ensuring that the calibration object is within the camera field of view at every shot. Each time a picture is taken, the current position of the end flange 20 is recorded; the current position of the end flange 20 and the angular attitude α_3 can both be obtained in real time from the bottom layer of the work equipment. From each captured image, the coordinates of the figure corresponding to the calibration object, i.e., the pixel coordinates of the calibration object in the image, can be obtained by image-processing techniques.
With at least three end-flange positions and the pixel coordinates obtained in one-to-one correspondence with them, the transformation relation between pixel coordinates and end-flange position under the angular attitude α_3 can be obtained. Then, from the pixel coordinates of the image center and this transformation relation, the position P_3 that the end flange 20 must reach for the calibration object to be at the image center can be obtained. The calibration object being at the image center indicates that the camera center is aligned with the calibration object; thus P_3 is the position of the end flange 20 when, under the angular attitude α_3, the camera center is aligned with the calibration object.
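Under the assumption that, at a fixed angular attitude, pixel coordinates and end-flange position are related by a planar affine map, step S110 can be sketched as a least-squares fit over the recorded shots; evaluating the fit at the image-center pixel gives P_3. The function names and sample numbers below are illustrative:

```python
import numpy as np

def fit_pixel_to_flange(pixels, flanges):
    """Fit flange ~= [u, v, 1] @ G from N >= 3 non-collinear shots.

    pixels:  (N, 2) pixel coordinates of the calibration object.
    flanges: (N, 2) end-flange XY positions recorded at each shot.
    """
    pixels = np.asarray(pixels, float)
    flanges = np.asarray(flanges, float)
    design = np.hstack([pixels, np.ones((len(pixels), 1))])
    G, *_ = np.linalg.lstsq(design, flanges, rcond=None)
    return G  # (3, 2) affine parameters

def flange_for_pixel(G, pixel):
    """Flange position that puts the calibration object at `pixel`."""
    u, v = pixel
    return np.array([u, v, 1.0]) @ G

# P_3 is the fitted flange position evaluated at the image-center pixel:
G = fit_pixel_to_flange([(100, 100), (200, 100), (100, 220)],
                        [(0.0, 0.0), (-5.0, 0.0), (0.0, -6.0)])
P3 = flange_for_pixel(G, (320, 240))
```

With more than three shots the least-squares fit averages out measurement noise, which is why additional photographing points improve accuracy.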
S120, with the end flange 20 in a certain angular attitude α_4, photographing the calibration object from at least three different position points, obtaining, from the end-flange position at each shot and the pixel coordinates of the calibration object in each image, the transformation relation between pixel coordinates and end-flange position under the angular attitude α_4, and obtaining from this transformation relation the position P_4 that the end flange 20 must reach for the calibration object to be at the image center; wherein α_4 is not equal to α_3.
It will be understood that α_4 may be any angle, for example 0 degrees, 45 degrees, 90 degrees, 180 degrees, or the like, provided α_4 is not equal to α_3; for ease of calculation, α_3 may be chosen as 0 degrees and α_4 as 180 degrees.
Specifically, step S120 is substantially the same as step S110, except that at the beginning the end flange 20 is driven by the movement mechanism 10 to the preset angular attitude α_4; correspondingly, what is found is the position P_4 of the end flange 20 when, under the angular attitude α_4, the camera center is aligned with the calibration object.
S130, according to α_3 and P_3, and α_4 and P_4, calculating the camera TOOL_C.
It should be understood that the camera TOOL_C is fixed, as is the position of the calibration object in the motion-mechanism coordinate system, so the camera TOOL_C can be solved for. Two specific methods of calculating the camera TOOL_C are given below.
In the first calculation method, step S130, calculating the camera TOOL_C according to α_3 and P_3, and α_4 and P_4, comprises the following steps:
S131a, with the end flange 20 at position P_3 and angular attitude α_3, writing the expression T_3 for the coordinates of the camera center in the motion-mechanism coordinate system in terms of the camera TOOL_C parameters;
Understandably, the camera TOOL_C is the deviation, in the X direction and the Y direction of the motion-mechanism coordinate system, between the camera center and the center of the end flange 20 when the end flange 20 is at the 0-degree angular attitude. For convenience of description, these two deviations are denoted X_C and Y_C, the camera TOOL_C is denoted P_C = [X_C Y_C]^T, P_3 is denoted (X_3, Y_3), and P_4 is denoted (X_4, Y_4). For ease of calculation, each planar pose is converted to a homogeneous matrix consisting of a rotation block R and a translation column M:
T = [ R  M ; 0  1 ]
P_3, P_4 and the camera TOOL_C are converted into matrices for the computation, where the matrix corresponding to P_3 is:
T_P3 = [ cos α_3  −sin α_3  X_3 ; sin α_3  cos α_3  Y_3 ; 0  0  1 ]
the matrix corresponding to P_4 is:
T_P4 = [ cos α_4  −sin α_4  X_4 ; sin α_4  cos α_4  Y_4 ; 0  0  1 ]
and the matrix corresponding to P_C is:
T_C = [ 1  0  X_C ; 0  1  Y_C ; 0  0  1 ]
Since the camera is carried on the end flange, the expression T_3 for the coordinates of the camera center in the motion-mechanism coordinate system, in terms of the camera TOOL_C parameters, is:
T_3 = T_P3 · T_C = [ cos α_3  −sin α_3  X_3 + X_C cos α_3 − Y_C sin α_3 ; sin α_3  cos α_3  Y_3 + X_C sin α_3 + Y_C cos α_3 ; 0  0  1 ]
S132a, with the end flange 20 at position P_4 and angular attitude α_4, writing the expression T_4 for the coordinates of the camera center in the motion-mechanism coordinate system in terms of the camera TOOL_C parameters;
It is understood that, substantially as in S131a above, the expression T_4 is:
T_4 = T_P4 · T_C = [ cos α_4  −sin α_4  X_4 + X_C cos α_4 − Y_C sin α_4 ; sin α_4  cos α_4  Y_4 + X_C sin α_4 + Y_C cos α_4 ; 0  0  1 ]
S133a, establishing the equation T_3 = T_4 from the fact that the coordinate position of the camera center in the motion mechanism is unchanged in the two states, and obtaining the camera TOOL_C.
It will be appreciated that in both states the camera center is aligned with the same calibration object, i.e., the coordinate position of the camera center in the motion mechanism is unchanged, so the equation T_3 = T_4 can be established. Writing R_3, R_4 for the rotation blocks of T_P3, T_P4, and M_3 = [X_3 Y_3]^T, M_4 = [X_4 Y_4]^T, M_C = [X_C Y_C]^T:
[ R_3  R_3 M_C + M_3 ; 0  1 ] = [ R_4  R_4 M_C + M_4 ; 0  1 ]
From the above equation, since only the offset values X_C and Y_C need to be found here, the rotation blocks can be ignored and the translation columns equated:
R_3 M_C + M_3 = R_4 M_C + M_4
which simplifies to:
(R_3 − R_4) M_C = M_4 − M_3
and finally:
M_C = (R_3 − R_4)^(−1) × (M_4 − M_3)
Ignoring the height component and substituting the expressions for R_3, R_4, M_3, M_4 yields:
[ X_C ; Y_C ] = [ cos α_3 − cos α_4   sin α_4 − sin α_3 ; sin α_3 − sin α_4   cos α_3 − cos α_4 ]^(−1) × [ X_4 − X_3 ; Y_4 − Y_3 ]
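The closed-form result M_C = (R_3 − R_4)^(−1)(M_4 − M_3) can be checked numerically. A minimal sketch (numpy, illustrative names), including the convenient α_3 = 0°, α_4 = 180° special case where R_3 − R_4 = 2I and M_C reduces to (M_4 − M_3)/2:

```python
import numpy as np

def rot(a):
    """2x2 planar rotation matrix."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def camera_tool(alpha3, P3, alpha4, P4):
    """Solve M_C = (R3 - R4)^-1 (M4 - M3) for [X_C, Y_C]."""
    M3, M4 = np.asarray(P3, float), np.asarray(P4, float)
    return np.linalg.solve(rot(alpha3) - rot(alpha4), M4 - M3)

# Simulate: the camera center sits on the calibration object Q in both states,
# so the flange position is M = Q - R(alpha) @ M_C for each pose.
Q = np.array([50.0, 30.0])        # assumed calibration-object position
MC_true = np.array([3.0, -2.0])   # assumed true TOOL_C offsets
a3, a4 = 0.0, np.pi
P3 = Q - rot(a3) @ MC_true
P4 = Q - rot(a4) @ MC_true
MC = camera_tool(a3, P3, a4, P4)
```

Note that the choice α_3 = 0°, α_4 = 180° maximizes the conditioning of R_3 − R_4, which is one reason the text suggests that pair of angles.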
In the second calculation method, step S130, calculating the camera TOOL_C according to α_3 and P_3, and α_4 and P_4, comprises the following steps:
S130b, substituting the values of α_3, P_3, α_4, P_4 into the mapping relation f_TOOL and calculating the camera TOOL_C directly.
It can be understood that the mapping relation f_TOOL can be preset in the internal program of the work equipment in advance; during teaching, the values of α_3, P_3, α_4, P_4 are substituted into the mapping relation f_TOOL and the result is obtained by direct calculation.
Wherein:
[ X_C ; Y_C ] = f_TOOL(α_3, P_3, α_4, P_4) = [ cos α_3 − cos α_4   sin α_4 − sin α_3 ; sin α_3 − sin α_4   cos α_3 − cos α_4 ]^(−1) × [ X_4 − X_3 ; Y_4 − Y_3 ]
(X_3, Y_3) being the position coordinates of P_3 and (X_4, Y_4) the position coordinates of P_4.
The principle and detailed derivation of the mapping relation f_TOOL are substantially the same as in the first calculation method and are not repeated here.
Referring to fig. 3 and 4, it can be understood that step S200, determining, according to the camera TOOL_C, the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center with the end flange 20 in the first fixed angular attitude α_1, comprises the following steps:
S210, with the end flange 20 in the first fixed angular attitude α_1, photographing the calibration point 50 from at least 3 different positions, so that the figures corresponding to the calibration point 50 in the images include at least three non-collinear points in the pixel coordinate system; each time the calibration point 50 is photographed, recording the current end-flange position;
It will be understood that α_1 may be any angle, for example 0 degrees, 45 degrees, 90 degrees, 180 degrees, or the like.
Moreover, it can be understood that the more position points are selected for photographing, the more accurate the result; for example, to ensure accuracy, the following procedure may be used: the movement mechanism 10 drives the camera center to move by fixed offsets along the X and Y directions to 9 different positions for photographing, so that the figures corresponding to the calibration point 50 in the images form a 3 × 3 dot matrix in the pixel coordinate system, and at each of the 9 positions the current position of the end flange 20 is recorded. Specifically, the movement mechanism 10 first drives the end flange 20 to rotate to the angular attitude α_1 and then brings the camera center near the single calibration point 50, ensuring that the calibration point 50 can enter the camera field of view 60 during the subsequent photographing; the movement mechanism 10 then drives the camera center to a preset first point, a picture is taken, and the current position of the end flange 20 is recorded; the movement mechanism 10 then moves the camera center a fixed distance along the X or Y direction each time, photographing and recording the current end-flange position after each move. The path of the camera center driven by the movement mechanism 10 and the position of the calibration point 50 may be arranged as shown in fig. 3; of course, at the initial shot the camera center may be either aligned with the calibration point 50 or not. Moreover, the path along which the movement mechanism 10 drives the camera center may differ from that shown in fig. 3, provided that at least three of the points corresponding to the calibration point 50 in the pixel coordinate system are non-collinear.
S220, according to the end-flange position recorded at each shot of the calibration point 50 and the camera TOOL_C, converting to obtain the position of the camera center at each shot of the calibration point 50, and obtaining the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center from the pixel coordinates of the calibration point 50 at each photographing position and the corresponding camera-center positions.
It is understood that the pixel coordinates corresponding to the calibration point 50 at each photographing position can be obtained by image-processing techniques, that the position of the end flange 20 at each photographing position can be read directly from the bottom layer of the motion mechanism 10 at the time of photographing, and that, with the camera TOOL_C, the camera-center position at each shot, i.e., the coordinates of the camera center in the motion-mechanism coordinate system at each photographing position, can then be converted. The conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center can then be obtained from the pixel coordinates corresponding to the calibration point at each photographing position and the camera-center position at each shot of the calibration point.
Specifically, for example, on the basis of the 9 different photographing positions, the 9 pixel coordinates corresponding to the calibration point 50 at the 9 positions can be obtained by image-processing techniques, the 9 corresponding end-flange positions can be read directly from the bottom layer of the motion mechanism 10 at the time of photographing, and, with the camera TOOL_C, the camera-center position at each shot can be converted, i.e., the coordinates of the camera center in the motion-mechanism coordinate system at each shot. The conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center can then be obtained from the 9 camera-center position coordinates and the corresponding 9 pixel coordinates.
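One way to realize steps S210 and S220 numerically (an illustrative sketch, not the patent's code): since the calibration point is fixed while the camera center moves, the camera-center positions are an affine function of the observed pixel coordinates, and the negated linear part of that fit maps pixel offsets from the image center to physical offsets ΔX, ΔY from the camera center:

```python
import numpy as np

def fit_tpc(pixels, cam_centers):
    """Fit cam_center ~= [u, v, 1] @ G over >= 3 non-collinear shots."""
    pixels = np.asarray(pixels, float)
    cam_centers = np.asarray(cam_centers, float)
    design = np.hstack([pixels, np.ones((len(pixels), 1))])
    G, *_ = np.linalg.lstsq(design, cam_centers, rcond=None)
    # A fixed point's pixel motion mirrors the camera motion, so negate.
    return -G[:2]

def pixel_to_physical(T_pc, pixel, image_center):
    """Offset (dX, dY) of an imaged point from the camera center."""
    return (np.asarray(pixel, float) - np.asarray(image_center, float)) @ T_pc

# Synthetic check: fixed point Q imaged as p = (Q - C) @ B + image_center,
# with an assumed pixel scale B and image center c0.
B = np.array([[20.0, 0.0], [0.0, 20.0]])
Q, c0 = np.array([5.0, 7.0]), np.array([320.0, 240.0])
centers = np.array([[4.0, 6.0], [6.0, 6.0], [4.0, 8.0], [6.0, 8.0]])
pix = (Q - centers) @ B + c0
T_pc = fit_tpc(pix, centers)
delta = pixel_to_physical(T_pc, (Q - np.array([4.5, 6.5])) @ B + c0, c0)
```

A point imaged exactly at the image center gives ΔX = ΔY = 0, matching the definition of the camera center being aligned with it.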
With the acquired conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center, the relative position between a photographed object and the camera center, i.e., the distances between the object and the camera center in the X and Y directions of the motion-mechanism coordinates, can be obtained from the pixel coordinates of the figure corresponding to the object in the image. It will be appreciated that if the relative distance between the work position 710 and the feature point 90 is large (for example, the marker is a large table, the feature point 90 is at one corner of the table, and the work position 710 is at the diagonally opposite corner or at the center), then, when positioning by a single feature point 90 alone, a slight deviation of that feature point 90 in image recognition may cause a large deviation between the work position 710 and the position the work tool 40 actually reaches during operation. To solve this problem, a plurality of feature points 90 may be used for positioning, for example two, three, or more than three feature points 90.
Referring to fig. 5, if there are two or more feature points 90, step S300, photographing at least one feature point 90 on the marker 70 with the end flange 20 in the first fixed angular attitude α_1, the relative position between the marker 70 and the work position 710 being fixed, and recording the current end-flange position when each feature point 90 is photographed, comprises the following steps:
S310a, the end flange 20 maintains the first fixed angular attitude α_1, or the movement mechanism 10 drives the end flange 20 to rotate to the first fixed angular attitude α_1;
S320a, for each feature point 90, the movement mechanism 10 drives the end flange 20 to move the camera 30 to the vicinity of that feature point 90 and a picture is taken there; and, when each feature point 90 is photographed, the current position of the end flange 20 is recorded.
It is understood that, if there are 2 or more feature points 90, step S400, acquiring, from the image captured of each feature point 90 and the conversion relation T_PC, the positional relation between each feature point 90 and the camera center at the time of shooting, comprises the following steps:
S410a, after photographing near each feature point 90, acquiring, by image-processing techniques, the coordinates of the figure corresponding to each feature point 90 in the pixel coordinate system from the image acquired at each shot;
S420a, according to the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center obtained in step S200, converting the coordinate values in the pixel coordinate system of the figure corresponding to each feature point obtained in step S410a, to obtain the positional relation between each feature point 90 and the camera center at the time of shooting, that is, the distance ΔX in the X direction and the distance ΔY in the Y direction, in the motion-mechanism coordinate system, between each feature point and the camera center at the time of shooting.
Referring to fig. 5, it can be understood that, if there are 2 feature points 90, step S500, acquiring, from the positional relation between each feature point 90 and the camera center at the time of shooting, the end-flange position at each shot, and the camera TOOL_C, the position P_1 of the end flange 20 when the camera center is aligned with the set relative position between the feature points 90, comprises the following steps:
S510a, according to the positional relation between each feature point 90 and the camera center at the time of shooting, the camera TOOL_C, and the position of the end flange 20 at each shot, obtaining the positions P_O1 and P_O2 of the camera center when it is aligned with each of the two feature points 90 respectively;
Specifically, since the distance ΔX in the X direction and the distance ΔY in the Y direction between each feature point 90 and the camera center in the motion-mechanism coordinate system were obtained in step S400, and the camera 30 is mounted on the end flange 20, then, starting from the position of the end flange 20 at each shot, driving the end flange 20 through ΔX and ΔY would align the camera center with the corresponding feature point 90; combining this with the camera TOOL_C, the positions P_O1 and P_O2 of the camera center when aligned with the two feature points 90 can be calculated respectively. It will of course be appreciated that the end flange 20 need not actually be moved through ΔX and ΔY here; the positions are obtained purely by calculation.
S520a, according to the obtained P_O1 and P_O2, obtaining the position of the center point O of the line P_O1 P_O2;
It is understood that the obtained position of the center point O of the line P_O1 P_O2 represents the camera-center position when the camera center is aligned with the midpoint of the line connecting the two feature points 90; the set relative position between the feature points 90 is in this case that midpoint.
S530a, according to the position of the center point O and the camera TOOL_C, obtaining the position P_1 of the end flange 20 when the camera center is aligned with the center point O.
Specifically, since the position of the center point O represents the camera-center position when the camera center is aligned with the midpoint of the line connecting the two feature points 90, the position P_1 of the end flange 20 when the camera center is aligned with that set relative position (the midpoint of the connecting line) can be obtained through the camera TOOL_C.
It will be appreciated that when dual feature points 90 are employed, the set relative position between the feature points 90 to which the camera center is aligned need not be the center point O of the line P_O1 P_O2 as described above; it may also be, for example, a trisection point of the line P_O1 P_O2, as long as its position relative to the two feature points 90 is fixed. It should also be understood that although the detailed process of step S500 is given above only for dual feature points 90, the multi-feature-point case of the calibration method is not limited to two; three or more feature points 90 may be selected. For example, with three feature points 90, the set relative position to which the camera center is aligned may be the centroid, the incenter, the orthocenter, or the like of the triangle formed by the three feature points 90, as long as its position relative to the three feature points 90 is fixed.
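Steps S510a to S530a can be sketched as follows. The names are illustrative; `flange_at_shot` is the recorded end-flange position, `delta` is the (ΔX, ΔY) from step S420a, and modelling the camera center as the flange position plus the TOOL_C offset rotated into the current attitude is an assumption consistent with the camera being carried on the flange:

```python
import numpy as np

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def cam_center_on_feature(flange_at_shot, delta, alpha1, tool_c):
    """P_Oi: camera-center position when aligned with the feature point."""
    f = np.asarray(flange_at_shot, float)
    # camera center at the shot, shifted by the measured (dX, dY)
    return f + rot(alpha1) @ np.asarray(tool_c, float) + np.asarray(delta, float)

def flange_for_cam_center(point, alpha1, tool_c):
    """P_1: flange position that places the camera center at `point`."""
    return np.asarray(point, float) - rot(alpha1) @ np.asarray(tool_c, float)

tool_c = np.array([3.0, -2.0])   # assumed TOOL_C offsets
a1 = 0.0                          # first fixed angular attitude
P_O1 = cam_center_on_feature([10.0, 10.0], [1.0, 0.5], a1, tool_c)
P_O2 = cam_center_on_feature([20.0, 12.0], [-0.5, 1.0], a1, tool_c)
O = (P_O1 + P_O2) / 2.0           # midpoint of P_O1 P_O2
P1 = flange_for_cam_center(O, a1, tool_c)
```

Replacing the midpoint by a trisection point or a triangle centroid only changes how `O` is combined from the P_Oi values.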
It is understood that, if dual feature points 90 are used, step S600, acquiring the angle ROLL-V_1 at which the marker 70 is calibrated from the images obtained by photographing the feature points 90, comprises the following steps:
S600a, according to P_O1 and P_O2 obtained in step S500, obtaining the angle between the line P_O1 P_O2 and the horizontal center line of the camera, i.e., the angle ROLL-V_1 at which the marker is calibrated.
In particular, from P_O1 and P_O2 the angle γ_1 between the line P_O1 P_O2 and the X axis of the motion-mechanism coordinate system can be acquired; since the end flange is in the first fixed angular attitude α_1, the angle between the horizontal center line of the camera and the X axis of the motion-mechanism coordinate system is fixed, so the angle between the line P_O1 P_O2 and the horizontal center line of the camera can be calculated. For example, suppose that when the angular attitude of the end flange is 0 degrees, the horizontal center line of the camera makes an angle γ_2 with the X axis of the motion-mechanism coordinate system; then, with the end flange in the first fixed angular attitude α_1, the horizontal center line of the camera makes an angle γ_2 + α_1 with the X axis, so the calibrated angle ROLL-V_1 is calculated as γ_1 − (γ_2 + α_1).
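The angle computation just described reduces to an atan2 call (an illustrative sketch; γ_2, the camera-centerline angle at flange attitude 0, is assumed known from the setup):

```python
import numpy as np

def roll_v1(P_O1, P_O2, alpha1, gamma2):
    """Angle of the line P_O1 -> P_O2 relative to the camera horizontal center line."""
    d = np.asarray(P_O2, float) - np.asarray(P_O1, float)
    gamma1 = np.arctan2(d[1], d[0])      # line angle vs motion-mechanism X axis
    return gamma1 - (gamma2 + alpha1)    # ROLL-V_1 = gamma1 - (gamma2 + alpha1)

angle = roll_v1([0.0, 0.0], [1.0, 1.0], 0.0, 0.0)
```

Using arctan2 rather than a slope keeps the angle correct in all quadrants, including vertical lines.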
It will be appreciated that with three or more feature points 90, step S600 is substantially the same as step S600a: it is sufficient to take any two of the plurality of feature points 90 as the dual feature points 90 of step S600a, obtain the corresponding P_O1 and P_O2, and then obtain the angle between the line P_O1P_O2 and the horizontal center line of the camera.
Referring to fig. 6, it can be understood that if there is one feature point 90, in step S300, at least one feature point 90 on the marker 70 is photographed with the end flange 20 in the first fixed angular posture, and when each of the feature points 90 is photographed, the current position of the end flange is recorded, including the following steps:
S310b: the end flange 20 maintains the first fixed angular attitude α_1, or the moving mechanism 10 drives the end flange 20 to rotate to the first fixed angular attitude α_1;
s320b, the moving mechanism 10 drives the end flange 20 to move the camera 30 to the vicinity of the feature point 90, and then the camera 30 takes a picture of the feature point 90 and records the current position of the end flange 20.
Moving the camera 30 near the feature point 90 ensures that the feature point 90 is within the camera field of view 60 during the photographing process, so as to ensure that the captured image has the pattern corresponding to the feature point 90, so that the coordinates of the pattern corresponding to the feature point 90 in the pixel coordinate system can be obtained in the subsequent steps.
It is understood that if there is one feature point 90, in step S400, obtaining the positional relationship between each feature point 90 and the camera center at the time of shooting, according to the image obtained by shooting each feature point 90 and the conversion relation T_PC, comprises the following steps:
s410b, acquiring coordinate values of the graph corresponding to the feature points 90 in the pixel coordinate system through an image processing technology according to the image shot in the step S300;
S420b: according to the conversion relation T_PC between the pixel coordinate system and the physical position referenced to the camera center, obtained in step S200, and the coordinate value in the pixel coordinate system of the graph corresponding to the feature point 90, obtained in step S410b, the positional relationship between the feature point and the camera center is obtained by conversion, i.e. the distance ΔX in the X direction and the distance ΔY in the Y direction between the feature point 90 and the camera center in the motion mechanism coordinate system.
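One common way to realize T_PC is a 2×3 affine matrix mapping a pixel coordinate to its physical offset from the camera center. The sketch below assumes that representation; the scale and principal-point values are hypothetical:

```python
import numpy as np

def pixel_to_physical(t_pc, u, v):
    """Apply the 2x3 affine T_PC to pixel (u, v), giving the physical
    offset (dX, dY) of that point from the camera center."""
    return t_pc @ np.array([u, v, 1.0])

# Hypothetical T_PC: 0.5 mm/pixel, image center at pixel (320, 240),
# v axis pointing opposite to Y
t_pc = np.array([[0.5, 0.0, -160.0],
                 [0.0, -0.5, 120.0]])
dx, dy = pixel_to_physical(t_pc, 420.0, 240.0)
print(dx, dy)  # 50.0 0.0  (the feature lies 50 mm along +X from the camera center)
```

A pixel at the image center maps to (0, 0), consistent with "the camera center" being the physical reference of T_PC.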
Referring to fig. 6, it can be understood that if there is one feature point 90, in step S500, obtaining the position P_1 of the end flange 20 when the camera center is aligned with the feature point 90 or with the set relative position between the feature points 90, according to the positional relationship between each feature point and the camera center at the time of shooting, the current end flange position when each feature point 90 was shot, and the camera TOOL_C, comprises the following steps:
s510b, acquiring the position of the camera center when the camera center is aligned with the feature point 90 according to the position relation between the feature point 90 and the camera center acquired in the step S400 and the position of the end flange 20 during photographing in the step S400;
it is to be understood that the position of the camera center when the camera center alignment feature point 90 is acquired here is basically the same as the manner described in step S510 a.
S520b: according to the position of the camera center obtained in step S510b and the camera TOOL_C, the position P_1 of the end flange 20 when the camera center is aligned with the feature point 90 can be calculated.
It should be understood that, as before, acquiring the position of the camera center when it is aligned with the feature point 90 does not require actually moving the camera center into alignment with the feature point 90; it is only necessary to obtain that position by calculation and thereby obtain the position P_1 of the end flange 20 in the corresponding state.
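The S520b calculation can be sketched as follows, assuming the camera TOOL_C is a 2-D offset of the camera center from the flange, expressed in the flange frame at 0-degree attitude (an assumption about the patent's convention); the function names are illustrative:

```python
import math

def rotate(alpha_deg, v):
    """Rotate a 2-D vector v by alpha_deg about the Z axis."""
    a = math.radians(alpha_deg)
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def flange_position_p1(camera_center, tool_c, alpha1):
    """P1: the end-flange position that places the camera center on the
    feature point, given camera_center = P1 + R(alpha1) * TOOL_C."""
    off = rotate(alpha1, tool_c)
    return (camera_center[0] - off[0], camera_center[1] - off[1])

p1 = flange_position_p1((200.0, 100.0), (10.0, 0.0), 0.0)
print(p1)  # (190.0, 100.0)
```

The subtraction simply inverts the flange-to-camera offset, which is why no physical motion is needed to "align" the camera.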
It will be appreciated that with a single feature point 90, there is no line between feature points 90 from which to obtain the angle ROLL-V1 at which the marker 70 is calibrated. To solve this problem, when there is one feature point 90, the feature point 90 is given a non-circular, non-symmetric shape; for example, the feature point 90 is shaped as a triangle that is neither equilateral nor isosceles, or as a non-isosceles trapezoid. When the feature point 90 is shaped as such a triangle, specifically, in step S600, obtaining the angle ROLL-V1 from the captured image comprises the following steps:
S600b: obtain a boundary of the feature point 90 by image processing, and obtain, in the pixel coordinate system, the angle between a line connecting two pixel points on the boundary of the feature point 90 and the horizontal center line of the camera (i.e. the u axis of the pixel coordinate system); this is the angle ROLL-V1 at which the marker 70 is calibrated.
It can be understood that, owing to the differences in color, brightness, etc. between the graph of the feature point 90 and the background in the captured image, the boundary of the feature point 90 can be obtained by image processing; after the coordinates of two pixel points on that boundary are obtained, the calibration angle ROLL-V1 of the marker can be obtained in a manner similar to S600a.
It is understood that the step S600b is also applicable to the case where the plurality of feature points 90 are asymmetric.
It is understood that in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired attitude can be achieved in a number of different ways; several of the main ones are described below.
In the first mode, in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired attitude comprises the following steps:
s710a, controlling the movement mechanism 10 to drive the end flange 20 to move in a manner of manual teaching, so that the working tool 40 reaches the working position 710 in a desired posture;
S720a: record the current end flange position P_T and angular attitude α_T.
It is understood that the first mode is implemented mainly by manual teaching. When the first mode is adopted in step S700, the positional relationship between the work tool 40 and the end flange 20 need not be obtained or calculated anywhere in the calibration method, i.e. the work tool TOOL need not be calculated. Moreover, since the work tool 40 is irregular in some working scenarios, its TOOL is difficult to obtain effectively; using the work tool TOOL together with image processing to obtain P_T and α_T makes the acquisition process complicated, since more involved offset calculations must be performed, such as the XY-direction components caused by an angular offset, and the angle must be adjusted by manual trial to find the correct working angular attitude. Step S700 in the first mode is therefore simpler to operate, and can be applied even if the work tool 40 is irregular.
In the second mode, if the marker 70 is a product or a carrier carrying a product, in step S700, acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the work position in the desired attitude comprises the following steps:
S710b: with the end flange 20 in the first fixed angular attitude α_1, photograph the product 80 with the camera to obtain a product image;
S720b: according to the product image, the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, acquire the work position of the product 80;
Specifically, assuming that the work to be performed on the product 80 is grabbing or welding, image processing can obtain features such as the boundary or weld seam of the product 80, from which the pixel coordinates of the grabbing position or welding position of the product 80 in the pixel coordinate system can be derived; then, according to these pixel coordinates, the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, the position coordinates of the camera center when it is aligned with the grabbing or welding position (i.e. the work position) can be obtained, which are the coordinates of the work position of the product in the motion mechanism coordinate system;
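Putting S710b and S720b together, a hypothetical sketch (under the same assumed conventions as above: TOOL_C as a flange-frame offset, T_PC as a 2×3 affine) of how a pixel detected on the product maps to a work position in the motion mechanism coordinate system:

```python
import math

def rotate(alpha_deg, v):
    a = math.radians(alpha_deg)
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def work_position(flange_pos, alpha1, tool_c, t_pc, uv):
    """World coordinates of the work position detected at pixel uv:
    camera center = flange + R(alpha1) * TOOL_C, then add the T_PC offset."""
    off = rotate(alpha1, tool_c)
    cam = (flange_pos[0] + off[0], flange_pos[1] + off[1])
    u, v = uv
    dx = t_pc[0][0] * u + t_pc[0][1] * v + t_pc[0][2]
    dy = t_pc[1][0] * u + t_pc[1][1] * v + t_pc[1][2]
    return (cam[0] + dx, cam[1] + dy)

t_pc = [[0.5, 0.0, -160.0], [0.0, -0.5, 120.0]]  # hypothetical calibration
print(work_position((100.0, 100.0), 0.0, (10.0, 5.0), t_pc, (420.0, 240.0)))
# (160.0, 105.0)
```

The same computation serves the third mode (S720c), with the placement-structure boundary supplying the pixel instead of the product boundary.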
S730b: according to the work position 710 of the product 80, acquire the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired attitude.
It is understood that, in combination with the work tool TOOL or the like calculated before the calibration method described in the embodiment of the present invention, P_T and α_T may be calculated from the coordinates of the work position 710 of the product 80 in the motion mechanism coordinate system.
In the third mode, if the marker 70 is a carrier to receive the product 80 and a placement structure for the product 80 is provided on the carrier, in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired attitude comprises the following steps:
S710c: with the end flange 20 in the first fixed angular attitude α_1, photograph the placement structure with the camera 30 to obtain an image of the placement structure;
S720c: according to the image of the placement structure, the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, acquire the work position 710 for the product 80;
It can be understood that, assuming the placement structure is a receiving cavity for the product 80, image processing of the placement-structure image can obtain features such as the boundary of the receiving cavity, from which the pixel coordinates of the placement position of the product 80 in the pixel coordinate system can be derived; then, according to these pixel coordinates, the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, and the camera TOOL_C, the position coordinates of the camera center when it is aligned with the placement position (i.e. the work position 710) can be obtained, which are the coordinates of the work position 710 of the product in the motion mechanism coordinate system;
S730c: according to the work position 710 of the product, acquire the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired attitude.
It is understood that, in combination with the work tool TOOL or the like calculated before the calibration method described in the embodiment of the present invention, P_T and α_T may be calculated from the coordinates of the work position 710 of the product 80 in the motion mechanism coordinate system.
If work is performed only on a single product 80 or a single work position 710 on the carrier, step S700 can be carried out using only the first to third modes. In actual industrial production, however, there are cases where a plurality of products 80 must be worked on simultaneously, or where a plurality of work positions 710 on the same product 80 must be worked on. To address this need, the following mode may be combined with the first to third modes described above.
Referring to fig. 5, it can be understood that when the marker 70 corresponds to N work positions 710, N ≥ 1, in step S700, acquiring the position P_T and angular attitude α_T of the end flange 20 when the work tool 40 reaches the work position 710 in the desired attitude comprises the following steps:
S700d: for each work position 710, acquire the position P_Tn and angular attitude α_Tn of the end flange 20 when the work tool 40 reaches that work position 710 in the desired attitude, where N ≥ n ≥ 1.
It is understood that in S700d, for any one of the work positions 710, any of the first to third modes described above may be selected to obtain the corresponding P_Tn and α_Tn. For example, the movement mechanism 10 may be controlled by manual teaching to drive the end flange 20 so that the work tool 40 reaches each work position 710 in the desired attitude; when the work tool 40 reaches each work position 710, the current position P_Tn and angular attitude α_Tn of the end flange 20 are recorded, where N ≥ n ≥ 1.
Corresponding to step S700d, in step S800, obtaining the transformation relation offset between (P_1, α_1) and (P_T, α_T) according to P_1, α_1, P_T, α_T comprises the following steps:
S800d: according to P_1, α_1 and each recorded set of end flange position P_Tn and angular attitude α_Tn, obtain the transformation relation offset-n between (P_1, α_1) and each (P_Tn, α_Tn).
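One standard way to realize the transformation relation offset-n of S800d is as a relative pose: a translation expressed in the frame of (P_1, α_1), plus an angle increment, which can later be re-applied at a new pose. The patent does not fix a representation, so this sketch assumes that convention:

```python
import math

def rotate(alpha_deg, v):
    a = math.radians(alpha_deg)
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def make_offset(p1, alpha1, pt, alpha_t):
    """offset between (P1, alpha1) and (PT, alphaT): the translation
    expressed in the (P1, alpha1) frame, plus the angle increment."""
    d = (pt[0] - p1[0], pt[1] - p1[1])
    return rotate(-alpha1, d), alpha_t - alpha1

def apply_offset(p6, alpha6, offset):
    """Re-apply a stored offset at pose (P6, alpha6) to get (P7, alpha7)."""
    local, d_alpha = offset
    w = rotate(alpha6, local)
    return (p6[0] + w[0], p6[1] + w[1]), alpha6 + d_alpha

off = make_offset((0.0, 0.0), 0.0, (10.0, 0.0), 30.0)
print(apply_offset((0.0, 0.0), 0.0, off))  # ((10.0, 0.0), 30.0)
```

Re-applied at the same pose, the offset reproduces (P_T, α_T) exactly; re-applied at a rotated pose, the translation rotates with the marker, which is what steps S1400a/S1400b rely on.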
It can be understood that, when the marker 70 is a carrier carrying the products 80, the N work positions 710 corresponding to the marker 70 may be distributed over N different products 80. Through steps S700d and S800d, in the work process after teaching is completed, the transformation relation offset-n corresponding to each product 80 ensures that work can be completed in the desired attitude and position on each product 80, guaranteeing the work accuracy for every product 80 and meeting the need of working on a plurality of products 80 in one run.
It can be understood that, when the marker 70 is a carrier waiting to receive the products 80, the N work positions 710 corresponding to the marker 70 may be the placement structures of N different products 80 distributed on the carrier. Through steps S700d and S800d, in the work process after teaching is completed, the transformation relation offset-n of each placement position (work position) ensures that the products 80 are placed one by one in the desired attitude and position, meeting the need of placing a plurality of products 80 in one run.
It can be understood that, when the marker 70 is the product 80, the N work positions 710 corresponding to the marker 70 may be N different work positions 710 distributed on the same product 80. Through steps S700d and S800d, in the work process after teaching is completed, work can be completed at every work position 710 on the product 80 in the desired attitude and position, guaranteeing the accuracy at each work position 710 and meeting the need of working at a plurality of work positions 710 on the same product 80.
The vision-recognition-based working methods implemented by the second and third aspect embodiments of the present invention are described below; both are likewise applied to the working apparatus described in the first aspect embodiment.
Referring to fig. 7, the working method based on visual recognition according to the second aspect embodiment of the present invention includes two major steps: the first is a calibration step and the second is a working step. The calibration step employs the calibration method based on visual recognition according to the first aspect embodiment, in the case where a single feature point 90 is used; the calibration part of the first major step is therefore not described in detail here.
Referring to fig. 8, the working method based on visual recognition according to the second aspect of the present invention further includes the following second major steps:
S900b: with the end flange 20 in the first fixed angular attitude α_1, photograph the feature point 90 on the marker 70 again, and record the current position coordinates of the end flange 20;
S1000b: according to the image obtained by photographing the marker 70 again in S900b and the conversion relation T_PC, obtain the positional relationship between the feature point 90 and the camera center at the time of re-shooting;
S1100b: according to the positional relationship between the feature point 90 and the camera center at the time of re-shooting, the current end flange position when the feature point 90 was re-shot, and the camera TOOL_C, acquire the position P_5 of the camera center when it is re-aligned with the feature point 90;
S1200b: according to the image obtained by photographing the marker 70 again in S900b, obtain the angle ROLL-V2 of the marker 70 during operation;
It is to be understood that the detailed procedure of steps S900b to S1200b is substantially the same as the steps S300 to S600 at the single feature point 90 in the embodiment of the first aspect;
S1300b: according to the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, acquire the end flange position P_6 when the camera center is at position P_5 and the end flange is at angular attitude α_1+(ROLL-V2−ROLL-V1);
It will be appreciated that when the camera center is at position P_5 and the end flange 20 is at angular attitude α_1+(ROLL-V2−ROLL-V1), the position and angle of the marker 70 in the camera field of view 60 are the same as during calibration, when the end flange 20 was at P_1 and α_1; according to the position coordinates P_5 and the camera TOOL_C, the position P_6 of the end flange 20 in this state can be calculated.
S1400b: according to P_6, the angular attitude α_1+(ROLL-V2−ROLL-V1) and the transformation relation offset, obtain the position P_7 and angular attitude α_7 of the end flange for the work position 710; move the end flange 20 to P_7, rotate the end flange 20 to the angular attitude α_7, and perform the work at the work position with the work tool.
It will be appreciated that when the end flange is at position P_6 and angular attitude α_1+(ROLL-V2−ROLL-V1), the position and angular attitude of the end flange 20 and the work tool 40 in the marker coordinate system (established with reference to the marker 70 itself) are consistent with those during the calibration step (i.e. with the end flange at P_1 and α_1). Therefore, by converting P_6 and the angular attitude α_1+(ROLL-V2−ROLL-V1) through the transformation relation offset, the position P_7 and angular attitude α_7 of the end flange when the work tool 40 is at the work position 710 in the desired attitude can be obtained; the moving mechanism 10 then drives the end flange 20 to the position P_7 and angular attitude α_7, and the work can be performed directly at the work position 710 by the work tool 40.
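Under the same assumed conventions as the earlier sketches, S1300b reduces to subtracting the camera TOOL_C, rotated by the corrected attitude, from P_5; the resulting (P_6, α_6) is what the transformation relation offset is then applied to. The names and values below are hypothetical:

```python
import math

def rotate(alpha_deg, v):
    a = math.radians(alpha_deg)
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def runtime_flange_pose(p5, tool_c, alpha1, roll_v1, roll_v2):
    """(P6, alpha6): the flange pose that reproduces, at run time, the
    calibration-time position and angle of the marker in the camera view."""
    alpha6 = alpha1 + (roll_v2 - roll_v1)
    off = rotate(alpha6, tool_c)
    return (p5[0] - off[0], p5[1] - off[1]), alpha6

# With no angular deviation (ROLL-V2 == ROLL-V1) this is the plain TOOL_C inverse
pose = runtime_flange_pose((100.0, 50.0), (10.0, 0.0), 0.0, 5.0, 5.0)
print(pose)  # ((90.0, 50.0), 0.0)
```

When ROLL-V2 differs from ROLL-V1, the attitude correction rotates TOOL_C before the subtraction, so P_6 tracks the marker's rotation as the text describes.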
Referring to figs. 9 and 10, the working method based on visual recognition according to the third aspect embodiment of the present invention includes two major steps: the first is a calibration step and the second is a working step. The calibration step employs the calibration method based on visual recognition according to the first aspect embodiment, in the case where a plurality of feature points 90 are used; the calibration part of the first major step is therefore not described in detail here.
Referring to fig. 10, the second major step of the working method based on visual recognition according to the third embodiment of the present invention includes the following steps:
S900a: with the end flange 20 in the first fixed angular attitude α_1, photograph each feature point 90 on the marker 70 again, and record the current position coordinates of the end flange 20 when each feature point 90 is photographed;
S1000a: according to the images obtained by photographing each feature point 90 again in S900a and the conversion relation T_PC, obtain the positional relationship between each feature point 90 and the camera center at the time of re-shooting;
S1100a: according to the positional relationship between the feature points 90 and the camera center at the time of re-shooting from S1000a, the current end flange positions when the feature points 90 were re-shot, and the camera TOOL_C, obtain the position P_5 of the camera center when it is re-aligned with the set relative position between the feature points 90;
S1200a: according to the images obtained by photographing each feature point 90 again in S900a, obtain the angle ROLL-V2 of the marker 70 during operation;
It is to be understood that the detailed procedure of steps S900a to S1200a is substantially the same as the steps S300 to S600 at the plurality of feature points 90 in the embodiment of the first aspect;
S1300a: according to the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, acquire the position P_6 of the end flange 20 when the camera center is at position P_5 and the end flange 20 is at angular attitude α_1+(ROLL-V2−ROLL-V1);
S1400a: according to P_6, the angular attitude α_1+(ROLL-V2−ROLL-V1) and the transformation relation offset, obtain the position P_7 and angular attitude α_7 of the end flange for the work position 710; move the end flange to P_7, rotate the end flange to the angular attitude α_7, and perform the work at the work position 710 with the work tool 40.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (15)

1. The calibration method based on the visual identification is characterized by being applied to operation equipment, wherein the operation equipment comprises an end effector and a motion mechanism, the end effector comprises an end flange, a camera and an operation tool, the camera and the operation tool are arranged on the end flange, and the motion mechanism can drive the end flange to freely move on an XY plane and can drive the end flange to rotate around the axis of the end flange;
the calibration method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C;
according to the camera TOOL_C, determining the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, with the end flange in a first fixed angular attitude α_1;
with the end flange in the first fixed angular attitude α_1, shooting at least one feature point on the marker, the relative position between the marker and the work position being fixed, and recording the current end flange position when each feature point is shot;
according to the image obtained by shooting each feature point and the conversion relation T_PC, acquiring the positional relationship between each feature point and the camera center at the time of shooting;
according to the positional relationship between each feature point and the camera center at the time of shooting, the current end flange position when each feature point was shot, and the camera TOOL_C, acquiring the position P_1 of the end flange when the camera center is aligned with the feature point or with the set relative position between the feature points;
according to the image obtained by shooting the feature point, obtaining the angle ROLL-V1 at which the marker is calibrated;
acquiring the position P_T and angular attitude α_T of the end flange when the work tool reaches the work position in a desired attitude;
according to P_1, α_1, P_T and α_T, obtaining the transformation relation offset between (P_1, α_1) and (P_T, α_T).
2. The vision-recognition-based calibration method of claim 1, wherein determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C comprises the following steps:
with the end flange in an angular attitude α_3, photographing the calibration object from at least three different position points, obtaining the transformation relation between pixel coordinates and end flange position at the angular attitude α_3 according to the end flange position and the pixel coordinates of the calibration object in the image at each shot, and, according to this transformation relation, obtaining the position P_3 that the end flange needs to reach for the calibration object to be located at the image center;
with the end flange in an angular attitude α_4, photographing the calibration object from at least three different position points, obtaining the transformation relation between pixel coordinates and end flange position at the angular attitude α_4 according to the end flange position and the pixel coordinates of the calibration object in the image at each shot, and, according to this transformation relation, obtaining the position P_4 that the end flange needs to reach for the calibration object to be located at the image center; wherein α_4 is not equal to α_3;
according to α_3 and P_3, and α_4 and P_4, calculating the camera TOOL_C.
3. The vision-recognition-based calibration method of claim 2, wherein calculating the camera TOOL_C according to α_3 and P_3, and α_4 and P_4, comprises the following steps:
substituting the relevant values of α_3, P_3, α_4, P_4 into the mapping relation f_TOOL and calculating the camera TOOL_C, wherein,
Figure FDA0002855486680000021
(X_3, Y_3) are the position coordinates of P_3, and (X_4, Y_4) are the position coordinates of P_4.
4. The vision-recognition-based calibration method of claim 2, wherein calculating the camera TOOL_C according to α_3 and P_3, and α_4 and P_4, comprises the following steps:
obtaining the expression T_3 of the camera center coordinates in the motion mechanism coordinate system, with the camera TOOL_C as parameter, when the end flange is at position P_3 and angular attitude α_3;
obtaining the expression T_4 of the camera center coordinates in the motion mechanism coordinate system, with the camera TOOL_C as parameter, when the end flange is at position P_4 and angular attitude α_4;
establishing the equation T_3 = T_4 according to the fact that the coordinate position of the camera center in the motion mechanism coordinate system is unchanged in the two states, and obtaining the camera TOOL_C;
Figure FDA0002855486680000022
wherein (X_3, Y_3) are the position coordinates of P_3, and (X_4, Y_4) are the position coordinates of P_4.
5. The vision-recognition-based calibration method of claim 1, wherein, according to the camera TOOL_C, determining the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center, with the end flange in the first fixed angular attitude α_1, comprises the following steps:
with the end flange in the first fixed angular attitude α_1, photographing the calibration point from at least 3 different positions, so that the graphs corresponding to the calibration point in the images have at least three non-collinear points in the pixel coordinate system, and recording the current end flange position each time the calibration point is photographed;
according to the recorded end flange position at each shot of the calibration point and the camera TOOL_C, obtaining by conversion the position of the camera center at each shot; and, according to the pixel coordinates corresponding to the calibration point at each shooting position and the position of the camera center at each shot, obtaining the conversion relation T_PC between pixel coordinates and the physical position referenced to the camera center.
6. The calibration method based on visual recognition according to claim 1, wherein if there are 2 feature points,
wherein, according to the positional relationship between each feature point and the camera center at the time of shooting, the current end flange position when each feature point was shot, and the camera TOOL_C, acquiring the position P_1 of the end flange when the camera center is aligned with the feature point or with the set relative position between the feature points comprises the following steps:
according to the positional relationship between each feature point and the camera center at the time of shooting, the camera TOOL_C, and the current end flange position when each feature point was shot, obtaining the positions P_O1 and P_O2 of the camera center when it is aligned with each of the two feature points respectively;
according to the obtained P_O1 and P_O2, obtaining the position of the center point O of the line P_O1P_O2;
according to the position of the center point O and the camera TOOL_C, acquiring the position P_1 of the end flange when the camera center is aligned with the center point O.
7. The calibration method based on visual recognition according to claim 6, wherein, when there are two feature points,
acquiring the angle ROLL-V1 at calibration of the marked article, according to the images obtained by photographing each feature point, comprises the following steps:
acquiring, from the obtained P_O1 and P_O2, the angle between the line P_O1 P_O2 and the horizontal center line of the camera, which is the angle ROLL-V1 at calibration of the marked article.
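The angle of the line P_O1 P_O2 against the camera's horizontal center line is a single two-argument arctangent. A minimal sketch (degrees chosen arbitrarily; the patent does not specify units):

```python
import math

def roll_angle(p_o1, p_o2):
    """Angle of the line from P_O1 to P_O2 relative to the camera's
    horizontal center line, in degrees (i.e. ROLL-V1 at calibration)."""
    dx = p_o2[0] - p_o1[0]
    dy = p_o2[1] - p_o1[1]
    return math.degrees(math.atan2(dy, dx))
```

At run time the same function applied to the re-detected points yields ROLL-V2, and the difference ROLL-V2 − ROLL-V1 is the part's rotation since calibration.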
8. The calibration method based on visual recognition according to claim 1, wherein, when there is one feature point,
acquiring the position P1 of the end flange when the camera center is aligned with the feature point, according to the positional relationship between the feature point and the camera center at the time of shooting, the current end-flange position when the feature point is photographed, and the camera TOOL_C, comprises the following steps:
acquiring the position of the camera center when it is aligned with the feature point, according to the camera TOOL_C;
acquiring the position P1 of the end flange when the camera center is aligned with the feature point, according to that camera-center position and the camera TOOL_C.
9. The calibration method based on visual recognition according to claim 1, wherein acquiring the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude comprises the following steps:
controlling the motion mechanism, by manual teaching, to drive the end flange to move so that the work tool reaches the working position in the desired attitude, and recording the current end-flange position PT and angular attitude αT.
10. The calibration method based on visual recognition according to claim 1, wherein, when the marked article is a product or a carrier carrying a product,
acquiring the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude comprises the following steps:
with the end flange in the first fixed angular attitude α1, photographing the product with the camera to obtain a product image;
acquiring the working position of the product, according to the product image, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, and the camera TOOL_C;
acquiring, according to the working position of the product, the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude.
11. The calibration method based on visual recognition according to claim 1, wherein, when the marked article is a carrier for receiving a product and a placement structure for placing the product is disposed on the carrier,
acquiring the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude comprises the following steps:
with the end flange in the first fixed angular attitude α1, photographing the placement structure with the camera to obtain an image of the placement structure;
acquiring the working position of the product, according to the image of the placement structure, the conversion relation T_PC between pixel coordinates and physical positions referenced to the camera center, and the camera TOOL_C;
acquiring, according to the working position of the product, the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude.
12. The calibration method based on visual recognition according to claim 1, wherein:
the marked article corresponds to N working positions, where N ≥ 1;
and acquiring the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude comprises the following steps:
acquiring, for each working position, the position P_Tn and the angular attitude α_Tn of the end flange 20 when the work tool 40 reaches that working position in the desired attitude, where N ≥ n ≥ 1.
13. The calibration method based on visual recognition according to claim 12, wherein acquiring the transformation relation offset between (P1, α1) and (PT, αT) according to P1, α1, PT and αT comprises the following steps:
acquiring the transformation relation offset-n between (P1, α1) and each group (P_Tn, α_Tn), according to P1, α1 and each recorded set of end-flange position P_Tn and end-flange angular attitude α_Tn.
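One natural encoding of the transformation relation offset is the translation from P1 to PT expressed in the α1-rotated frame, plus the angle difference αT − α1; stored that way, it can be re-applied from any later reference pose. The patent does not disclose its internal representation, so the encoding below is an assumption for illustration:

```python
import numpy as np

def rot(a):
    """2D rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def compute_offset(p1, a1, pt, at):
    """Express (PT, aT) relative to (P1, a1): translation in the
    a1-rotated frame plus the angle difference (assumed encoding)."""
    dp = rot(-a1) @ (np.asarray(pt, dtype=float) - np.asarray(p1, dtype=float))
    return dp, at - a1

def apply_offset(p, a, offset):
    """Re-apply a stored offset from a new reference pose (p, a)."""
    dp, da = offset
    return np.asarray(p, dtype=float) + rot(a) @ dp, a + da
```

Round-tripping through compute/apply at the calibration pose recovers (PT, αT) exactly; applying the same offset from a shifted, rotated pose yields the correspondingly shifted working pose.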
14. An operation method based on visual recognition, characterized in that it is applied to operation equipment, wherein the operation equipment comprises an end effector and a motion mechanism, the end effector comprises an end flange and a camera and a work tool arranged on the end flange, and the motion mechanism can drive the end flange to move freely in the XY plane and to rotate about its own axis;
the operation method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C; determining, according to the camera TOOL_C, the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center, with the end flange in a first fixed angular attitude α1; with the end flange in the first fixed angular attitude α1, photographing a single feature point on the marked article, the relative position between the marked article and the working position being fixed, and recording the current end-flange position when the feature point is photographed; acquiring the positional relationship between the feature point and the camera center at the time of shooting, according to the image of the feature point and the conversion relation T_PC; acquiring the position P1 of the end flange when the camera center is aligned with the feature point, according to that positional relationship, the current end-flange position when the feature point was photographed, and the camera TOOL_C; acquiring the angle ROLL-V1 at calibration of the marked article, according to the image of the feature point; acquiring the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude; and acquiring the transformation relation offset between (P1, α1) and (PT, αT) according to P1, α1, PT and αT;
with the end flange in the first fixed angular attitude α1, photographing the feature point on the marked article again, and recording the current end-flange position coordinates; acquiring the positional relationship between the feature point and the camera center at this second shot, according to the new image and the conversion relation T_PC; acquiring the position P5 of the camera center when it is again aligned with the feature point, according to that positional relationship, the current end-flange position at the second shot, and the camera TOOL_C; acquiring the angle ROLL-V2 of the marked article during operation, according to the new image; acquiring, according to the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, the end-flange position P6 when the camera center is at P5 and the end flange is at angular attitude α1 + (ROLL-V2 − ROLL-V1); acquiring the end-flange position P7 and angular attitude α7 at the working position, according to P6, the angular attitude α1 + (ROLL-V2 − ROLL-V1), and the transformation relation offset; and moving the end flange to P7, rotating the end flange to angular attitude α7, and performing the work at the working position with the work tool.
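The runtime portion of claim 14 chains three conversions: correct the flange attitude by the detected rotation, convert the re-aligned camera-center position P5 into the flange position P6 through TOOL_C, then apply the stored offset to reach (P7, α7). A hedged end-to-end sketch, assuming TOOL_C is a flange-frame offset that rotates with the flange and that offset is stored as (frame-relative translation, angle difference); all names and the encoding are illustrative:

```python
import numpy as np

def rot(a):
    """2D rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def runtime_pose(p5, alpha1, roll_v1, roll_v2, tool_c, offset):
    """Claim-14 runtime correction:
    p5      - camera-center position re-aligned on the feature point
    alpha1  - first fixed angular attitude used at calibration
    roll_v* - part angle at calibration / at operation (radians)
    tool_c  - camera-center offset in the flange frame (assumed model)
    offset  - stored (P1, a1)->(PT, aT) relation as (dp, da)
    Returns (P6, a6, P7, a7)."""
    a6 = alpha1 + (roll_v2 - roll_v1)                 # attitude corrected by part rotation
    p6 = np.asarray(p5, dtype=float) - rot(a6) @ np.asarray(tool_c, dtype=float)
    dp, da = offset
    p7 = p6 + rot(a6) @ dp                            # flange position at the working position
    a7 = a6 + da
    return p6, a6, p7, a7
```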
15. An operation method based on visual recognition, characterized in that it is applied to operation equipment, wherein the operation equipment comprises an end effector and a motion mechanism, the end effector comprises an end flange and a camera and a work tool arranged on the end flange, and the motion mechanism can drive the end flange to move freely in the XY plane and to rotate about its own axis;
the operation method comprises the following steps:
determining the positional relationship between the camera center and the end flange to obtain the camera TOOL_C; determining, according to the camera TOOL_C, the conversion relation T_PC between the pixel coordinate system and physical positions referenced to the camera center, with the end flange in a first fixed angular attitude α1; with the end flange in the first fixed angular attitude α1, photographing a plurality of feature points on the marked article, the relative position between the marked article and the working position being fixed, and recording the current end-flange position when each feature point is photographed; acquiring the positional relationship between each feature point and the camera center at the time of shooting, according to the image of each feature point and the conversion relation T_PC; acquiring the position P1 of the end flange when the camera center is aligned with the set relative position between the feature points, according to those positional relationships, the current end-flange position when each feature point was photographed, and the camera TOOL_C; acquiring the angle ROLL-V1 at calibration of the marked article, according to the images of the feature points; acquiring the position PT and the angular attitude αT of the end flange when the work tool reaches the working position in the desired attitude; and acquiring the transformation relation offset between (P1, α1) and (PT, αT) according to P1, α1, PT and αT;
with the end flange in the first fixed angular attitude α1, photographing each feature point on the marked article again, and recording the current end-flange position when each feature point is photographed; acquiring the positional relationship between each feature point and the camera center at this second shot, according to the new images and the conversion relation T_PC; acquiring the position P5 of the camera center when it is again aligned with the feature points, according to those positional relationships, the current end-flange position when each feature point was photographed again, and the camera TOOL_C; acquiring the angle ROLL-V2 of the marked article during operation, according to the new images; acquiring, according to the deviation between ROLL-V2 and ROLL-V1 and the camera TOOL_C, the end-flange position P6 when the camera center is at P5 and the end flange is at angular attitude α1 + (ROLL-V2 − ROLL-V1); acquiring the end-flange position P7 and angular attitude α7 at the working position, according to P6, the angular attitude α1 + (ROLL-V2 − ROLL-V1), and the transformation relation offset; and moving the end flange to P7, rotating the end flange to angular attitude α7, and performing the work at the working position with the work tool.
CN202011545117.XA 2020-12-24 2020-12-24 Calibration method and operation method based on visual recognition Active CN112598752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011545117.XA CN112598752B (en) 2020-12-24 2020-12-24 Calibration method and operation method based on visual recognition

Publications (2)

Publication Number Publication Date
CN112598752A true CN112598752A (en) 2021-04-02
CN112598752B CN112598752B (en) 2024-02-27

Family

ID=75200614

Country Status (1)

Country Link
CN (1) CN112598752B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050131582A1 (en) * 2003-10-01 2005-06-16 Arif Kazi Process and device for determining the position and the orientation of an image reception means
US20170274533A1 (en) * 2014-08-14 2017-09-28 Kuka Roboter Gmbh Positioning A Robot
CN108724190A (en) * 2018-06-27 2018-11-02 西安交通大学 A kind of industrial robot number twinned system emulation mode and device
CN109454634A (en) * 2018-09-20 2019-03-12 广东工业大学 A kind of Robotic Hand-Eye Calibration method based on flat image identification
CN110450163A (en) * 2019-08-20 2019-11-15 上海中车瑞伯德智能系统股份有限公司 The general hand and eye calibrating method based on 3D vision without scaling board
CN110634164A (en) * 2019-10-16 2019-12-31 易思维(杭州)科技有限公司 Quick calibration method for vision sensor
CN111127568A (en) * 2019-12-31 2020-05-08 南京埃克里得视觉技术有限公司 Camera pose calibration method based on space point location information
KR102111655B1 (en) * 2019-11-01 2020-06-04 주식회사 뉴로메카 Automatic calibration method and apparatus for robot vision system
CN111649667A (en) * 2020-05-29 2020-09-11 新拓三维技术(深圳)有限公司 Flange pipeline end measuring method, measuring device and adapter structure
CN111791227A (en) * 2019-12-31 2020-10-20 深圳市豪恩声学股份有限公司 Robot hand-eye calibration method and device and robot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FANG WAN et al.: "Flange-Based Hand-Eye Calibration Using a 3D Camera With High Resolution, Accuracy, and Frame Rate", Frontiers in Robotics and AI, 1 May 2020, pages 1-10 *
JOSÉ MAURÍCIO S.T. MOTTA et al.: "Robot calibration using a 3D vision-based measurement system with a single camera", Robotics and Computer-Integrated Manufacturing, vol. 17, no. 6, 31 December 2001, pages 487-497, XP004323453, DOI: 10.1016/S0736-5845(01)00024-2 *
SUN Yilin; FAN Cheng; CHEN Guodong; GONG Xun; XU Hui: "Calibration of a robot polishing tool system based on a laser tracker", Manufacturing Automation, no. 24, 25 December 2014 *
WANG Da; LOU Xiaoping; DONG Mingli; SUN Peng: "Research on hand-eye calibration of a snake-arm robot with redundant degrees of freedom", Computer Measurement & Control, no. 08, 25 August 2015 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471446A (en) * 2022-06-23 2022-12-13 上海江波龙数字技术有限公司 Slot position coordinate obtaining method and device and storage medium
CN116297531A (en) * 2023-05-22 2023-06-23 中科慧远视觉技术(北京)有限公司 Machine vision detection method, system, medium and equipment
CN116297531B (en) * 2023-05-22 2023-08-01 中科慧远视觉技术(北京)有限公司 Machine vision detection method, system, medium and equipment

Also Published As

Publication number Publication date
CN112598752B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
JP6966582B2 (en) Systems and methods for automatic hand-eye calibration of vision systems for robot motion
CN109794938B (en) Robot hole-making error compensation device and method suitable for curved surface structure
JP4021413B2 (en) Measuring device
JP3946711B2 (en) Robot system
US6816755B2 (en) Method and apparatus for single camera 3D vision guided robotics
TWI594097B (en) System and methods for virtual assembly of an object in an assembly system
JP4191080B2 (en) Measuring device
CA2710669C (en) Method and system for the high-precision positioning of at least one object in a final location in space
US8406923B2 (en) Apparatus for determining pickup pose of robot arm with camera
JP3733364B2 (en) Teaching position correction method
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
CN110276799B (en) Coordinate calibration method, calibration system and mechanical arm
CN111300481A (en) Robot grabbing pose correction method based on vision and laser sensor
CN113001535A (en) Automatic correction system and method for robot workpiece coordinate system
CN112598752A (en) Calibration method based on visual identification and operation method
TWI699264B (en) Correction method of vision guided robotic arm
CN110202560A (en) A kind of hand and eye calibrating method based on single feature point
CN110490942A (en) A kind of mobile camera calibration method based on the second arm of SCARA manipulator
CN112238453B (en) Vision-guided robot arm correction method
JP6912529B2 (en) How to correct the visual guidance robot arm
JPH09222913A (en) Teaching position correcting device for robot
JPH06785A (en) Correcting method for visual sensor coordinate system
CN111283676B (en) Tool coordinate system calibration method and calibration device of three-axis mechanical arm
CN112792818A (en) Visual alignment method for rapidly guiding mechanical arm to grab target
CN115397634A (en) Device for acquiring position of visual sensor in robot control coordinate system, robot system, method, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant