CN111015664A - Intelligent identification method based on CCD camera

Info

Publication number
CN111015664A
Authority
CN
China
Prior art keywords
coordinate system
point
camera
robot
ccd camera
Prior art date
Legal status
Granted
Application number
CN201911369932.2A
Other languages
Chinese (zh)
Other versions
CN111015664B (en)
Inventor
周斌
刘一
王钰松
肖云霄
吴静
Current Assignee
Chongqing Unication Electronic Technology Co ltd
Original Assignee
Chongqing Unication Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Unication Electronic Technology Co ltd filed Critical Chongqing Unication Electronic Technology Co ltd
Priority to CN201911369932.2A
Publication of CN111015664A
Application granted
Publication of CN111015664B
Status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an intelligent identification method based on a CCD camera, which comprises the following steps: moving the CCD camera to point I; determining the type of the object to be identified with the CCD camera, acquiring a grabbing point of the identified object, and acquiring the pixel coordinate point of the grabbing point; moving the CCD camera to point II; determining the coordinate values of point I and point II in the camera coordinate system and the robot coordinate system; determining the coordinate value of point III in the camera coordinate system through the CCD camera; and determining the coordinate value of point III in the robot coordinate system. Based on the CCD camera, the identification method can accurately locate the grabbing point and drive the robot to grab without any additionally designed jig or mechanical device; the algorithm is simple and stable, and a large amount of customized jigs, machined parts and labor cost is saved.

Description

Intelligent identification method based on CCD camera
Technical Field
The invention relates to the field of intelligent identification, in particular to an intelligent identification method based on a CCD camera.
Background
Traditional handling schemes rely on positioning: the workpiece is fixed by a mechanical positioning arrangement, or located with clamps, so that it can be picked and placed at a fixed position. The design cost of such schemes is usually very high and they are not general; each product change may require many jigs to be replaced, which is very expensive. Manual assembly is still widespread in industry, and replacing manual work in handling and transport requires very complicated mechanical structures. Against this application background, developing a new automatic identification technique is an urgent need.
Disclosure of Invention
In view of the above, the present invention provides an intelligent recognition method based on a CCD camera, which saves a large amount of customized jigs, machined parts and labor cost.
The purpose of the invention is realized by the following technical scheme:
an intelligent identification method based on a CCD camera comprises the following steps:
moving the CCD camera to point I;
determining the type of the object to be identified with the CCD camera, acquiring a grabbing point of the identified object, and acquiring the pixel coordinate point of the grabbing point;
moving the CCD camera to point II;
determining the coordinate values of point I and point II in the camera coordinate system and the robot coordinate system;
determining the coordinate value of point III in the camera coordinate system through the CCD camera;
and determining the coordinate value of point III in the robot coordinate system.
Further, the determination of the kind of the object to be recognized is based on the basic contour curved surface of the object to be recognized.
Further, the method for acquiring the pixel coordinate point comprises the following steps: different connectivity thresholds of the object to be identified are distinguished by thresholding, an interval of the connected domain is then selected, a circumscribed rectangle is determined from the maximum extents of the interval in the first direction and the second direction of the camera coordinate system, and the intersection point of the two diagonals of the circumscribed rectangle is taken as the pixel coordinate point.
Further, the method for obtaining the coordinate value of the point location III in the robot coordinate system includes the following steps:
the method comprises the following steps: obtaining an included angle between the camera coordinate system and the robot coordinate system;
step two: obtaining an included angle between a connecting line of the point location III and the point location II in the camera coordinate system and a first direction of the camera coordinate system;
step three: obtaining an included angle between a connecting line of the point location III and the point location II and a first direction of the robot coordinate system under the robot coordinate system through triangulation;
step four: obtaining the offset of the robot coordinate system;
step five: adding the offset to the current coordinate value of the robot.
Further, the first step specifically comprises:
α = Ax - Aj
Ax = arctan((Y1 - Y3)/(X1 - X3))
Aj = arctan((Y4 - Y2)/(X4 - X2))
α: the included angle between the camera coordinate system and the robot coordinate system;
Ax: the included angle, in the camera coordinate system, between the line connecting point I and point II and the first direction of the camera coordinate system;
Aj: the included angle between the line connecting point I and point II and the first direction of the robot coordinate system, computed from the robot-coordinate values;
the coordinates of point I in the camera coordinate system and the robot coordinate system are (X1, Y1) and (X2, Y2) respectively;
the coordinates of point II in the camera coordinate system and the robot coordinate system are (X3, Y3) and (X4, Y4) respectively.
Further, the second step is specifically:
A3 = A2
A2 = arctan((Y5 - Y3)/(X5 - X3));
wherein: (X5, Y5) is the coordinate value of point III in the camera coordinate system, and A2 is the angle, in the camera coordinate system, between the line connecting point III and point II and the first direction of the camera coordinate system.
Further, the third step is specifically:
x = L·cos(A2 + α)
y = L·sin(A2 + α)
wherein: L is the distance between point II and point III.
The invention has the beneficial effects that:
the recognition method provided by the invention is based on the CCD camera, can accurately capture the grabbing point without additionally designing a jig and a mechanical device so as to drive the robot to grab, has a simple and stable algorithm, and saves a large amount of customized jigs, machined parts and labor cost.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and/or embodiments described below.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:
FIG. 1 is a schematic view of the structure of an object A;
FIG. 2 is a picture of an object A after thresholding;
FIG. 3 is a schematic diagram of a second coordinate system and a third coordinate system;
FIG. 4 is a flow chart of the present invention.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
Hereinafter, the first direction is an X-axis direction and the second direction is a Y-axis direction.
The embodiment provides an intelligent recognition method based on a CCD camera, as shown in fig. 4. The object to be recognized is object A, whose structure is shown in fig. 1. The recognition method specifically includes:
the CCD camera is moved to a point position I which is any point position, and the coordinate value (X) of the point position I under the robot coordinate system can be obtained at the moment2,Y2)。
The CCD camera shoots object A, the type of object A is determined, and the basic contour surface of the object is collected; a grabbing point of object A is then determined according to the uniformity among objects of the same kind and the dissimilarity between objects of different kinds. The grabbing point is a point in the coordinate system of object A.
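The patent does not specify how the contour-based type determination is carried out. As one plausible realization only, the following minimal Python sketch binarizes the image, extracts the largest outer contour as the basic contour, and compares it against stored template contours with OpenCV Hu-moment shape matching; the template dictionary, threshold value and function names are illustrative assumptions and not part of the patent.

    import cv2

    def classify_by_contour(image_gray, templates, thresh_value=128):
        """Return the kind whose template contour best matches the object's contour (sketch)."""
        _, binary = cv2.threshold(image_gray, thresh_value, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contour = max(contours, key=cv2.contourArea)        # assume the object dominates the view
        scores = {kind: cv2.matchShapes(contour, tpl, cv2.CONTOURS_MATCH_I1, 0.0)
                  for kind, tpl in templates.items()}       # lower score means more similar
        return min(scores, key=scores.get)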
As shown in fig. 2, according to the grabbing point, different connectivity thresholds of object A are distinguished by thresholding, an interval of the connected domain is then selected, a circumscribed rectangle is determined from the maximum extents of that interval in the first direction and the second direction, and the intersection point of the two diagonals of the circumscribed rectangle gives the pixel coordinate point of the object to be recognized; this pixel coordinate point is a point in the camera coordinate system.
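A minimal sketch of this step is given below, assuming OpenCV: the image is thresholded, a connected region is selected (here simply the largest non-background region, which the patent does not mandate), and the center of its circumscribed rectangle, i.e. the intersection of its two diagonals, is returned as the pixel coordinate point.

    import cv2
    import numpy as np

    def grab_point_pixel(image_gray, thresh_value=128):
        """Pixel coordinate of the grabbing point from the circumscribed rectangle (sketch)."""
        _, binary = cv2.threshold(image_gray, thresh_value, 255, cv2.THRESH_BINARY)
        _, _, stats, _ = cv2.connectedComponentsWithStats(binary)
        idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # largest non-background region
        x, y = stats[idx, cv2.CC_STAT_LEFT], stats[idx, cv2.CC_STAT_TOP]
        w, h = stats[idx, cv2.CC_STAT_WIDTH], stats[idx, cv2.CC_STAT_HEIGHT]
        return x + w / 2.0, y + h / 2.0          # intersection of the rectangle's two diagonals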
As shown in fig. 3, the CCD camera is moved to point II, whose coordinate in the camera coordinate system coincides with the pixel coordinate point; the coordinate value of point I in the camera coordinate system can then be obtained by shooting with the CCD camera. Point II is the center point of the camera in the robot coordinate system, so the coordinate value of point II in the robot coordinate system can also be obtained. Therefore, the coordinate values of point I in the camera coordinate system and the robot coordinate system are (X1, Y1) and (X2, Y2), and the coordinate values of point II in the camera coordinate system and the robot coordinate system are (X3, Y3) and (X4, Y4).
From (X1, Y1), (X2, Y2), (X3, Y3) and (X4, Y4), the included angle between the camera coordinate system and the robot coordinate system is determined: α = Ax - Aj
Ax = arctan((Y1 - Y3)/(X1 - X3))
Aj = arctan((Y4 - Y2)/(X4 - X2))
α: the included angle between the camera coordinate system and the robot coordinate system;
Ax: the included angle, in the camera coordinate system, between the line connecting point I and point II and the first direction of the camera coordinate system;
Aj: the included angle between the line connecting point I and point II and the first direction of the robot coordinate system, computed from the robot-coordinate values.
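For illustration, the angle α can be computed directly from these four coordinate pairs. The following is a minimal Python sketch that follows the formulas above; the function name and argument layout are assumptions, and math.atan2 could be substituted for the arctan of a quotient if vertical line segments need to be handled.

    import math

    def camera_robot_angle(p1_cam, p2_cam, p1_rob, p2_rob):
        """alpha = Ax - Aj, following the patent's formulas (sketch, names assumed)."""
        X1, Y1 = p1_cam   # point I in the camera coordinate system
        X3, Y3 = p2_cam   # point II in the camera coordinate system
        X2, Y2 = p1_rob   # point I in the robot coordinate system
        X4, Y4 = p2_rob   # point II in the robot coordinate system
        Ax = math.atan((Y1 - Y3) / (X1 - X3))  # line I-II vs camera first direction
        Aj = math.atan((Y4 - Y2) / (X4 - X2))  # line I-II vs robot first direction
        return Ax - Aj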
According to actual requirements, the CCD camera shoots to determine point III, which is the point to which the robot will move; at this moment the coordinate of point III is the coordinate value (X5, Y5) in the camera coordinate system, and it needs to be converted into a coordinate value in the robot coordinate system, specifically:
obtaining the included angle, in the camera coordinate system, between the line connecting point III and point II and the first direction of the camera coordinate system, specifically: A3 = A2
A2 = arctan((Y5 - Y3)/(X5 - X3));
wherein: A2 is the angle, in the camera coordinate system, between the line connecting point III and point II and the first direction of the camera coordinate system.
The offset of point III in the robot coordinate system is determined by the following equations:
x = L·cos(A2 + α)
y = L·sin(A2 + α)
wherein: L is the distance between point II and point III.
Adding this offset to the current point of the robot in the robot coordinate system gives the coordinate value of point III in the robot coordinate system.
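As an illustration of steps two to five, the sketch below converts point III into robot coordinates using the angle α obtained above. The distance L is taken directly as the camera-coordinate distance between point II and point III; the patent does not discuss a pixel-to-millimeter scale, so if the camera coordinates are raw pixels an additional scale factor would be needed. Function and variable names are assumptions.

    import math

    def point3_in_robot(p3_cam, p2_cam, robot_now, alpha):
        """Offset of point III in robot coordinates, added to the robot's current point (sketch)."""
        X5, Y5 = p3_cam                        # point III in the camera coordinate system
        X3, Y3 = p2_cam                        # point II in the camera coordinate system
        A2 = math.atan((Y5 - Y3) / (X5 - X3))  # angle of line II-III vs camera first direction
        L = math.hypot(X5 - X3, Y5 - Y3)       # distance between point II and point III
        dx = L * math.cos(A2 + alpha)          # offset along the robot first direction
        dy = L * math.sin(A2 + alpha)          # offset along the robot second direction
        return robot_now[0] + dx, robot_now[1] + dy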
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (7)

1. An intelligent identification method based on a CCD camera is characterized in that: the identification method specifically comprises the following steps:
moving the CCD camera to a point position I;
determining the type of an object to be identified by adopting a CCD camera, acquiring a grabbing point of the identified object, and acquiring a pixel coordinate point of the grabbing point;
moving the CCD camera to a point II;
determining coordinate values of the point location I and the point location II under a camera coordinate system and a robot coordinate system;
determining the coordinate value of the point location III in a camera coordinate system through a CCD camera;
and determining the coordinate value of the point position III in the robot coordinate system.
2. The identification method according to claim 1, characterized in that: the determination of the kind of the object to be recognized is based on the basic contour surface of the object to be recognized.
3. The identification method according to claim 2, characterized in that: the method for acquiring the pixel coordinate point comprises the following steps: different connectivity thresholds of the object to be identified are distinguished by thresholding, an interval of the connected domain is then selected, a circumscribed rectangle is determined from the maximum extents of the interval in the first direction and the second direction of the camera coordinate system, and the intersection point of the two diagonals of the circumscribed rectangle is taken as the pixel coordinate point.
4. The identification method according to claim 3, characterized in that: the method for obtaining the coordinate value of the point location III in the robot coordinate system includes the following steps:
the method comprises the following steps: obtaining an included angle between the camera coordinate system and the robot coordinate system;
step two: obtaining an included angle between a connecting line of the point location III and the point location II in the camera coordinate system and a first direction of the camera coordinate system;
step three: obtaining an included angle between a connecting line of the point location III and the point location II and a first direction of the robot coordinate system under the robot coordinate system through triangulation;
step four: obtaining the offset of the robot coordinate system;
step five: and adding the offset to the current coordinate value of the robot.
5. The identification method according to claim 4, characterized in that: the first step is specifically as follows:
α=Ax-Aj
Ax=arctan((Y1-Y3)/(X1-X3))
Aj=arctan((Y4-Y2)/(X4-X2))
α: the included angle between the camera coordinate system and the robot coordinate system;
Ax: the included angle, in the camera coordinate system, between the line connecting point I and point II and the first direction of the camera coordinate system;
Aj: the included angle between the line connecting point I and point II and the first direction of the robot coordinate system, computed from the robot-coordinate values;
the coordinates of point I in the camera coordinate system and the robot coordinate system are (X1, Y1) and (X2, Y2) respectively;
the coordinates of point II in the camera coordinate system and the robot coordinate system are (X3, Y3) and (X4, Y4) respectively.
6. The identification method according to claim 4, characterized in that: the second step is specifically as follows:
A3=A2
A2=arctan((Y5-Y3)/(X5-X3));
wherein: (X5, Y5) is the coordinate value of point III in the camera coordinate system, and A2 is the angle, in the camera coordinate system, between the line connecting point III and point II and the first direction of the camera coordinate system.
7. The identification method according to claim 6, characterized in that: the third step is specifically as follows:
x=L·cos(A2+α)
y=L·sin(A2+α)
wherein: L is the distance between point II and point III.
CN201911369932.2A 2019-12-26 2019-12-26 Intelligent identification method based on CCD camera Active CN111015664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911369932.2A CN111015664B (en) 2019-12-26 2019-12-26 Intelligent identification method based on CCD camera

Publications (2)

Publication Number Publication Date
CN111015664A true CN111015664A (en) 2020-04-17
CN111015664B CN111015664B (en) 2023-05-30

Family

ID=70213940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911369932.2A Active CN111015664B (en) 2019-12-26 2019-12-26 Intelligent identification method based on CCD camera

Country Status (1)

Country Link
CN (1) CN111015664B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1480007A1 (en) * 2003-05-23 2004-11-24 Ligmatech Automationssysteme GmbH Method and apparatus for aligning plate-shaped workpieces
CN101630409A (en) * 2009-08-17 2010-01-20 北京航空航天大学 Hand-eye vision calibration method for robot hole boring system
EP2255930A1 (en) * 2009-05-27 2010-12-01 Leica Geosystems AG Method and system for extremely precise positioning of at least one object in the end position in space
JP2012076216A (en) * 2010-09-06 2012-04-19 Toyota Auto Body Co Ltd Method for combining camera coordinate system and robot coordinate system in robot control system, image processing device, program, and storage medium
WO2012127845A1 (en) * 2011-03-24 2012-09-27 Canon Kabushiki Kaisha Robot control apparatus, robot control method, program, and recording medium
CN103753530A (en) * 2013-12-30 2014-04-30 西北工业大学 Extremely near visual servo control method for space tethered robot
CN103927739A (en) * 2014-01-10 2014-07-16 北京航天飞行控制中心 Patroller positioning method based on spliced images
CN104227250A (en) * 2014-09-16 2014-12-24 佛山市利迅达机器人系统有限公司 Plane-based robot three-dimensional addressing and correcting method
CN107516329A (en) * 2016-06-15 2017-12-26 北京科技大学 A kind of deceleration oil hole localization method
CN108122257A (en) * 2016-11-28 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of Robotic Hand-Eye Calibration method and device
CN108286970A (en) * 2017-12-31 2018-07-17 芜湖哈特机器人产业技术研究院有限公司 Mobile robot positioning system, method and device based on DataMatrix code bands
CN109335059A (en) * 2018-11-23 2019-02-15 重庆盟讯电子科技有限公司 A kind of robot device that SMD charging tray is put into magazine
CN109397249A (en) * 2019-01-07 2019-03-01 重庆大学 The two dimensional code positioning crawl robot system algorithm of view-based access control model identification
CN109648554A (en) * 2018-12-14 2019-04-19 佛山市奇创智能科技有限公司 Robot calibration method, device and system
CN110315525A (en) * 2018-03-29 2019-10-11 天津工业大学 A kind of robot workpiece grabbing method of view-based access control model guidance

Also Published As

Publication number Publication date
CN111015664B (en) 2023-05-30

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant