CN112208113A - Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof - Google Patents

Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof

Info

Publication number
CN112208113A
CN112208113A (application number CN202010810646.1A)
Authority
CN
China
Prior art keywords
coordinate system
cotton
heat
ccd
manipulator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010810646.1A
Other languages
Chinese (zh)
Other versions
CN112208113B (en)
Inventor
赵军
秦琮峻
黎锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Simiville Intelligent Equipment Co ltd
Original Assignee
Suzhou Simiville Intelligent Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Simiville Intelligent Equipment Co ltd filed Critical Suzhou Simiville Intelligent Equipment Co ltd
Priority to CN202010810646.1A
Publication of CN112208113A
Application granted
Publication of CN112208113B
Active legal status
Anticipated expiration legal status


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B29 - WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C - SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C65/00Joining or sealing of preformed parts, e.g. welding of plastics materials; Apparatus therefor
    • B29C65/78Means for handling the parts to be joined, e.g. for making containers or hollow articles, e.g. means for handling sheets, plates, web-like materials, tubular articles, hollow articles or elements to be joined therewith; Means for discharging the joined articles from the joining apparatus
    • B29C65/7802Positioning the parts to be joined, e.g. aligning, indexing or centring
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B29 - WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C - SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C66/00General aspects of processes or apparatus for joining preformed parts
    • B29C66/80General aspects of machine operations or constructions and parts thereof
    • B29C66/87Auxiliary operations or devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Electric Connection Of Electric Components To Printed Circuits (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an automatic heat-conducting cotton attaching device based on visual guidance and an attaching method thereof. The automatic heat-conducting cotton attaching device comprises a rack and a controller; a workpiece loading area and a heat-conducting cotton supply area are formed on the rack, a workpiece to be attached and a heat-conducting cotton supply mechanism are loaded at the workpiece loading area and the heat-conducting cotton supply area respectively, a manipulator located beside the heat-conducting cotton supply mechanism is arranged on the rack, a first CCD facing the workpiece to be attached is arranged beside the workpiece loading area, a second CCD and a negative-pressure sucker are arranged on the manipulator, a third CCD is arranged beside the heat-conducting cotton supply mechanism, and the first CCD, the second CCD, the third CCD, the manipulator and the heat-conducting cotton supply mechanism are all in communication connection with the controller. According to the invention, the attaching quality and the attaching efficiency are improved.

Description

Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof
Technical Field
The invention relates to the technical field of vision-guided robotic assembly, and in particular to an automatic heat-conducting cotton attaching device based on visual guidance and an attaching method thereof.
Background
Technology now iterates at a remarkable pace and its influence is global; electronic products are among the main drivers of this wave, the semiconductor industry is undergoing a major shift, and automobiles are becoming increasingly electronic. From 2000 to 2019, as the Chinese automobile industry moved from its growth period into maturity, the annual automobile output in China rose from about 2.07 million to about 33.38 million vehicles, with an average annual growth rate of about 18%. From 2000 to 2018, the total output value of China's automobile industry grew from 1,986 million yuan to 28,427 million yuan, an average annual growth rate of 19%. As living standards in China continue to rise, the demand for automobiles therefore remains persistently strong.
Heat-conducting cotton is also called glass cotton (glass wool); it belongs to the category of glass fiber and is a kind of man-made inorganic fiber. Glass wool is a material formed by fiberizing molten glass into a wool-like mass; chemically it is glass, an inorganic fiber, and it has the advantages of good formability, low bulk density, low thermal conductivity, good thermal insulation, good sound absorption, corrosion resistance and stable chemical properties. In the automotive electronics industry, glass wool is mainly applied for heat dissipation, heat absorption, anti-resonance and the like; because its fibrous internal structure can absorb a large amount of heat and noise, glass wool is now used in large quantities in the rapidly developing electronics industry.
In the field of non-standard automated assembly and similar fields, it is common to use attaching devices of various structural forms to attach heat-conducting cotton to a target workpiece. In the course of researching and implementing heat-conducting cotton attachment, the inventors found that attaching devices in the prior art have at least the following problems: the existing attaching devices have a low degree of automation and poor positioning accuracy, so the final attaching quality and attaching efficiency are poor.
In view of the above, there is a need to develop an automatic attaching device and an attaching method for heat conductive cotton based on visual guidance to solve the above problems.
Disclosure of Invention
In order to overcome the above problems of existing heat-conducting cotton attaching devices, the invention provides a visual-guidance-based automatic heat-conducting cotton attaching device which can improve the attaching quality and the attaching efficiency.
To solve the above technical problem, the automatic heat-conducting cotton attaching device based on visual guidance according to the invention comprises a rack and a controller. A workpiece loading area and a heat-conducting cotton supply area are formed on the rack, a workpiece to be attached and a heat-conducting cotton supply mechanism are loaded at the workpiece loading area and the heat-conducting cotton supply area respectively, a manipulator located beside the heat-conducting cotton supply mechanism is arranged on the rack, a first CCD facing the workpiece to be attached is arranged beside the workpiece loading area, a second CCD and a negative-pressure sucker are arranged on the manipulator, a third CCD is arranged beside the heat-conducting cotton supply mechanism, and the first CCD, the second CCD, the third CCD, the manipulator and the heat-conducting cotton supply mechanism are all in communication connection with the controller. This automatic heat-conducting cotton attaching device has a high degree of automation and high positioning accuracy, which ultimately improves the attaching quality and the attaching efficiency.
Accordingly, another object of the present invention is to provide a method for attaching heat conductive cotton, which can improve the attaching quality and efficiency.
In terms of the method, to solve the above technical problem, the heat-conducting cotton attaching method based on visual guidance comprises the following steps:
step S1, the heat-conducting cotton supply mechanism periodically peels off the heat-conducting cotton required by single attachment;
step S2, the conveyor belt conveys the workpieces to be attached to a workpiece loading area periodically, then the workpieces to be attached are loaded on the workpiece loading area for primary positioning, and the first CCD performs secondary positioning on the workpieces loaded on the workpiece loading area by photographing to judge whether the workpieces are loaded in place;
step S3, calibrating a geodetic coordinate system and a camera coordinate system of a second CCD on the manipulator, and determining the conversion relation between an image coordinate system and a world coordinate system;
step S4, calibrating the hand and eye of the manipulator, determining the conversion relation between the manipulator coordinate system and the image coordinate system, and calculating the conversion matrix between the manipulator coordinate system and the world coordinate system;
step S5, the manipulator drives the second CCD to move to a position right above the heat-conducting cotton stripped from the heat-conducting cotton supply mechanism, and the second CCD starts to shoot, identify and position the heat-conducting cotton;
and step S6, the negative pressure sucker on the manipulator sucks the heat conduction cotton from the top of the heat conduction cotton and then moves the heat conduction cotton to the position right above the third CCD, so that the third CCD can detect the defects of the pasting surface of the heat conduction cotton from the bottom of the heat conduction cotton, if the detection result shows that the pasting surface is unqualified, the manipulator discards the heat conduction cotton to a waste box, and if the detection result shows that the pasting surface is qualified, the manipulator transfers the heat conduction cotton to a workpiece loading area to paste the heat conduction cotton to the designated position of the workpiece.
Optionally, an LED calibration lamp is disposed in the heat-conducting cotton supply mechanism, in step S3, the second CCD photographs the LED calibration lamp multiple times from directly above the LED calibration lamp, converts the arrangement, brightness, and color information of the pixel bright spots into digital information, and performs matrix transformation on the digital information to determine the relationship between the camera coordinate system and the geodetic coordinate system.
Optionally, by calculating the internal and external matrix parameters of the second CCD, it may be determined that the corresponding relationship between the coordinate system of the camera and the geodetic coordinate system in step S3 is:
P_I = C·P_g + T,

where P_g is a point in the geodetic coordinate system, P_I is a point in the image coordinate system, and C is a rotation matrix, which can be expressed as

$$C = \begin{bmatrix} \cos c & -\sin c & 0 \\ \sin c & \cos c & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos b & 0 & \sin b \\ 0 & 1 & 0 \\ -\sin b & 0 & \cos b \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos a & -\sin a \\ 0 & \sin a & \cos a \end{bmatrix};$$

T = (t_x, t_y, t_z) is a translation vector, where t_x denotes the translation distance along the X axis, t_y the translation distance along the Y axis, and t_z the translation distance along the Z axis;

a, b and c are three angles, namely the angle a of rotation about the X axis of the camera coordinate system, the angle b of rotation about the Y axis of the camera coordinate system, and the angle c of rotation about the Z axis of the camera coordinate system;

R is a rotation matrix and T is a translation vector; the transformation from the geodetic coordinate system to the camera coordinate system is a rigid-body transformation involving only translation and rotation, so the transformation of a point P from the geodetic coordinate system to the camera coordinate system can be written in homogeneous form as

$$\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} P_g \\ 1 \end{bmatrix}.$$
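To make the rigid-body step concrete, the following minimal Python sketch builds the rotation matrix from the angles a, b, c and applies P_c = C·P_g + T; the composition order Rz·Ry·Rx, the function names and the numeric values are illustrative assumptions, not part of the patent.

```python
import numpy as np

def rotation_from_angles(a, b, c):
    """Rotation matrix from angles a, b, c about the X, Y, Z axes (radians).
    The composition order Rz @ Ry @ Rx is an assumed convention."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [ 0,         1, 0        ],
                   [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0],
                   [np.sin(c),  np.cos(c), 0],
                   [0,          0,         1]])
    return Rz @ Ry @ Rx

def geodetic_to_camera(P_g, angles, T):
    """Apply the rigid-body transform P_c = C @ P_g + T."""
    C = rotation_from_angles(*angles)
    return C @ np.asarray(P_g, dtype=float) + np.asarray(T, dtype=float)

# Example: a geodetic point seen by a camera rotated 10 degrees about Z (made-up values)
P_c = geodetic_to_camera([0.5, 0.0, 0.0],
                         angles=(0.0, 0.0, np.deg2rad(10)),
                         T=[0.0, 0.0, 0.4])
```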
optionally, the hand-eye calibration of the manipulator in step S4 is implemented by using a projection method, and the three-dimensional coordinates in the camera coordinate system are converted into plane coordinates by using the internal parameters and depth values of the camera, where the conversion formula is as follows:
$$x = T_x\,\frac{X_c}{h} + X_0,\qquad y = T_y\,\frac{Y_c}{h} + Y_0,$$

where h represents the depth value of the target point, x and y represent the coordinates of the target point in the image coordinate system, X_c, Y_c, Z_c represent the three-dimensional coordinates of the target point in the camera coordinate system (with h = Z_c), T_x and T_y denote the focal lengths describing the proportional relationship between pixel units and three-dimensional coordinate units, and X_0, Y_0 give the projection position used to account for the distance between the image origin and the coordinate-system origin.
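A short Python sketch of this depth-based projection follows; the explicit form x = T_x·X_c/h + X_0 with h = Z_c, as well as the function name and sample numbers, are assumptions consistent with the symbols defined above.

```python
import numpy as np

def camera_to_image(P_c, Tx, Ty, X0, Y0):
    """Project a 3-D camera-frame point (Xc, Yc, Zc) onto the image plane.
    h = Zc is taken as the depth value; Tx, Ty are focal lengths and
    (X0, Y0) the projection offset, as described in the text (assumed form)."""
    Xc, Yc, Zc = P_c
    h = Zc                    # depth value of the target point
    x = Tx * Xc / h + X0      # image x coordinate (mm)
    y = Ty * Yc / h + Y0      # image y coordinate (mm)
    return x, y

# Example with illustrative (made-up) intrinsic values
x, y = camera_to_image((0.02, -0.01, 0.40), Tx=8.0, Ty=8.0, X0=0.0, Y0=0.0)
```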
Optionally, in step S4, the conversion relationship between the image coordinate system and the pixel coordinate system is:
$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0,$$

where the unit of the image coordinate system is mm and the unit of the pixel coordinate system is pixel; d_x and d_y denote how many mm one pixel occupies along each row and column respectively, i.e. 1 pixel corresponds to d_x mm. In matrix form the above formula becomes

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$

By combining the above coordinate-system transformations, the transformation relation from the geodetic coordinate system to the pixel coordinate system is obtained as

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} T_x & 0 & X_0 & 0 \\ 0 & T_y & Y_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_g \\ Y_g \\ Z_g \\ 1 \end{bmatrix}.$$
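Chaining the three steps gives a compact geodetic-to-pixel conversion; the sketch below reuses the geodetic_to_camera and camera_to_image helpers from the earlier sketches, and the pixel-pitch and principal-point parameters are illustrative assumptions.

```python
def image_to_pixel(x, y, dx, dy, u0, v0):
    """Convert image-plane coordinates (mm) to pixel coordinates.
    dx, dy are the pixel sizes in mm; (u0, v0) is the principal point in pixels."""
    return x / dx + u0, y / dy + v0

def geodetic_to_pixel(P_g, angles, T, Tx, Ty, X0, Y0, dx, dy, u0, v0):
    """Chain geodetic -> camera -> image -> pixel using the sketches above."""
    P_c = geodetic_to_camera(P_g, angles, T)       # rigid-body transform
    x, y = camera_to_image(P_c, Tx, Ty, X0, Y0)    # depth-based projection
    return image_to_pixel(x, y, dx, dy, u0, v0)    # metric units -> pixel units
```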
one of the above technical solutions has the following advantages or beneficial effects: the first CCD is arranged in the workpiece loading area, so that the workpiece to be attached can be accurately positioned through the visual detection system; the manipulator is provided with a second CCD, the heat-conducting cotton supply mechanism is provided with an LED lamp, and the second CCD can convert a geodetic coordinate system into an image coordinate system by shooting images for multiple times; the negative pressure sucker can be accurately positioned above the heat conducting cotton by visual guidance, so that preparation is made for successfully sucking the heat conducting cotton; the position change in the motion process of sucking and transporting the heat-conducting cotton by the manipulator is considered, and the visual guidance is used for monitoring, so that the mounting precision is improved.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings of the embodiments will be briefly described below, and it is apparent that the drawings in the following description relate only to some embodiments of the present invention and are not limiting thereof, wherein:
fig. 1 is a perspective view of a thermally conductive cotton automatic attaching device based on visual guidance according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a thermal conductive cotton attachment method based on visual guidance according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a coordinate transformation projection model in a thermally conductive cotton self-attaching device based on visual guidance according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
In the drawings, the shape and size may be exaggerated for clarity, and the same reference numerals will be used throughout the drawings to designate the same or similar components.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and similar terms in the description and claims of the present application do not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprise" or "comprises", and the like, means that the element or item listed before "comprises" or "comprising" covers the element or item listed after "comprising" or "comprises" and its equivalents, and does not exclude other elements or items. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
In the following description, terms such as center, thickness, height, length, front, back, rear, left, right, top, bottom, upper, lower, etc., are defined with respect to the configurations shown in the respective drawings, and in particular, "height" corresponds to a dimension from top to bottom, "width" corresponds to a dimension from left to right, "depth" corresponds to a dimension from front to rear, which are relative concepts, and thus may be varied accordingly depending on the position in which it is used, and thus these or other orientations should not be construed as limiting terms.
Terms concerning attachments, coupling and the like (e.g., "connected" and "attached") refer to a relationship wherein structures are secured or attached, either directly or indirectly, to one another through intervening structures, as well as both movable or rigid attachments or relationships, unless expressly described otherwise.
Referring to fig. 1, the automatic heat-conducting cotton attaching device based on visual guidance comprises a rack and a controller. A workpiece loading area 1 and a heat-conducting cotton supply area are formed on the rack, and a workpiece 3 to be attached and a heat-conducting cotton supply mechanism 6 are loaded at the workpiece loading area 1 and the heat-conducting cotton supply area respectively. A manipulator 5 located beside the heat-conducting cotton supply mechanism 6 is arranged on the rack, a first CCD 2 facing the workpiece 3 to be attached is arranged beside the workpiece loading area 1, a second CCD 4 and a negative-pressure sucker are mounted on the manipulator 5, and a third CCD 7 is arranged beside the heat-conducting cotton supply mechanism 6. The first CCD 2, the second CCD 4, the third CCD 7, the manipulator 5 and the heat-conducting cotton supply mechanism 6 are all in communication connection with the controller.
Further, the scheme also provides a heat conduction cotton attaching method for attaching heat conduction cotton by using the automatic heat conduction cotton attaching device based on visual guidance, which comprises the following steps:
step S1, the heat-conductive cotton supply mechanism 6 periodically peels off the heat-conductive cotton required for single attachment;
step S2, the conveyor belt periodically conveys the workpieces 3 to be attached to the workpiece loading area 1, and then loads the workpieces 3 to be attached to the workpiece loading area 1 for primary positioning, and the first CCD2 performs secondary positioning on the workpieces 3 loaded on the workpiece loading area 1 by photographing to determine whether the workpieces 3 are loaded in place;
step S3, calibrating a geodetic coordinate system and a camera coordinate system of the second CCD4 on the manipulator 5, and determining the conversion relation between an image coordinate system and a world coordinate system;
step S4, calibrating the hand and eye of the manipulator 5, determining the conversion relation between the manipulator coordinate system and the image coordinate system, and calculating the conversion matrix between the manipulator coordinate system and the world coordinate system;
step S5, the manipulator 5 drives the second CCD4 to move right above the heat-conducting cotton stripped from the heat-conducting cotton supply mechanism 6, and the second CCD4 starts to shoot, identify and position the heat-conducting cotton;
step S6, the negative pressure suction cup on the manipulator 5 sucks the heat conduction cotton from the top of the heat conduction cotton and then moves the heat conduction cotton to a position right above the third CCD7, so that the third CCD7 can perform defect detection on the adhering surface of the heat conduction cotton from the bottom of the heat conduction cotton, if the heat conduction cotton is determined to be unqualified after the detection, the manipulator 5 discards the heat conduction cotton to a waste bin, and if the heat conduction cotton is determined to be qualified after the detection, the manipulator 5 transfers the heat conduction cotton to the workpiece loading area 1 to adhere the heat conduction cotton to the specified position of the workpiece 3.
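For orientation only, the per-cycle control flow of steps S1 to S6 can be summarized in Python-style pseudocode; every object and method name below is a hypothetical placeholder rather than the actual controller interface.

```python
def attach_one_piece(feeder, conveyor, ccd1, ccd2, ccd3, robot):
    """One attachment cycle following steps S1-S6 (hypothetical controller API)."""
    feeder.peel_next_piece()                          # S1: peel one piece of heat-conducting cotton
    workpiece = conveyor.deliver_workpiece()          # S2: primary positioning on the loading area
    if not ccd1.verify_loaded_in_place(workpiece):    # S2: secondary positioning by photographing
        return False
    # S3/S4: coordinate-system and hand-eye calibration are done beforehand and reused here.
    robot.move_above(feeder.peel_position)            # S5: bring CCD 2 over the peeled cotton
    cotton_pose = ccd2.locate_cotton()                # S5: photograph, identify and position
    robot.pick_with_vacuum(cotton_pose)               # S6: suck the cotton from its top
    robot.move_above(ccd3.position)
    if not ccd3.paste_surface_ok():                   # S6: inspect the pasting surface from below
        robot.drop_into_waste_bin()
        return False
    robot.place_at(workpiece.target_position)         # S6: paste at the designated position
    return True
```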
Further, an LED calibration lamp is provided in the heat-conducting cotton supply mechanism 6. In step S3, the second CCD 4 photographs the LED calibration lamp several times from directly above it, converts the arrangement, brightness and color information of the pixel bright spots into digital information, and then performs matrix transformation on the digital information, thereby determining the relationship between the camera coordinate system and the geodetic coordinate system.
Further, by calculating the internal and external matrix parameters of the second CCD4, it can be determined that the corresponding relationship between the coordinate system of the camera and the coordinate system of the earth in step S3 is:
P_I = C·P_g + T,

where P_g is a point in the geodetic coordinate system, P_I is a point in the image coordinate system, and C is a rotation matrix, which can be expressed as

$$C = \begin{bmatrix} \cos c & -\sin c & 0 \\ \sin c & \cos c & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos b & 0 & \sin b \\ 0 & 1 & 0 \\ -\sin b & 0 & \cos b \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos a & -\sin a \\ 0 & \sin a & \cos a \end{bmatrix};$$

T = (t_x, t_y, t_z) is a translation vector, where t_x denotes the translation distance along the X axis, t_y the translation distance along the Y axis, and t_z the translation distance along the Z axis;

a, b and c are three angles, namely the angle a of rotation about the X axis of the camera coordinate system, the angle b of rotation about the Y axis of the camera coordinate system, and the angle c of rotation about the Z axis of the camera coordinate system;

R is a rotation matrix and T is a translation vector; the transformation from the geodetic coordinate system to the camera coordinate system is a rigid-body transformation involving only translation and rotation, so the transformation of a point P from the geodetic coordinate system to the camera coordinate system can be written in homogeneous form as

$$\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} P_g \\ 1 \end{bmatrix}.$$
further, the hand-eye calibration of the manipulator 5 in step S4 is implemented by using a projection method, and the three-dimensional coordinates in the camera coordinate system are converted into plane coordinates by using the internal parameters and depth values of the camera, where the conversion formula is as follows:
$$x = T_x\,\frac{X_c}{h} + X_0,\qquad y = T_y\,\frac{Y_c}{h} + Y_0,$$

where h represents the depth value of the target point, x and y represent the coordinates of the target point in the image coordinate system, X_c, Y_c, Z_c represent the three-dimensional coordinates of the target point in the camera coordinate system (with h = Z_c), T_x and T_y denote the focal lengths describing the proportional relationship between pixel units and three-dimensional coordinate units, and X_0, Y_0 give the projection position used to account for the distance between the image origin and the coordinate-system origin.
Further, the conversion relationship between the image coordinate system and the pixel coordinate system in step S4 is:
$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0,$$

where the unit of the image coordinate system is mm and the unit of the pixel coordinate system is pixel; d_x and d_y denote how many mm one pixel occupies along each row and column respectively, i.e. 1 pixel corresponds to d_x mm. In matrix form the above formula becomes

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$

By combining the above coordinate-system transformations, the transformation relation from the geodetic coordinate system to the pixel coordinate system is obtained as

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} T_x & 0 & X_0 & 0 \\ 0 & T_y & Y_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_g \\ Y_g \\ Z_g \\ 1 \end{bmatrix}.$$
the invention is further illustrated by the following examples.
Example one
As shown in the figure, the invention provides a machine vision-based automatic adhesion method for heat-conducting cotton, which comprises the following steps:
as shown in fig. 1, 1 denotes an acoustic panel placement area, 2 is a CCD camera 1, 3 is an acoustic panel, 4 is a CCD camera 2, 5 is a robot arm, 6 is a feeding belt area, and 7 is a CCD camera 3. Place the region at the sound control board and be equipped with CCD camera 1, before entire system function, CCD camera 1 is confirmed the position of sound control board through shooing many times, and then the sound control board waits for the manipulator to absorb the heat conduction cotton and paste. The accurate positioning of the sound control board is an important step that the subsequent heat conduction cotton can be pasted, so a secondary positioning mode is adopted, including that the peripheral clamping jaws are firstly positioned, and then a visual detection system is used for secondary positioning.
After the CCD camera 1 and the sound control panel are installed, the CCD camera 2 is arranged at the head of the manipulator, the CCD camera 2 moves along with the manipulator, the feeding belt area is shot for multiple times in the moving process, and the calibration of the camera is completed after the collected images are processed. The manipulator sucks the heat conduction cotton and then sends the heat conduction cotton to the upper part of the CCD camera 3 for detection.
Cameras 1 and 3 are fixed in position, so their field of view must be wide enough to capture a clear picture. Camera 2 moves along with the manipulator; once its calibration is completed, points in the geodetic coordinate system can be converted into points in the image coordinate system, so that the position of the heat-conducting cotton can be found.
The manipulator in this example is a six-axis manipulator with high flexibility, and its end effector is an array of elastic negative-pressure suction cups, which can effectively pick up the heat-conducting cotton and deliver it to the pasting area.
For better image acquisition, light sources are provided at the sound control board placement area, the feeding belt area, and CCD camera 3.
Calibration of camera
A CCD camera 2 is mounted at the upper end of the manipulator for calibrating the geodetic coordinate system and the image coordinate system. Red LED lamps are arranged at the four corners of the material supply belt; camera 2 photographs the supply belt area from above many times, the arrangement, brightness and color information of the pixel bright spots is converted into digital information, and matrix transformation is then applied to this digital information, so that the relation between the image coordinate system and the geodetic coordinate system can be determined. The image coordinate system takes the optical center as its origin (P0, P0), and the robot coordinate system takes the manipulator base as its origin of coordinates.
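As one possible way to extract the LED bright spots from such a calibration photograph, a hedged OpenCV sketch is given below; the fixed intensity threshold and minimum blob area are assumptions, and the real system may additionally use the colour information mentioned above.

```python
import cv2
import numpy as np

def find_led_centroids(image_bgr, threshold=220, min_area=5):
    """Return centroids of bright LED spots in a calibration photograph.
    Simple intensity thresholding is assumed here; brightness/colour filtering
    as described in the text could be added on top."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background) and keep reasonably sized blobs only.
    spots = [tuple(centroids[i]) for i in range(1, num)
             if stats[i, cv2.CC_STAT_AREA] > min_area]
    return spots  # list of (u, v) pixel coordinates, e.g. the four corner LEDs
```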
The coordinates of several fixed points are found in both the image coordinate system and the manipulator coordinate system. Let the coordinate of such a point in the image coordinate system be P_camera and its coordinate in the manipulator coordinate system be P_robot; then, using the coordinate-system transformation formula, one obtains

$$P_{robot} = {}^{robot}T_{camera}\,P_{camera},$$

where

$${}^{robot}T_{camera} = \begin{bmatrix} R_x & t_x \\ 0^{T} & 1 \end{bmatrix}$$

represents the camera-to-manipulator transformation matrix to be derived, which may include both translation and rotation, and P_robot, P_camera are "homogeneous coordinates" obtained by appending a 1, e.g.

$$P_{robot} = [a,\,b,\,c,\,1]^{T},\qquad P_{camera} = [x,\,y,\,z,\,1]^{T}.$$

From the above formula it follows that

$$P_{robot} = R_x\,P_{camera} + t_x,$$

that is, the conversion relation from the image coordinate system to the robot coordinate system, where R_x and t_x denote the rotation matrix and translation vector of the transformation from the image coordinate system to the robot coordinate system.
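In practice the camera-to-manipulator matrix can be estimated from a handful of such fixed-point pairs by a least-squares fit; the NumPy sketch below assumes planar point correspondences and an affine model, and is an illustration rather than the patent's exact procedure.

```python
import numpy as np

def estimate_camera_to_robot(P_camera, P_robot):
    """Estimate a 2-D affine map [R_x | t_x] taking image points to robot points.

    P_camera, P_robot: (N, 2) arrays of matched fixed points (N >= 3).
    Solves P_robot ~= A @ [x, y, 1]^T in the least-squares sense."""
    P_camera = np.asarray(P_camera, dtype=float)
    P_robot = np.asarray(P_robot, dtype=float)
    ones = np.ones((P_camera.shape[0], 1))
    H = np.hstack([P_camera, ones])                  # homogeneous image coordinates
    A, *_ = np.linalg.lstsq(H, P_robot, rcond=None)  # (3, 2) least-squares solution
    A = A.T                                          # 2x3 matrix [R_x | t_x]
    return A[:, :2], A[:, 2]                         # rotation/scale part, translation

# Usage: R, t = estimate_camera_to_robot(img_pts, robot_pts); p_robot = R @ p_img + t
```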
By calculating the internal and external matrix parameters of the second camera, the corresponding relation between the image coordinate system and the geodetic coordinate system can be determined as follows:
P_I = C·P_g + T,

where P_g is a point in the geodetic coordinate system, P_I is a point in the image coordinate system, C is the rotation matrix and T is the translation vector:

$$C = \begin{bmatrix} \cos c & -\sin c & 0 \\ \sin c & \cos c & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos b & 0 & \sin b \\ 0 & 1 & 0 \\ -\sin b & 0 & \cos b \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos a & -\sin a \\ 0 & \sin a & \cos a \end{bmatrix}.$$

T = (t_x, t_y, t_z) is a translation vector, where t_x represents the translation distance along the X axis, t_y the translation distance along the Y axis, and t_z the translation distance along the Z axis. The three angles a, b and c are respectively the rotation angle a about the X axis, the rotation angle b about the Y axis and the rotation angle c about the Z axis of the picture coordinate system.
Calibration of manipulator
Further, hand-eye calibration in the feed zone area:
to convert the camera coordinate system into the image coordinate system, a projection method, i.e. from 3D to 2D, is required. The image in the camera is three-dimensional, and if the three-dimensional coordinates in the camera coordinate system are to be converted into plane coordinates, the internal parameters and the depth values of the camera are used, and the conversion formula is as follows:
$$x = T_x\,\frac{u}{h} + X_0,\qquad y = T_y\,\frac{v}{h} + Y_0,$$

where h represents the depth value of the target point; x, y represent the coordinates of the target point in the image; u, v, w represent the three-dimensional coordinates of the target point in the camera coordinate system (with h = w); T_x and T_y denote the focal lengths describing the proportional relationship between pixel units and three-dimensional coordinate units; and X_0, Y_0 give the projection position used to account for the distance between the image origin and the coordinate-system origin.
The coordinates of several fixed points are found in both the image coordinate system and the manipulator coordinate system. Let the coordinate of such a point in the image coordinate system be P_camera and its coordinate in the manipulator coordinate system be P_robot; then, using the coordinate-system transformation formula, one obtains

$$P_{robot} = {}^{robot}T_{camera}\,P_{camera},$$

where

$${}^{robot}T_{camera} = \begin{bmatrix} R_x & t_x \\ 0^{T} & 1 \end{bmatrix}$$

represents the camera-to-manipulator transformation matrix to be derived, which may include both translation and rotation, and P_robot, P_camera are "homogeneous coordinates" obtained by appending a 1, e.g.

$$P_{robot} = [a,\,b,\,c,\,1]^{T},\qquad P_{camera} = [x,\,y,\,z,\,1]^{T}.$$

From the above formula it follows that

$$P_{robot} = R_x\,P_{camera} + t_x,$$

that is, the conversion relation from the image coordinate system to the robot coordinate system, where R_x and t_x denote the rotation matrix and translation vector of the transformation from the image coordinate system to the robot coordinate system.
Furthermore, a CCD camera 2 is arranged at the upper end of the manipulator to calibrate the geodetic coordinate system and the camera coordinate system. Red LED lamps are arranged at the four corners of the feeding belt; camera 2 photographs the feeding belt area from above many times, the arrangement, brightness and color information of the pixel bright spots is converted into digital information, and matrix transformation is then applied to this digital information, so that the relation between the camera coordinate system and the geodetic coordinate system can be determined. By calculating the intrinsic and extrinsic matrix parameters of the second camera, the corresponding relation between the camera coordinate system and the geodetic coordinate system can be determined as follows:
P_c = C·P_g + T,

where P_g is a point in the geodetic coordinate system, P_c is a point in the camera coordinate system, C is the rotation matrix and T is the translation vector:

$$C = \begin{bmatrix} \cos c & -\sin c & 0 \\ \sin c & \cos c & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos b & 0 & \sin b \\ 0 & 1 & 0 \\ -\sin b & 0 & \cos b \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos a & -\sin a \\ 0 & \sin a & \cos a \end{bmatrix}.$$

T = (t_x, t_y, t_z) is a translation vector, where t_x represents the translation distance along the X axis, t_y the translation distance along the Y axis, and t_z the translation distance along the Z axis. C represents a rotation matrix, and the three angles a, b and c are respectively the rotation angle a about the X axis, the rotation angle b about the Y axis and the rotation angle c about the Z axis of the picture coordinate system. R is a rotation matrix and T is a translation vector; the transformation from the geodetic coordinate system to the camera coordinate system is a rigid-body transformation involving only translation and rotation.

Therefore, the transformation relationship of the point P from the geodetic coordinate system to the camera coordinate system is as follows:

$$\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} P_g \\ 1 \end{bmatrix}.$$
further, hand-eye calibration in the feed zone area:
from the camera coordinate system to the image coordinate system, belong to a projective relationship, i.e. from 3D to 2D. This projection model is similar to pinhole imaging, and is characterized by the fact that the light in the entire area passes through a projection center, as shown in the figure, where XY is a coordinate system fixed to the photographic plane, the origin is at the projection center, and X is the center of the projectionCThe axis being parallel to the X-axis, YCThe axis being parallel to the Y axis, XCYCDistance oo between plane and image planecThe distance between is the focal length Tx. In actual photography, the image plane is located at a position behind the projection center by a focal length, so the relationship between the camera and the plane coordinate system is:
Figure BDA0002630847910000113
point p (x, y) in the plane coordinate system to point p (x) in the camera coordinate systemc,yc,zc) The conversion relationship of (1) is as follows:
the image in the camera is planar, and the conversion of the planar coordinates into three-dimensional coordinates in the camera coordinate system is to use the internal parameters and depth values of the camera, and the conversion formula is as follows:
Figure BDA0002630847910000114
h in the formula represents the depth value of the target point; x, y represent the coordinates of the target point in the image. Xc,Yc,ZcRepresenting the three-dimensional coordinates of the target point in the camera coordinate system, Tx,TyDenotes a focal length for describing a proportional relationship between a pixel unit and a three-dimensional coordinate unit, X0,Y0Is the projected position used to calculate the distance between the image origin and the coordinate system origin.
Conversion between image coordinate system and pixel coordinate system:
Although both the image coordinate system and the pixel coordinate system lie in the imaging plane, they differ in their origin and unit: the origin of the image coordinate system is the point where the optical center projects onto the imaging plane, the unit of the image coordinate system is mm, and the unit of the pixel coordinate system is pixel. The conversion relationship between the two coordinate systems is therefore

$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0,$$

where d_x, d_y represent how many mm each column and row of pixels corresponds to, i.e. 1 pixel corresponds to d_x mm; in matrix form,

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$

By combining the above coordinate-system transformations, the transformation relationship from the geodetic coordinate system to the pixel coordinate system can be obtained:

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} T_x & 0 & X_0 & 0 \\ 0 & T_y & Y_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_g \\ Y_g \\ Z_g \\ 1 \end{bmatrix}.$$
Thus the geodetic coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system can all be converted into one another by the above formulas.
Furthermore, a Z-shaped air pipe controlled by an electromagnetic valve sprays an air flow outwards so that the heat-conducting cotton is peeled smoothly and uniformly from the feeding belt. Specifically, a pulse signal is fed through a wire to the coil in the electromagnetic valve body, the pulse valve is controlled by the output signal of a pulse injection controller, and the opening and closing of the pulse valve are realized by the flexural deformation of a rubber diaphragm driven by the pressure change of the front and rear air chambers of the valve. The frequency of the pulse signal is 50 Hz; when the pulse signal is input the valve opens, a stable high-frequency airflow is produced for a short time, and this airflow rushes into the air nozzle to peel the heat-conducting cotton stably and uniformly from the feeding belt.
Furthermore, the manipulator uses the array of elastic negative-pressure suction cups to pick up the heat-conducting cotton to be attached, while the Z-shaped air nozzle blows air at the contact position between the adhesive tape and the heat-conducting cotton, making it easier for the manipulator to lift the heat-conducting cotton off the feeding tape.
Further, the manipulator picks up the heat-conducting cotton and moves it above CCD camera 3, and camera 3 performs defect detection on the heat-conducting cotton. Heat-conducting cotton judged unqualified by the third camera is thrown into a waste bin, while qualified heat-conducting cotton is carried by the manipulator to the sound control board for pasting.
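A minimal sketch of such a pass/fail check on the pasting surface is shown below; treating dark blobs above a fixed area as defects is an assumed criterion, not a detail taken from the patent.

```python
import cv2

def paste_surface_ok(image_bgr, max_defect_area=50):
    """Very simple defect check: dark blobs on the (bright) adhesive surface
    larger than max_defect_area pixels are treated as defects (assumed criterion)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, dark = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return all(cv2.contourArea(c) <= max_defect_area for c in contours)
```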
Further, the machine-vision-based automatic heat-conducting cotton attaching device disclosed in this embodiment comprises an operating system for controlling the operation of the workbench and a vision detection system for detection. The heat-conducting cotton mounting workbench is provided with a sound control board placement area and a feeding system area, the sound control board area is provided with CCD camera 1, the feeding system area is provided with CCD camera 3, and these industrial cameras are connected with the vision detection system. The operating system controls the manipulator to execute actions according to the attaching method and the image-processing results of the vision system. The vision detection system comprises CCD camera 2, an image processing module and an image storage module, and CCD camera 2 is mounted on the manipulator.
The number of apparatuses and the scale of the process described herein are intended to simplify the description of the present invention. Applications, modifications and variations of the present invention will be apparent to those skilled in the art.
The features of the different implementations described herein may be combined to form other embodiments not specifically set forth above. The components may be omitted from the structures described herein without adversely affecting their operation. Further, various individual components may be combined into one or more individual components to perform the functions described herein.
Furthermore, while embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in the various fields to which it pertains, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the details shown and described herein, provided the general concept defined by the appended claims and their equivalents is not departed from.

Claims (6)

1. A vision-guidance-based automatic heat-conducting cotton attaching device comprises a machine frame and a controller, and is characterized in that, a workpiece loading area (1) and a heat conduction cotton supply area are formed on the frame, a workpiece (3) to be attached and a heat conduction cotton supply mechanism (6) are respectively loaded on the workpiece loading area (1) and the heat conduction cotton supply area, a mechanical arm (5) positioned beside the heat-conducting cotton supply mechanism (6) is arranged on the frame, a first CCD (2) opposite to the workpiece (3) to be attached is arranged beside the workpiece loading area (1), the manipulator (5) is provided with a second CCD (4) and a negative pressure sucker, a third CCD (7) is arranged beside the heat-conducting cotton supply mechanism (6), the first CCD (2), the second CCD (4), the third CCD (7), the manipulator (5) and the heat-conducting cotton supply mechanism (6) are all in communication connection with the controller.
2. A heat-conducting cotton attaching method for attaching heat-conducting cotton by using the automatic heat-conducting cotton attaching device based on visual guidance as claimed in claim 1, which is characterized by comprising the following steps:
step S1, the heat conduction cotton supply mechanism (6) periodically peels heat conduction cotton required by single attachment;
step S2, the conveyor belt conveys the workpieces (3) to be attached to the workpiece loading area (1) periodically, then the workpieces (3) to be attached are loaded at the workpiece loading area (1) for primary positioning, and the first CCD (2) performs secondary positioning on the workpieces (3) loaded on the workpiece loading area (1) through photographing to judge whether the workpieces (3) are loaded in place;
step S3, calibrating a geodetic coordinate system and a camera coordinate system of the second CCD (4) on the manipulator (5) and determining the conversion relation between an image coordinate system and a world coordinate system;
step S4, calibrating the hand and eye of the manipulator (5), determining the conversion relation between the manipulator coordinate system and the image coordinate system, and calculating the conversion matrix of the manipulator coordinate system and the world coordinate system;
step S5, the manipulator (5) drives the second CCD (4) to move to a position right above the heat-conducting cotton stripped from the heat-conducting cotton supply mechanism (6), and the second CCD (4) starts to photograph, identify and position the heat-conducting cotton;
step S6, the negative pressure suction cup on the manipulator (5) sucks the heat conduction cotton from the top of the heat conduction cotton and then moves the heat conduction cotton to the position right above the third CCD (7), so that the third CCD (7) can detect the defects of the adhering surface of the heat conduction cotton from the bottom of the heat conduction cotton, if the adhering surface is determined to be unqualified after detection, the manipulator (5) discards the heat conduction cotton to a waste bin, if the adhering surface is determined to be qualified after detection, the manipulator (5) transfers the heat conduction cotton to the workpiece loading area (1) so as to adhere the heat conduction cotton to the specified position of the workpiece (3).
3. The method for attaching heat conductive cotton to an automatic heat conductive cotton attaching device based on visual guidance as claimed in claim 2, wherein an LED calibration lamp is provided in the heat conductive cotton supplying mechanism (6), and in step S3, the second CCD (4) photographs the LED calibration lamp several times from directly above the LED calibration lamp, converts the arrangement, brightness and color information of the bright spots of the pixels into digital information, and then performs matrix transformation on the digital information to determine the relationship between the camera coordinate system and the geodetic coordinate system.
4. The thermally conductive cotton attaching method of the thermally conductive cotton automatic attaching device based on visual guidance as claimed in claim 3, wherein the corresponding relation between the camera coordinate system and the geodetic coordinate system in step S3 can be determined by calculating the internal and external matrix parameters of the second CCD (4):
P_I = C·P_g + T,

where P_g is a point in the geodetic coordinate system, P_I is a point in the image coordinate system, and C is a rotation matrix, which can be expressed as

$$C = \begin{bmatrix} \cos c & -\sin c & 0 \\ \sin c & \cos c & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos b & 0 & \sin b \\ 0 & 1 & 0 \\ -\sin b & 0 & \cos b \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos a & -\sin a \\ 0 & \sin a & \cos a \end{bmatrix};$$

T = (t_x, t_y, t_z) is a translation vector, where t_x denotes the translation distance along the X axis, t_y the translation distance along the Y axis, and t_z the translation distance along the Z axis;

a, b and c are three angles, namely the angle a of rotation about the X axis of the camera coordinate system, the angle b of rotation about the Y axis of the camera coordinate system, and the angle c of rotation about the Z axis of the camera coordinate system;

R is a rotation matrix and T is a translation vector; the transformation from the geodetic coordinate system to the camera coordinate system is a rigid-body transformation involving only translation and rotation, so the transformation of a point P from the geodetic coordinate system to the camera coordinate system can be written in homogeneous form as

$$\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} P_g \\ 1 \end{bmatrix}.$$
5. the method for attaching the thermally conductive cotton to the thermally conductive cotton automatic attaching device based on visual guidance as claimed in claim 2, wherein the calibration of the hand and eye of the manipulator (5) in step S4 is implemented by using a projection method, and the three-dimensional coordinates in the camera coordinate system are converted into the planar coordinates by using the internal parameters and the depth values of the camera, and the conversion formula is as follows:
$$x = T_x\,\frac{X_c}{h} + X_0,\qquad y = T_y\,\frac{Y_c}{h} + Y_0,$$

where h represents the depth value of the target point, x and y represent the coordinates of the target point in the image coordinate system, X_c, Y_c, Z_c represent the three-dimensional coordinates of the target point in the camera coordinate system (with h = Z_c), T_x and T_y denote the focal lengths describing the proportional relationship between pixel units and three-dimensional coordinate units, and X_0, Y_0 give the projection position used to account for the distance between the image origin and the coordinate-system origin.
6. The method for heat transfer cotton attachment by an automatic heat transfer cotton attachment apparatus based on visual guidance as claimed in claim 5, wherein the conversion relationship between the image coordinate system and the pixel coordinate system in step S4 is as follows:
$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0,$$

where the unit of the image coordinate system is mm and the unit of the pixel coordinate system is pixel; d_x and d_y denote how many mm one pixel occupies along each row and column respectively, i.e. 1 pixel corresponds to d_x mm. In matrix form the above formula becomes

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$

The transformation relation from the geodetic coordinate system to the pixel coordinate system is obtained by combining the above coordinate-system transformations as

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} T_x & 0 & X_0 & 0 \\ 0 & T_y & Y_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_g \\ Y_g \\ Z_g \\ 1 \end{bmatrix}.$$
CN202010810646.1A 2020-08-13 2020-08-13 Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof Active CN112208113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010810646.1A CN112208113B (en) 2020-08-13 2020-08-13 Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010810646.1A CN112208113B (en) 2020-08-13 2020-08-13 Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof

Publications (2)

Publication Number Publication Date
CN112208113A (en) 2021-01-12
CN112208113B CN112208113B (en) 2022-09-06

Family

ID=74058967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010810646.1A Active CN112208113B (en) 2020-08-13 2020-08-13 Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof

Country Status (1)

Country Link
CN (1) CN112208113B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207657218U (en) * 2017-12-14 2018-07-27 江苏科瑞恩自动化科技有限公司 A kind of foam adsorption locating device
CN108107837A (en) * 2018-01-16 2018-06-01 三峡大学 A kind of glass processing device and method of view-based access control model guiding
CN108766894A (en) * 2018-06-07 2018-11-06 湖南大学 A kind of chip attachment method and system of robot vision guiding
CN208484258U (en) * 2018-06-27 2019-02-12 昆山市三信塑胶实业有限公司 Electronic product dorsal shield is pressed overlay film integrated equipment
CN109018591A (en) * 2018-08-09 2018-12-18 沈阳建筑大学 A kind of automatic labeling localization method based on computer vision
WO2020121396A1 (en) * 2018-12-11 2020-06-18 株式会社Fuji Robot calibration system and robot calibration method
CN109719734A (en) * 2019-03-12 2019-05-07 湖南大学 A kind of the mobile phone flashlight package system and assemble method of robot vision guidance
CN109848998A (en) * 2019-03-29 2019-06-07 砚山永盛杰科技有限公司 One kind being used for 3C industry vision four axis flexible robot
CN110497187A (en) * 2019-07-31 2019-11-26 浙江大学山东工业技术研究院 The sun embossing die of view-based access control model guidance assembles match system
CN110842928A (en) * 2019-12-04 2020-02-28 中科新松有限公司 Visual guiding and positioning device and method for compound robot
CN111145272A (en) * 2020-01-13 2020-05-12 苏州沃特维自动化系统有限公司 Manipulator and camera hand-eye calibration device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭杰荣, 刘长青 et al.: 《光电信息技术试验教程》 (Experimental Course on Optoelectronic Information Technology), Xidian University Press, 30 September 2015 *

Also Published As

Publication number Publication date
CN112208113B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
TWI699846B (en) Apparatus and method for repairing led substrate
EP2914080A1 (en) Electronic component mounting apparatus and electronic component mounting method
JP7018341B2 (en) Manufacturing method of die bonding equipment and semiconductor equipment
EP2787804B1 (en) Component mounting apparatus
TWI532117B (en) Grain bonding device and its joint head device, and collet position adjustment method
CN115249758B (en) Pixel die bonder
TW200304189A (en) Electronic component mounting apparatus and electronic component mounting method
CN112208113B (en) Automatic heat-conducting cotton attaching device based on visual guidance and attaching method thereof
TWI574022B (en) Die Detection Apparatus And Die Delivery Method
CN109041568A (en) Automate chip mounter
CN112129809A (en) Copper sheet thermal resistivity detection device based on visual guidance and detection method thereof
CN113021391A (en) Integrated vision robot clamping jaw and using method thereof
CN211401101U (en) High-precision 3D contour modeling equipment
CN110077848A (en) A kind of automatic identification grabbing device and method that capacitor is miniaturized
TWI778870B (en) Dynamic image positioning method and system for robot feeding
CN209546230U (en) Automate chip mounter
US8096047B2 (en) Electronic component mounting apparatus
WO2021125102A1 (en) Mounting system and mounting method
CN212263890U (en) PCB detects automatic placement system of assembly line based on visual positioning
CN108770233B (en) A kind of component correction system
CN210100014U (en) Automatic pose recognition device for miniaturized materials
JP2001358179A (en) Apparatus for mounting electronic component, and method therefor
JP2021141270A (en) Die bonding device and manufacturing method of semiconductor device
JP6952623B2 (en) Manufacturing method of die bonding equipment and semiconductor equipment
CN219850897U (en) Sorting device for automatic visual inspection of semiconductor laser device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant